Spacecraft propulsion

Spacecraft propulsion is used to change the velocity of spacecraft and artificial satellites, or in short, to provide delta-v. There are many different methods. Each method has drawbacks and advantages, and spacecraft propulsion is an active area of research.

Most spacecraft today are propelled by heating the reaction mass and allowing it to flow out the back of the vehicle. This sort of engine is called a rocket engine. All current spacecraft use chemical rockets (bipropellant or solid-fuel) for launch, though some (such as the Pegasus rocket and SpaceShipOne) have used air-breathing engines on their first stage. Most satellites have simple, reliable chemical rockets (often monopropellant rockets) or resistojet rockets for stationkeeping, although some use momentum wheels for attitude control. Newer geostationary spacecraft are starting to use electric propulsion for north-south stationkeeping. Interplanetary vehicles mostly use chemical rockets as well, although a few have experimentally used ion thrusters (a form of electric propulsion) with some success.

The necessity for propulsion systems

Artificial satellites must be launched into orbit, and once there they must be placed in their nominal orbit. Once in the desired orbit, they often need some form of attitude control so that they are correctly pointed with respect to the Earth, the Sun, and possibly some astronomical object of interest. They are also subject to drag from the thin atmosphere, so to stay in orbit for a long period of time some form of propulsion is occasionally necessary to make small corrections (orbital stationkeeping). Many satellites need to be moved from one orbit to another from time to time, and this also requires propulsion. When a satellite has exhausted its ability to adjust its orbit, its useful life is over.

Spacecraft designed to travel further also need propulsion methods. They need to be launched out of the Earth's atmosphere just as satellites do. Once there, they need to leave orbit and move around. For interplanetary travel, a spacecraft must use its engines to leave Earth orbit. Once it has done so, it must somehow make its way to its destination. Current interplanetary spacecraft do this with a series of short-term orbital adjustments; in between these adjustments, the spacecraft simply falls freely along its orbit.

The simplest fuel-efficient means to move from one circular orbit to another is a Hohmann transfer orbit: the spacecraft begins in a roughly circular orbit around the Sun. A short period of thrust in the direction of motion accelerates or decelerates the spacecraft into an elliptical orbit around the Sun which is tangential to its previous orbit and also to the orbit of its destination. The spacecraft falls freely along this elliptical orbit until it reaches its destination, where another short period of thrust accelerates or decelerates it to match the orbit of its destination. Special methods such as aerobraking are sometimes used for this final orbital adjustment.
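To make the Hohmann transfer concrete, here is a minimal Python sketch (not part of the original article) that computes the two burns for a transfer between circular, coplanar orbits around the Sun. The function name, the solar gravitational-parameter constant, and the Earth-to-Mars radii in the example are illustrative choices.

```python
import math

MU_SUN = 1.32712440018e20  # Sun's gravitational parameter, m^3/s^2

def hohmann_delta_vs(r1: float, r2: float, mu: float = MU_SUN):
    """Delta-v of the two impulsive burns of a Hohmann transfer between
    circular, coplanar orbits of radii r1 and r2 (metres)."""
    a_transfer = (r1 + r2) / 2.0
    v1 = math.sqrt(mu / r1)                                  # speed on the initial circular orbit
    v2 = math.sqrt(mu / r2)                                  # speed on the final circular orbit
    v_peri = math.sqrt(mu * (2.0 / r1 - 1.0 / a_transfer))   # transfer-orbit speed at r1
    v_apo = math.sqrt(mu * (2.0 / r2 - 1.0 / a_transfer))    # transfer-orbit speed at r2
    return abs(v_peri - v1), abs(v2 - v_apo)

# Example: Earth's orbit (~1 AU) to Mars's orbit (~1.52 AU)
AU = 1.495978707e11
dv1, dv2 = hohmann_delta_vs(1.0 * AU, 1.524 * AU)
print(f"burn 1: {dv1:.0f} m/s, burn 2: {dv2:.0f} m/s")  # roughly 2.9 km/s and 2.6 km/s
```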
Some spacecraft propulsion methods such as solar sails provide very low but inexhaustible thrust; an interplanetary vehicle using one of these methods would follow a rather different trajectory, either constantly thrusting against its direction of motion in order to decrease its distance from the Sun, or constantly thrusting along its direction of motion to increase its distance from the Sun.

Spacecraft for interstellar travel also need propulsion methods. No such spacecraft has yet been built, but many designs have been discussed. Since interstellar distances are very great, a tremendous velocity is needed to get a spacecraft to its destination in a reasonable amount of time. Acquiring such a velocity on launch and getting rid of it on arrival will be a formidable challenge for spacecraft designers.

Effectiveness of propulsion systems

When in space, the purpose of a propulsion system is to change the velocity v of a spacecraft. Since this is more difficult for more massive spacecraft, designers generally discuss momentum, mv. The amount of change in momentum is called impulse. So the goal of a propulsion method in space is to create an impulse. When launching a spacecraft from the Earth, a propulsion method must overcome the Earth's gravitational pull to provide a net positive acceleration. In orbit, the spacecraft's tangential velocity provides an apparent centrifugal force that balances the gravitational pull along its path (the orbit), so any additional impulse, however tiny, will result in a change to the orbit path.

The rate of change of velocity is called acceleration, and the rate of change of momentum is called force. To reach a given velocity, one can apply a small acceleration over a long period of time, or one can apply a large acceleration over a short time. Similarly, one can achieve a given impulse with a large force over a short time or a small force over a long time. This means that for manoeuvring in space, a propulsion method that produces tiny accelerations but runs for a long time can produce the same impulse as a propulsion method that produces large accelerations for a short time. When launching from a planet, however, tiny accelerations cannot overcome the planet's gravitational pull and so cannot be used.

The law of conservation of momentum means that in order for a propulsion method to change the momentum of a spacecraft it must change the momentum of something else as well. A few designs take advantage of things like magnetic fields or light pressure in order to change the spacecraft's momentum, but in free space the rocket must bring along some mass to accelerate away in order to push itself forward. Such mass is called reaction mass.

In order for a rocket to work, it needs two things: reaction mass and energy. The impulse provided by launching a particle of reaction mass having mass m at velocity v is mv. But this particle has kinetic energy mv²/2, which must come from somewhere. In a conventional solid-fuel rocket, the fuel is burned, providing the energy, and the reaction products are allowed to flow out the back, providing the reaction mass. In an ion thruster, electricity is used to accelerate ions out the back. Here some other source must provide the electrical energy (perhaps a solar panel or a nuclear reactor) while the ions provide the reaction mass. When discussing the efficiency of a propulsion system, designers often focus on effectively using the reaction mass.
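The bookkeeping in this passage (impulse m·v versus kinetic energy m·v²/2) can be illustrated with a tiny sketch, assuming idealised, loss-free expulsion of reaction mass; the numbers are made up for illustration only.

```python
def impulse_and_energy(m: float, v: float):
    """Impulse (momentum) m*v and kinetic energy m*v^2/2 carried away by a
    parcel of reaction mass m (kg) expelled at speed v (m/s)."""
    return m * v, 0.5 * m * v ** 2

# Two ways to get the same 1000 N*s of impulse:
print(impulse_and_energy(1.0, 1000.0))   # (1000.0, 500000.0)  -> 0.5 MJ
print(impulse_and_energy(0.1, 10000.0))  # (1000.0, 5000000.0) -> 5 MJ: ten times the energy
```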
Reaction mass must be carried along with the rocket and is irretrievably consumed when used. One way of measuring the amount of impulse that can be obtained from a fixed amount of reaction mass is the specific impulse, the impulse per unit weight-on-Earth (typically designated by Isp). The unit for this value is seconds. Since the weight on Earth of the reaction mass is often unimportant when discussing vehicles in space, specific impulse can also be discussed in terms of impulse per unit mass. This alternate form of specific impulse uses the same units as velocity (e.g. m/s), and in fact it is equal to the effective exhaust velocity of the engine (typically designated ve). Confusingly, both values are sometimes called specific impulse. The two values differ by a factor of g, the acceleration due to gravity at the Earth's surface (Isp·g = ve).

A rocket with a high exhaust velocity can achieve the same impulse with less reaction mass. However, the energy required for that impulse is proportional to the square of the exhaust velocity, so that more mass-efficient engines require much more energy. This is a problem if the engine is to provide a large amount of thrust: to generate a large amount of impulse per second, it must use a large amount of energy per second. So highly efficient engines require enormous amounts of energy per second to produce high thrusts. As a result, most high-efficiency engine designs also provide very low thrust.

Burning the entire usable propellant of a spacecraft through the engines in a straight line in free space would produce a net velocity change to the vehicle; this number is termed delta-v. The total Δv of a vehicle can be calculated using the Tsiolkovsky rocket equation:

Δv = ve ln((M + P) / P)

where M is the mass of propellant, P is the mass of the payload (including the rocket structure), and ve is the effective velocity of the rocket exhaust. For historical reasons, as discussed above, ve is sometimes written as ve = Isp·g0, where Isp is the specific impulse expressed in seconds and g0 is the standard acceleration due to gravity at the Earth's surface.

For a long voyage, the majority of the spacecraft's mass may be reaction mass. Since a rocket must carry all its reaction mass with it, most of the initial reaction mass goes towards accelerating reaction mass rather than payload. If we have a payload of mass P, the spacecraft needs to change its velocity by Δv, and the rocket engine has exhaust velocity ve, then the mass M of reaction mass which is needed can be calculated by rearranging the rocket equation:

M = P (e^(Δv/ve) − 1)

For Δv much smaller than ve, this equation is roughly linear, and little reaction mass is needed. If Δv is comparable to ve, then there needs to be about twice as much fuel as combined payload and structure (which includes engines, fuel tanks, and so on). Beyond this, the growth is exponential; speeds much higher than the exhaust velocity require very high ratios of fuel mass to payload and structural mass.

In order to achieve this, some amount of energy must go into accelerating the reaction mass. Every engine will waste some energy, but even assuming 100% efficiency, the engine will need energy amounting to

E = (1/2) M ve²

Comparing the rocket equation (which determines how much energy ends up in the final vehicle) with the above equation (which gives the total energy required) shows that even with 100% engine efficiency, certainly not all the energy supplied ends up in the vehicle; some of it, indeed usually most of it, ends up as kinetic energy of the exhaust.
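A minimal sketch of the two formulas above, assuming a perfectly efficient engine and ignoring gravity and drag losses; the function names are arbitrary.

```python
import math

def propellant_mass(payload: float, delta_v: float, ve: float) -> float:
    """Reaction mass M needed so that a payload-plus-structure mass `payload`
    gains `delta_v`, from the rocket equation delta_v = ve * ln((M + P) / P)."""
    return payload * (math.exp(delta_v / ve) - 1.0)

def exhaust_energy(mass: float, ve: float) -> float:
    """Kinetic energy given to reaction mass `mass` expelled at speed ve,
    assuming 100% engine efficiency."""
    return 0.5 * mass * ve ** 2

# Example from the text: delta-v comparable to ve needs roughly twice the
# payload-plus-structure mass in propellant (e - 1 is about 1.72).
P, dv, ve = 1000.0, 4500.0, 4500.0
M = propellant_mass(P, dv, ve)
print(round(M))               # ~1718 kg of propellant for a 1000 kg payload
print(exhaust_energy(M, ve))  # energy that mostly ends up in the exhaust, in joules
```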
For a mission, for example when launching from or landing on a planet, the effects of gravitational attraction and any atmospheric drag must also be overcome using fuel. It is typical to combine these and other effects into an effective mission delta-v. For example, a launch mission to low Earth orbit requires about 9.3-10 km/s of delta-v. These mission delta-vs are typically numerically integrated on a computer.

Suppose we want to send a 10,000 kg space probe to Mars. The required Δv from LEO is approximately 3,000 m/s, using a Hohmann transfer orbit. (A manned probe would need to take a faster route and use more fuel.) For the sake of argument, let us say that the following thrusters may be used:

| Engine | Effective exhaust velocity (m/s) | Specific impulse (s) | Propellant mass (kg) | Energy required (GJ) | Energy per kg of propellant | Minimum power per N of thrust |
| Solid rocket | 1,000 | 100 | 190,000 | 95 | 500 kJ | 0.5 kW |
| Bipropellant rocket | 5,000 | 500 | 8,200 | 103 | 12.6 MJ | 2.5 kW |
| Ion thruster | 50,000 | 5,000 | 620 | 775 | 1.25 GJ | 25 kW |
| VASIMR | 300,000 | 30,000 | 100 | 4,500 | 45 GJ | 150 kW |

Observe that the more fuel-efficient engines use far less propellant; its mass is almost negligible (relative to the mass of the payload and the engine itself) for some of the engines. However, note also that these require a large total amount of energy. For Earth launch, engines require a thrust-to-weight ratio of much more than unity. To achieve that with the electric drives, they would have to be supplied with gigawatts of power, equivalent to a major metropolitan generating station. This would need to be carried on the vehicle, which is clearly impractical.

Instead, a much smaller, less powerful generator may be included, which will take much longer to deliver the total energy needed. This lower power is only sufficient to accelerate a tiny amount of fuel per second, but over long periods the required velocity is finally achieved. For example, it took SMART-1 more than a year to reach the Moon, whereas a chemical rocket takes a few days. Because the ion drive needs much less fuel, the total launched mass is usually lower, which typically results in a lower overall cost.

Interestingly, for a given mission delta-v there is a fixed Isp that minimises the overall energy used by the rocket. This comes to an exhaust velocity of about two-thirds of the mission delta-v (see also the energy computed from the rocket equation). Drives such as VASIMR, and to a lesser extent other ion thrusters, have exhaust velocities that can be enormously higher than this ideal, and thus end up power-source limited and give very low thrust. Where the vehicle performance is power limited, e.g. if solar power or nuclear power is used, then for a large ve the maximum acceleration is inversely proportional to it. Hence the time to reach a required delta-v is proportional to ve. Thus the latter should not be too large.

Propulsion methods can be classified based on their means of accelerating the reaction mass. There are also some special methods for launches, planetary arrivals, and landings.

A rocket engine is a reaction engine that can be used for spacecraft propulsion as well as terrestrial uses, such as missiles. Rocket engines take their reaction mass from one or more tanks and form it into a jet, obtaining thrust in accordance with Newton's third law. Most rocket engines are internal combustion heat engines, although non-combusting forms exist. Rocket engines generally produce a high-temperature reaction mass, as a hot gas.
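The figures in the table can be reproduced, up to rounding, from the rocket equation and the kinetic energy of the exhaust, as in the sketch below. The engine list simply restates the exhaust velocities assumed above; everything else is computed.

```python
import math

PAYLOAD = 10_000.0    # kg, probe plus engine and structure
DELTA_V = 3_000.0     # m/s, roughly LEO -> Mars via a Hohmann transfer

engines = {            # name: effective exhaust velocity in m/s
    "solid rocket": 1_000.0,
    "bipropellant rocket": 5_000.0,
    "ion thruster": 50_000.0,
    "VASIMR": 300_000.0,
}

for name, ve in engines.items():
    fuel = PAYLOAD * (math.exp(DELTA_V / ve) - 1.0)     # rocket equation
    energy = 0.5 * fuel * ve ** 2                       # kinetic energy of the exhaust
    power_per_newton = ve / 2.0                         # minimum jet power per N of thrust
    print(f"{name:22s} fuel {fuel:9.0f} kg   energy {energy / 1e9:7.0f} GJ   "
          f"{power_per_newton / 1e3:5.1f} kW/N")
```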
This is achieved by the combustion of solid, liquid or gaseous propellant, containing an oxidiser and a fuel, within a combustion chamber at high pressure. The hot gas produced is then allowed to escape through a narrow hole (the "throat") into a high-expansion-ratio nozzle. The effect of the nozzle is to dramatically accelerate the mass, converting most of the thermal energy into kinetic energy. The large bell- or cone-shaped expansion nozzle is what gives a rocket engine its characteristic shape. Exhaust speeds as high as 10 times the speed of sound at sea level are not uncommon.

Rockets emitting plasma can potentially carry out reactions inside a magnetic bottle and release the plasma via a magnetic nozzle, so that no solid matter need come in contact with the plasma. The machinery to do this is complex, but research into nuclear fusion has developed methods, some of which have been used in speculative propulsion systems.

Part of the rocket engine's thrust comes from the gas pressure inside the combustion chamber, but the majority comes from the pressure against the inside of the expansion nozzle. Inside the combustion chamber the gas presses equally against all the walls of the chamber, but the throat provides no opposing surface, leaving an unopposed resultant force from the diametrically opposite end of the chamber. As the gases expand (adiabatically) inside the nozzle, they press against the bell's walls, forcing the rocket engine in one direction and accelerating the gases in the opposite direction.

For optimum performance, hot gas is used because it maximises the speed of sound at the throat; for aerodynamic reasons the flow goes sonic ("chokes") at the throat, so the highest possible speed there is desirable. By comparison, while the speed of sound in air at room temperature is about 340 m/s, the speed of sound in the hot gas of a rocket engine can be over 1,700 m/s. The expansion part of the rocket nozzle then multiplies the speed of the flow by a further factor, typically between 1.5 and 4 times, giving a highly collimated exhaust jet. The speed ratio of a rocket nozzle is mostly determined by its area expansion ratio (the ratio of the exit area to the throat area), but details of the gas properties are also important. Larger-ratio nozzles are more massive and bulkier, but they are able to extract more heat from the combustion gases, which become lower in pressure and colder, but also faster.

A significant complication arises when launching a vehicle from the Earth's surface, because the ambient atmospheric pressure changes with altitude. For maximum performance, the pressure of the gas leaving a rocket nozzle should equal the ambient pressure: if it is lower, the vehicle will be slowed by the difference in pressure between the top of the engine and the exit; if it is higher, this represents pressure that the bell has not turned into thrust.
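As a rough illustration of how chamber temperature, gas properties and pressure ratio set the exhaust speed, here is a sketch of the standard ideal (isentropic, frozen-flow) nozzle equation. The numerical inputs are illustrative assumptions, not values taken from the text.

```python
import math

R_UNIVERSAL = 8.314462618  # J/(mol*K)

def ideal_exhaust_velocity(T_chamber, p_chamber, p_exit, gamma, molar_mass):
    """Ideal exhaust velocity of a converging-diverging nozzle.
    T_chamber in K, both pressures in the same units, molar_mass in kg/mol."""
    pressure_term = 1.0 - (p_exit / p_chamber) ** ((gamma - 1.0) / gamma)
    return math.sqrt(2.0 * gamma / (gamma - 1.0)
                     * R_UNIVERSAL * T_chamber / molar_mass
                     * pressure_term)

# Illustrative numbers only (roughly a hot, light combustion gas):
ve = ideal_exhaust_velocity(T_chamber=3500.0, p_chamber=7e6, p_exit=1e5,
                            gamma=1.2, molar_mass=0.016)
print(f"{ve:.0f} m/s")   # on the order of a few km/s
```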
To achieve this ideal, the diameter of the nozzle would need to increase with altitude, which is difficult to arrange. A compromise nozzle is generally used and some percentage reduction in performance occurs. To improve on this, various exotic nozzle designs such as the plug nozzle, stepped nozzles, the expanding nozzle and the aerospike have been proposed, each having some way to adapt to changing ambient air pressure and each allowing the gas to expand further against the nozzle, giving extra thrust at higher altitude.

Airbreathing engines for launch

Studies generally show that conventional air-breathing engines, such as ramjets or turbojets, are basically too heavy (have too low a thrust/weight ratio) to give any significant performance improvement when installed on a launch vehicle. However, they can be used on a separate lift vehicle, as with the X-1, Pegasus and SpaceShipOne. On the other hand, very lightweight or very high-speed engines have been proposed that take advantage of the air during ascent.

A jet engine is an engine that discharges a fast-moving jet of fluid to generate thrust in accordance with Newton's third law of motion. This broad definition of jet engines includes turbojets, turbofans, rockets, ramjets and water jets, but in common usage the term generally refers to a gas turbine used to produce a jet of high-speed exhaust gases for propulsive purposes. Engines that may need to operate at low hypersonic speeds could theoretically have much higher performance if a heat exchanger is used to cool the incoming air. The low temperature allows lighter materials to be used and the engine to run at full fuel flow (ordinarily, fuel flow must be reduced to prevent the turbines from melting, but doing so greatly reduces thrust). This leads to plausible designs like SABRE, which might permit the single-stage-to-orbit Skylon spaceplane, and ATREX, which might permit jet engines to be used as boosters for space vehicles.

Electromagnetic acceleration of reaction mass

Rather than relying on high temperature and fluid dynamics to accelerate the reaction mass to high speeds, there are a variety of methods that use electrostatic or electromagnetic forces to accelerate the reaction mass directly. Usually the reaction mass is a stream of ions. Such an engine requires electric power to run, and high exhaust velocities require large amounts of energy. For these drives, to a reasonable approximation, fuel use, impulse per unit of energy, and thrust per unit of power are all inversely proportional to exhaust velocity. Their very high exhaust velocity means they require huge amounts of energy and thus, with practical power sources, provide low thrust, but use hardly any fuel.

For some missions, solar energy may be sufficient, and has very often been used, but for others nuclear energy will be necessary; engines drawing their power from a nuclear source are called nuclear electric rockets. With any current source of power, chemical, nuclear or solar, the maximum amount of power that can be generated limits the maximum amount of thrust that can be produced to a small value. Power generation also adds significant mass to the spacecraft, and ultimately the weight of the power source limits the performance of the vehicle. Current nuclear power generators are approximately half the weight of solar panels per watt of power supplied, at terrestrial distances from the Sun. Chemical power generators are not used due to the far lower total available energy. Beamed power to the spacecraft shows potential.
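The claim that, for a power-limited drive, thrust falls as exhaust velocity rises follows from jet power = (1/2) × (mass flow) × ve², so that F = 2·η·P / ve. The sketch below is only illustrative; the 25 kW figure and the efficiency parameter are assumptions, not values from the text.

```python
def jet_thrust_and_flow(power_w: float, ve: float, efficiency: float = 1.0):
    """For a power-limited drive, jet power = 0.5 * mdot * ve^2, so at fixed
    power the thrust F = mdot * ve = 2 * eta * P / ve falls as ve rises."""
    thrust = 2.0 * efficiency * power_w / ve
    mass_flow = thrust / ve
    return thrust, mass_flow

# 25 kW of jet power, swept across chemical-like to VASIMR-like exhaust velocities:
for ve in (5_000.0, 50_000.0, 300_000.0):
    F, mdot = jet_thrust_and_flow(25_000.0, ve)
    print(f"ve {ve:8.0f} m/s  ->  thrust {F:6.2f} N   mass flow {mdot * 1e6:8.2f} mg/s")
```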
The dissipation of waste heat from the powerplant may make any propulsion system requiring a separate power source infeasible for interstellar travel.

Some electromagnetic methods:
- Ion thruster
- Magnetoplasmadynamic thruster
- Variable specific impulse magnetoplasma rocket
- Mass drivers (for propulsion)

Systems without reaction mass carried within the spacecraft

The law of conservation of momentum states that any engine which uses no reaction mass cannot move the center of mass of a spaceship (changing orientation, on the other hand, is possible). But space is not empty, especially space inside the Solar System; there are gravitational fields, magnetic fields, solar wind and solar radiation. Various propulsion methods try to take advantage of these. However, since these phenomena are diffuse in nature, the corresponding propulsion structures need to be proportionately large. Space drives that need no (or little) reaction mass include solar sails, magnetic sails, and tether propulsion.

For changing the orientation of a satellite or other space vehicle, conservation of angular momentum does not pose a similar constraint. Thus many satellites use momentum wheels to control their orientation. These cannot be the only system for controlling satellite orientation, as the angular momentum built up due to torques from external forces such as solar, magnetic or tidal forces eventually needs to be "bled off" using a secondary system.

High thrust is of vital importance for Earth launch: thrust has to be greater than weight (see also gravity drag). Many of the propulsion methods above give a thrust/weight ratio of much less than 1, and so cannot be used for launch. Exhaust toxicity or other side effects can also have detrimental effects on the environment the spacecraft is launching from, ruling out other propulsion methods, such as most nuclear engines, at least for use from the Earth's surface. One advantage that spacecraft have in launch is the availability of infrastructure on the ground to assist them. Proposed ground-assisted launch mechanisms include:
- Space elevator
- Orbital airship
- Space fountain
- Hypersonic skyhook
- Electromagnetic catapult (railgun, coilgun)
- Space gun (Project HARP, ram accelerator)
- Laser propulsion (Lightcraft)

Planetary arrival and landing

When a vehicle is to enter orbit around its destination planet, or when it is to land, it must adjust its velocity. This can be done using all the methods listed above (provided they can generate a high enough thrust), but there are a few methods that can take advantage of planetary atmospheres and/or surfaces.
- Aerobraking allows a spacecraft to reduce the high point of an elliptical orbit by repeated brushes with the atmosphere at the low point of the orbit. This can save a considerable amount of fuel, since it takes much less delta-v to enter an elliptical orbit than a low circular orbit. Since the braking is done over the course of many orbits, heating is comparatively minor and a heat shield is not required. This has been done on several Mars missions, such as Mars Global Surveyor, Mars Odyssey and Mars Reconnaissance Orbiter, and at least one Venus mission, Magellan.
- Aerocapture is a much more aggressive manoeuvre, converting an incoming hyperbolic orbit to an elliptical orbit in one pass. This requires a heat shield and much trickier navigation, since it must be completed in one pass through the atmosphere, and unlike aerobraking no preview of the atmosphere is possible.
If the intent is to remain in orbit, then at least one more propulsive manoeuvre is required after aerocapture; otherwise the low point of the resulting orbit will remain in the atmosphere, resulting in eventual re-entry. Aerocapture has not yet been tried on a planetary mission, but the re-entry skips performed by Zond 6 and Zond 7 on lunar return were aerocapture manoeuvres, since they turned a hyperbolic orbit into an elliptical orbit. On these missions, since there was no attempt to raise the perigee after the aerocapture, the resulting orbit still intersected the atmosphere, and re-entry occurred at the next perigee.
- Parachutes can land a probe on a planet with an atmosphere, usually after the atmosphere has scrubbed off most of the velocity, using a heat shield.
- Airbags can soften the final landing.
- Lithobraking, or stopping by simply smashing into the target, is usually done by accident. However, it may be done deliberately with the probe expected to survive (see, for example, Deep Space 2). Very sturdy probes and low approach velocities are required.

Gravitational slingshots can also be used to carry a probe onward to other destinations.

Methods requiring new principles of physics

In addition, a variety of hypothetical propulsion techniques have been considered that would require entirely new principles of physics to realize. To date, such methods are highly speculative and include:
- Diametric drive
- Pitch drive
- Bias drive
- Disjunction drive
- Alcubierre drive (warp drive)
- Differential sail
- Wormholes (impossible to build with current technology)
- Antigravity (true antigravity is currently theoretically impossible)
- Reactionless drives (theoretically impossible)

Table of methods and their specific impulse

Below is a summary of some of the more popular, proven technologies, followed by increasingly speculative methods. Three numbers are shown. The first is the effective exhaust velocity: the equivalent speed at which the propellant leaves the vehicle. This is not necessarily the most important characteristic of the propulsion method; thrust, power consumption and other factors can be. However:
- if the delta-v is much more than the exhaust velocity, then exorbitant amounts of fuel are necessary (see the section on calculations, above);
- if the exhaust velocity is much more than the delta-v, then proportionally more energy is needed; if the power is limited, as with solar energy, this means that the journey takes a proportionally longer time.

The second and third numbers are the typical amounts of thrust and the typical burn times of the method. Outside a gravitational potential, small amounts of thrust applied over a long period will give the same effect as large amounts of thrust over a short period. (This result does not apply when the object is significantly influenced by gravity.)
| Method | Effective exhaust velocity (m/s) | Thrust (N) | Firing duration |

Propulsion methods in current use:
| Solid rocket | 1,000 - 4,000 | 10³ - 10⁷ | minutes |
| Hybrid rocket | 1,500 - 4,200 | | minutes |
| Monopropellant rocket | 1,000 - 3,000 | 0.1 - 100 | milliseconds - minutes |
| Bipropellant rocket | 1,000 - 4,700 | 0.1 - 10⁷ | minutes |
| Tripropellant rocket | 2,500 - 4,500 | | minutes |
| Resistojet rocket | 2,000 - 6,000 | 10⁻² - 10 | minutes |
| Arcjet rocket | 4,000 - 12,000 | 10⁻² - 10 | minutes |
| Hall effect thruster (HET) | 8,000 - 50,000 | 10⁻³ - 10 | months |
| Electrostatic ion thruster | 15,000 - 80,000 | 10⁻³ - 10 | months |
| Field emission electric propulsion (FEEP) | 100,000 - 130,000 | 10⁻⁶ - 10⁻³ | weeks |
| Magnetoplasmadynamic thruster (MPD) | 20,000 - 100,000 | 100 | weeks |
| Pulsed plasma thruster (PPT) | | | |
| Pulsed inductive thruster (PIT) | 50,000 | 20 | months |
| Nuclear electric rocket | as the electric propulsion method used | | |
| Tether propulsion | N/A | 1 - 10¹² | minutes |

Currently feasible propulsion methods:
| Solar sails | N/A | 9 per km² (at 1 AU) | |
| Mass drivers (for propulsion) | 30,000 - ? | 10⁴ - 10⁸ | months |
| Orion Project (near-term nuclear pulse propulsion) | 20,000 - 100,000 | 10⁹ - 10¹² | several days |
| Variable specific impulse magnetoplasma rocket (VASIMR) | 10,000 - 300,000 | 40 - 1,200 | days - months |
| Nuclear thermal rocket | 9,000 | 10⁵ | minutes |
| Solar thermal rocket | 7,000 - 12,000 | 1 - 100 | weeks |
| Air-augmented rocket | 5,000 - 6,000 | | seconds - minutes |
| Liquid air cycle engine | 4,500 | | seconds - minutes |
| Dual mode propulsion rocket | | | |

Technologies requiring further research:
| Mini-magnetospheric plasma propulsion | 200,000 | ~1 N/kW | months |
| Nuclear pulse propulsion (Project Daedalus' drive) | 20,000 - 1,000,000 | 10⁹ - 10¹² | half hour |
| Gas core reactor rocket | 10,000 - 20,000 | 10³ - 10⁶ | |
| Antimatter catalyzed nuclear pulse propulsion | 20,000 - 400,000 | | days - weeks |
| Nuclear salt-water rocket | 100,000 | 10³ - 10⁷ | half hour |
| Beam-powered propulsion | as the propulsion method powered by the beam | | |
| Nuclear photonic rocket | 300,000,000 | 10⁻⁵ - 1 | years - decades |

Significantly beyond current engineering:
| Gravitoelectromagnetic toroidal launchers | | | |

See also:
- interplanetary travel
- interstellar travel
- List of aerospace engineering topics
- specific impulse
- rocket engine nozzles
- Tsiolkovsky rocket equation
- Magnetic sail

Rocket engine types, grouped by how the reaction mass is energised:
- Low temperature
- Chemical heating
  - Solid rocket
  - Hybrid rocket
  - Monopropellant rocket
  - Bipropellant rocket
  - Tripropellant rocket
  - Dual mode propulsion rocket
- Electric heating
  - Resistojet rocket (electric heating)
  - Arcjet rocket (chemical burning aided by electrical discharge)
  - Pulsed plasma thruster (electric arc heating; emits plasma)
- Solar heating
- Nuclear heating
  - Nuclear thermal rocket (nuclear fission energy)
  - Radioisotope rocket/"Poodle thruster" (radioactive decay energy)
  - Antimatter catalyzed nuclear pulse propulsion (fission and/or fusion energy)
  - Gas core reactor rocket (nuclear fission energy)
  - Fission-fragment rocket (nuclear fission energy)
  - Fission sail (nuclear fission energy)
  - Nuclear salt-water rocket (nuclear fission energy)
  - Nuclear pulse propulsion (exploding fission/fusion bombs)
  - Fusion rocket (nuclear fusion energy)
  - Antimatter rocket (annihilation energy)
http://peswiki.com/index.php/PowerPedia:Spacecraft_propulsion
Let P and Q be two convex polygons whose intersection is a convex polygon. The algorithm for finding this convex intersection polygon can be described by these three steps:
- Construct the convex hull of the union of P and Q;
- For each pocket lid of the convex hull, find the intersection of P and Q that lies in the pocket;
- Merge together the polygonal chains between the intersection points found.

What's a pocket lid? A pocket lid is a line segment belonging to the convex hull of the union of P and Q, but which belongs to neither P nor Q.

Why does it connect a vertex of P with a vertex of Q? A pocket lid connects a vertex of P with a vertex of Q; if it were to connect two vertices of P, then P would not be convex, since the lid lies on the convex hull and is not a segment of P.

Computing the convex hull: the rotating calipers

To compute the convex hull of the two convex polygons, the algorithm uses the rotating calipers. It works as follows:
- Find the leftmost vertex of each polygon. At each of those two vertices, place a vertical line passing through it, and associate that line to the polygon to which the vertex belongs. The line does not intersect its associated polygon, since the polygon is convex. See the figure below.
- Rotate these two lines (called calipers) by the smallest angle between a caliper and the segment following the vertex it passes through (in clockwise order). The rotation is done about the vertex through which the line passes on the associated polygon. If the line passes through more than one vertex of the associated polygon, the farthest one (in clockwise order) is taken. The result is shown below.
- Whenever the order of the two calipers changes, a pocket has been found. To detect this, a direction is associated to one of the lines (for example the green one, associated to P). Then all points of the red line (associated to Q) are either to the left or to the right of the green line. When a rotation makes them change from one side of the green line to the other, the order of the two lines has changed. Here's what our example looks like just before and after the algorithm has found the first pocket: if the line associated with P initially had its associated direction pointing up, then the line associated with Q was to the right of it at the beginning, and is now to the left of it.
- The algorithm terminates once it has gone around both polygons.

Finding the intersection of P and Q in the pocket

Once the pockets have been found, the intersection of the polygons at the bottom of each pocket needs to be determined. The pockets themselves form a very special type of polygon: a sail polygon, that is, a polygon composed of two concave chains sharing a common vertex at one extremity and connected by a segment (the mast) at the other end. By a procedure similar to a special-purpose triangulation for sail polygons, the segments of P and Q which intersect can be identified in O(k + l) time, where k and l are the numbers of vertices of P and Q which are inside the pocket. The idea is to start the triangulation from the mast, and as points from P and Q are considered, a check is made to see that the chain from Q is still on the same side as the chain from P.

Here is a pseudo-code of this algorithm. It is assumed that the indices of the vertices of P and Q are in increasing order from the lid to the bottom of the pocket (i.e. P and Q are not enumerated in the same order).
i <- 1; j <- 1; finished <- true;
while ( leftTurn( p(i), p(i+1), q(j+1) ) ) do
    j <- j + 1;
    finished <- false;
while ( rightTurn( q(j), q(j+1), p(i+1) ) ) do
    i <- i + 1;
    finished <- false;

At the end of this procedure, the indices i and j indicate the vertices of P and Q, respectively, which are at the start of the two intersecting segments (in other words, the two intersecting segments are p(i)p(i+1) and q(j)q(j+1)). The intersection of these two segments is part of the intersection polygon, and can be found with your favorite line intersection routine.

What remains to be done is to build the resulting polygon. One way of doing this is to start at one of the vertices given by the above algorithm, compute the intersection, add that point, and then continue adding points by following either P or Q deeper below the pocket until the chain comes out of another pocket (i.e. until the vertex to consider for addition happens to have been the output of the algorithm for another pocket). From that pocket, the chain of the other polygon can be followed under the pocket. This is done until the pocket the chain comes out of is the pocket where the merging started.

Checking for intersection

All of this assumes that the polygons do intersect. However, there are three ways in which no proper polygonal intersection can occur:
- The intersection is either a point or a line. No provisions are made for this in the algorithm, and in this case the output will be a polygon consisting of either two vertices at the same location (in the case of a point) or four vertices at two distinct locations (in the case of a line).
- The polygons simply do not intersect each other and are separable.
- The polygons are one inside the other. One could argue that the intersection of two such polygons is the contained polygon, but that is the computer graphics way of seeing things. In mathematics, there is no intersection in such a case. In any event, the algorithm has to detect this case independently of whether or not it reports it as an intersection.

Case 2 is detected if, during the triangulation step, the algorithm makes a complete loop around one of the polygons. Case 3 is even easier to detect: in such a case no pockets will be found by the convex hull computation.

As implemented in the applet, the algorithm will only find intersections which form non-degenerate polygons. In other words, it will not handle properly intersections which consist of a single line or point. However, the algorithm does detect the cases where no intersection exists at all: in the case where one polygon is contained in another, the rotating calipers will not find any pockets; in the case where the two polygons are completely outside one another, the triangulation algorithm will detect that it has looped around one of the polygons. The assumption of general position of the points also holds in the implementation. If the points are not in general position, the intersection might be a degenerate polygon, or the intersection polygon might contain two consecutive segments which lie on the same line. In the article, one polygon is enumerated in clockwise order, the other in counter-clockwise order. The implementation has both polygons in clockwise order; this simply involves a bit more housekeeping and a bit of extra care in the merging step.
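Returning to the pocket pseudo-code above, here is a hedged, runnable Python reading of it. The orientation predicates are the usual cross-product tests; the 0-based indexing, the function names, and the outer loop that repeats the two scans until neither index advances (one reading of the `finished` flag) are assumptions of this sketch, not part of the article. It also assumes the two chains really do cross inside the pocket.

```python
def left_turn(a, b, c):
    """True if a -> b -> c turns counter-clockwise."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]) > 0

def right_turn(a, b, c):
    """True if a -> b -> c turns clockwise."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]) < 0

def crossing_segments(p, q):
    """p, q: vertex chains of one pocket, each listed from the lid towards the
    bottom of the pocket.  Returns (i, j) such that segment p[i]p[i+1]
    intersects segment q[j]q[j+1]."""
    i = j = 0
    finished = False
    while not finished:
        finished = True
        while left_turn(p[i], p[i + 1], q[j + 1]):
            j += 1
            finished = False
        while right_turn(q[j], q[j + 1], p[i + 1]):
            i += 1
            finished = False
    return i, j
```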
Forcing input of a convex polygon

The interface for building the polygons has the nice feature (or annoying feature, depending on your character) of forcing the user to enter a convex polygon in clockwise order. This is done with a simple online algorithm that checks the following conditions while inserting a new vertex i+1 between vertex i and vertex i+2:
- vertices i-1, i, i+1 form a right turn;
- vertices i, i+1, i+2 form a right turn;
- vertices i+1, i+2, i+3 form a right turn.

In the code, this is actually implemented as a point being to the left (over) or to the right (under) of a line, rather than left turns and right turns, but the idea is exactly the same.
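For illustration, here is a small Python sketch of this convexity check. It only handles appending a vertex at the end of a clockwise ring, whereas the interface described above allows insertion at an arbitrary position, but the three right-turn tests are the same. The function names are illustrative, not from the article.

```python
def right_turn(a, b, c):
    """True if a -> b -> c makes a clockwise (right) turn."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]) < 0

def can_append(polygon, new_pt):
    """Check that appending new_pt after the last vertex (and before the first,
    closing the ring) keeps every affected vertex triple a right turn."""
    n = len(polygon)
    if n < 2:
        return True
    if n == 2:
        return right_turn(polygon[0], polygon[1], new_pt)
    a, b = polygon[-1], polygon[0]          # new_pt goes between a and b
    before, after = polygon[-2], polygon[1]
    return (right_turn(before, a, new_pt) and
            right_turn(a, new_pt, b) and
            right_turn(new_pt, b, after))
```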
http://www.iro.umontreal.ca/~plante/compGeom/algorithm.html
Goal 2: Geometry, Measurement, and Reasoning

The student will demonstrate the ability to solve mathematical and real-world problems using measurement and geometric models and will justify solutions and explain processes used.

2.1 The student will represent and analyze two- and three-dimensional figures using tools and technology when appropriate.
- 2.1.1 The student will analyze the properties of geometric figures. Essential properties, relationships, and geometric models include the following:
  - congruence and similarity
  - line/segment/plane relationships (parallel, perpendicular, intersecting, bisecting, midpoint, median, altitude)
  - point relationships (collinear, coplanar)
  - angles and angle relationships (vertical, adjacent, complementary, supplementary, obtuse, acute, right, interior, exterior)
  - angle relationships with parallel lines
  - polygons (regular, non-regular, composite, equilateral, equiangular)
  - geometric solids (cones, cylinders, prisms, pyramids, composite figures)
  - circle/sphere (tangent, radius, diameter, chord, secant, central/inscribed angle, inscribed, circumscribed).
- 2.1.2 The student will identify and/or verify properties of geometric figures using the coordinate plane and concepts from algebra.
  - “Verify properties” means to justify solutions using definitions and/or mathematical principles.
  - Properties, relationships, and geometric models include the following: congruence and similarity; line/segment relationships (parallel, perpendicular, intersecting, bisecting, midpoint, median, altitude); point relationships (collinear); angles and angle relationships (obtuse, acute, right); polygons (regular, non-regular, equilateral, equiangular); circle (tangent, radius, diameter, chord).
  - Items for this indicator may be set on the coordinate plane or may just have coordinates identified with no grid.
  - Concepts from algebra include applications of the distance, midpoint, and slope formulas.
- 2.1.3 The student will use transformations to move figures, create designs, and/or demonstrate geometric properties.
  - Transformations include reflections, rotations, translations, and dilations.
  - Items should go beyond the identification of transformations.
  - Essential properties and relationships include the following: congruence, similarity, and symmetry.
  - The student's explanation of a transformation must include the following:
    - translation – distance and direction
    - reflection – line of reflection
    - rotation – center of rotation, angle measure, direction (clockwise or counterclockwise)
    - dilation – center and scale factor
  - Paper folding and the use of Miras™ and mirrors are appropriate methods for performing transformations, and their use must be referenced.
- 2.1.4 The student will construct and/or draw and/or validate properties of geometric figures using appropriate tools and technology.
  - “Validate properties” in this indicator means justifying solutions using definitions, mathematical principles, and/or measurement.
  - Students may use a compass, straightedge, patty paper, a Mira™, and/or a mirror as construction tools. Using a ruler or protractor cannot be part of the strategy.
  - Students may use a compass, ruler, patty paper, a Mira™, a mirror and/or a protractor as drawing tools.
  - It is acceptable to do a construction when the item asks for a drawing.
  - Paper folding and the use of Miras™ and mirrors are appropriate methods for representing, constructing, and/or analyzing figures, and their use must be referenced.
  - Constructions and drawings are limited to the two-dimensional relationships listed in 2.1.1.

2.2 The student will apply geometric properties and relationships to solve problems using tools and technology when appropriate.
- 2.2.1 The student will identify and/or verify congruent and similar figures and/or apply equality or proportionality of their corresponding parts.
  - Students will demonstrate geometric reasoning and justify conclusions. Although the focus is on geometric theory, answers to some items may include a numeric answer.
  - Corresponding measurements include length, angle measure, perimeter, circumference, area, volume, surface area and lateral area.
- 2.2.2 The student will solve problems using two-dimensional figures and/or right-triangle trigonometry.
  - Students will demonstrate geometric reasoning and justify conclusions.
  - Trigonometric functions may be used to find sides or angles.
  - Trigonometric functions will be limited to sine, cosine, and tangent and their inverses.
- 2.2.3 The student will use inductive or deductive reasoning.
  - Students are expected to demonstrate their geometric reasoning and justify conclusions. Although the focus is on geometric theory, answers to some questions may include a numeric answer.
  - Items may include geometric applications, patterns, and logic, including syllogisms.
  - Narrative, flow chart, or two-column proof may be used as a valid argument.

2.3 The student will apply concepts of measurement using tools and technology when appropriate.
- 2.3.1 The student will use algebraic and/or geometric properties to measure indirectly.
  - “Measure indirectly” means to use mathematical concepts such as congruence, similarity, and ratio and proportion to calculate measurements.
  - Similarity and congruence will be directly stated or implied (scale drawings, enlargements).
  - Items may require the student to make comparisons.
  - This indicator may incorporate measuring.
  - This indicator does not include right-triangle trigonometry.
- 2.3.2 The student will use techniques of measurement and will estimate, calculate, and/or compare perimeter, circumference, area, volume, and/or surface area of two- and three-dimensional figures and their parts.
  - Two-dimensional shapes include polygons, circles, and composite figures.
  - Three-dimensional shapes include cubes, prisms, pyramids, cylinders, cones, spheres, and composite figures.
  - Formulas will be provided.
  - No oblique solids will be used.
  - Items may involve applications of geometric properties and relationships.
  - Students may be required to make comparisons which do not require calculations.
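Indicator 2.1.2 points to the distance, midpoint, and slope formulas as the algebra behind coordinate-plane verification. As a rough illustration only (not part of the Maryland framework), here is a small Python sketch of those formulas and the kind of perpendicularity check the indicator describes; the function names are arbitrary.

```python
import math

def distance(p, q):
    """Distance formula: sqrt((x2 - x1)^2 + (y2 - y1)^2)."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

def midpoint(p, q):
    """Midpoint formula: ((x1 + x2) / 2, (y1 + y2) / 2)."""
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

def slope(p, q):
    """Slope formula; returns None for a vertical segment (undefined slope)."""
    if q[0] == p[0]:
        return None
    return (q[1] - p[1]) / (q[0] - p[0])

def right_angle_at(b, a, c):
    """Verify a right angle at b by checking the two legs are perpendicular
    (slopes are negative reciprocals, or one vertical and one horizontal)."""
    m1, m2 = slope(b, a), slope(b, c)
    if m1 is None:
        return m2 == 0
    if m2 is None:
        return m1 == 0
    return math.isclose(m1 * m2, -1.0)
```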
http://www.mdk12.org/instruction/clg/geometry/goal2.html
New England town

The New England town is the basic unit of local government in each of the six New England states. Without a direct counterpart in most other U.S. states, New England towns overlay the entire area of a state, similar to civil townships in other states where they exist, but they are fully functioning incorporated municipalities, possessing powers similar to cities in other states. New England towns are often governed by town meeting. Virtually all corporate municipalities in New England are based on the town model; statutory forms based on the concept of a compact populated place, which is prevalent elsewhere in the U.S., are uncommon. County government in the New England states is typically weak, sometimes even non-existent; for example, Connecticut and Rhode Island retain counties only as geographic subdivisions with no governmental authority, while Massachusetts has abolished eight of fourteen county governments so far.

Characteristics of the New England town system

- Towns are laid out so that all land within the boundaries of a state is allocated to a town or other corporate municipality. Except in some very sparsely populated areas of the three northern New England states (primarily in the interior of Maine), the concept of unincorporated territory, even in rural areas, is unknown. With the exception of those very sparsely populated areas, all land in New England is within the boundaries of a town or other incorporated municipality.
- Towns are municipal corporations, with their powers defined by a corporate charter, state statutes and the state constitution. Although they are technically creatures of the state, the laws regarding their authority have historically been very broadly construed. Thus, in practice, towns have significant autonomy in managing their own affairs, with nearly all the powers that cities in most other states would typically have.
- Traditionally, a town's legislative body is the open town meeting, a form of direct democratic rule, with a board of selectmen possessing executive authority. Only two small Swiss Landsgemeinde remain that are similarly democratic to the small New England town.
- A town almost always contains a built-up populated place (the "town center") with the same name as the town. Additional built-up places with different names are often found within towns, along with a mixture of urban and rural territory. There is no unincorporated territory between the towns; leaving a town means entering another town or other municipality. In most parts of New England, towns are irregular in shape and size and are not laid out on any type of grid (the leading exception is that much of the interior of Maine was originally laid out as surveyed townships). The town center often contains a town common, often used today as a small park.
- Since virtually all residents live within the boundaries of an incorporated municipality, residents receive most local services at the municipal level, and county government tends to be very weak. Differences among states do exist in the level of services provided at the municipal and county level, but generally most functions normally handled by county-level government in the rest of the United States are handled by town-level government in New England. In Connecticut, Rhode Island and most of Massachusetts, county government has been completely abolished, and counties serve merely as dividing lines for the judicial system.
In other areas, some counties provide judicial and other limited administrative services.
- Residents usually identify with their town for purposes of civic identity, thinking of the town in its entirety as a single, coherent community. There are some cases where residents identify more strongly with villages or sections of a town than with the town itself, but this is the exception, not the rule.
- More than 90% of the municipalities in the six New England states are towns. Other forms of municipality that exist (most notably, cities) are generally based on the town concept as well. Most New England cities are towns that have grown too large for a town meeting to be an effective legislative body, leading the residents to adopt a city form with a mayor and council. Municipality forms based on the concept of a compact populated place, such as a village or borough, are uncommon. In areas of New England where such forms do exist, they remain part of the parent town and do not have all of the corporate powers and authority of an independent municipality.

Historical development

Towns date back to the time of the earliest European colonial settlement of New England, and pre-date the development of counties in the region. Throughout the 17th, 18th and 19th centuries, as areas were settled, they would be organized into towns. Town boundaries were not usually laid out on any kind of regular grid, but were drawn up to reflect local settlement and transportation patterns or natural features. In early colonial times, recognition of towns was very informal, sometimes connected to local church divisions. By 1700, colonial governments had become more involved in the official establishment of new towns. Towns were typically governed by a town meeting form of government, as many still are today.

Towns originally were the only form of incorporated municipality in New England; the city form of government was not introduced until much later. Boston, for instance, widely regarded as the unofficial capital of New England, was a town for the first two centuries of its existence. The entire land areas of Connecticut and Rhode Island had been divided into towns by the late 18th century, and Massachusetts was almost completely covered early in the 19th century. By 1850, the only New England state that still had large unincorporated areas left was Maine, and by the end of the 19th century most areas in Maine that could realistically be settled had been organized into towns.

Early town organization in Vermont and much of New Hampshire proceeded in a somewhat different manner from that of the other New England states. In these areas, towns were often “chartered” long before any settlers moved into a particular area. This was very common in the mid to late 18th century (towns in southeastern New Hampshire such as Exeter, whose existence predates that period, were not part of this process, however). Once there were enough residents in a town to formally organize a town government, no further action was necessary to incorporate. This practice can lead to inconsistencies in the dates of incorporation for towns in this region: dates given in reference sources sometimes reflect the date the town was chartered, which may have been long before it was even settled, rather than the date its town government actually became active.
In other parts of New England, it was not unheard of for “future towns” to be laid out along these lines, but such areas would not be formally incorporated as towns until they were sufficiently settled to organize a town government. A typical town in the northern three states was laid out as a square 6 miles (9.7 km) on a side, containing 36 sections of 1 square mile (640 acres; 260 ha) each. One section was reserved for the support of public schools. This was copied when the Continental Congress laid out Ohio in 1785-87.

Many early towns covered very large amounts of land, and once areas had become settled, new towns were sometimes formed by breaking areas away from the original existing towns. This was an especially common practice during the 18th century and early 19th century. More heavily populated areas were often subdivided on multiple occasions; as a result, towns and cities in such areas are often smaller in terms of land area than an average town in a rural area. Formation of new towns in this manner slowed in the later part of the 19th century and early part of the 20th century, however, and has happened nowhere in New England in the last fifty years; in fact, boundary changes of any type are fairly rare.

Other types of municipalities in New England

Although towns are the basic building block of the New England municipality system, several other types of municipalities also exist. Every New England state has cities. In addition, Maine has a unique type of entity called a plantation. Beneath the town level, Connecticut has incorporated boroughs, and Vermont has incorporated villages.

In addition to towns, every New England state also has incorporated cities, which differ from towns only in their form of government, and even that distinction has become somewhat blurred in recent decades. Most cities are former towns that changed to a city form of government because they grew too large to be administered by a town meeting. Cities are typically governed by a mayor (and/or city manager) and city council or other similar arrangement. Cities and towns are regarded as equivalents under both state law and the attitudes of local residents. In common speech, people often generically refer to communities of either type as “towns”, drawing no distinction between the two.

The presence of incorporated boroughs in Connecticut and incorporated villages in Vermont has influenced the evolution of cities in those states. In Connecticut in particular, the historical development of cities was quite different from that in the other New England states, and at least technically the relationship between towns and cities is even today different from elsewhere in New England. Just as boroughs in Connecticut overlay towns, so do cities; for example, while Hartford is commonly thought of as a city, it is coextensive and consolidated with the Town of Hartford, governed by a single governmental entity, with the powers and responsibilities of the Town being carried out by the entity referred to as the City. In legal theory, though not in current practice, Connecticut cities and boroughs could be coextensive (covering the same geography as the town) without being consolidated (a single government); also, a borough or city can span more than one town. In practice, though, most cities in Connecticut today do not function any differently from their counterparts elsewhere in New England. See the section below on boroughs and villages for more background on this topic.
There are far fewer cities in New England than there are towns, although cities are more common in heavily built-up areas, and most of the largest municipalities in the region are titled as cities. Across New England as a whole, only about 5% of all incorporated municipalities are cities. Cities are more common in the three southern New England states, which are much more densely populated, than in the three northern New England states.

In early colonial times, all incorporated municipalities in New England were towns; there were no cities. Springfield, Massachusetts, for instance, was settled as a "plantation" (in colonial Massachusetts, the term was synonymous with town) as early as 1636, but the city of Springfield was not established until 1852. The oldest cities in New England date to the last few decades of the 18th century (New Haven, Connecticut, for example, was chartered as a city in 1784). In New England, cities were not widespread until well into the 19th century. New Hampshire did not have any cities until the 1840s, and for many years prior to the 1860s Vermont had just one city. Even Massachusetts, historically New England's most populous state, did not have any cities until 1822, when Boston was granted a city form of government by the state legislature.

Population does not determine whether a New England municipality is a city or a town, and there are many examples of towns with larger populations than nearby cities. The practical threshold to become a city seems to be higher in the three southern New England states than in the three northern ones. In Massachusetts, Connecticut and Rhode Island, every city has at least 10,000 people, and there are only a few that have fewer than 20,000. In Maine, New Hampshire and Vermont, there are a number of cities with fewer than 10,000 people, even a couple with fewer than 5,000.

Over time, some of the distinctions between a town and a city have become blurred. Since the early 20th century, towns have been allowed to modify the town meeting form of government in various ways (e.g., representative town meeting, adding a town manager). In recent decades, some towns have adopted what effectively amount to city forms of government, although they still refer to themselves as towns. As a practical matter, one municipality that calls itself a town and another that calls itself a city may have exactly the same governmental structure. With these changes in town government, a reluctance to adopt the title of city seems to have developed, and few towns have officially done so since the early 20th century. In Massachusetts, 13 municipalities (Agawam, Amesbury, Barnstable, Braintree, Easthampton, Franklin, Greenfield, Palmer, Randolph, Southbridge, Watertown, West Springfield and Weymouth) have adopted Mayor-Council or Council-Manager forms of government in their home rule charters, but nevertheless continue to call themselves "towns," although they are legally considered to be cities by the Secretary of the Commonwealth's office and are sometimes referred to in legislation and other legal documents as "the city known as the Town of ..." To an extent, whether or not a community is labeled a city is related more to how large it was relative to the general population a century ago than to how large its population is today.

In addition to towns and cities, Maine has a third type of town-like municipality not found in any other New England state, the plantation.
A plantation is, in essence, a town-like community that does not have enough population to require full town government or services. Plantations are organized at the county level, and are typically found in sparsely populated areas. There is no bright-line population divider between a town and a plantation, but no plantation currently has any more than about 300 residents. Plantations are considered to be “organized” but not “incorporated”. Not all counties have them; in some southern counties, all territory is sufficiently populated to be covered by a town or a city. In colonial times, Massachusetts also used the term “plantation” for a community in a pre-town stage of development (Maine originally got the term from Massachusetts, as Maine was part of Massachusetts until 1820, when it became a state via the Missouri Compromise). The term plantation had not been much used in Massachusetts since the 18th century. Massachusetts also once had “districts”, which served much the same purpose. They were considered to be incorporated, but lacked the full privileges of a town. Maine and Rhode Island are also known to have made limited use of the district concept. Districts have not been at all common since the first half of the 19th century, and there have not been any districts anywhere in New England in over a century. Maine is the only New England state that currently has a significant amount of territory that is not sufficiently populated to support town governments, thus the only New England state that still has a need for the plantation type of municipality. For a historical example in New Hampshire, see Plantation number four. Boroughs and villages Perhaps because the towns themselves are such strong entities, most areas of New England never developed municipal forms based on the compact populated place concept. This contrasts with states with civil townships, which typically have extensive networks of villages or boroughs that carve out or overlay the townships. Two of the New England states do have general-purpose municipalities of this type, however, to at least a limited extent. Connecticut has incorporated boroughs, and Vermont has incorporated villages. Such areas remain a part of their parent town, but assume some responsibilities for municipal services within their boundaries. In both states, they are typically regarded as less important than towns, and both seem to be in decline as institutions. In recent decades, many boroughs and villages have disincorporated, reverting to full town control. The term “village” is sometimes used in New England to describe a distinct, built-up place within a town or city. This may be a town center, which bears the same name as the town or city (almost every town has such a place), or a name related to that of the town, or a completely unrelated name. The town of Barnstable, Massachusetts, for example, includes “villages” called Barnstable, West Barnstable, and Hyannis. Except for the incorporated villages in Vermont, these “villages” are not incorporated municipalities and should not be understood as such. Towns do sometimes grant a certain measure of recognition to such areas, using highway signs that identify them as "villages", for example. 
These informal "villages" also sometimes correspond to underlying special-purpose districts such as fire or water districts, which are separately incorporated quasi-municipal entities that provide specific services within a part of a town (in Maine and New Hampshire, the term "village corporation" is used for a type of special-purpose district). Many villages also are recognized as places by the United States Postal Service (some villages have their own post offices, with their names used in mailing addresses) or the United States Census Bureau (which recognizes some villages as census-designated places and tabulates census data for them). For an example of the latter, see West Kennebunk, Maine, which is a constituent part of the town of Kennebunk, Maine. But they have no existence as general-purpose municipalities separate from the town (if they even have any legal existence at all), and are usually regarded by local residents as a part of the town in which they are located, less important than the whole. It is possible for a Connecticut borough or Vermont village to become a city. In Connecticut, cities overlay towns just as boroughs do, and, just like a borough, a city can cover only a portion of a town rather than being coextensive with the town. This is rare today—only one or two examples remain—but it was more common in the past. At least one borough historically spanned more than one town: the Borough of Danielsonville originally laid over parts of Killingly and Brooklyn, until the Brooklyn portion petitioned to be reorganized as a fire district and concurrently the Killingly portion was renamed Danielson by the General Assembly. There are no legal restrictions in Connecticut that would prevent a city or borough today from similarly overlaying the territory of more than one town provided it is not consolidated with one of the underlying towns. Cities actually developed earlier in Connecticut than in the other New England states, and were originally based on the borough concept. At one time, all cities were non-coextensive; the practice of making cities coextensive with their towns was a later adaptation intended to mimic the city concept that had emerged in the other New England states. Over time, many non-coextensive cities have expanded to become coextensive with their parent town. As with boroughs, many have also disincorporated and reverted to full town control. These two trends have combined to make non-coextensive cities very rare in recent times. In Vermont, if a village becomes a city, it does not continue to overlay its parent town, but breaks away and becomes a completely separate municipality. Most cities in Vermont today are actually former villages rather than former towns, and are much smaller than a typical town in terms of land area. The above process has created several instances where there are adjacent towns and cities with the same name. In all cases, the city was originally the “town center” of the town, but later incorporated as a city and became a separate municipality. Unorganized territory All three of the northern New England states (Vermont, New Hampshire and Maine) contain some areas that are unincorporated and unorganized, not part of any town, city or plantation. Maine has significantly more such area than the other two states. While these areas do exist, their importance should not be overstated. 
They are certainly the exception rather than the rule in the New England system, and the number of New England residents who live in them is extremely small in comparison to those who live in towns and cities, even in Maine. Most such areas are located in very sparsely populated regions. Much of the barely inhabited interior of Maine is unorganized, for example. The majority of the unincorporated areas in New Hampshire are in Coos County, and the majority of the unincorporated areas in Vermont are in Essex County. Two additional counties in New Hampshire and three additional counties in Vermont contain smaller amounts of unincorporated territory. In Maine, eight of the state’s sixteen counties contain significant amounts of unorganized territory (in essence, those counties in the northern and interior parts of the state). Four other counties contain smaller amounts. Most of these areas have no local government at all; indeed, some have no permanent population whatsoever. Some areas have a very rudimentary organization that does not rise to the level of an organized general-purpose municipal government (e.g., a town clerk’s office exists for the purpose of conducting elections for state or federal offices). In general, unorganized areas fall into one of the three categories below. Gores and similar entities During the 17th, 18th and 19th centuries, as town boundaries were being drawn up, small areas would sometimes be left over, not included in any town. Typically smaller than a normal-sized town, these areas were known by a variety of names, including gores, grants, locations, purchases, surpluses, and strips. Sometimes these areas were not included in any town due to survey errors (which is the technical meaning of the term “gore”). Sometimes they represent small areas that were left over when a particular region was carved into towns, not large enough to be a town on their own. Some appear to have simply been granted outside the usual town structure, sometimes in areas where it was probably not contemplated that towns would ever develop. Over time, those located in more populated areas were, in general, annexed to neighboring towns, or incorporated as towns in their own right. No such areas exist today in Massachusetts, Connecticut or Rhode Island, but some remain in New Hampshire, Vermont and Maine. - New Hampshire: Coos County contains a total of seventeen grants, purchases and locations. Together, these cover a significant amount of land area, but had only 61 residents as of the 2000 Census (44 of whom lived in a single entity, Wentworth's Location). The only remaining unincorporated gore-like entity outside of Coos County is Hale’s Location, in neighboring Carroll County, a 2.5-square-mile (6.5 km2) tract, which has reported population in only three censuses since 1900. (Note that Hart's Location, also in Carroll County, was incorporated as a town in 2001, although it continues to carry the word “location" in its name. Wentworth's Location was similarly incorporated as a town at one time.) - Vermont: Essex County contains three gores and grants. Together, they cover about 25 square miles (65 km2), and reported 10 residents in the 2000 Census. The only remaining unincorporated gore-like entity outside of Essex County is Buel's Gore, in Chittenden County, a 5-square-mile (13 km2) tract, which reported 12 residents in 2000. Up until the 1960s or 1970s, Franklin County contained a gore as well, which was ultimately eliminated by dividing it between two neighboring towns. 
- Maine: the interior of the state contains a number of entities of this type. There are a few remaining in more populated areas of the state as well. Examples include Hibberts Gore, in Lincoln County, and Batchelders Grant, in southern Oxford County. Unorganized townships All three of the northern New England states contain some town-sized unorganized entities, referred to as "unorganized townships" (sometimes, just "townships") or "unorganized towns". Most of these are areas that were drawn up on maps in the 18th and 19th centuries as what might be termed “future towns”, but never saw enough settlement to actually commence operation of a formal town government. - New Hampshire: Coös County contains six unorganized townships that do not appear to have ever been actively incorporated. Their collective population in the 2000 Census was 114, most of whom lived in one of two townships (Dixville and Millsfield). There are no other unorganized townships in the state that have never been incorporated. - Vermont: Essex County contains three unorganized townships that do not appear to have ever been actively incorporated. Their collective population in the 2000 Census was 41. There are no other unorganized townships in the state that have never been incorporated. - Maine: the interior of Maine contains hundreds of unorganized townships, most of which have never been incorporated or organized. Much of the interior of Maine is divided into surveyed townships that are identified only by letters and numbers that indicate their position on a grid. These were probably never seriously intended to ever become towns. Disincorporated towns All three of the northern New England states also include at least one unorganized township that was once a town, but has disincorporated and reverted to unorganized territory, in general, due to population loss. Maine also has some unorganized townships that were once organized as plantations. - New Hampshire: The town of Livermore, located in a mountainous area of Grafton County, disincorporated in 1951. Livermore reported no population in its final census as an incorporated town (1950), and has reported no more than three residents in any census since then. Most of its territory is now part of White Mountain National Forest. Since it was once incorporated as a town, Wentworth’s Location could also be put into this category as well. Wentworth’s Location disincorporated in 1966; its population in the 1970 Census was 37. - Vermont: The towns of Glastenbury and Somerset, located in the Green Mountains on opposite sides of the Bennington-Windham County line, disincorporated in 1937. In the 1940 Census, Glastenbury reported five residents, Somerset four. In only one census since then has the population of either reached double digits. - Maine: Dozens of towns and plantations have surrendered their municipal organization over the years and reverted to unorganized territory. An especially large number of municipal dissolutions took place between 1935 and 1945, but some have also occurred before and after that time period. Recent town disincorporations include Centerville (2004), Madrid (2000) and Greenfield (1993). The most recent plantations to surrender their organization were Prentiss Plantation and E Plantation, both in 1990. Maine has significantly more unorganized territory than Vermont or New Hampshire. Fewer than 100 Vermont residents and fewer than 250 New Hampshire residents live in unorganized areas. 
In Maine, by contrast, about 10,000 residents live in unorganized areas. As a result, Maine has developed more of an infrastructure for administration of unincorporated and unorganized areas than the other New England states. The existence of this fallback probably explains why Maine has had significantly more towns disincorporated over the years than any other New England state. There have been numerous instances of towns in Maine disincorporating despite populations that numbered in the hundreds. While these were not large communities, they were large enough to realistically operate a town government if they wanted to, but simply elected not to. In Vermont and New Hampshire, disincorporation has, in general, not been brought up for discussion unless a town’s population has approached single digits. Coastal waters In general, coastal waters in the New England states are administered directly by either state or federal agencies and are not part of any town. Several towns, however, have chosen to include all or part of their corresponding coastal waters in their territory. Coastal waters include man-made structures built within them. In Connecticut, for example, an artificial, uninhabited island in Long Island Sound at the boundary with New York State, housing the Stratford Shoal Light, is not part of any town and is administered directly by the United States Coast Guard. In general, inhabited minor off-shore islands are administered as part of a nearby town, and, in some cases, are their own independent towns, such as the town of Nantucket, in Massachusetts. Census treatment of the New England town system ||This section may contain original research. (November 2008)| Unlike municipalities in most other states, the United States Census Bureau does not classify New England towns as "incorporated places". They are instead classified as "minor civil divisions" (MCDs), the same category into which civil townships fall. The Census Bureau classifies New England towns in this manner because they are conceptually similar to civil townships from a geographic standpoint, typically exhibiting like population-distribution patterns. Like civil townships, but unlike most incorporated municipalities in other states, New England towns do not usually represent a single compact populated place. Plantations in Maine are similarly classified as MCDs. That New England towns serve, in essence, the same function as incorporated places in other states, but are not treated as incorporated places by the Census Bureau, can be a source of confusion. The Census classifications should not be understood to imply that New England towns are not incorporated, or necessarily serve a similar purpose to MCDs in other states in terms of governmental function or civic-identity importance. New England towns are classified as MCDs not because they are not "incorporated" but because, in Census terms, they are not "places". New England metropolitan areas are grouped by towns, while in other regions they are grouped by counties. Even though the Census Bureau does not treat New England towns as "incorporated places", it does classify cities in New England as such. The rationale behind this is that cities are likely to be more thoroughly built-up and therefore more readily comparable to cities in other states than towns are. Boroughs in Connecticut and incorporated villages in Vermont are also treated as incorporated places. 
That New England states, in general, regard cities and towns on equal footing, yet they are handled in two different ways by the Census Bureau, can be another source of confusion. The Census classifications should not be understood to imply that cities are incorporated but towns are not, or that cities and towns represent two fundamentally different types of entities. Note that the Census classifies New England municipalities strictly based on whether they are towns or cities, with no regard to the actual population-distribution pattern in a particular municipality. All municipalities titled as cities are classified as incorporated places, even if their population-distribution pattern is no different from that of a typical town; towns are never classified as incorporated places, even if they are thoroughly built-up. The ambiguity over whether certain municipalities in Massachusetts should be classified as cities or towns, and the Census Bureau's inconsistent handling of these municipalities (see the Statistics and Superlatives section below), further blurs matters. Census-designated places To fill in some of the "place" data, the Census Bureau sometimes recognizes census-designated places (CDPs) within New England towns. These often correspond to town centers or other villages, although not all such areas are recognized as CDPs. In cases where a town is entirely or almost entirely built-up, the Census sometimes recognizes a CDP which is coextensive with the entire town. CDPs are only recognized within towns, not cities. Because the primary role of CDPs is to establish "place" data for communities located in unincorporated areas, a CDP cannot be within an incorporated place. Since the Census Bureau recognizes New England cities as incorporated places, a CDP cannot be within a city. Data users from outside New England should be aware that New Englanders usually think in terms of entire towns (i.e., MCD data), making CDP data of marginal local interest. Since virtually all territory in New England outside of Maine is incorporated, CDPs do not really serve the same purpose as they do elsewhere; CDPs in New England invariably represent territory that is not "unincorporated", but part of a larger incorporated town. The extent to which such an area has its own distinct identity can vary, but is not usually as strong as identification with the town as a whole. There are numerous instances where the Census Bureau recognizes the built-up area around a town center as a CDP, resulting in a CDP that bears the same name as the town. In these cases, data for the CDP is, in general, meaningless to local residents, who seldom draw any particular distinction between the built-up area around the town center and outlying areas of the town. A local source citing data for such a community will almost always use the data for the entire town, not the CDP. At the same time, not all built-up places of significant population are recognized as CDPs. The Census Bureau has historically recognized relatively few CDPs within urbanized areas in particular. Many towns located in such areas do not contain any recognized CDPs, and will thus be completely absent from Census materials presenting population of “places”. Greenwich, Connecticut is one prominent example. While the Town of Greenwich appears in MCD materials, the Census Bureau does not recognize Greenwich as a "place". Unorganized areas ||This section may contain original research. 
(May 2012)| In New Hampshire and Vermont, the Census Bureau treats each individual unorganized entity (township, gore, grant, etc.) as an MCD. In Maine, it seems, due to the extent of unorganized area, the Census Bureau typically lumps contiguous townships, gores, and the like together into larger units called "unorganized territories" (UTs), which are then treated as MCDs. In a few cases in Maine where a township or gore does not border any other unorganized land, it is treated as its own MCD rather than being folded into a larger UT. In theory, a CDP could probably be defined within an MCD representing an unorganized area. Due to the extremely sparse population in most such areas, however, there are few if any cases in which the Census Bureau has actually done so. List of New England towns For a list of all New England towns and other town-level municipalities, see List of New England towns. That page also includes links to historical census population statistics for New England towns. Note: all population statistics are from the 2010 United States Census. Massachusetts contains 351 municipal corporations, consisting of cities and towns. These 351 municipalities together encompass the entire territory of Massachusetts; there is no area that is outside the bounds of a municipality. Using usual American terminology, there is no "unincorporated" land in Massachusetts. Of the 351 municipalities, the number that are cities and the number that are towns is a matter of some ambiguity. Depending on which source is consulted, anywhere from 39 to 53 are cities. The ambiguity is the result of questions around the legal status of municipalities that have since the 1970s, through home-rule petition, adopted corporate charters approved by the state legislature with forms of government that resemble city government and do not include elements traditionally associated with town government (especially, a board of selectmen and a town meeting). Of the fourteen communities that have done so, all but three call themselves a "town" in their municipal operations, and are usually referred to by residents as "towns", but the Massachusetts Secretary of the Commonwealth's Office considers all fourteen to be legally cities. Other sources within state government often refer to all fourteen municipalities as towns, however. The U.S. federal Census Bureau listed all as towns through the 1990 Census. For the 2000 Census, some were listed by the Federal government as towns and some as cities, a situation that continues in Census materials since 2000. Massachusetts appears to be the only New England state where this issue has arisen, though other New England states also have municipalities that have adopted what amounts to city forms of government but continue to call themselves "towns". In the other New England states, it does not appear that any need to officially label such municipalities as "cities" has been identified. For purposes of determining the "largest town", "smallest city", in this article, only the 42 municipalities that title themselves as cities are recognized as cities. This includes the 39 cities that adopted city forms of government through pre-home rule procedures. The other 309 municipalities in the state are treated as towns below. The same classification is used for identifying Massachusetts cities on the list of New England towns and its attendant pages with historical census population statistics. - The largest municipality in Massachusetts by population is the city of Boston (pop. 617,594). 
- The smallest that is a city and not a town is Palmer (pop. 12,140). - The largest that is a town and not a city is Framingham (pop. 68,318). - The smallest overall is the town of Gosnold (pop. 75). - The largest municipality by land area is the town of Plymouth (96 square miles (250 km2)). - The smallest town by area is the town of Nahant (1.24 square miles (3.2 km2)). Rhode Island Rhode Island contains 39 incorporated towns and cities. Eight are cities and 31 are towns. These 39 municipalities together cover the entire state; there is no unincorporated territory. - The largest municipality in Rhode Island, by population, is the city of Providence (pop. 178,042). - The largest that is a town and not a city is Coventry (pop. 35,014). - The smallest that is a city and not a town is Central Falls (pop. 19,376). - The smallest overall is the town of New Shoreham (pop. 1,051). - The largest municipality by land area is Coventry (59 square miles (150 km2)). - The smallest is Central Falls (1.21 square miles (3.1 km2)). Connecticut contains 169 incorporated towns. Put into terms that are equivalent to the other New England states, 20 are cities/boroughs and 149 are towns. (As discussed in the Cities section of Other types of municipalities in New England above, the relationship between towns and cities in Connecticut is different from the other New England states, at least on paper; thus, in the technical sense, all 169 of the above municipalities are really towns, with 20 overlaid by a coextensive city or borough of the same name). Together, these 169 municipalities cover the entire state. There is no unincorporated territory, but, as in all New England states, there are a fair number of unincorporated, named communities that lie within the incorporated territory of a municipality. Connecticut is one of two New England states to have any type of incorporated general-purpose municipality below the town level, namely incorporated boroughs (Vermont has incorporated villages). There are nine remaining in the state. They were once more numerous. Many of those that remain are very small. Connecticut also has at least one remaining city (Groton) that is within, but not coextensive with, its parent town. A second non-coextensive city, Winsted, still exists on paper, but its government has been consolidated with that of the town of Winchester for many years, making it more of a special-purpose district than a true municipality. Winsted is no longer recognized by the Census Bureau as an incorporated place, although data is tabulated for a Census Designated Place that is coextensive with that of the original city. - The largest municipality in Connecticut, by population, is the city of Bridgeport (pop. 144,229). - The largest that is a town and not a city is West Hartford (pop. 63,268). - The smallest that is a city and not a town, only including cities that are coextensive with their towns, is Derby (pop. 12,902), density 2,507/sq mi. The city-within-a-town of Groton, however, is smaller (pop. 10,389), and to the extent that Winsted is recognized as a non-coextensive city, it is even smaller than Groton is (pop. 7,712). - The smallest town is Union (pop. 854). - The largest municipality by land area is the town of New Milford (61.6 square miles (160 km2)). - The smallest town-level municipality is Derby (4.98 square miles (12.9 km2)). New Hampshire New Hampshire contains 234 incorporated towns and cities. Thirteen are cities and 221 are towns. 
These 234 municipalities together cover the vast majority of, but not all of, the state's territory. There are some unincorporated areas in the sparsely populated northern region of the state. Most of the unincorporated areas are in Coos County, the state's northernmost county. Carroll and Grafton counties also contain smaller amounts of unincorporated territory. This territory includes seven unincorporated townships and an assortment of gores, grants, purchases, and locations. The remaining seven counties in the state are entirely incorporated (Grafton County was also fully incorporated at one time, but lost that status when one of its towns disincorporated). Fewer than 250 of the state's residents live in unincorporated areas. - The largest municipality in New Hampshire, by population, is the city of Manchester (pop. 109,565). - The largest that is a town and not a city is Derry (pop. 33,109). - The smallest that is a city and not a town is Franklin (pop. 8,477). - The smallest incorporated municipality overall is the town of Hart's Location (pop. 41), which, despite its name, is an incorporated town. - The largest municipality by land area is the town of Pittsburg (282 square miles (730 km2)). - The smallest is the town of New Castle (0.83 square miles (2.1 km2)). Vermont contains 246 incorporated towns and cities, which together cover nearly all of the state's territory. Nine are cities and 237 are towns. There are some unincorporated areas in the sparsely populated mountainous regions of the state. Most of the unincorporated areas are in Essex County, in the northeastern part of the state. Bennington, Windham and Chittenden counties also contain smaller amounts of unincorporated territory. This territory includes five unincorporated townships and a handful of gores and grants. The remaining ten counties in the state are entirely incorporated (Bennington and Windham counties were also fully incorporated at one time, but lost that status when a town disincorporated). Fewer than 100 of the state's residents live in unincorporated areas. Vermont is one of two New England states to have any type of incorporated general-purpose municipality below the town level, namely incorporated villages (Connecticut has incorporated boroughs). There are about 40 in the state. There were once nearly double that number. Most of those that remain are very small. - The largest municipality in Vermont, by population, is the city of Burlington (pop. 42,417). - The largest which is a town and not a city is Essex (pop. 19,587). - The smallest which is a city and not a town is Vergennes (pop. 2,588). - The smallest incorporated town is Victory (pop. 62). - The largest municipality by land area is the town of Chittenden (73 square miles (190 km2)). - The smallest town-level municipality is the city of Winooski (1.43 square miles (3.7 km2)). Maine contains 488 organized municipalities of which 23 are incorporated as cities, 431 are incorporated as towns, and the remaining 34 are organized as plantations. These 488 organized municipalities together cover much of, but not all of, the state's territory. Of Maine's sixteen counties, only four are entirely incorporated. Four other counties are almost entirely incorporated, but include small amounts of unincorporated/unorganized territory (three of these four counties were entirely incorporated or organized at one time, but lost that status when a town disincorporated or a plantation surrendered its organization). 
The remaining eight counties contain significant amounts of unincorporated/unorganized territory. Most of these areas are in very sparsely populated regions, however. Only about 1.3% of the state's population lives in areas not part of a town, city, or plantation. (Since the 2000 Census, two towns, Madrid and Centerville, have disincorporated. Thus, at the time of the 2000 Census, Maine had 22 cities, 434 towns, and 34 plantations, for a total of 490 organized municipalities. Also, since the 2010 Census, Sanford adopted a new charter that included designation as a city.) - The largest municipality in Maine, by population, is the city of Portland (pop. 66,194). - The largest that is a town and not a city is Brunswick (pop. 20,278). - The smallest that is a city and not a town is Eastport (pop. 1,331). - The smallest town is Frye Island, a resort town that reported a year-round population of 5 in the 2010 Census. - The smallest town aside from Frye Island is Beddington (pop. 50). (At the time of the 2000 Census, the smallest town aside from Frye Island was Centerville (pop. 26), but Centerville disincorporated in 2004.) - The largest municipality by land area is the town of Allagash (128 square miles (330 km2)). - The smallest is the island plantation of Monhegan (0.86 square miles (2.2 km2)). See also - New England City and Town Area - U.S. Census statistical area and terminology for metropolitan areas using New England towns as building blocks, rather than counties - Unincorporated community (New Jersey) - a concept for named localities within towns that are not separately incorporated, similar to a "village" in New England - "Connecticut State Register and Manual, Section VI: Counties". Connecticut Secretary of the State. Retrieved 2010-01-23. "THERE ARE NO COUNTY SEATS IN CONNECTICUT. County government was abolished effective October 1, 1960; counties continue only as geographical subdivisions." - "Facts & History". Retrieved 2010-01-23. "Rhode Island has no county government. It is divided into 39 municipalities, each having its own form of local government." - "Historical Data Relating to the Incorporation of and Abolishment of Counties in the Commonwealth of Massachusetts". Massachusetts Secretary of the Commonwealth. Retrieved 2010-01-23. - Joseph Francis Zimmerman (1999). The New England Town Meeting: Democracy in Action. Retrieved 2010-11-02. "The only other currently assembled voters' law-making body is the Swiss Landsgemeinde in the half-cantons of Appenzell Inner-Rhoden and Out-Rhoden, Nidwalden, Obwalden, and the canton of Glarus, where the traditional annual open-air meeting of voters is held to decide issues." - Morison, Samuel Eliot (1972). The Oxford History of the American People. New York City: Mentor. pp. 388–9. ISBN 0-451-62600-1. - See Villages and Cities, Vermont Secretary of State (No date). Retrieved February 22, 2008. - Massachusetts Department of Housing and Community Development. Massachusetts communities operating under home rule charters (prepared and adopted under provisions of the Home Rule Amendment and M.G.L., c. 43B) - For a complete list of the forms of government of all cities and towns in Massachusetts, see 2008-09 Massachusetts Municipal Directory, Massachusetts Municipal Association. pp 178-181. http://www.mma.org/component/docman/doc_download/26-form-of-government-for-each-community-in-massachusetts - Massachusetts Cities and Towns Secretary of the Commonwealth of Massachusetts. Retrieved January 14, 2007. 
As of 2005, the Massachusetts state government specifically identifies eleven municipalities as cities that call themselves "Town of ____." Those municipalities are: Agawam, Amesbury, Barnstable, Easthampton, Franklin, Greenfield, Methuen, Southbridge, Watertown, West Springfield, and Weymouth. See also Secretary of the Commonwealth: A Listing of Counties and the Cities and Towns Within, which indicates that there are 301 towns and 50 cities, and again specifying the eleven cities that call themselves "Town of ___" and which also indicates that the courts recognize the city attribution for those eleven municipalities. Of these, Easthampton, Greenfield, and Methuen as of 2009 call themselves as cities, and since 2005, the municipalities of Braintree, Palmer, and Winthrop have adopted city forms of government. - U.S. Census Bureau, Geographic Areas Reference Manual, Chap. 8 - R.A. Ferry, "A short directory of the names, past and current of Connecticut boroughs", (Connecticut Ancestry Society, 1996) - See Village (Vermont) - List of Incorporated Villages Vermont State Archives. Vermont Secretary of State. (No Date) (Retrieved February 22, 2008.) - J.A. Fairlee, Local government in counties, towns, and villages, (The Century Co., New York, 1906), Chap. 8 (online version) - R.E. Murphy, "Town Structure and Urban Concepts in New England," The Professional Geographer 16, 1 (1964). - J.S. Garland, New England town law : a digest of statutes and decisions concerning towns and town officers, (Boston, Mass., 1906), pp. 1–83. (online version) - A. Green, New England's gift to the nation—the township.: An oration, (Angell, Burlingame & Co., Providence, 1875) (online version) - J. Parker, The origin, organization, and influence of the towns of New England : a paper read before the Massachusetts Historical Society, December 14, 1865, (Cambridge, 1867) (online version) - S. Whiting, The Connecticut town-officer, Part I: The powers and duties of towns, as set forth in the statutes of Connecticut, which are recited, (Danbury, 1814), pp. 7–97 (online version) - Census Bureau Geographic Area Reference Manual, Chapter 8 This document indicates that the US Census distinguishes between New England towns and Midwestern townships while including them in the same statistical category.
Division by zero

In mathematics, division by zero is division where the divisor (denominator) is zero. Such a division can be formally expressed as a/0, where a is the dividend (numerator). Whether this expression can be assigned a well-defined value depends upon the mathematical setting. In ordinary (real number) arithmetic, the expression has no meaning, as there is no number which, multiplied by 0, gives a (for a ≠ 0), and so division by zero is undefined. Since any number multiplied by zero is zero, the expression 0/0 also has no defined value and is called an indeterminate form. Historically, one of the earliest recorded references to the mathematical impossibility of assigning a value to a/0 is contained in George Berkeley's criticism of infinitesimal calculus in The Analyst ("ghosts of departed quantities").

In computing, a program error may lead to an attempt to divide a number by zero. Depending on the programming environment and the type of number (e.g. floating point, integer) being divided by zero, it may generate positive or negative infinity under the IEEE 754 floating-point standard, generate an exception, generate an error message, cause the program to terminate, or result in a special not-a-number value.

In elementary arithmetic

When division is explained at the elementary arithmetic level, it is often considered as splitting a set of objects into equal parts. As an example, consider having ten cookies, and these cookies are to be distributed equally to five people at a table. Each person would receive 10/5 = 2 cookies. Similarly, if there are ten cookies and only one person at the table, that person would receive 10/1 = 10 cookies. So for dividing by zero – what is the number of cookies that each person receives when 10 cookies are evenly distributed amongst 0 people at a table? Certain words can be pinpointed in the question to highlight the problem. The problem with this question is the "when". There is no way to evenly distribute 10 cookies to nobody. In mathematical jargon, a set of 10 items cannot be partitioned into 0 subsets. So 10/0, at least in elementary arithmetic, is said to be either meaningless or undefined.

Similar problems occur if one has 0 cookies and 0 people, but this time the problem is in the phrase "the number". A partition is possible (of a set with 0 elements into 0 parts), but since the partition has 0 parts, vacuously every set in our partition has a given number of elements, be it 0, 2, 5, or 1000. If there are, say, 5 cookies and 2 people, the problem is in "evenly distribute". In any integer partition of a 5-set into 2 parts, one of the parts of the partition will have more elements than the other. But the problem with 5 cookies and 2 people can be solved by cutting one cookie in half. The problem with 5 cookies and 0 people cannot be solved in any way that preserves the meaning of "divides".

Another way of looking at division by zero is that division can always be checked using multiplication. Considering the 10/0 example above, set x = 10/0. If x equals ten divided by zero, then x times zero equals ten, but there is no x that, multiplied by zero, gives ten (or any other number than zero). If instead of x = 10/0 we have x = 0/0, then every x satisfies the question "what number x, multiplied by zero, gives zero?"
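The "check by multiplication" argument, and the exception-raising behaviour mentioned for computing environments, can both be illustrated with a minimal sketch. Python is our choice of illustration language here; the article itself does not prescribe any tool.

```python
# Check-by-multiplication: no candidate x satisfies x * 0 == 10,
# while every candidate satisfies x * 0 == 0 (hence 0/0 has no unique value).
candidates = [0, 1, 2, 2.5, -7, 1000]
print([x for x in candidates if x * 0 == 10])  # [] : nothing works for 10/0
print([x for x in candidates if x * 0 == 0])   # all of them : 0/0 is indeterminate

# Python is one of the languages that forbids the operation outright
# and raises an exception rather than returning infinity or NaN.
try:
    10 / 0
except ZeroDivisionError as err:
    print("10 / 0 raises ZeroDivisionError:", err)

try:
    0 / 0
except ZeroDivisionError as err:
    print("0 / 0 raises ZeroDivisionError:", err)
```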
Early attempts

The Brahmasphutasiddhanta of Brahmagupta (598–668) is the earliest known text to treat zero as a number in its own right and to define operations involving zero. The author failed, however, in his attempt to explain division by zero: his definition can easily be shown to lead to algebraic absurdities. According to Brahmagupta: "A positive or negative number when divided by zero is a fraction with the zero as denominator. Zero divided by a negative or positive number is either zero or is expressed as a fraction with zero as numerator and the finite quantity as denominator. Zero divided by zero is zero."

Bhaskara II (1114–1185) tried to solve the problem by defining (in modern notation) n/0 = ∞. This definition makes some sense, as discussed below, but can lead to paradoxes if not treated carefully. These paradoxes were not treated until modern times.

In algebra

It is generally regarded among mathematicians that a natural way to interpret division by zero is to first define division in terms of other arithmetic operations. Under the standard rules for arithmetic on integers, rational numbers, real numbers, and complex numbers, division by zero is undefined. Division by zero must be left undefined in any mathematical system that obeys the axioms of a field. The reason is that division is defined to be the inverse operation of multiplication. This means that the value of a/b is the solution x of the equation bx = a whenever such a value exists and is unique. Otherwise the value is left undefined. For b = 0, the equation bx = a can be rewritten as 0x = a or simply 0 = a. Thus, in this case, the equation bx = a has no solution if a is not equal to 0, and has any x as a solution if a equals 0. In either case, there is no unique value, so a/0 is undefined. Conversely, in a field, the expression a/b is always defined if b is not equal to zero.

Division as the inverse of multiplication

The concept that explains division in algebra is that it is the inverse of multiplication. For example, 6/3 = 2, since 2 is the value for which the unknown quantity x in x × 3 = 6 is true. But the expression 6/0 requires a value to be found for the unknown quantity x in x × 0 = 6. Any number multiplied by 0 is 0, so there is no number that solves the equation. The expression 0/0 requires a value to be found for the unknown quantity x in x × 0 = 0. Again, any number multiplied by 0 is 0, so this time every number solves the equation, instead of there being a single number that can be taken as the value of 0/0. In general, a single value cannot be assigned to a fraction where the denominator is 0, so the value remains undefined (see below for other treatments). 0/0 is known as indeterminate.

Fallacies based on division by zero

With the following assumptions: 0 × 1 = 0 and 0 × 2 = 0, the following must be true: 0 × 1 = 0 × 2. Dividing both sides by zero would then give 1 = 2. The fallacy is the implicit assumption that dividing by 0 is a legitimate operation with the same properties as dividing by any other number.

In calculus

Extended real line

At first glance it seems possible to define a/0 by considering the limit of a/b as b approaches 0. For any positive a, the limit of a/b as b approaches 0 from the right is +∞; however, the limit from the left is −∞, and so the two-sided limit is undefined (the limit is also undefined for negative a). Furthermore, there is no obvious definition of 0/0 that can be derived from considering the limit of a ratio: the limit does not exist.
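The limit behaviour just described is easy to reproduce with a computer algebra system. The following sketch uses SymPy in Python, which is our own choice of tool for illustration:

```python
from sympy import symbols, limit, sin

x = symbols('x')

# a/b as b -> 0 from the right and from the left (here a = 1):
print(limit(1 / x, x, 0, dir='+'))   # oo
print(limit(1 / x, x, 0, dir='-'))   # -oo : the two-sided limit does not exist

# Limits of the form 0/0 can come out as anything, depending on f and g:
print(limit(sin(x) / x, x, 0))       # 1
print(limit(x**2 / x, x, 0))         # 0
print(limit(x / x**3, x, 0))         # oo
```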
Limits of the form lim(x→0) f(x)/g(x), in which both f(x) and g(x) approach 0 as x approaches 0, may equal any real or infinite value, or may not exist at all, depending on the particular functions f and g (see l'Hôpital's rule for discussion and examples of limits of ratios). These and other similar facts show that the expression 0/0 cannot be well defined as a limit.

Formal operations

A formal calculation is one carried out using rules of arithmetic, without consideration of whether the result of the calculation is well-defined. Thus, it is sometimes useful to think of a/0, where a ≠ 0, as being ∞. This infinity can be either positive, negative, or unsigned, depending on context. For example, formally: lim(x→0) 1/x = 1/0 = ∞. As with any formal calculation, invalid results may be obtained. A logically rigorous (as opposed to formal) computation would assert only that 1/x tends to +∞ as x approaches 0 from the right and to −∞ as x approaches 0 from the left. Since the one-sided limits are different, the two-sided limit does not exist in the standard framework of the real numbers. Also, the fraction 1/0 is left undefined in the extended real line, so it and expressions formally derived from it remain meaningless there.

Real projective line

The set ℝ ∪ {∞} is the real projective line, which is a one-point compactification of the real line. Here ∞ means an unsigned infinity, an infinite quantity that is neither positive nor negative. This quantity satisfies −∞ = ∞, which is necessary in this context. In this structure, a/0 = ∞ can be defined for nonzero a, and a/∞ = 0. It is the natural way to view the range of the tangent and cotangent functions of trigonometry: tan(x) approaches the single point at infinity as x approaches either π/2 or −π/2 from either direction. This definition leads to many interesting results. However, the resulting algebraic structure is not a field, and should not be expected to behave like one. For example, ∞ + ∞ is undefined in the projective line.

Riemann sphere

The set ℂ ∪ {∞} is the Riemann sphere, which is of major importance in complex analysis. Here too ∞ is an unsigned infinity – or, as it is often called in this context, the point at infinity. This set is analogous to the real projective line, except that it is based on the field of complex numbers. In the Riemann sphere, 1/0 = ∞, but 0/0 is undefined, as is 0 × ∞.

Extended non-negative real number line

The negative real numbers can be discarded and infinity introduced, leading to the set [0, ∞], where division by zero can be naturally defined as a/0 = ∞ for positive a. While this makes division defined in more cases than usual, subtraction is instead left undefined in many cases, because there are no negative numbers.

In higher mathematics

Although division by zero cannot be sensibly defined with real numbers and integers, it is possible to consistently define it, or similar operations, in other mathematical structures.

Non-standard analysis

In the hyperreal numbers and the surreal numbers, division by zero remains impossible, but division by non-zero infinitesimals is possible.

Distribution theory

In distribution theory one can extend the function 1/x to a distribution on the whole space of real numbers (in effect by using Cauchy principal values). It does not, however, make sense to ask for a "value" of this distribution at x = 0; a sophisticated answer refers to the singular support of the distribution.

Linear algebra

In matrix algebra (or linear algebra in general), one can define a pseudo-division by setting a/b = ab+, in which b+ represents the pseudoinverse of b. It can be proven that if b−1 exists, then b+ = b−1. If b equals 0, then b+ = 0; see Generalized inverse.
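The pseudo-division described above can be tried out directly with NumPy's pseudoinverse routine. This is only an illustrative Python/NumPy sketch, with a helper name of our own choosing:

```python
import numpy as np

def pseudo_divide(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Pseudo-division a "/" b, defined as a @ pinv(b)."""
    return a @ np.linalg.pinv(b)

two = np.array([[2.0]])
zero = np.array([[0.0]])

print(pseudo_divide(np.array([[6.0]]), two))   # [[3.]] : agrees with ordinary division 6/2
print(pseudo_divide(np.array([[6.0]]), zero))  # [[0.]] : pinv(0) = 0, so "6/0" comes out as 0
```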
Abstract algebra

Any number system that forms a commutative ring — for instance, the integers, the real numbers, and the complex numbers — can be extended to a wheel in which division by zero is always possible; however, in such a case, "division" has a slightly different meaning.

The concepts applied to standard arithmetic are similar to those in more general algebraic structures, such as rings and fields. In a field, every nonzero element is invertible under multiplication; as above, division poses problems only when attempting to divide by zero. This is likewise true in a skew field (which for this reason is called a division ring). However, in other rings, division by nonzero elements may also pose problems. Consider, for example, the ring Z/6Z of integers mod 6. The meaning of the expression 2/2 should be the solution x of the equation 2x = 2. But in the ring Z/6Z, 2 is not invertible under multiplication. This equation has two distinct solutions, x = 1 and x = 4, so the expression 2/2 is undefined.

In field theory, the expression a/b is only shorthand for the formal expression ab−1, where b−1 is the multiplicative inverse of b. Since the field axioms only guarantee the existence of such inverses for nonzero elements, this expression has no meaning when b is zero. Modern texts include the axiom 0 ≠ 1 to avoid having to consider the trivial ring or a "field with one element", where the multiplicative identity coincides with the additive identity.

In computer arithmetic

The IEEE floating-point standard, supported by almost all modern floating-point units, specifies that every floating-point arithmetic operation, including division by zero, has a well-defined result. The standard supports signed zero, as well as infinity and NaN (not a number). There are two zeroes, +0 (positive zero) and −0 (negative zero), and this removes any ambiguity when dividing. In IEEE 754 arithmetic, a ÷ +0 is positive infinity when a is positive, negative infinity when a is negative, and NaN when a = ±0. The infinity signs change when dividing by −0 instead. The justification for this definition is to preserve the sign of the result in case of arithmetic underflow. For example, in the single-precision computation 1/(x/2), where x = ±2⁻¹⁴⁹, the computation x/2 underflows and produces ±0 with sign matching x, and the result will be ±∞ with sign matching x. The sign will match that of the exact result ±2¹⁵⁰, but the magnitude of the exact result is too large to represent, so infinity is used to indicate overflow.

Integer division by zero is usually handled differently from floating point, since there is no integer representation for the result. Some processors generate an exception when an attempt is made to divide an integer by zero, although others will simply continue and generate an incorrect result for the division. The result depends on how division is implemented, and can be either zero or sometimes the largest possible integer.

Because of the improper algebraic results of assigning any value to division by zero, many computer programming languages (including those used by calculators) explicitly forbid the execution of the operation and may prematurely halt a program that attempts it, sometimes reporting a "Divide by zero" error. In these cases, if some special behavior is desired for division by zero, the condition must be explicitly tested (for example, using an if statement).
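Both behaviours described above can be observed from Python: NumPy follows IEEE 754 for floating-point values, while Python's own integer division raises an exception. This is an illustrative sketch only; the choice of tools is ours.

```python
import numpy as np

# Suppress the (expected) divide-by-zero and invalid-operation warnings.
with np.errstate(divide="ignore", invalid="ignore"):
    one = np.float64(1.0)
    pz, nz = np.float64(0.0), np.float64(-0.0)
    print(one / pz)    # inf   : a / +0 with a > 0
    print(-one / pz)   # -inf  : a / +0 with a < 0
    print(one / nz)    # -inf  : the sign flips when dividing by -0
    print(pz / pz)     # nan   : 0 / 0

# Integer division by zero has no IEEE-style result; Python raises instead.
try:
    1 // 0
except ZeroDivisionError as err:
    print("1 // 0 raises ZeroDivisionError:", err)
```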
Some programs (especially those that use fixed-point arithmetic where no dedicated floating-point hardware is available) will use behavior similar to the IEEE standard, using large positive and negative numbers to approximate infinities. In some programming languages, an attempt to divide by zero results in undefined behavior. In two's complement arithmetic, attempts to divide the smallest signed integer by are attended by similar problems, and are handled with the same range of solutions, from explicit error conditions to undefined behavior. Historical accidents - On September 21, 1997, a division by zero error on board the USS Yorktown (CG-48) Remote Data Base Manager brought down all the machines on the network, causing the ship's propulsion system to fail. See also - Kaplan, Robert (1999). The nothing that is: A natural history of zero. New York: Oxford University Press. pp. 68–75. ISBN 0-19-514237-3. - Cody, W.J. (March 1981). "Analysis of Proposals for the Floating-Point Standard". Computer 14 (3): 65. doi:10.1109/C-M.1981.220379. Retrieved 11 September 2012. "With appropriate care to be certain that the algebraic signs are not determined by rounding error, the affine mode preserves order relations while fixing up overflow. Thus, for example, the reciprocal of a negative number which underflows is still negative." - "Sunk by Windows NT". Wired News. 1998-07-24. - William Kahan (14 October 2011). "Desperately Needed Remedies for the Undebuggability of Large Floating-Point Computations in Science and Engineering". - Patrick Suppes 1957 (1999 Dover edition), Introduction to Logic, Dover Publications, Inc., Mineola, New York. ISBN 0-486-40687-3 (pbk.). This book is in print and readily available. Suppes's §8.5 The Problem of Division by Zero begins this way: "That everything is not for the best in this best of all possible worlds, even in mathematics, is well illustrated by the vexing problem of defining the operation of division in the elementary theory of arithmetic" (p. 163). In his §8.7 Five Approaches to Division by Zero he remarks that "...there is no uniformly satisfactory solution" (p. 166) - Charles Seife 2000, Zero: The Biography of a Dangerous Idea, Penguin Books, NY, ISBN 0 14 02.9647 6 (pbk.). This award-winning book is very accessible. Along with the fascinating history of (for some) an abhorent notion and others a cultural asset, describes how zero is misapplied with respect to multiplication and division. - Alfred Tarski 1941 (1995 Dover edition), Introduction to Logic and to the Methodology of Deductive Sciences, Dover Publications, Inc., Mineola, New York. ISBN 0-486-28462-X (pbk.). Tarski's §53 Definitions whose definiendum contains the identity sign discusses how mistakes are made (at least with respect to zero). He ends his chapter "(A discussion of this rather difficult problem [exactly one number satisfying a definiens] will be omitted here.*)" (p. 183). The * points to Exercise #24 (p. 189) wherein he asks for a proof of the following: "In section 53, the definition of the number '0' was stated by way of an example. 
To be certain this definition does not lead to a contradiction, it should be preceded by the following theorem: There exists exactly one number x such that, for any number y, one has: y + x = y" Further reading |Wikinews has related news: British computer scientist's new "nullity" idea provokes reaction from mathematicians| - Jakub Czajko (July 2004) "On Cantorian spacetime over number systems with division by zero", Chaos, Solitons and Fractals, volume 21, number 2, pages 261–271. - Ben Goldacre (2006-12-07). "Maths Professor Divides By Zero, Says BBC". - To Continue with Continuity Metaphysica 6, pp. 91–109, a philosophy paper from 2005, reintroduced the (ancient Indian) idea of an applicable whole number equal to 1/0, in a more modern (Cantorian) style.
When a force is transmitted through a body, the body tends to change its shape. Although these deformations can seldom be seen by the naked eye, the many fibres or particles which make up the body transmit the force throughout the length and section of the body, and the fibres doing this work are said to be in a state of stress. Thus, a stress may be described as a mobilized internal reaction which resists any tendency towards deformation. Since the effect of the force is distributed over the cross-section area of the body, stress is defined as force transmitted or resisted per unit area.

Thus: Stress = Force / Area

The unit for stress in S.I. is newtons per square metre (N/m²). This is also called a pascal (Pa). However, it is often more convenient to use the multiple N/mm². Note that 1 N/mm² = 1 MN/m² = 1 MPa.

Tensile and compressive stress, which result from forces acting perpendicular to the plane of the cross section in question, are known as normal stress and are usually symbolized with σ (the Greek letter sigma), sometimes given a suffix t for tension (σt) or c for compression (σc). Shear stress is produced by forces acting parallel or tangential to the plane of the cross section and is symbolized with τ (the Greek letter tau).

Consider a steel bar which is thinner at the middle of its length than elsewhere, and which is subject to an axial pull of 45 kN. If the bar were to fail in tension, it would break where the amount of material is a minimum. The total force tending to cause the bar to fracture is 45 kN at all cross sections, but whereas the effect of the force is distributed over a cross-sectional area of 1200 mm² for part of the length of the bar, it is distributed over only 300 mm² at the middle position. Thus, the tensile stress is greatest in the middle and is:

σt = 45 000 N / 300 mm² = 150 N/mm²

A brick pier is 0.7 m square and 3 m high and weighs 19 kN/m³. It is supporting an axial load from a column of 490 kN, spread uniformly over the top of the pier. Calculate (a) the stress in the brickwork immediately under the column, and (b) the stress at the bottom of the pier.

(a) Cross-section area = 0.49 m²; stress σc = 490 kN / 0.49 m² = 1000 kN/m², or 1 N/mm²

(b) Weight of pier = 0.7 m × 0.7 m × 3.0 m × 19 kN/m³ = 28 kN; total load = 490 + 28 = 518 kN, and stress σc = 518 kN / 0.49 m² = 1057 kN/m², or about 1.06 N/mm²

A rivet connects two pieces of flat steel. If the loads are large enough, the rivet could fail in shear, i.e., not by breaking but by sliding of its fibres. Calculate the shear stress in the rivet (10 mm diameter) when the steel bars are subject to an axial pull of 6 kN. Note that the rivets do, in fact, strengthen the connection by pressing the two steel bars together, but this strength, due to friction, cannot be calculated easily and is therefore neglected, i.e., the rivet is assumed to give all the strength to the connection.

Cross-section area of rivet = (π / 4) × (10 mm)² = 78.5 mm²

Shear stress τ = 6000 N / 78.5 mm² = 76 N/mm²

When loads of any type are applied to a body, the body will always undergo dimension changes; this is called deformation. Thus, tensile and compressive stresses cause changes in length, torsional-shearing stresses cause twisting, and bearing stresses cause indentation in the bearing surface. In farm structures, where mainly a uniaxial state of stress is considered, the major deformation is in the axial direction.
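Before moving on to strain, the three worked stress examples above can be checked with a short script. This is simply an illustrative Python sketch of the σ = F/A and τ = F/A calculations; the function name is our own:

```python
import math

def stress_n_mm2(force_n: float, area_mm2: float) -> float:
    """Stress (normal or shear) in N/mm² = force / area."""
    return force_n / area_mm2

# Steel bar in tension: 45 kN over the minimum section of 300 mm²
print(stress_n_mm2(45_000, 300))            # 150.0 N/mm²

# Brick pier: 490 kN over 0.49 m² = 490 000 mm², then with the 28 kN self-weight added
print(stress_n_mm2(490_000, 490_000))       # 1.0 N/mm² under the column
print(round(stress_n_mm2(518_000, 490_000), 2))  # 1.06 N/mm² at the base

# Rivet in shear: 6 kN over a 10 mm diameter circular section
rivet_area = math.pi / 4 * 10**2            # about 78.5 mm²
print(round(stress_n_mm2(6_000, rivet_area), 1))  # about 76.4 N/mm²
```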
When loads of any type are applied to a body, the body will always undergo dimensional changes; this is called deformation. Tensile and compressive stresses cause changes in length, torsional-shearing stresses cause twisting, and bearing stresses cause indentation in the bearing surface. In farm structures, where mainly a uniaxial state of stress is considered, the major deformation is in the axial direction. There are always small deformations present in the other two dimensions, but they are seldom of significance.

Direct strain = change in length / original length, i.e. ε = ΔL / L

By definition strain is a ratio of two lengths and is thus a dimensionless quantity.

All solid materials deform when they are stressed, and as the stress is increased the deformation also increases. In many cases, when the load causing the deformation is removed, the material returns to its original size and shape and is said to be elastic. If the stress is steadily increased, a point is reached when, after the removal of the load, not all of the induced strain is recovered. This limiting value of stress is called the elastic limit.

Within the elastic range, strain is proportional to the stress causing it (Hooke's law). The greatest stress for which strain is still proportional to stress is called the limit of proportionality. Thus, if a graph is produced of stress against strain as the load is gradually applied, the first portion of the graph will be a straight line. The slope of this straight line is the constant of proportionality, the modulus of elasticity (E), or Young's modulus, and should be thought of as a measure of the stiffness of the material.

Modulus of elasticity = E = stress / strain = FL / (A·ΔL)

The modulus of elasticity has the same units as stress (N/mm²), because strain has no units.

A convenient way of demonstrating elastic behaviour is to plot a graph of the results of a simple tensile test carried out on a thin mild-steel rod. The rod is hung vertically and a series of forces applied at the lower end. Two gauge points are marked on the rod and the distance between them measured after each force increment has been added. The test is continued until the rod breaks.

Figure 4.1 Behaviour of mild-steel rod under tension.

Two timber posts, 150 mm square and 4 m high, are each subject to an axial load of 108 kN. One post is made of pine (E = 7800 N/mm²) and the other of Australian blackwood (E = 15 300 N/mm²). How much will they shorten due to the load?

Cross-section area A = 22 500 mm²; length L = 4000 mm
Pine: ΔL = FL / (AE) = (108 000 × 4000) / (22 500 × 7800) ≈ 2.5 mm
Australian blackwood: ΔL = (108 000 × 4000) / (22 500 × 15 300) ≈ 1.3 mm
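The shortening formula ΔL = FL/(AE) used in the timber-post example is easy to tabulate for the two species. The following Python sketch uses the values given above.

```python
# Axial shortening of a post: delta_L = F * L / (A * E)  (values from the timber-post example).
F = 108_000.0        # axial load, N
L = 4_000.0          # length, mm
A = 150.0 * 150.0    # cross-section area, mm^2 (22 500)

for name, E in [("Pine", 7_800.0), ("Australian blackwood", 15_300.0)]:  # E in N/mm^2
    delta_L = F * L / (A * E)
    print(f"{name}: shortening = {delta_L:.1f} mm")
```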
Factor of Safety

The permissible stresses must, of course, be less than the stresses which would cause failure of the members of the structure; in other words, there must be an ample safety margin. (A building code of about 2000 B.C. declared that the life of the builder be forfeited should the house collapse and kill the owner.) Deformations must also be limited, since excessive deflection may give rise to problems such as cracking of ceilings, partitions and finishes, as well as adversely affecting the functional needs.

Structural design is not an exact science, and calculated values of reactions, stresses, etc., whilst they may be mathematically correct for the theoretical structure (i.e., the model), may be only approximate as far as the actual behaviour of the structure is concerned. For these and other reasons it is necessary to make the design stress, working stress, allowable stress and permissible stress less than the ultimate stress or the yield stress. This margin is called the factor of safety:

Design stress = ultimate (or yield) stress / factor of safety

In the case of a material such as concrete, which does not have a well-defined yield point, or of brittle materials which behave in a linear manner up to failure, the factor of safety is related to the ultimate stress (the maximum stress before breakage). Other materials, such as steel, have a yield point where a sudden increase in strain occurs and at which the stress is lower than the ultimate stress. In this case the factor of safety is related to the yield stress, in order to avoid unacceptable deformations.

The value of the factor of safety has to be chosen with a variety of conditions in mind. In practice, values of 3 to 5 are normally chosen when the factor of safety is related to ultimate stress, and values of 1.4 to 2.4 when it is related to yield-point stress.

In the case of building materials such as steel and timber, different factors of safety are sometimes considered for common loading systems and for exceptional loading systems, in order to save materials. Common loadings are those which occur frequently, whereas a smaller safety margin may be considered for exceptional loadings, which occur less frequently and seldom with full intensity, e.g., wind pressure, earthquakes, etc.

Loads fall into three main categories: dead loads, wind loads and other imposed loads.

Dead loads are loads due to the self-weight of all permanent construction, including roof, walls, floor, etc. The self-weights of some parts of a structure, e.g., roof cladding, can be calculated from the manufacturer's data sheets, but the self-weight of the structural elements cannot be accurately determined until the design is completed. Hence estimates of the self-weight of some members must be made before commencing a design analysis and the values checked at the completion of the design.

Wind loads are imposed loads, but are usually treated as a separate category owing to their transitory nature and their complexity. Very often wind loading proves to be the most critical load imposed on agricultural buildings. Wind loads depend not only on wind speed, but also on the location, size, shape, height and construction of a building. Specific information concerning the various types of loads is presented in Chapter 5.

In designing a structure, it is necessary to consider which combination of dead and imposed loads can give rise to the most critical condition of loading. Not all the imposed loads will necessarily reach their maximum values at the same time. In some cases, for example light open sheds, wind loads may tend to lift the roof structure, producing an effect opposite in direction to that of the dead load.

Imposed loads are loads related to the use of the structure and to the environmental conditions, e.g., the weight of stored products, equipment, livestock, vehicles, furniture and the people who use the building. Imposed loads include earthquake loads, wind loads and snow loads where applicable, and are sometimes referred to as superimposed loads because they are in addition to the dead loads.

Dynamic loading results from a change of loading caused directly by the movement of loads. For example, a grain bin may be affected by dynamic loading if filled suddenly from a suspended hopper; it is not sufficient to consider the load only when the bin is either empty or full.

Principle of Superposition

This states that the effect of a number of loads applied at the same time is the algebraic sum of the effects of the loads applied singly.
Using standard load cases, and applying the principle of superposition, complex loading patterns can be solved. Standard case values of shear force, bending moment or deflection at particular positions along a member can be evaluated and then the total value of such parameters for the actual loading system found by algebraic summation. Effects of Loading When the loads have been transformed into definable load systems, the designer must then consider how the loads will be transmitted through the structure. Loads are not transmitted as such, but as load effects. When considering a structural member which occupies a certain space, it is usual to orientate the Cartesian z-z axis along the length of the member and the x-x and y-y axes along the horizontal and vertical cross-sectional axes respectively. Primary Load Effects A primary load effect is defined as being the direct result of a force or a moment, which has a specific orientation with respect to the three axes. Any single load or combination of loads can give rise to one or more of these primary load effects. In most cases a member will be designed basically to sustain one load effect, usually the one producing the greatest effect. In more complex situations the forces and moments are resolved into their components along the axes and then the load effects are first studied separately for one axis at a time, and then later their combined effects are considered when giving the member its size and shape. The choice of material for a member may be influenced to some extent by the type of loading. For instance, concrete has little or no strength in tension and can therefore hardly be used by itself as a tie. Tension, compression, shear, bending and torsion are all primary load effects. Secondary load effects such as deflection are derived from the primary load effects. Cables, cords, strings, ropes and wires are flexible because of their small lateral dimensions in relation to their length and have therefore very limited resistance to bending. Cables are the most efficient structural elements since they allow every fibre of the cross section to resist the applied loads up to any allowable stress. Their applications are however, limited by the fact that they can be used only in tension. Rods or bars under compression are the basis for vertical structural elements such as columns, stanchions, piers and pillars. They are often used to transfer load effects from beams, slabs and roof trusses to the foundations. They may be loaded axially or they may have to be designed to resist bending when the load is eccentric. Ties and Struts When bars are connected with pin joints and the resulting structure loaded at the joints, a structural framework called a pin jointed truss or lattice frame is obtained. The members are only subjected to axial loads and members in tension are called ties while members in compression are called struts. A beam is a member used to resist a load acting across its longitudinal axis by transferring the effect over a distance between supports - referred to as the span. The load on a beam causes longitudinal tension and compression stresses and shear stresses. The magnitudes of these will vary along and within the beam. The span that a beam can usefully cover is limited due to the self-weight of the beam, i.e., it will eventually reach a length when it is only capable of supporting itself. This problem is overcome to a degree with the hollow web beam and the lattice girder or frame. 
The safe span for long, lightly loaded beams can be increased somewhat by removing material from the web, even though the shear capacity will be reduced.

Hollow web beam

The arch can be shaped such that, for a particular loading, all sections of the arch are under simple compression with no bending. Arches exert vertical and horizontal thrusts on their supports, which can prove troublesome in the design of supporting walls. This problem of horizontal thrust can be removed by connecting a tension member between the support points.

Tensile systems allow maximum use of the material because every fibre of the cross section can be extended to resist the applied loads up to any allowable stress. As with other structural systems, tensile systems require depth to transfer loads economically across a span. As the sag (h) is decreased, the tensions in the cable (T1 and T2) increase. Further decreases in the sag would again increase the magnitudes of T1 and T2 until the ultimate condition, an infinite force, would be required to transfer a vertical load across a cable that is horizontal (obviously an impossibility).

A distinguishing feature of tensile systems is that vertical loads produce both vertical and horizontal reactions. Because cables cannot resist bending or shear, they transfer all loads in tension along their lengths. The connection of a cable to its supports acts as a pin joint (hinge), with the result that the reaction (R) must be exactly equal and opposite to the tension in the cable (T). R can be resolved into the vertical and horizontal directions, producing the forces V and H. The horizontal reaction (H) is known as the thrust. The values of the components of the reactions can be obtained by using the conditions of static equilibrium and resolving the cable tensions into vertical and horizontal components at the support points.

Two identical ropes support a load P of 5 kN as shown in the figure. Calculate the required diameter of the rope, if its ultimate strength is 30 N/mm² and a factor of safety of 4.0 is applied. Also determine the horizontal support reaction at B.

The allowable stress in the rope is 30/4 = 7.5 N/mm²
Area required = force / allowable stress = (4.3 × 10³ N) / 7.5 N/mm² = 573 mm²
A = πr² = πd²/4, hence d = √(4 × 573 / π) ≈ 27 mm

At support B the reaction is composed of two components:
Bv = T2 sin 30° = 2.5 sin 30° = 1.25 kN
BH = T2 cos 30° = 2.5 cos 30° = 2.17 kN

A column which is short (i.e., its height is small compared with its cross-sectional dimensions) is likely to fail by crushing of the material. Note, however, that slender columns, which are tall compared with their cross-sectional dimensions, are more likely to fail by buckling, at a much smaller load than that which would cause failure by crushing. Buckling is dealt with later.

A square concrete column, 0.5 m high, is made of a nominal concrete mix of 1:2:4, with a permissible direct compression stress of 5.3 N/mm². What is the required cross-section area if the column is required to carry an axial load of 300 kN?

A = F / σc = 300 000 N / 5.3 N/mm² = 56 600 mm²

i.e., the column should be a minimum of 240 mm square.
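The rope and column sizing steps above can be reproduced in a few lines. The Python sketch below uses the forces from the worked examples; the 4.3 kN rope tension and the 30° inclination at support B are taken from the figure as described in the text.

```python
import math

# Rope sizing: allowable stress = ultimate strength / factor of safety,
# required area = rope tension / allowable stress, and A = pi*d^2/4 solved for d.
allowable = 30.0 / 4.0                          # N/mm^2
area_rope = 4.3e3 / allowable                   # ~573 mm^2
d_rope = math.sqrt(4.0 * area_rope / math.pi)   # ~27 mm

# Reaction components at support B for the 2.5 kN rope tension inclined at 30 degrees.
Bv = 2.5 * math.sin(math.radians(30))           # ~1.25 kN
BH = 2.5 * math.cos(math.radians(30))           # ~2.17 kN

# Short concrete column: required area = load / permissible compressive stress.
area_col = 300_000.0 / 5.3                      # ~56 600 mm^2
side_col = math.sqrt(area_col)                  # ~238 mm -> use 240 mm square

print(f"Rope: area {area_rope:.0f} mm^2, diameter {d_rope:.0f} mm")
print(f"Support B: Bv = {Bv:.2f} kN, BH = {BH:.2f} kN")
print(f"Column: area {area_col:.0f} mm^2, side {side_col:.0f} mm")
```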
It will be necessary, for example when designing beams in bending, columns in buckling, etc., to refer to a number of basic geometrical properties of the cross sections of structural members. Cross-section areas (A) are generally calculated in mm², since the dimensions of most structural members are given in mm, and values for design stresses found in tables are usually given in N/mm².

Centre of Gravity or Centroid

This is the point about which the area of the section is evenly distributed. Note that the centroid is sometimes outside the actual cross section of the structural element. It is usual to take the reference axes of structural sections as those passing through the centroid. In general, the x-x axis is drawn perpendicular to the greatest lateral dimension of the section, and the y-y axis is drawn perpendicular to the x-x axis, intersecting it at the centroid.

Moment of Inertia

The area moment of inertia (I), or, as it is more correctly called, the second moment of area, is a property which measures the distribution of area around a particular axis of a cross section and is an important factor in its resistance to bending. Other factors, such as the strength of the material from which a beam is made, are also important for resistance to bending and are allowed for in other ways. The moment of inertia measures only how the geometric properties, or shape, of a section affect its value as a beam or slender column. The best shape for a section is one which has the greater part of its area as far as possible away from its centroidal (neutral) axis. For design purposes it is necessary to use the moment of inertia of a section about the relevant axis or axes.

Calculation of Moment of Inertia

Consider a rectangle made up of an infinite number of thin horizontal strips. The moment of inertia about the x-x axis of one such strip is the area of the strip multiplied by the square of the perpendicular distance from its centroid to the x-x axis, i.e., b × Δy × y². The sum of all such products is the moment of inertia about the x-x axis for the whole cross section. By applying calculus and integrating, the exact value can be obtained: for a rectangular cross section of width b and depth d, Ixx = bd³/12, and for a circular cross section of diameter D, Ixx = πD⁴/64. Moments of inertia for other cross sections are given later and in Table 4.3. For structural rolled-steel sections, the moment of inertia can be found tabulated in handbooks. Some examples are given in Appendix V:3.

Principle of Parallel Axes

The principle of parallel axes states that, to find the moment of inertia of any area (e.g., the top flange of the beam shown below) about any axis parallel to its centroidal axis, the product of the area of the shape and the square of the perpendicular distance between the axes must be added to the moment of inertia about the centroidal axis of that shape.

Determine the moment of inertia about the x-x axis and the y-y axis for the I-beam shown in the figure. The beam has a web of 10 mm plywood, and the flanges are made of 38 by 100 mm timber, nailed and glued to the plywood web. The whole cross section of the beam and the cross section of the web both have their centroids on the x-x axis, which is therefore their centroidal axis. Similarly, the F-F axis is the centroidal axis for the top flange.
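For reference, the two results used in the calculation that follows, the second moment of area of a b × d rectangle about its centroidal axis and the parallel-axis theorem, can be written compactly (A is the area of the shape and h the distance between the parallel axes):

```latex
\begin{align*}
I_{xx} &= \int_{-d/2}^{\,d/2} b\,y^{2}\,\mathrm{d}y
        = b\left[\frac{y^{3}}{3}\right]_{-d/2}^{\,d/2}
        = \frac{b\,d^{3}}{12},\\[4pt]
I_{\text{axis}} &= I_{\text{centroid}} + A\,h^{2}.
\end{align*}
```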
Ixx of the web = bd³/12 = (10 × 300³)/12 = 22.5 × 10⁶ mm⁴

The moment of inertia of one flange about its own centroidal axis (F-F):
IFF of one flange = (86 × 100³)/12 = 7.2 × 10⁶ mm⁴
and, from the principle of parallel axes, the Ixx of one flange equals:
7.2 × 10⁶ + 86 × 100 × 200² = 351.2 × 10⁶ mm⁴

The total Ixx of the web plus two flanges thus equals:
Ixx = 22.5 × 10⁶ + 351.2 × 10⁶ + 351.2 × 10⁶ = 725 × 10⁶ mm⁴

The Iyy of the above beam section is most easily found by adding the Iyy of the three rectangles of which it consists, because the y-y axis is their common neutral axis, and moments of inertia may be added or subtracted if they are related to the same axis:
Iyy = 2 × [(100 × 86³)/12] + (300 × 10³)/12 = 2 × 5.3 × 10⁶ + 0.025 × 10⁶ = 10.6 × 10⁶ mm⁴

In problems involving bending stresses in beams, a property called the section modulus (Z) is useful. It is the ratio of the moment of inertia (I) about the neutral axis of the section to the distance (c) from the neutral axis to the edge of the section.

Unsymmetrical Cross Sections

Sections for which a centroidal reference axis is not an axis of symmetry will have two section moduli for that axis:
Zxx1 = Ixx/y1; Zxx2 = Ixx/y2

Radius of Gyration

The radius of gyration (r) is the property of a cross section which measures the distribution of the area of the cross section in relation to an axis. In structural design it is used in relation to the length of compression members, such as columns and struts, to estimate their slenderness ratio and hence their tendency to buckle. Slender compression members tend to buckle about the axis for which the radius of gyration is a minimum value. From the equations below, it will be seen that the least radius of gyration is related to the axis about which the least moment of inertia occurs (general relationship: I = Ar²).

Table 4.3 Properties of Structural Sections
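The I-beam numbers above can be checked with a short script. This Python sketch applies I = bd³/12 and the parallel-axis theorem to the same dimensions as the worked example.

```python
# Second moment of area of the plywood-web I-beam from the example,
# using I = b*d^3/12 for each rectangle plus the parallel-axis term A*h^2 for the flanges.
def rect_I(b: float, d: float) -> float:
    """Second moment of area of a rectangle about its own centroidal axis (mm^4)."""
    return b * d**3 / 12.0

web_b, web_d = 10.0, 300.0        # plywood web, mm
flange_b, flange_d = 86.0, 100.0  # combined flange width x depth, mm
h = 200.0                         # distance from the beam x-x axis to a flange centroid, mm

Ixx_web = rect_I(web_b, web_d)                                         # 22.5e6 mm^4
Ixx_flange = rect_I(flange_b, flange_d) + flange_b * flange_d * h**2   # 351.2e6 mm^4
Ixx_total = Ixx_web + 2 * Ixx_flange                                   # ~725e6 mm^4

# About the y-y axis all three rectangles share the same centroidal axis, so the terms simply add.
Iyy_total = 2 * rect_I(flange_d, flange_b) + rect_I(web_d, web_b)      # ~10.6e6 mm^4

print(f"Ixx = {Ixx_total/1e6:.0f} x 10^6 mm^4, Iyy = {Iyy_total/1e6:.1f} x 10^6 mm^4")
```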
http://www.fao.org/docrep/s1250e/S1250E0C.HTM
13
285
In this glossary you will find only a very short description of each concept. If you need a more detailed explanation, please follow one of the external links.
- Analysis of variance - Research data are often based on samples drawn from a larger population of cases. This is true of most types of questionnaire surveys, among other examples. When, on the basis of the analysed results of such sample surveys, we conclude that the same results are valid for the population from which the sample has been drawn, we are making a generalisation with some degree of uncertainty attached. Analysis of variance is a method that allows us to determine whether differences between the means of groups of cases in a sample are too great to be due to random sampling errors. This can, for instance, help us to determine whether observed differences between the income of men and women (in a sample) are great enough to conclude that these differences are also present in the population from which the sample has been drawn. In other words, the method tells us whether the variable Gender has any impact on the variable Income, or more generally, whether a non-metric variable (which divides the cases into groups) is a factor in deciding the values that the cases have on a separate and metric variable. Analysis of variance looks at the total variance of the metric variable and tries to determine how much of this variance is due to, or can be explained by, the non-metric grouping variable. The method consists of the following building blocks: 1) total variance, 2) within-group variance and 3) between-group variance. Total variance is the sum of the squared differences between the data values of each of the units and the mean value of all the units. This total variance can be broken down into within-group variance, which is the sum of the squared differences between the data values of each of the units and the mean of the group to which the unit belongs, and between-group variance, which is the sum of the squared differences between each of the group means and the mean of all the units.
- Asymmetrical distribution - If you split the distribution in half at its mean, then the distribution of the two sides of this central point would not be the same (i.e., not symmetrical) and the distribution would be considered skewed. In a symmetrical distribution, the two sides of this central point would be the same (i.e., symmetrical).
- b - A parameter estimate of a regression equation that measures the increase or decrease in the dependent variable for a one-unit difference in the independent variable. In other words, b shows how sensitive the dependent variable is to changes in the independent variable.
- Beta - Standardised b shows by how many standard deviations the dependent variable changes when the independent variable increases by 1 standard deviation. The beta coefficient should be used when comparing the relative explanatory power of several independent variables.
- Birth cohort - A birth cohort is normally defined as consisting of all those who were born in the region or country of interest in a certain calendar year. In this document, however, we redefine a birth cohort so that it consists of all those who were born in a certain calendar year and were living in the country or countries of interest at the time of the second ESS interview round.
- Box-and-whisker plot - The distribution can also be shown by means of a box-and-whisker plot.
This form of presentation is based on a group of statistical measures which are known as median measures. All such measures are based on a sorted distribution, i.e. a distribution in which the cases have been sorted from the lowest to the highest value. The first quartile is the data value of the case where 25 % of the cases have lower values and 75 % of the cases have greater values. The third quartile is the data value of the case where 75 % of the cases have lower values and 25 % of the cases have greater values. The inter-quartile range is the distance from the first to the third quartile, i.e. the variation area of the half of the cases that lies at the centre of the distribution. The box-and-whisker plot consists of a rectangular box divided by one vertical line, with one horizontal line (whisker) extending from either end. The left and right ends of the box mark the first and third quartiles respectively. The dividing line at the centre represents the median. The length of the box is equal to the inter-quartile range and contains the middle half of the cases included in the sorted distribution. The end points of the two extending lines are determined by the data values of the most extreme cases, but do not extend more than one inter-quartile range from either end of the box (i.e. from the first and third quartiles). The maximum length of each of these lines, therefore, is equal to the length of the box itself. If there are no cases this far from either quartile, the lines will be shorter. If the maximum or minimum values of the entire distribution are beyond the end points of the line, a cross will indicate their position. - By cases we mean the objects about which a data set contains information. If we are working with opinion polls or other forms of interview data, the cases will be the individual respondents. If data have been collected about the municipalities of a county or about the countries of the world, the cases will be the geographical areas, i.e. the municipalities or countries. - Central tendency - The central tendency is a number summarising the average value of a set of scores. The mode, the median and the mean are the commonly used central tendency statistics. - Chi-square distribution - A family of distributions, each of which has different degrees of freedom, on which the chi-square test statistic is based. - Chi-square test - A test of statistical significance based on a comparison of the observed cell frequencies of a joint contingency table with frequencies that would be expected under the null hypothesis of no relationship. - Compute is used for creating new variables on the basis of variables already present in the data matrix. It is possible to do calculations with existing variables, such as calculating the sums of the values on several variables, percentaging, multiplying variables by a constant, etc. The results of such arithmetic operations are saved as new variables, which can then be used just like the other variables in the data set. - Conditional mean - A conditional mean value is the mean value of a variable for a group of respondents whose members have a particular combination of values on other variables, whereas an overall mean is the mean value of a variable for all respondents. - Confidence interval - Research data are very often based on samples drawn from a larger population of cases. This applies to most types of questionnaire surveys, for instance. 
When we assume, on the basis of results from the analysis of such sample surveys, that the same results apply to the population from which the samples are drawn, we are making generalisations with some degree of uncertainty attached. The confidence interval is a method of estimating the uncertainty associated with computing mean values from sample data. We normally use a confidence interval with a confidence level of 95 %. This means that there is a 5 % chance of being wrong if we assume that the population mean lies within the confidence interval computed from the sample.
- Constants in linear functions - Some of you may remember that a line on a plane can be expressed as an equation. Assume first that there are two variables, one that is measured along the plane’s vertical axis and whose values are symbolised by the letter y, and one that is measured along the horizontal axis and whose values are symbolised by the letter x. The function that represents a linear association between these two variables can be expressed as follows: y = a + b∙x. Here, a and b symbolise constants (fixed numbers). The number b indicates how much the variable value y increases or decreases as x changes. When x increases by 1 unit, y increases by b units. To see this, assume, for instance, that x has the initial value 5. Insert this value into the equation. You get: y = a + b∙5. Then, let the value of x increase by one unit to x = 5 + 1 and insert this into the equation instead of the former value. Now the equation can be written y = a + b∙5 + b∙1. Thus, by letting x increase by 1 unit, we have made y increase by b∙1 = b units. This implies that if b is equal to, say, 2, y will increase by 2 units whenever x increases by 1 unit, and if b is negative and equal to, say, -0.5, y will decrease by 0.5 units whenever x increases by 1 unit.
Figure A. Graphic presentation of a linear function
The latter case is illustrated in Figure A, where we have drawn a line in accordance with the function y = 7 - 0.5∙x (where we have inserted the arbitrarily chosen value 7 for a). As explained above, x and y are variables, which means that they can take on a whole range of different values, while 7 and -0.5 are constants, i.e. values that determine the position of the line on the plane and which cannot change without causing the position of the line to change. In Figure A, the x-values range between 0 and 10, while the y-values range between 2 and 7. For each value x takes on between 0 and 10, the equation and the corresponding line assign a unique numerical value to y, as shown in the figure. Thus, if x = 0, y takes the value of 7, which is the value we have given to the constant a. Check this by inserting 0 in place of x in the equation y = 7 - 0.5∙x and then computing the value of the expression on the right-hand side of the equals sign. (You should get y = 7.) This illustrates the important point that the constant a in the equation y = a + b∙x is identical to the value that y takes on when x = 0. From a graphical point of view (see Figure A), a can be interpreted as the distance between two points on the vertical axis, namely the distance between its zero point (y = 0) and the point where this axis and the line given by the equation meet each other (assuming that the vertical axis crosses the horizontal axis at the latter’s zero point). Thus, the constant a is often called the intercept.
Now for the graphical interpretation of the constant b: We repeat the exercise from the numerical example presented above by starting from the point on the line in Figure A where x = 5 (as marked by vertical line k). We then increase the x-value by 1 unit to x = 6 as we move downwards along the line (the new x-value is marked by vertical line l). This change in x makes the corresponding value of y decrease from 4.5 (marked by horizontal line m) to 4 (marked by horizontal line n), i.e. it ‘increases’ by -0.5 units (decreases by 0.5 units).Thus, Figure A confirms what we just saw in our numerical example: b is the change in y that takes place when x increases by 1 unit, provided that the changes occur along the line that is determined by the function y = a + b∙x. If b is positive, y increases whenever x increases, and if b has a negative numerical value, y decreases when x increases. Note also that b can be interpreted as a measure of the steepness of the line. The more y changes when x is increased by 1 unit, the steeper the line gets. - Correlation is another word for association between variables. There are many measures of correlation between different types of variables, but most often the word is used to designate linear association between metric variables. This type of correlation between two variables is measured by the Pearson correlation coefficient, which varies between -1 and 1. A coefficient value of 0 means no correlation. A coefficient of -1 or 1 means that if we plot the observations on a plane with one variable measured along each of the two axes, all observations would lie on a straight line. In regression terminology this corresponds to a situation where all observations lie on the (linear) regression line and all residuals have the value 0. - Dependent and independent variables - The idea behind these concepts is that the values of some variables may be affected by the values of other variables, and that this relation makes the former dependent on the latter, which, from the perspective of this particular relationship, are therefore called independent. In practice, the analysts are the ones who determine which variables shall be treated as dependent and which shall be treated as independent. - Descriptive statistics - Descriptive statistics is a branch of statistics that denotes any of the many techniques used to summarize a set of data. The techniques are commonly classified as: 1) Graphical description (graphs) 2) Tabular description (frequency, cross table) 3) Parametric description (central tendency, statistical variability). - Dichotomies are variables with only two values, e.g. the variable Gender with the two values Male and Female. - Factor analysis is used to uncover the latent structure (dimensions) of a set of variables. It reduces attribute space from a larger number of variables to a smaller number of factors. The eigenvalue for a given factor reflects the variance in all the variables, which is accounted for by that factor. A factor's eigenvalue may be computed as the sum of its squared factor loadings for all the variables. The ratio of eigenvalues is the ratio of explanatory importance of the factors with respect to the variables. If a factor has a low eigenvalue, then it is contributing little to the explanation of variances in the variables and may be ignored. Note that the eigenvalues associated with the unrotated and rotated solution will differ, though their total will be the same. 
- Factor analysis - The objective of this technique is to explain most of the variability among a number of observable random variables in terms of a smaller number of unobservable random variables called factors. The observable random variables are modelled as linear combinations of the factors, plus 'error' terms. The main applications of factor analytic techniques are: 1. to reduce the number of variables, and 2. to detect structure in the relationships between variables.
- Factor loadings - Factor analysis is used to uncover the latent structure (dimensions) of a set of variables. It reduces attribute space from a larger number of variables to a smaller number of factors. The factor loadings are the correlation coefficients between the variables and the factors. Factor loadings are the basis for imputing a label to the different factors. Analogous to Pearson's r, the squared factor loading is the percentage of variance in the variable explained by a factor. The sum of the squared factor loadings for all factors for a given variable is the variance in that variable accounted for by all the factors, and this is called the communality. In complete principal components analysis, with no factors dropped, the communality is equal to 1.0, or 100 % of the variance of the given variable.
- Likert scale - When respondents answer a Likert questionnaire item, they normally specify their level of agreement with a statement on a five-point scale which ranges from ‘strongly disagree’ to ‘strongly agree’ through ‘disagree’, ‘neither agree nor disagree’, and ‘agree’.
- Listwise deletion - Listwise deletion is a method used to exclude cases with missing values on the specified variable(s). The cases used in the analysis are those without missing values on the variable(s) specified.
- Logical operators - Use these relational operators in If statements in SPSS commands:
EQ or = : Equal to
NE or ~= or <> : Not equal to
LT or < : Less than
LE or <= : Less than or equal to
GT or > : Greater than
GE or >= : Greater than or equal to
Two or more relations can be logically joined using the logical operators AND and OR. Logical operators combine relations according to the following rules:
- The ampersand (&) symbol is a valid substitute for the logical operator AND. The vertical bar ( | ) is a valid substitute for the logical operator OR.
- Only one logical operator can be used to combine two relations. However, multiple relations can be combined into a complex logical expression.
- Regardless of the number of relations and logical operators used to build a logical expression, the result is either true, false or indeterminate because of missing values.
- Operators or expressions cannot be implied. For example, X EQ 1 OR 2 is illegal; you must specify X EQ 1 OR X EQ 2.
- The ANY and RANGE functions can be used to simplify complex expressions.
AND: both relations must be true for the complex expression to be true. OR: if either relation is true, the complex expression is true. The following outcomes apply to AND and OR combinations:
true AND true = true; true OR true = true
true AND false = false; true OR false = true
false AND false = false; false OR false = false
true AND missing = missing; true OR missing = true
missing AND missing = missing; missing OR missing = missing
false AND missing = false; false OR missing = missing
- Data matrix - When preparing data for statistical analysis, we structure the material in a data matrix. A data matrix has one row for each case and a fixed column for each variable. The cases are distributed over the values of each variable, so that the values are shown in the cells of the matrix.
- Maximum likelihood estimation - A statistical method for estimating population parameters (such as the mean and variance) from sample data, which selects as estimates those parameter values that maximise the probability of obtaining the observed data.
- The arithmetical mean is a measure of the central tendency for metric variables. The arithmetical mean is the sum of all the cases’ variable values divided by the number of cases.
- The median is a measure of the central tendency for ordinal or metric variables. The median is the value that divides a sorted distribution into two equal parts, i.e. the value of the case with 50 % of the cases above it and 50 % below. For example: if you have measured the variable Height for 25 persons and sorted the values in ascending order, the median will be the height of person no. 13 in the sorted sample, i.e. the person that divides the sample into two halves with an equal number of cases above and below him/her.
- Metric variable - A variable is metric if we can measure the size of the difference between any two variable values.
Age measured in years is metric because the size of the difference between the ages of two persons can be measured quantitatively in years. Other examples of metric variables are length of education measured in years, and income measured in monetary units. Thus, we can use linear regression to assess the association between these two variables. - Missing Values SPSS acknowledges two types of missing values: System-missing and User-missing. If a case has not been (or cannot automatically be) assigned a value on a variable, that case’s value on that variable is automatically set to ‘System missing’ and will appear as a . (dot) in the data matrix. Cases with System-missing values on a variable are not used in computations which include that variable. If a case has been assigned a value code on a variable, the user may define that code as User-missing. By default, User-missing values are treated in the same way as System-missing values. In the ESS dataset, refusals to answer and ‘don’t know’ answers etc. have been preset as User-missing to prevent you from making unwarranted use of them in numeric calculations. If you need to use these values to create dummy variables or for other purposes, you must first redefine them as non-missing. One way to achieve this is to open the ‘Variable View’ in the data editor, find the row of the variable whose missing values you want to redefine, go right to the ‘Missing’ column, click the right-hand side of the cell, and tick ‘No missing values’ in the dialogue box that pops up. You can also use the MISSING VALUES syntax command (see SPSS’s help function for instructions). Cases with System-missing values can be assigned valid values using the ‘Recode into different variables’ feature in the ‘Transform’ menu. Be careful when you use this option, that you do not overwrite value assignments that you would have preferred to keep as they are. Moreover, if you need to define more values as User-missing, you can use the syntax command MISSING VALUES or the relevant variable’s cell in the ‘Missing’ column in the ‘Variable View’. - The mode is a measure of the central tendency. The mode of a sample is the value which occurs most frequently in the sample. - Nominal variable - Nominal variables measure whether any two observations have equal or different values but not whether one value is larger or smaller than another. Occupation and nationality are examples of such variables. - Normal distribution - Normal distribution is a theoretical distribution which many given empirical variable distributions resemble. If a variable has a normal distribution (i.e. resembles the theoretical normal distribution), the highest frequencies are concentrated round the variable's mean value. The distribution curve is symmetric round the mean and shaped like a bell. Approximately 2/3 of all cases will fall within 1 standard deviation to either side of the mean value. Approximately 95 % of the cases fall within 2 such deviations. - Operationalisation is the process of converting concepts into specific observable behaviours or attitudes. For example, highest education completed could be an operationalisation of the concept academic skill. - Ordinal variable - Ordinal variables measure whether one observation (case / individual) has a larger or smaller value than another but not the exact size of the difference. Measurements of opinions with values such as excellent, very good, good etc. 
are examples of ordinal variables because we know little about the size of the difference between ‘very good’ and ‘good’ etc. - Overall mean - The overall mean of a variable is the mean of all participating individuals (cases / observations) irrespective of their values on other variables. - SPSS distinguishes between pairwise and listwise analyses. In a pairwise analysis the correlations between each pair of variables are determined on the basis of all cases with valid values on those two variables. This takes place regardless of the values of these cases on other specified variables. - Policy cycle - This is the technical term used to refer to the process of policy development from the identification of need, through assessment and piloting, to implementation and evaluation. - Recode reassigns the values of existing variables or collapses ranges of existing values into new values. For example, you could collapse income into income range categories. - Regression is a method of estimating some conditional aspect of a dependent variable’s value distribution given the values of some other variable or variables. The most common regression method is linear regression, by means of which a variable’s conditional mean is estimated as a linear function of one or more other variables. The objective is to explain or predict variations in the dependent variable by means of the independent variables. - In psychometrics, reliability is the accuracy of the scores of a measure. The most common internal consistency measure is Cronbach's alpha. Reliability does not imply validity. A reliable measure is measuring something consistently. A valid measure is measuring what it is supposed to measure. A Rolex may be a very reliable instrument for measuring the time, but if it is wrong, it does not give a valid measure of what the time really is. - A residual is the difference between the observed and the predicted dependent variable value of a particular person (case / observation). - Response bias - A response bias is a systematic bias towards a certain type of response (e.g. low, or extreme responses) that masks true levels of the construct that one is attempting to measure. - The scattergram is a graphic method of presentation, in which the different cases are plotted as points along two (three) axes defined by the two (three) variables included in the analysis. - Select cases - Select the cases (persons / observations) you want to use in your analysis by clicking ‘Data’ and ‘Select Cases’ on the SPSS menu bar. Next, select ‘If condition is satisfied’ and click ‘If’. A new dialogue box opens. Type an expression where you use variable names, logical operators and value codes to delineate the cases you want to retain in the analysis from those that you want to exclude. - Structural equation modelling - Structural equation modelling (SEM) is a very general statistical modelling technique. Factor analysis, path analysis and regression all represent special cases of SEM. SEM is a largely confirmatory, rather than exploratory, technique. In SEM, interest usually focuses on latent constructs, for example well-being, rather than on the manifest variables used to measure aspects of well-being. Measurement is recognized as difficult and error-prone. By explicitly modelling measurement error, SEM users seek to derive unbiased estimates for the relations between latent constructs. To this end, SEM allows multiple measures to be associated with a single latent construct. 
- Significance - In statistics, a result is called statistically significant if it is unlikely to have occurred by chance. ‘A statistically significant difference’ simply means there is statistical evidence that there is a difference; it does not mean that the difference is necessarily large, important, or significant in the common meaning of the word.
- Skewness - Skewness is a measure that shows how the variable distribution deviates from the normal distribution. A variable with a normal distribution has a bell-shaped, symmetric distribution around its mean value. This means that the highest frequencies are in the vicinity of the mean value and that there is an equal number of cases on either side of the mean. Skewness measures deviation from this symmetry. Positive skewness tells us that the majority of the cases lie below the mean and hence that there is a predominance of cases with positive extreme values. Negative skewness tells us that the majority of the cases lie above the mean while there is a predominance of negative extreme values.
- Squared value - A squared value (for instance a squared distance) is that value (distance) multiplied by itself.
- Square root - A value’s square root is a number that, multiplied by itself, produces that value. Thus, a is the square root of b if a∙a = b.
- Standard deviation - A variable’s standard deviation is the square root of its variance. The standard deviation is a measure of statistical dispersion.
- Standard error - The standard error of a parameter (e.g. a regression coefficient) is the standard deviation of that parameter’s sampling distribution.
- Standardised values - A standardised variable value is the value you get if you take the difference between that value and the variable’s mean value and divide that difference by the variable’s standard deviation. If this is done to all the observed values of a variable, we get a standardised variable. Standardised variables have a mean value of 0 and a standard deviation of 1.
- Statistics - Statistics is the science and practice of developing human knowledge through the use of empirical data. It is based soundly on statistical theory, which is a branch of applied mathematics.
- Syntax - An SPSS syntax is a text command or a combination of text commands used to instruct SPSS to perform operations or calculations on a data set. Such text commands are written and stored in syntax files, which are characterised by the extension .sps. In order to run the syntax commands that have been provided with this course pack, we suggest that you first open an SPSS syntax file. Either create a new syntax file (click ‘New’ and ‘Syntax’ on the SPSS menu bar’s ‘File’ menu) or open an old one that already contains commands that you want to combine with the new commands (click ‘Open File’ and ‘Syntax’ in the ‘File’ menu, and select the appropriate file). Then find, select and copy the relevant syntax from this course pack’s website and paste it into the open syntax file window. While doing exercises, you may have to make partial changes to the commands by editing the text. Run commands from syntax files by selecting them with the cursor (or the shift/arrow key combination) before you click the blue arrow on the syntax window’s tool bar. If you use the menu system, you can create syntaxes by clicking ‘Paste’ instead of ‘OK’ before exiting the dialogue boxes. This causes SPSS to write the commands you have prepared to a new or to an open syntax file without executing them.
Use this option to store your commands in a file so that you can run them again without having to click your way through a series of menus each time. New commands can be created from old ones by copying old syntaxes and editing the copies. This saves time.
- Tolerance - This is a measure of the shared variance of the independent variables (normally called collinearity). The tolerance value shows how much of the variance of each independent variable is shared by the other independent variables. The value can range from 0 (all variance is shared by other variables) to 1 (all variance is unique to the variable in question). If the tolerance value approaches 0, the results of the analysis may be unreliable. In such cases it is also difficult to determine which of the independent variables explain the variance of the dependent variable.
- t-statistic - After the estimation of a coefficient, the t-statistic for that coefficient is the ratio of the coefficient to its standard error. It can be tested against a t distribution to determine how probable it is that the true value of the coefficient is really zero.
- T test - The t test is used to determine whether a difference between a sample parameter value (e.g. a mean or a regression coefficient) and a null hypothesis value (or the difference between two parameters) is sufficiently great for us to conclude that the difference is not due to sampling errors. This method can, for instance, help us to decide whether the observed differences in income between men and women (in a sample) are great enough for us to be able to conclude that they are also present in the population from which the sample has been drawn.
- Type I error - In statistical hypothesis testing, a type I error involves rejecting a null hypothesis (there is no connection) that is true. In other words, it means finding a result to be significant when it has in fact occurred by chance.
- Type II error - In statistical hypothesis testing, a type II error consists of failing to reject a false null hypothesis (i.e. falsely accepting an invalid hypothesis). Other things being equal, as the likelihood of a type II error decreases, the likelihood of a type I error increases.
- Validity - Validity refers to the degree to which a study accurately reflects or assesses the specific concept that the researcher is attempting to measure. While reliability is concerned with the accuracy of the actual measuring instrument or procedure, validity is concerned with the study's success at measuring what the researchers set out to measure.
- Value codes and value labels - A case's (a person's) value on a variable must be given a code in order for it to be recognised by SPSS. Codes can be of various types, for instance dates, numbers or strings of letters. A variable's codes must consist of numbers if you want to use it in mathematical computations. Value codes may have explanatory labels that tell us what the codes stand for. One way to access these explanations is to open the data file and keep the ‘Variable view’ of the SPSS 'Data editor' window open. (You can toggle between 'Variable view' and 'Data view' by clicking the buttons in the lower part of the window.) In the 'Variable view', each variable has its own row. Find the cell where the row of the variable you are interested in meets the column called 'Values'. Click the right end of the cell. A dialogue box that displays codes and corresponding explanatory labels appears. These dialogue boxes can be used to assign labels to the codes of variables that you have created yourself (recommended).
Codes of continuously varying variables do not have explanatory labels. The meaning of the codes of such variables must be stated in the variable label. The value labels can also be accessed from the 'Variables' option in the 'Utilities' menu or from the variable lists that appear in many dialogue boxes. Right click the variable label and click 'Variable information'. - By variables we mean characteristics or facts about cases about which the data contain information, e.g. the individual questions on a questionnaire. For instance, if the cases are countries or other geographical areas, we may have population figures, etc. There are several types of variables, and the variable type determines which methods and forms of presentation should be used. Where the individual groups or values have no obvious order or ranking, the variable is called a nominal variable (gender). The next type is called an ordinal variable. In addition to the actual classification, there is a natural principle ranking the different values in a particular order. It can obviously be claimed that the response "Very interested in politics" is evidence of a stronger political interest than "Quite interested". The values of ordinal variables have a natural order, but the intervals between the values cannot be measured. The third type is called metric variables. These are variables that in some way measure parameters, quantities, percentages etc., using a scale based on the numerical system. The numerical values of these variables have a direct and intuitive meaning. They are not codes used as surrogates for the real responses as in the case of the nominal and ordinal variables. It follows that these variables also have the arithmetic properties of numbers. The values have a natural order, and the intervals between them can be measured. We can say that one person is three times the age of another without violating any logical or mathematical rules. It is also possible to compute the average age of a group of people. - A variable’s variance is the sum of all the squared differences between its observed values and its overall mean value, divided by the number of observations. (Subtract 1 from the number of observations if you are computing an estimate of a population’s variance by means of sample data.) The variance is a measure of the statistical dispersion, and it is defined as the mean square deviation of a continuous distribution. - The variation is a number indicating the dispersion in a distribution: How typical is the central tendency of the other sample observations? For continuous variables, the variance and the standard deviation are the most commonly used measures for dispersion. - Factor analysis is used to uncover the latent structure (dimensions) of a set of variables. It reduces attribute space from a larger number of variables to a smaller number of factors. Varimax rotation seeks to maximize the variances of the squared normalized factor loadings across variables for each factor. This is equivalent to maximizing the variances in the columns of the matrix of the squared normalized factor loadings. The goal of rotation is to obtain a clear pattern of loadings, i .e., the factors are somehow clearly marked by high loadings for some variables and low loadings for other variables. This general pattern is called ‘Simple Structure’. - Weighting allows you to assign a different weight to the different cases in the analysis file. 
In SPSS, a weight variable can be used to assign different weights to different cases in calculations. If case A has value 4 and weight 0.5, while case B has value 6 and weight 1.5, their weighted mean is (4 ∙ 0.5 + 6 ∙ 1.5)/2 = 5.5, whereas their unweighted mean is (4 + 6)/2 = 5. Use the ‘Weight Cases’ procedure in the ‘Data’ menu to make SPSS perform weighting. Information about how and why you may want to use weights when analysing ESS data can be found on NSD’s web pages (follow the link to the reference site). Weighting is usually used to correct for skewness in a sample that is meant to represent a particular population. It can also be used for "blowing up" sample data so that the analysis results are shown in figures that are in accordance with the size of the population.
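The weighted-mean arithmetic in the example above is easy to reproduce outside SPSS. The following Python sketch computes both the unweighted and the weighted mean for the two cases; note that the weighted mean divides by the sum of the weights, which here happens to equal the number of cases.

```python
# Weighted vs unweighted mean for the two cases in the example.
values = [4.0, 6.0]
weights = [0.5, 1.5]

unweighted_mean = sum(values) / len(values)                     # (4 + 6) / 2 = 5.0
weighted_mean = (sum(v * w for v, w in zip(values, weights))    # (4*0.5 + 6*1.5)
                 / sum(weights))                                # / (0.5 + 1.5) = 5.5

print(unweighted_mean, weighted_mean)
```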
http://essedunet.nsd.uib.no/cms/glossary/index.html
Arithmetic is the fundamental building block of math. The other three subject areas tested in GRE Math are all pretty much unthinkable without arithmetic. You'll certainly need to know your arithmetic to power through algebra, geometry, and data analysis problems, but the Math section also includes some pure arithmetic problems as well. So it makes sense to start Math 101 with a discussion of numbers and the typical things we do with them.
Common Math Symbols
You may remember these from way back when, but in case you need a quick refresher, here's a list of some of the most commonly used math symbols you should know for the GRE. We'll discuss some of them in this arithmetic section and others later in the chapter.
- < (less than): The quantity to the left of the symbol is less than the quantity to the right.
- > (greater than): The quantity to the left of the symbol is greater than the quantity to the right.
- ≤ (less than or equal to): The quantity to the left of the symbol is less than or equal to the quantity to the right.
- ≥ (greater than or equal to): The quantity to the left of the symbol is greater than or equal to the quantity to the right.
- √ (square root): A number which, when multiplied by itself, equals the value under the square root symbol.
- |x| (absolute value): The positive distance a number enclosed between two vertical bars is from 0.
- ! (factorial): The product of all the numbers up to and including a given number.
- ∥ (parallel): In geometry, two lines separated by this symbol have the same slope (go in exactly the same direction).
- ⊥ (perpendicular): In geometry, two lines separated by this symbol meet at right angles.
- ° (degree): A measure of the size of an angle. There are 360 degrees in a circle.
- π (pi): The ratio of the circumference of any circle to its diameter; approximately equal to 3.14.
The test makers assume that you know your numbers. Make sure you do by comparing your knowledge to our definitions below.
- Whole numbers: the set of counting numbers, including zero: 0, 1, 2, 3 . . .
- Natural numbers: the set of whole positive numbers except zero: 1, 2, 3, 4 . . .
- Integers: the set of all positive and negative whole numbers, including zero, not including fractions and decimals. Integers in a sequence, such as –3, –2, –1, 0, 1, 2, 3, are called consecutive integers.
- Rational numbers: the set of all numbers that can be expressed as a fraction of integers—that is, any number that can be expressed in the form m/n, where m and n are integers.
- Irrational numbers: the set of all numbers that cannot be expressed as a fraction of integers.
- Real numbers: every number on the number line, including all rational and irrational numbers—every number you can think of.
Even and Odd Numbers
An even number is an integer that is divisible by 2 with no remainder. Even numbers: –10, –4, 0, 4, 10. An odd number is an integer that leaves a remainder of 1 when divided by 2. Odd numbers: –9, –3, –1, 1, 3, 9. Even and odd numbers act differently when they are added, subtracted, multiplied, and divided. The following chart shows the rules for addition, subtraction, and multiplication (multiplication and division are the same in terms of even and odd).
even + even = even; even – even = even; even × even = even
even + odd = odd; even – odd = odd; even × odd = even
odd + odd = even; odd – odd = even; odd × odd = odd
Zero, as we've mentioned, is even, but it has its own special properties when used in calculations. Anything multiplied by 0 is 0, and 0 divided by anything is 0. However, anything divided by 0 is undefined, so you won't see that on the GRE.
Positive and Negative Numbers
A positive number is greater than 0. Examples include 15 and 83.4. A negative number is less than 0. Examples include –0.2, –1, and –100.
One tip-off is the negative sign (–) that precedes negative numbers. Zero is neither positive nor negative. On a number line, positive numbers appear to the right of zero, and negative numbers appear to the left: –5, –4, –3, –2, –1, 0, 1, 2, 3, 4, 5 Positive and negative numbers act differently when you add, subtract, multiply, or divide them. Adding a negative number is the same as subtracting a positive number: 5 + (–3) = 2, just as 5 – 3 = 2 Subtracting a negative number is the same as adding a 7 – (–2) = 9, just as 7 + 2 = 9 To determine the sign of a number that results from multiplication or division of positive and negative numbers, memorize the following rules. positive × positive = positive positive ÷ positive = positive positive × negative = negative positive ÷ negative = negative negative × negative = positive negative ÷ negative = positive Here’s a helpful trick when dealing with a series of multiplied or divided positive and negative numbers: If there’s an even number of negative numbers in the series, the outcome will be positive. If there’s an odd number, the outcome will be negative. When negative signs and parentheses collide, it can get pretty ugly. However, the principle is simple: A negative sign outside parentheses is distributed across the parentheses. Take this question: 3 + 4 – (3 + 1 – 8) = ? You’ll see a little later on when we discuss order of operations that in complex equations we first work out the parentheses, which gives us: 3 + 4 – (4 – 8) This can be simplified to: 3 + 4 – (– 4) As discussed earlier, subtracting a negative number is the same as adding a positive number, so our equation further simplifies to: 3 + 4 + 4 = 11 An awareness of the properties of positive and negative numbers is particularly helpful when comparing values in Quantitative Comparison questions, as you’ll see later in chapter 4. A remainder is the integer left over after one number has been divided by another. Take, for example, 92 ÷ 6. Performing the division we see that 6 goes into 92 a total of 15 times, but 6 × 15 = 90, so there’s 2 left over. In other words, the remainder is 2. Integer x is said to be divisible by integer y when x divided by y yields a remainder of zero. The GRE sometimes tests whether you can determine if one number is divisible by another. You could take the time to do the division by hand to see if the result is a whole number, or you could simply memorize the shortcuts in the table below. Your choice. We recommend All whole numbers are divisible by 1. A number is divisible by 2 if it’s even. A number is divisible by 3 if the sum of its digits is divisible by 3. This means you add up all the digits of the original number. If that total is divisible by 3, then so is the number. For example, to see whether 83,503 is divisible by 3, we calculate 8 + 3 + 5 + 0 + 3 = 19. 19 is not divisible by 3, so neither A number is divisible by 4 if its last two digits, taken as a single number, are divisible by 4. For example, 179,316 is divisible by 4 because 16 is divisible by 4. A number is divisible by 5 if its last digit is 0 or 5. Examples include 0, 430, and –20. A number is divisible by 6 if it’s divisible by both 2 and 3. For example, 663 is not divisible by 6 because it’s not divisible by 2. But 570 is divisible by 6 because it’s divisible by both 2 and 3 (5 + 7 + 0 = 12, and 12 is divisible by 3). 7 may be a lucky number in general, but it’s unlucky when it comes to divisibility. 
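The shortcuts above for divisibility by 2 through 6 can be written out and checked against plain division. The sketch below is illustrative only (the function name and test values are ours); rules for larger divisors continue below.

```python
# A sketch of the divisibility shortcuts described above, verified against
# the remainder operator. Only the rules for 2, 3, 4, 5 and 6 are covered.

def divisible_by(n, d):
    digits = [int(c) for c in str(abs(n))]
    if d == 2:
        return digits[-1] % 2 == 0                 # even last digit
    if d == 3:
        return sum(digits) % 3 == 0                # digit sum divisible by 3
    if d == 4:
        return int(str(abs(n))[-2:]) % 4 == 0      # last two digits divisible by 4
    if d == 5:
        return digits[-1] in (0, 5)                # ends in 0 or 5
    if d == 6:
        return divisible_by(n, 2) and divisible_by(n, 3)
    return n % d == 0                              # fall back to plain division

for n, d in [(83503, 3), (179316, 4), (570, 6), (663, 6)]:
    assert divisible_by(n, d) == (n % d == 0)
    print(n, d, divisible_by(n, d))
```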
Although a divisibility rule for 7 does exist, it's much harder than dividing the original number by 7 and seeing if the result is an integer. So if the GRE happens to throw a "divisible by 7" question at you, you'll just have to suck it up and do the math. A number is divisible by 8 if its last three digits, taken as a single number, are divisible by 8. For example, 179,128 is divisible by 8 because 128 is divisible by 8. A number is divisible by 9 if the sum of its digits is divisible by 9. This means you add up all the digits of the original number. If that total is divisible by 9, then so is the number. For example, to see whether 531 is divisible by 9, we calculate 5 + 3 + 1 = 9. Since 9 is divisible by 9, 531 is as well. A number is divisible by 10 if the units digit is a 0. For example, 0, 490, and –20 are all divisible by 10. This one's a bit involved but worth knowing. (Even if it doesn't come up on the test, you can still impress your friends at parties.) Here's how to tell if a number is divisible by 11: Add every other digit starting with the leftmost digit and write their sum. Then add all the numbers that you didn't add in the first step and write their sum. If the difference between the two sums is divisible by 11, then so is the original number. For example, to test whether 803,715 is divisible by 11, we first add 8 + 3 + 1 = 12. To do this, we just started with the leftmost digit and added alternating digits. Now we add the numbers that we didn't add in the first step: 0 + 7 + 5 = 12. Finally, we take the difference between these two sums: 12 – 12 = 0. Zero is divisible by all numbers, including 11, so 803,715 is divisible by 11. A number is divisible by 12 if it's divisible by both 3 and 4. For example, 663 is not divisible by 12 because it's not divisible by 4. 162,480 is divisible by 12 because it's divisible by both 4 (the last two digits, 80, are divisible by 4) and 3 (1 + 6 + 2 + 4 + 8 + 0 = 21, and 21 is divisible by 3). A factor is an integer that divides into another integer evenly, with no remainder. In other words, if a ÷ b is an integer, then b is a factor of a. For example, 1, 2, 4, 7, 14, and 28 are all factors of 28, because they go into 28 without having anything left over. By contrast, 3 is not a factor of 28, since dividing 28 by 3 yields a remainder of 1. The number 1 is a factor of every number. Some GRE problems may require you to determine the factors of a number. To do this, write down all the factors of the given number in pairs, beginning with 1 and the number you're factoring. For example, to factor 24:
- 1 and 24 (1 × 24 = 24)
- 2 and 12 (2 × 12 = 24)
- 3 and 8 (3 × 8 = 24)
- 4 and 6 (4 × 6 = 24)
Five doesn't go into 24, so you'd move on to 6. But we've already included 6 as part of the 4 × 6 equation, and there's no need to repeat. If you find yourself beginning to repeat numbers, then the factorization's complete. The factors of 24 are therefore 1, 2, 3, 4, 6, 8, 12, and 24. Everyone's always insisting on how unique they are. Punks wear leather. Goths wear black. But prime numbers actually are unique. They are the only numbers whose sole factors are 1 and themselves. More precisely, a prime number is a number that has exactly two positive factors, 1 and itself. For example, 3, 5, and 13 are all prime, because each is only divisible by 1 and itself. In contrast, 6 is not prime, because, in addition to being divisible by 1 and itself, 6 is also divisible by 2 and 3. Here are a couple of points about primes that are worth memorizing: - All prime numbers are positive.
This is because every negative number has –1 as a factor in addition to 1 and itself. - The number 1 is not prime. Prime numbers must have two positive factors, and 1 has only one positive factor, - The number 2 is prime. It is the only even prime number. All prime numbers besides 2 are odd. Here’s a list of the prime numbers less than 100: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, and 97 It wouldn’t hurt to memorize this list. In addition, you can determine whether a number is prime by using the divisibility rules listed earlier. If the number is divisible by anything other than 1 and itself, it’s not prime. If a number under consideration is larger than the ones in the list above, or if you’ve gone and ignored our advice to memorize that list, here’s a quick way to figure out whether a number is prime: Estimate the square root of the number. Check all the prime numbers that fall below your estimate to see if they are factors of the number. If no prime below your estimate is a factor of the number, then the number Let’s see how this works using the number 97. Estimate the square root of the number: Check all the prime numbers that fall below 10 to see if they are factors of 97: Is 97 divisible by 2? No, it does not end with an even Is 97 divisible by 2? No, it does not end with an even Is 97 divisible by 3? No, 9 + 7 = 16, and 16 is not divisible by 3. Is 97 divisible by 5? No, 97 does not end with 0 or 5. Is 97 divisible by 7? No, 97 ÷ 7 = 13, with a remainder of 6. Therefore, 97 is prime. (Of course, you knew that already from familiarizing yourself with the prime numbers less than 100. . . .) Come on, say it aloud with us: “prime factorization.” Now imagine Arnold Schwarzenegger saying it. Then imagine if he knew how to do it. Holy Moly. He would probably be governor of the entire United States! A math problem may ask you to directly calculate the prime factorization of a number. Other problems, such as those involving greatest common factors or least common multiples (which we’ll discuss soon), are easier to solve if you know how to calculate the prime factorization. Either way, it’s good to know how to do it. To find the prime factorization of a number, divide it and all its factors until every remaining integer is prime. The resulting group of prime numbers is the prime factorization of the original integer. Want to find the prime factorization of 36? We thought so: 36 = 2 × 18 = 2 × 2 × 9 = 2 × 2 × 3 × 3 That’s two prime 2s, and two prime 3s, for those of you keeping track at home. It can be helpful to think of prime factorization in the form of a As you may already have noticed, there’s more than one way to find the prime factorization of a number. Instead of cutting 36 into 2 and 18, you could have factored it into 6 × 6, and then continued from there. As long as you don’t screw up the math, there’s no wrong path—you’ll always get the same result. Let’s try one more example. The prime factorization of 220 could be found like so: 220 = 10 × 22 10 is not prime, so we replace it with 5 × 2: 10 × 22 = 2 × 5 × 22 22 is not prime, so we replace it with 2 × 11: 2 × 5 × 22 = 2 × 2 × 5 × 11 2, 5, and 11 are all prime, so we’re done. The prime factorization of 220 is thus 2 × 2 × 5 × 11. Greatest Common Factor The greatest common factor (GCF) of two numbers is the largest number that is a factor of both numbers—that is, the GCF is the largest factor that both numbers have in common. 
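The square-root primality test and the repeated-division recipe for prime factorization described above can be sketched in a few lines. The names and examples are illustrative; note that trial division by every integer up to the square root subsumes checking only the primes below it.

```python
# Sketch of the two procedures above: trial division up to the square root
# to test primality, and repeated division to build a prime factorization.
import math

def is_prime(n):
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

def prime_factorization(n):
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:       # divide out each prime as often as it fits
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                   # whatever remains is itself prime
        factors.append(n)
    return factors

print(is_prime(97))              # True
print(prime_factorization(36))   # [2, 2, 3, 3]
print(prime_factorization(220))  # [2, 2, 5, 11]
```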
For example, the GCF of 12 and 18 is 6, because 6 is the largest number that divides evenly into 12 and 18. Put another way, 6 is the largest number that is a factor of both 12 To find the GCF of two numbers, you can use their prime factorizations. The GCF is the product of all the numbers that appear in both prime factorizations. In other words, the GCF is the overlap of the For example, let’s calculate the GCF of 24 and 150. First, we figure out their prime factorizations: 24 = 2 × 2 × 2 × 3 150 = 2 × 3 × 5 × 5 Both factorizations contain 2 × 3. The overlap of the two factorizations is 2 and 3. The product of the overlap is the GCF. Therefore, the GCF of 24 and 150 is 2 × 3 = 6. A multiple can be thought of as the opposite of a factor: If is an integer, then x is a multiple of y . Less formally, a multiple is what you get when you multiply an integer by another integer. For example, 7, 14, 21, 28, 70, and 700 are all multiples of 7, because they each result from multiplying 7 by an integer. Similarly, the numbers 12, 20, and 96 are all multiples of 4 because 12 = 4 × 3, 20 = 4 × 5, and 96 = 4 × 24. Keep in mind that zero is a multiple of every number. Also, note that any integer, , is a multiple of 1 and n , because 1 Least Common Multiple The least common multiple (LCM) of two integers is the smallest number that is divisible by the two original integers. As with the GCF, you can use prime factorization as a shortcut to find the LCM. For example, to find the least common multiple of 10 and 15, we begin with their prime factorizations: 10 = 5 × 2 15 = 5 × 3 The LCM is equal to the product of each factor by the maximum number of times it appears in either number. Since 5 appears once in both factorizations, we need to include it once in our final product. The same goes for the 2 and the 3, since each of these numbers appears one time in each factorization. The LCM of 10 and 15, then, is 5 × 3 × 2 = 30. In other words, 30 is the smallest number that is divisible by both 10 and 15. Remember that the LCM is the least common multiple—you have to choose the smallest number that is a multiple of each original number. So, even though 60 is a multiple of both 10 and 15, 60 is not the LCM, because it’s not the smallest multiple of those two numbers. This is a bit tricky, so let’s try it again with two more numbers. What’s the LCM of 60 and 100? First, find the prime factorizations: 60 = 2 × 2 × 3 × 5 100 = 2 × 2 × 5 × 5 So, 2 occurs twice in each of these factorizations, so we’ll need to include two 2s in our final product. We have one 5 in our factorization of 60, but two 5s in our factorization of 100. Since we’re looking to include the maximum number of appearances of each factor, we’ll include two 5s in our product. There’s also one 3 in the first factorization, and no 3s in the second, so we have to add one 3 to the mix. This results in an LCM of 2 × 2 × 3 × 5 × 5 = 300. Order of Operations What if you see something like this on the test: You basically have two choices. You can (a) run screaming from the testing site yelling “I’ll never, ever, EVER get into graduate school!!!” or (b) use PEMDAS. PEMDAS is an acronym for the order in which mathematical operations should be performed as you move from left to right through an expression or equation. It stands for: You may have had PEMDAS introduced to you as “Please Excuse My Dear Aunt Sally.” Excuse us, but that’s a supremely lame 1950s-style acronym. We prefer, Picking Eminem Made Dre A Star. Whatever. Come up with one of your own if you want. 
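Both the GCF and the LCM discussed above reduce to one-liners in code. This is a minimal sketch using Python's math.gcd together with the standard identity a × b = GCF(a, b) × LCM(a, b), which is equivalent to the prime-factorization overlap approach described above.

```python
# GCF via math.gcd, LCM via the identity a * b = gcf(a, b) * lcm(a, b).
import math

def gcf(a, b):
    return math.gcd(a, b)

def lcm(a, b):
    return a * b // math.gcd(a, b)

print(gcf(24, 150))  # 6
print(lcm(10, 15))   # 30
print(lcm(60, 100))  # 300
```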
Just remember PEMDAS. If an equation contains any or all of these PEMDAS elements, first carry out the math within the parentheses, then work out the exponents, then the multiplication, and the division. Addition and subtraction are actually a bit more complicated. When you have an equation to the point that it only contains addition and subtraction, perform each operation moving from left to right across the equation. Let’s see how this all plays out in the context of the example above: First work out the math in the parentheses, following PEMDAS even within the parentheses. So here we focus on the second parentheses and do the multiplication before the subtraction: Now taking care of the subtraction in both sets of parentheses: Now work out the exponent (more on those later): Then do the multiplication: Then the division: 12 – 7 + 17 We’re left with just addition and subtraction, so we simply work from left to right: 5 + 17 Piece of cake! Well, not exactly, but it beats fleeing the room in hysterics. PEMDAS is the way to crunch down the most difficult-looking equations or expressions. Take it one step at a time, and you’ll do just Much of what we’ve covered so far concerns whole numbers. Now we enter the vast universe that exists between those nice round numbers: the world of fractions. The GRE loves fractions. The number of questions on the Math section that involve fractions in some way or another is nothing short of stupefying. This means you must know fractions inside and out. Know how to compare them, reduce them, add them, and multiply them. Know how to divide them, subtract them, and convert them to mixed numbers. Know them. Love them like the GRE does. Make them your friend on the test, not your enemy. To begin, here are the basics: A fraction is a part of a whole. It’s composed of two expressions, a numerator and a denominator. The numerator of a fraction is the quantity above the fraction bar, and the denominator is the quantity below the fraction bar. For example, in the fraction , 1 is the numerator and 2 is the denominator. The denominator tells us how many units there are in all, while the numerator tells us how many units out of that total are specified in a given instance. For example, if your friend has five cookies and offers you two of them, you’d be entitled to eat of her cookies. How many you sneak when she’s not looking is up to you. The general concept of fractions isn’t difficult, but things can get dicey when you have to do things with them. Hence, the following subtopics that you need to have under your belt. Fractions represent a part of a whole, so if you increase both the part and whole by the same multiple, you will not change the relationship between the part and the whole. To determine if two fractions are equivalent, multiply the denominator and numerator of one fraction so that the denominators of the two fractions are equal (this is one place where knowing how to calculate LCM and GCF comes in handy). For example, because if you multiply the numerator and denominator of by 3, you get: . As long as you multiply or divide both the numerator and denominator of a fraction by the same nonzero number, you will not change the overall value of the Reducing fractions makes life simpler, and we all know life is complicated enough without crazy fractions weighing us down. Reducing takes unwieldy monsters like and makes them into smaller, friendlier critters. To reduce a fraction to its lowest terms, divide the numerator and denominator by their GCF. 
For example, for , the GCF of 450 and 600 is 150. So the fraction reduces down to A fraction is in its simplest, totally reduced form when the GCF of its numerator and denominator is 1. There is no number but 1, for instance, that can divide into both 3 and 4, so is a fraction in its lowest form, reduced as far as it can go. The same goes for the fraction is a different story because 3 is a common factor of both the numerator and denominator. Dividing each by this common factor yields , the fraction in its most reduced form. Adding, Subtracting, and Comparing Fractions To add fractions with the same denominators, all you have to do is add up the numerators and keep the denominator the same: Subtraction works similarly. If the denominators of the fractions are equal, just subtract one numerator from the other and keep the denominator the same: Remember that fractions can be negative too: Some questions require you to compare fractions. Again, this is relatively straightforward when the denominators are the same. The fraction with the greater numerator will be the larger fraction. For is greater than is greater than . (Be careful of those negative numbers! Since –5 is less negative than –13, –5 is greater than –13.) Working with fractions with the same denominators is one thing, but working with fractions with different denominators is quite another. So we came up with an easy alternative: the Magic X. For adding, subtracting, and comparing fractions with different denominators, the Magic X is a lifesaver. Sure, you can go ahead and find the least common denominator, a typical way of tackling such problems, but we don’t call our trick the “Magic X” for nothing. Here’s how it works in each Consider the following equation: You could try to find the common denominator by multiplying by 9 and by 7, but then you’d be working with some pretty big numbers. Keep things simple, and use the Magic X. The key is to multiply diagonally and up , which in this case means from the 9 to the 3 and also from the 7 to the 2: In an addition problem, we add the products to get our numerator: 27 + 14 = 41. For the denominator, we simply multiply the two denominators to get: Believe it or not, we’re already done! The numerator is 41, and the denominator is 63, which results in a final answer of Same basic deal, except this time we subtract the products that we get when we multiply diagonally and up. See if you can feel the magic in this one: Multiplying diagonally and up gives: The problem asks us to subtract fractions, so this means we need to subtract these numbers to get our numerator: 24 – 25 = –1. Just like in the case of addition, we multiply across the denominators to get the denominator of our answer: That’s it! The numerator is –1 and the denominator is 30, giving us an answer of . Not the prettiest number you’ll ever see, but it’ll do. The Magic X is so magical that it can also be used to compare two fractions, with just a slight modification: omitting the step where we multiply the denominators. Say you’re given the following Quantitative Comparison problem. We’ll explain much more about QCs in chapter 4, but for now remember that the basic idea is to compare the quantity in Column A with the quantity in Column B to see which, if either, is bigger. (In some cases, the answer will be that you can’t determine which is bigger, but as you’ll learn, when the two quantities are pure numbers with no variables, that option is impossible.) 
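The Magic X is ordinary cross-multiplication, a/b ± c/d = (a·d ± c·b)/(b·d), so it is easy to check in code. In the sketch below, the specific fractions (3/7 + 2/9 and 4/5 − 5/6) are inferred from the cross-products and denominators shown in the worked sums above; the helper names are ours.

```python
# The "Magic X": multiply diagonally and up for the numerator, multiply the
# denominators for the denominator. Fraction reduces the result if needed.
from fractions import Fraction

def magic_x_add(a, b, c, d):
    # a/b + c/d
    return Fraction(a * d + c * b, b * d)

def magic_x_sub(a, b, c, d):
    # a/b - c/d
    return Fraction(a * d - c * b, b * d)

print(magic_x_add(3, 7, 2, 9))  # 41/63, as in the addition example above
print(magic_x_sub(4, 5, 5, 6))  # -1/30, as in the subtraction example above
```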
See what you can make of this sample QC: Now, if you were a mere mortal with no magic at your fingertips, this would be quite a drag. But the Magic X makes it a pleasure. Again, begin by multiplying diagonally and up: Now compare the numbers you get: 161 is larger than 150, so is greater than Why does this work? Who knows? Who cares? It just does. (Actually, the rationale isn’t too complex, but it doesn’t add anything to your GRE repertoire, so let’s skip it.) Learn how to employ the Magic X in these three circumstances, and you’re likely to save yourself some time and Multiplying fractions is a breeze, whether the denominators are equal or not. The product of two fractions is merely the product of their numerators over the product of their denominators: Want an example with numbers? You got one: You can make multiplying fractions even easier by canceling out. If the numerator and denominator of any of the fractions you need to multiply share a common factor, you can divide by the common factor to reduce both numerator and denominator before multiplying. For example, consider this fraction multiplication problem: You could simply multiply the numerators and denominators and then reduce, but that would take some time. Canceling out provides a shortcut. We can cancel out the numerator 4 with the denominator 8 and the numerator 10 with the denominator 5, like this: Then, canceling the 2s, you get: Canceling out can dramatically cut the amount of time you need to spend working with big numbers. When dealing with fractions, whether they’re filled with numbers or variables, always be on the lookout for chances to cancel out. Multiplication and division are inverse operations. It makes sense, then, that to perform division with fractions, all you have to do is flip the second fraction and then multiply. Check it out: Here’s a numerical example: Compound fractions are nothing more than division problems in disguise. Here’s an example of a compound fraction: It looks intimidating, sure, but it’s really only another way of , which now looks just like the previous example. Again, the rule is to invert and multiply. Take whichever fraction appears on the bottom of the compound fraction, or whichever fraction appears second if they’re written in a single line, and flip it over. Then multiply by the other fraction. In this case, we get . Now we can use our trusty canceling technique to reduce this to , or plain old 6. A far cry from Sick of fractions yet? We don’t blame you. But there’s one topic left to cover, and it concerns fractions mixed with integers. Specifically, a mixed number is an integer followed by a fraction, like . But operations such as addition, subtraction, multiplication, and division can’t be performed on mixed numbers, so you have to know how to convert them into standard Since we already mentioned , it seems only right to convert it. The method is easy: Multiply the integer (the 1) of the mixed number by the denominator of the fraction part, and add that product to the numerator: 1 × 3 + 2 = 5. This will be the numerator. Now, put that over the original denominator, 3, to finalize the converted fraction: Let’s try a more complicated example: Pretty ugly as far as fractions go, but definitely something we can work with. A decimal is any number with a nonzero digit to the right of the decimal point. Like fractions, decimals are a way of writing parts of wholes. 
Some GRE questions ask you to identify specific digits in a decimal, so you need to know the names of these different digits. In this case, a picture is worth a thousand (that is, 1000.00) words: Notice that all of the digits to the right of the decimal point have a th in their names. In the number 839.401, for example, here are the values of the Left of the decimal point Right of the decimal point Converting Fractions to Decimals So, what if a problem contains fractions, but the answer choices are all decimals? In that case, you’ll have to convert whatever fractional answer you get to a decimal. A fraction is really just shorthand for division. For example, is exactly the same as 6 ÷ 15. Dividing this out on your scratch paper results in its decimal Converting Decimals to Fractions What comes around goes around. If we can convert fractions to decimals, it stands to reason that we can also convert decimals to fractions. Here’s how: Remove the decimal point and make the decimal number Let the denominator be the number 1 followed by as many zeros as there are decimal places in the original decimal Reduce this fraction if possible. Let’s see this in action. To convert .3875 into a fraction, first eliminate the decimal point and place 3875 as the numerator: Since .3875 has four digits after the decimal point, put four zeros in the denominator following the number 1: We can reduce this fraction by dividing the numerator and denominator by the GCF, which is 125, or, if it’s too difficult to find the GCF right off the bat, we can divide the numerator and denominator by common factors such as 5 until no more reduction is possible. Either way, our final answer in reduced form comes out to Ratios look like fractions and are related to fractions, but they don’t quack like fractions. Whereas a fraction describes a part of a whole, a ratio compares one part to another part. A ratio can be written in a variety of ways. Mathematically, it can or as 3:1. In words, it would be written out as “the ratio of 3 to 1.” Each of these three forms of the ratio 3:1 means the same thing: that there are three of one thing for every one of another. For example, if you have three red alligators and one blue alligator, then your ratio of red alligators to blue alligators would be 3:1. For the GRE, you must remember that ratios compare parts to parts rather than parts to a whole. Why do you have to remember that? Because of questions like this: ||For every 40 games a baseball team plays, it loses 12 games. What is the ratio of the team’s losses to wins? The question says that the team loses 12 of every 40 games, but it asks you for the ratio of losses to wins, not losses to games. So the first thing you have to do is find out how many games the team wins per 40 games played: 40 – 12 = 28. So for every 12 losses, the team wins 28 games, for a ratio of 12:28. You can reduce this ratio by dividing both sides by 4 to get 3 losses for every 7 wins, or 3:7. Choice C is therefore correct. If you instead calculated the ratio of losses to games played (part to whole), you might have just reduced the ratio 12:40 to 3:10, and then selected choice A. For good measure, the test makers include 10:3 to entice anyone who went with 40:12 before reducing. There’s little doubt that on ratio problems, you’ll see an incorrect part : whole choice and possibly these other kinds of traps that try to trip you up. 
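The losses-to-wins trap above comes down to building the ratio from the right two parts and then reducing it. A minimal, illustrative sketch (the helper name is ours):

```python
# Part-to-part vs. part-to-whole: the team loses 12 of every 40 games,
# so wins are 40 - 12 = 28 and the ratio of losses to wins is 12:28 -> 3:7.
from math import gcd

def reduce_ratio(a, b):
    g = gcd(a, b)
    return a // g, b // g

games, losses = 40, 12
wins = games - losses

print(reduce_ratio(losses, wins))   # (3, 7)  -> losses : wins, the correct answer
print(reduce_ratio(losses, games))  # (3, 10) -> the part-to-whole trap answer
```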
Just because you have a ratio of three red alligators to one blue alligator doesn’t mean that you can only have three red alligators and one blue one. It could also mean that you have six red and two blue alligators or that you have 240 red and 80 blue alligators. (Not that we have any idea where you’d keep all those beasts, but you get the point.) Ratios compare only relative magnitude. To know how many of each color alligator you actually have, in addition to knowing the ratio, you also need to know how many total alligators there are. This concept forms the basis of another kind of ratio problem you may see on the GRE, a problem that provides you with the ratio among items and the total number of items, and then asks you to determine the number of one particular item in the group. Sounds confusing, but as always, an example should clear things up: Egbert has red, blue, and green marbles in the ratio of 5:4:3, and he has a total of 36 marbles. How many blue marbles does Egbert have? First let’s clarify what this means. For each group of 5 red marbles, Egbert (who does sound like a marble collector, doesn’t he?) has a group of 4 blue marbles and a group of 3 green marbles. If he has one group of each, then he’d simply have 5 red, 4 blue, and 3 green marbles for a total of 12. But he doesn’t have 12—we’re told he has 36. The key to this kind of problem is determining how many groups of each item must be included to reach the total. We have to multiply the total we’d get from having one group of each item by a certain factor that would give us the total given in the problem. Here, as we just saw, having one group of each color marble would give Egbert 12 marbles total, but since he has 36 marbles, we have to multiply by a factor of 3 (since 36 ÷ 12 = 3). That means Egbert has 3 groups of red marbles with 5 marbles in each group, for a total of 3 × 5 = 15 red marbles. Multiplying the other marbles by our factor of 3 gives us 3 × 4 = 12 blue marbles, and 3 × 3 = 9 green marbles. Notice that the numbers work out, because 15 + 12 + 9 does add up to 36 marbles total. The answer to the question is therefore 12 blue marbles. So here’s the general approach: Add up the numbers given in the ratio. Divide the total items given by this number to get the factor by which you need to multiply each group. Then find the item type you’re looking for and multiply its ratio number by the factor you determined. In the example above, that would look like this: 5 (red) + 4 (blue) + 3 (green) = 12 36 ÷ 12 = 3 (factor) 4 (blue ratio #) × 3 (factor) = 12 (answer) For the algebraic-minded among you, you can also let x equal the factor, and work the problem out this way: 5x + 4x + 3x 12x = 36 x = 3 blue = (4)(3) = 12 Percents occur frequently in Data Interpretation questions but are also known to appear in Problem Solving and Quantitative Comparison questions as well. The basic concept behind percents is pretty simple: Percent means divide by 100. This is true whether you see the word percent or you see the percentage symbol, %. For example, 45% is the same as Here’s one way percent may be tested: 4 is what percent of 20? The first thing you have to know how to do is translate the question into an equation. It’s actually pretty straightforward as long as you see that “is” is the same as “equals,” and “what” is the same as “x.” So we can rewrite the problem as 4 equals x percent of 20, or: Since a percent is actually a number out of 100, this means: Now just work out the math: Therefore, 4 is 20% of 20. 
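The "is means equals, what means x" translation above amounts to solving part = (p/100) × whole for p. A small, illustrative helper (the function name is ours) makes the wording traps easier to see through:

```python
# "x is what percent of y" translates to x = (p/100) * y, so p = 100 * x / y.

def what_percent(part, of):
    return 100 * part / of

print(what_percent(4, of=20))  # 20.0  -> 4 is 20% of 20
print(what_percent(5, of=2))   # 250.0 -> the "what percent of 2 is 5?" question below
```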
Percent problems can get tricky, because some seem to be phrased as if the person who wrote them doesn’t speak English. The GRE test makers do this purposefully because they think that verbal tricks are a good way to test your math skills. And who knows—they may even be right. Here’s an example of the kind of linguistic trickery we’re talking about: What percent of 2 is 5? Because the 2 is the smaller number and because it appears first in the question, your first instinct may be to calculate what percent 2 is of 5. But as long as you remember that “is” means “equals” and “what” means “x” you’ll be able to correctly translate the word problem into math: So 5 is 250% of 2. You may also be asked to figure out a percentage based on a specific occurrence. For example, if there are 200 cars at a car dealership, and 40 of those are used cars, then we can divide 40 by 200 to find the percentage of used cars at the dealership: The general formula for this kind of calculation is: Percent of a specific occurrence = Converting Percents into Fractions or Decimals Converting percents into fractions or decimals is an important GRE skill that may come into play in a variety of situations. - To convert from a percent to a fraction, take the percentage number and place it as a numerator over the denominator 100. If you have 88 percent of something, then you can quickly convert it into the fraction . - To convert from a percent to a decimal, you must take a decimal point and insert it into the percent number two spaces from the right: 79% equals .79, while 350% equals 3.5.Percent Increase and Decrease Percent Increase and Decrease One of the most common ways the GRE tests percent is through the concept of percent increase and decrease. There are two main varieties: problems that give you one value and ask you to calculate another, and problems that give you two values and ask you to calculate the percent increase or decrease between them. Let’s have a look at both. One Value Given In this kind of problem, they give you a single number to start, throw some percentage increases or decreases at you, and then ask you to come up with a new number that reflects these changes. For example, if the price of a $10 shirt increases 10%, the new price is the original $10 plus 10% of the $10 original. If the price of a $10 shirt decreases 10%, the new price is the original $10 minus 10% of the $10 original. One of the classic blunders test takers make on this type of question is to forget to carry out the necessary addition or subtraction after figuring out the percent increase or decrease. Perhaps their joy or relief at accomplishing the first part distracts them from finishing the problem. In the problem above, since 10% of $10 is $1, some might be tempted to choose $1 as the final answer, when in fact the answer to the percent increase question is $11, and the answer to the percent decrease question is Try the following example on your own. Beware of the kind of distractor we’ve just discussed. ||A vintage bowling league shirt that cost $20 in 1990 cost 15% less in 1970. What was the price of the shirt in 1970? First find the price decrease (remember that 15% = .15): $20 × .15 = $3 Now, since the price of the shirt was less back in 1970, subtract $3 from the $20 1990 price to get the actual amount this classic would have set you back way back in 1970 (presumably before it achieved “vintage” status): $20 – $3 = $17 Seventeen bucks for a bowling shirt!? We can see that . . . 
If you finished only the first part of the question and looked at the choices, you might have seen $3 in choice A and forgotten to finish the problem. B is the choice that gets the point. Want a harder example? Sure you do! This one involves a double-percent maneuver, which should be handled by only the most experienced of percent mavens. Do not attempt this at home! Oh, wait . . . Do attempt this at home, or wherever you’re reading this book. The original price of a banana in a store is $2.00. During a sale, the store reduces the price by 25% and Joe buys the banana. Joe then raises the price of the banana 10% from the price at which he bought it and sells it to Sam. How much does Sam pay for the banana? This question asks you to determine the cumulative effect of two successive percent changes. The key to solving it is realizing that each percentage change is dependent on the last. You have to work out the effect of the first percentage change, come up with a value, and then use that value to determine the effect of the second We begin by finding 25% of the original price: Now subtract that $.50 from the original price: $2 – $.50 = $1.50 That’s Joe’s cost. Then increase $1.50 by 10%: Sam buys the banana for $1.50 + $.15 = $1.65. A total rip-off, but still 35 cents less than the original price. Some test takers, sensing a shortcut, are tempted to just combine the two percentage changes on double-percent problems. This is not a real shortcut. It’s more like a dark alley filled with cruel and nasty people who want you to do badly on the GRE. Here, if we reasoned that the first percentage change lowered the price 25%, and the second raised the price 10%, meaning that the total change was a reduction of 15%, then we’d get: Subtract that $.30 from the original price: $2 – $.30 = $1.70 = WRONG! We promise you that if you see a double-percent problem on the GRE, it will include this sort of wrong answer as a trap. Two Values Given In the other kind of percent increase/decrease problem, they give you both a first value and a second value, and then ask for the percent by which the value changed from one to the other. If the value goes up, that’s a percent increase problem. If it goes down, then it’s a percent decrease problem. Luckily, we have a handy formula for both: percent increase = percent decrease = To borrow some numbers from the banana example, Sam pays $1.65 for a banana that was originally priced at $2.00. The percent decrease in the banana’s price would look like this: percent decrease = So Sam comes out with a 17.5% discount from the original price, despite lining Joe’s pockets in the process. A basic question of this type would simply provide the two numbers for you to plug into the percent decrease formula. A more difficult question might start with the original banana question above, first requiring you to calculate Sam’s price of $1.65 and then asking you to calculate the percent decrease from the original price on top of that. If you find yourself in the deep end of the GRE’s question pool, that’s what a complicated question might look Common Fractions, Decimals, and Percents Some fractions, decimals, and percents appear frequently on the GRE. Being able to quickly convert these into each other will save time on the exam, so it pays to memorize the following table. (the little line above the 6 means that the 6 repeats 0.166 = .1666666666 . . .) 
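Successive percent changes multiply rather than add, which is exactly the double-percent trap described above. A short sketch of the banana example and of the percent-decrease formula (results rounded for display):

```python
# Each percentage change is applied to the value produced by the previous one.

original = 2.00
after_sale = original * (1 - 0.25)   # 25% off  -> 1.50
sam_pays = after_sale * (1 + 0.10)   # 10% markup on 1.50 -> 1.65

percent_decrease = 100 * (original - sam_pays) / original

print(round(sam_pays, 2))          # 1.65
print(round(percent_decrease, 1))  # 17.5, not the 15% a naive -25% + 10% would give
```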
An exponent is a shorthand way of saying, "Multiply this number by itself this number of times." In a^b, a is multiplied by itself b times. Here's a numerical example: 2^5 = 2 × 2 × 2 × 2 × 2. An exponent can also be referred to as a power: 2^5 is "two to the fifth power." Before jumping into the exponent nitty-gritty, learn these five terms:
- Base. The base refers to the 3 in 3^5. In other words, the base is the number multiplied by itself however many times specified by the exponent.
- Exponent. The exponent is the 5 in 3^5. The exponent tells how many times the base is to be multiplied by itself.
- Squared. Saying that a number is squared is a common code word to indicate that it has an exponent of 2. In the expression 6^2, 6 has been squared.
- Cubed. Saying that a number is cubed means it has an exponent of 3. In the expression 4^3, 4 has been cubed.
- Power. The term power is another way to talk about a number being raised to an exponent. A number raised to the third power has an exponent of 3. So 6 raised to the third power is 6^3.
It can be very helpful and a real time saver on the GRE if you can easily translate back and forth between a number and its exponential form. For instance, if you can easily see that 36 = 6^2, it can really come in handy when you're dealing with binomials, quadratic equations, and a number of other algebraic topics we'll cover later in this chapter. Below are some lists of common exponents. We'll start with the squares of the first ten integers:
1^2 = 1, 2^2 = 4, 3^2 = 9, 4^2 = 16, 5^2 = 25, 6^2 = 36, 7^2 = 49, 8^2 = 64, 9^2 = 81, 10^2 = 100
Here are the first five cubes:
1^3 = 1, 2^3 = 8, 3^3 = 27, 4^3 = 64, 5^3 = 125
Finally, the powers of 2 up to 2^10 are useful to know for various applications:
2^0 = 1, 2^1 = 2, 2^2 = 4, 2^3 = 8, 2^4 = 16, 2^5 = 32, 2^6 = 64, 2^7 = 128, 2^8 = 256, 2^9 = 512, 2^10 = 1,024
Adding and Subtracting Exponents
The rule for adding and subtracting values with exponents is pretty simple, and you can remember it as the inverse of the Nike slogan: Just Don't Do It. This doesn't mean that you won't see such addition and subtraction problems; it just means that you can't simplify them. For example, the expression 2^15 + 2^7 does not equal 2^22. The expression 2^15 + 2^7 is written as simply as possible, so don't make the mistake of trying to simplify it further. If the problem is simple enough, then work out each exponent to find its value, then add the two numbers. For example, to add 3^3 + 4^2, work out the exponents to get (3 × 3 × 3) + (4 × 4) = 27 + 16 = 43. However, if you're dealing with algebraic expressions that have the same base variable and exponents, then you can add or subtract them. For example, 3x^4 + 5x^4 = 8x^4. The base variables are both x, and the exponents are both 4, so we can add them. Just remember that expressions that have different bases or exponents cannot be added or subtracted.
Multiplying and Dividing Exponents with Equal Bases
Multiplying or dividing exponential numbers or terms that have the same base is so quick and easy it's like a little math oasis. When multiplying, just add the exponents together. This is known as the Product Rule: x^a × x^b = x^(a + b). To divide two same-base exponential numbers or terms, subtract the exponents. This is known as the Quotient Rule: x^a ÷ x^b = x^(a – b). Quick and easy, right?
Multiplying and Dividing Exponents with Unequal Bases
You want the bad news or the bad news? The same isn't true if you need to multiply or divide two exponential numbers that do not have the same base. When two exponents have different bases, you just have to do your work the old-fashioned way: Multiply the numbers out and multiply or divide the result accordingly. There is, however, one trick you should know.
Sometimes when the bases aren’t the same, it’s still possible to simplify an expression or equation if one base can be expressed in terms of the other. For 25 × 89 Even though 2 and 8 are different bases, 8 can be rewritten as a power of 2; namely, 8 = 23. This means that we can replace 8 with 23 in the original expression: 25 × (23)9 Since the base is the same for both values, we can simplify this further, but first we’re going to need another rule to deal with the This is called . . . Raising an Exponent to an Exponent This one may sound like it comes from the Office of Redundancy Office, but it doesn’t. To raise one exponent to another exponent (also called taking the power of a power), simply multiply the exponents. This is known as the Power Rule: Let’s use the Power Rule to simplify the expression that we were just working on: 25 × 23×9 = 25 × 227 Our “multiplication with equal bases rule” tells us to now add the exponents, which yields: 25 × 227 = 232 is a pretty huge number, and the GRE would never have you calculate out something this large. This means that you can leave it as 232, because that’s how it would appear in the answer choices. Multiply the exponents when raising one exponent to another, and add the exponents when multiplying two identical bases with exponents. The test makers expect lots of people to mix these operations up, and they’re usually not disappointed. Fractions Raised to an Exponent To raise a fraction to an exponent, raise both the numerator and denominator to that exponent: That’s it; nothing fancy. Negative Numbers Raised to an Exponent When you multiply a negative number by another negative number, you get a positive number, and when you multiply a negative number by a positive number, you get a negative number. Since exponents result in multiplication, a negative number raised to an exponent follows these - A negative number raised to an even exponent will be positive. For example, (–2)4 = 16. Why? Because (–2)4 means –2 × –2 × –2 × –2. When you multiply the first two –2s together, you get positive 4 because you’re multiplying two negative numbers. When you multiply the +4 by the next –2, you get –8, since you’re multiplying a positive number by a negative number. Finally, you multiply the –8 by the last –2 and get +16, since you’re once again multiplying two negative numbers. The negatives cancel themselves out and vanish. - A negative number raised to an odd exponent will be negative. To see why, just look at the example above, but stop the process at –23, which equals –8. It’s helpful to know a few special types of exponents for the GRE. Any base raised to the power of zero is equal to 1. Strange, 1230 = 1 0.87750 = 1 a million trillion gazillion0 = 1 Like we said: strange, but true. You should also know that 0 raised to any positive power is 0. For example: 01 = 0 073 = 0 Any base raised to the power of 1 is equal to itself: 21 = 2, –671 = –67, and x1 = x. This fact is important to know when you have to multiply or divide exponential terms with the same base: The number 1 raised to any power is 1: 12 = 1 14,000 = 1 Any number or term raised to a negative power is equal to the reciprocal of that base raised to the opposite power. Got that? Didn’t think so. An example will make it clearer: Here’s a more complicated example: Here’s an English translation of the rule: If you see a base raised to a negative exponent, put the base as the denominator under a numerator of 1 and then drop the negative from the exponent. From there, just simplify. 
Exponents can be fractions too. When a number or term is raised to a fractional power, it is called taking the root of that number or term. This expression can be converted into a more convenient form using the radical sign, √; anything under the radical is called the radicand. We've got a whole section devoted to roots and radicals coming right up. But first let's look at an example with real numbers: 64^(2/3) = (∛64)^2 = 4^2 = 16, because 4 × 4 × 4 = 64. Here we treated the 2 as an ordinary exponent and wrote the 3 outside the radical.
Roots and Radicals
The only roots that appear with any regularity on the GRE are square roots, designated by a fancier-looking long division symbol, like √ . Usually the test makers will ask you to simplify roots and radicals. As with exponents, though, you'll also need to know when such expressions can't be simplified. Square roots require you to find the number that, when multiplied by itself, equals the number under the radical sign. A few examples:
√25 = 5, because 5 × 5 = 25
√100 = 10, because 10 × 10 = 100
√1 = 1, because 1 × 1 = 1
Here's another way to think about square roots: When the GRE gives you a number under a square root sign, the root it expects is always going to be positive. For example, √25 is just 5, even though in real life it could be –5. If you take the square root of a variable, however, the answer could be positive or negative. For example, if you solve x^2 = 100 by taking the square root of both sides, x could be 10 or –10. Both values work because 10 × 10 = 100 and –10 × –10 = 100 (recall that a negative times a negative is a positive). Very rarely, you may see cube and higher roots on the GRE. These are similar to square roots, but the number of times the final answer must be multiplied by itself will be three or more. You'll always be able to determine the number of multiplications required from the little number outside the radical, as in this example: ∛8 = 2, because 2 × 2 × 2 = 8. Here the little 3 indicates that the correct answer must be multiplied by itself a total of three times to equal 8. A few more examples:
∛27 = 3, because 3 × 3 × 3 = 27
the fourth root of 625 = 5, because 5 × 5 × 5 × 5 = 625
the fourth root of 1 = 1, because 1 × 1 × 1 × 1 = 1
Roots can only be simplified when you're multiplying or dividing them. Equations that add or subtract roots cannot be simplified. That is, you can't add or subtract roots. You have to work out each root separately and then perform the operation. For example, to solve √9 + √4, do not add the 9 and 4 together under one radical; instead, work out each root and then add: √9 + √4 = 3 + 2 = 5. You can multiply or divide the numbers under the radical sign as long as the roots are of the same degree—that is, both square roots, both cube roots, etc. You cannot multiply, for example, a square root by a cube root. Here's the rule in general form: √a × √b = √(a × b), and √a ÷ √b = √(a ÷ b). To simplify multiplication or division of square roots, combine everything under a single radical sign. You can also use this rule in reverse. That is, a single number under a radical sign can be split into two numbers whose product is the original number. For example: √200 = √(100 × 2) = √100 × √2 = 10√2. The reason we chose to split 200 into 100 × 2 is because it's easy to take the square root of 100, since the result is an integer, 10. The goal in simplifying radicals is to get as much as possible out from under the radical sign. When splitting up square roots this way, try to think of the largest perfect square that divides evenly into the original number.
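Pulling the largest perfect square out from under the radical can be written as a small search. This is an illustrative sketch rather than a general symbolic simplifier; the function name is ours.

```python
# Factor the radicand into (largest perfect square) x (leftover),
# so that sqrt(n) = outside * sqrt(leftover).
import math

def simplify_sqrt(n):
    outside = 1
    for k in range(math.isqrt(n), 0, -1):   # try the largest square first
        if n % (k * k) == 0:
            outside, n = k, n // (k * k)
            break
    return outside, n                        # means outside * sqrt(n)

print(simplify_sqrt(200))  # (10, 2) -> 10 * sqrt(2), as in the example above
print(simplify_sqrt(72))   # (6, 2)  -> 6 * sqrt(2)
```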
Here’s another example: It’s important to remember that as you’ve seen earlier, you can’t add or subtract roots. You have to work out each root separately and then add (or subtract). For example, to solve , you cannot add 25 + 9 and put 34 under a radical sign. Instead, = 5 + 3 = 8. The absolute value of a number is the distance that number is from zero, and it’s indicated with vertical bars, like this: |8|. Absolute values are always positive or zero—never negative. So, the absolute value of a positive number is that number: |8| = 8. The absolute value of a negative number is the number without the negative sign: |–12| = 12. Here are some |5| = 5 | –4.234 | = 4.234 |0| = 0 It is also possible to have expressions within absolute value bars: 3 – 2 + |3 – 7| Think of absolute value bars as parentheses. Do what’s inside them first, then tackle the rest of the problem. You can’t just make that –7 positive because it’s sitting between absolute value bars. You have to work out the math first: 3 – 2 + | –4 | Now you can get rid of the bars and the negative sign from that 4. 3 – 2 + 4 = 5 You’ll see more of absolute value in the algebra section of this Math 101 chapter. And speaking of which, it’s time to head there now.
http://www.sparknotes.com/testprep/books/gre/chapter2section1.rhtml
Basic Physics of Nuclear Medicine
Atomic & Nuclear Structure
You will have encountered much of what we will cover here in your high school physics. We are going to review this material again below so as to set the context for subsequent chapters. This chapter will also provide you with an opportunity to check your understanding of this topic. The chapter covers atomic structure, nuclear structure, the classification of nuclei, binding energy and nuclear stability.
Atomic Structure
The atom is considered to be the basic building block of all matter. Simple atomic theory tells us that it consists of two components: a nucleus surrounded by an electron cloud. The situation can be considered as being similar in some respects to planets orbiting the sun. From an electrical point of view, the nucleus is said to be positively charged and the electrons negatively charged. From a size point of view, the radius of an atom is about 10^-10 m while the radius of a nucleus is about 10^-14 m, i.e. about ten thousand times smaller. The situation could be viewed as something like a cricket ball, representing the nucleus, in the middle of a sporting arena with the electrons orbiting somewhere around where the spectators would sit. This perspective tells us that the atom should be composed mainly of empty space. However, the situation is far more complex than this simple picture portrays in that we must also take into account the physical forces which bind the atom together.
The Nucleus
From a mass point of view the mass of a proton is roughly equal to the mass of a neutron and each of these is about 2,000 times the mass of an electron. So most of the mass of an atom is concentrated in the small region at its core. From an electrical point of view the proton is positively charged and the neutron has no charge. An atom all on its own (if that were possible to achieve!) is electrically neutral. The number of protons in the nucleus of such an atom must therefore equal the number of electrons orbiting that atom.
Classification of Nuclei
The term Atomic Number is defined in nuclear physics as the number of protons in a nucleus and is given the symbol Z. From your chemistry you will remember that this number also defines the position of an element in the Periodic Table of Elements. The term Mass Number is defined as the number of nucleons in a nucleus, that is the number of protons plus the number of neutrons, and is given the symbol A. Note that the symbols here are a bit odd, in that it would prevent some confusion if the Atomic Number were given the symbol A, and the Mass Number were given another symbol, such as M, but it's not a simple world! It is possible for nuclei of a given element to have the same number of protons but differing numbers of neutrons, that is to have the same Atomic Number but different Mass Numbers. Such nuclei are referred to as Isotopes. All elements have isotopes and the number ranges from three for hydrogen to over 30 for elements such as caesium and barium. Chemistry has a relatively simple way of classifying the different elements by the use of symbols such as H for hydrogen, He for helium and so on.
The classification scheme used to identify different isotopes is based on these chemical symbols, with the use of a superscript before the chemical symbol to denote the Mass Number along with a subscript before the chemical symbol to denote the Atomic Number. In other words an isotope is identified as ^A_Z X, where X is the chemical symbol of the element, A is the Mass Number (protons + neutrons) and Z is the Atomic Number (the number identifying the element on the periodic chart). Let us take the case of hydrogen as an example. It has three isotopes:
- the most common one consisting of a single proton orbited by one electron,
- a second isotope consisting of a nucleus containing a proton and a neutron orbited by one electron,
- a third whose nucleus consists of one proton and two neutrons, again orbited by a single electron.
A simple illustration of these isotopes is shown below. Remember though that this is a simplified illustration given what we noted earlier about the size of a nucleus compared with that of an atom. But the illustration is nevertheless useful for showing how isotopes are classified. The first isotope, commonly called hydrogen, has a Mass Number of 1 and an Atomic Number of 1 and hence is identified as ^1_1 H. The second isotope, commonly called deuterium, has a Mass Number of 2 and an Atomic Number of 1 and is identified as ^2_1 H. The third isotope, commonly called tritium, is identified as ^3_1 H. The same classification scheme is used for all isotopes. For example, you should now be able to figure out that the uranium isotope ^236_92 U contains 92 protons and 144 neutrons. A final point on classification is that we can also refer to individual isotopes by giving the name of the element followed by the Mass Number. For example, we can refer to deuterium as hydrogen-2 and we can refer to ^236_92 U as uranium-236. Before we leave this classification scheme let us further consider the difference between chemistry and nuclear physics. You will remember that the water molecule is made up of two hydrogen atoms bonded with an oxygen atom. Theoretically, if we were to combine atoms of hydrogen and oxygen in this manner many, many billions of times we could make a glass of water. We could also make our glass of water using deuterium instead of hydrogen. This second glass of water would theoretically be very similar from a chemical perspective. However, from a physics perspective our second glass would be heavier than the first, since each deuterium nucleus is about twice the mass of each hydrogen nucleus. Indeed water made in this fashion is called heavy water.

Atomic Mass Unit

The conventional unit of mass, the kilogram, is rather large for use in describing characteristics of nuclei. For this reason, a special unit called the Atomic Mass Unit (amu) is often used. This unit is defined as 1/12th of the mass of the stable, most commonly occurring isotope of carbon, i.e. carbon-12. In terms of grams, 1 amu is equal to 1.66 x 10⁻²⁴ g, that is, just over one million, million, million, millionth of a gram. The masses of the proton, mp, and neutron, mn, on this basis are approximately 1.00783 amu and 1.00866 amu respectively, while that of the electron is just 0.00055 amu.

Binding Energy

We are now in a position to consider the subject of nuclear stability. From what we have covered so far, we have seen that the nucleus is a tiny region in the centre of an atom and that it is composed of neutrally and positively charged particles. So, in a large nucleus such as that of uranium (Z = 92) we have a large number of positively charged protons concentrated into a tiny region in the centre of the atom.
An obvious question which arises is: with all these positive charges in close proximity, why doesn't the nucleus fly apart? How can a nucleus remain as an entity with such electrostatic repulsion between its components? Should the orbiting negatively-charged electrons not attract the protons away from the atom's centre?

Let us take the case of the helium-4 nucleus as an example. This nucleus contains two protons and two neutrons, so that in terms of amu we can figure out, from the proton and neutron masses we covered earlier, that the expected total mass of the nucleus is 2 x 1.00783 amu + 2 x 1.00866 amu. Therefore we would expect the total mass of the nucleus to be 4.03298 amu. The experimentally determined mass of a helium-4 nucleus is a bit less - just 4.00260 amu. In other words there is a difference of 0.03038 amu between what we might expect as the mass of this nucleus and what we actually measure. You might think of this difference as very small at just 0.75%. But remember that since the mass of one electron is 0.00055 amu the difference is actually equivalent to the mass of about 55 electrons. Therefore it is significant enough to wonder about. It is possible to consider that this missing mass is converted to energy which is used to hold the nucleus together; it is converted to a form of energy called Binding Energy. You could say, as with all relationships, energy must be expended in order to maintain them!

Just as the gram is cumbersome for expressing the mass of nuclei, the common unit of energy, the joule, is rather cumbersome when we consider the energy needed to bind a nucleus together. The unit used to express energies on the atomic scale is the electron volt, symbol: eV. One electron volt is defined as the amount of energy gained by an electron as it falls through a potential difference of one volt. This definition on its own is not of great help to us here and it is stated purely for the sake of completeness. So do not worry about it for the time being. Just appreciate that it is a unit representing a tiny amount of energy which is useful on the atomic scale. It is a bit too small in the case of binding energies, however, and the mega-electron volt (MeV) is often used. Albert Einstein introduced us to the equivalence of mass, m, and energy, E, at the atomic level using the equation E = mc², where c is the velocity of light. It is possible to show that 1 amu is equivalent to 931.48 MeV. Therefore, the mass difference we discussed earlier between the expected and measured mass of the helium-4 nucleus of 0.03038 amu is equivalent to about 28 MeV. This represents about 7 MeV for each of the four nucleons contained in the nucleus.

Nuclear Stability

In most stable isotopes the binding energy per nucleon lies between 7 and 9 MeV. There are two competing forces in nuclei: electrostatic repulsion between protons and the attractive nuclear force between nucleons (protons and neutrons). The electrostatic force is a long range force that becomes more difficult to compensate for as more protons are added to the nucleus. The nuclear force, which arises as the residual strong force (the strong force binds the quarks together within a nucleon), is a short range force that only operates on a very short distance scale (~1.5 fm) as it arises from a Yukawa potential. (Electromagnetism is a long range force as the force carrier, the photon, is massless; the nuclear force is a short range force as the force carrier, the pion, is massive.)
Therefore, larger nuclei tend to be less stable, and require a larger ratio of neutrons to protons (which contribute to the attractive strong force, but not the long-range electrostatic repulsion). For the low Z nuclides the ratio of neutrons to protons is approximately 1, though it gradually increases to about 1.5 for the higher Z nuclides as shown below on the Nuclear Stability Curve. In other words to combat the effect of the increase in electrostatic repulsion when the number of protons increases the number of neutrons must increase more rapidly to contribute sufficient energy to bind the nucleus together. As we noted earlier there are a number of isotopes for each element of the Periodic Table. It has been found that the most stable isotope for each element has a specific number of neutrons in its nucleus. Plotting a graph of the number of protons against the number of neutrons for these stable isotopes generates what is called the Nuclear Stability Curve: Note that the number of protons equals the number of neutrons for small nuclei. But notice also that the number of neutrons increases more rapidly than the number of protons as the size of the nucleus gets bigger so as to maintain the stability of the nucleus. In other words more neutrons need to be there to contribute to the binding energy used to counteract the electrostatic repulsion between the protons. There are about 2,450 known isotopes of the approximately one hundred elements in the Periodic Table. You can imagine the size of a table of isotopes relative to that of the Periodic Table! The unstable isotopes lie above or below the Nuclear Stability Curve. These unstable isotopes attempt to reach the stability curve by splitting into fragments, in a process called Fission, or by emitting particles and/or energy in the form of radiation. This latter process is called Radioactivity. It is useful to dwell for a few moments on the term radioactivity. For example what has nuclear stability to do with radio? From a historical perspective remember that when these radiations were discovered about 100 years ago we did not know exactly what we were dealing with. When people like Henri Becquerel and Marie Curie were working initially on these strange emanations from certain natural materials it was thought that the radiations were somehow related to another phenomenon which also was not well understood at the time - that of radio communication. It seems reasonable on this basis to appreciate that some people considered that the two phenomena were somehow related and hence that the materials which emitted radiation were termed radio-active. We know today that the two phenomena are not directly related but we nevertheless hold onto the term radioactivity for historical purposes. But it should be quite clear to you having reached this stage of this chapter that the term radioactive refers to the emission of particles and/or energy from unstable isotopes. Unstable isotopes for instance those that have too many protons to remain a stable entity are called radioactive isotopes - and called radioisotopes for short. The term radionuclide is also sometimes used. Finally about 300 of the 2,450-odd isotopes mentioned above are found in nature. The rest are man-made, that is they are produced artificially. These 2,150 or so artificial isotopes have been made during the last 100 years or so with most having been made since the second world war. 
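The trend described by the Nuclear Stability Curve can be summarised as a rough rule of thumb (a restatement of the figures quoted above, not a formula from the original text):

\[ \frac{N}{Z} \approx 1 \;\;\text{for light stable nuclei}, \qquad \frac{N}{Z} \approx 1.5 \;\;\text{for the heaviest stable nuclei} \]

where N is the number of neutrons and Z the number of protons; unstable isotopes lie off this curve and move towards it by fission or by radioactive decay, as described above.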
We will return to the production of radioisotopes in a later chapter of this wikibook and will proceed for the time being with a description of the types of radiation emitted by radioisotopes. Multiple Choice Questions Click here to access multiple choice questions on atomic and nuclear structure. External Links - Novel Periodic Table - an interactive table providing information about each element. - Marie and Pierre Curie and the Discovery of Polonium and Radium - an historical essay from The Nobel Foundation. - Natural Radioactivity - an overview of radioactivity in nature - includes sections on primordial radionuclides, cosmic radiation, human produced radionuclides, as well as natural radioactivity in soil, in the ocean, in the human body and in building materials - from the University of Michigan Student Chapter of the Health Physics Society. - The Particle Adventure - an interactive tour of the inner workings of the atom which explains the modern tools physicists use to probe nuclear and sub-nuclear matter and how physicists measure the results of their experiments using detectors - from the Particle Data Group at the Lawrence Berkeley National Lab, USA and mirrored at CERN, Geneva. - WebElements - an excellent web-based Periodic Table of the Elements which includes a vast array of data about each element - originally from Mark Winter at the University of Sheffield, England. Radioactive Decay We saw in the last chapter that radioactivity is a process used by unstable nuclei to achieve a more stable situation. It is said that such nuclei decay in an attempt to achieve stability. So, an alternative title for this chapter is Nuclear Decay Processes. We also saw in the previous chapter that we can use the Nuclear Stability Curve as a means of describing what is going on. So a second alternative title for this chapter is Methods of Getting onto the Nuclear Stability Curve. We are going to follow a descriptive or phenomenological approach to the topic here by describing in a fairly simple fashion what is known about each of the major decay mechanisms. Once again you may have already covered this material in high school physics. But bear with us because the treatment here will help us set the scene for subsequent chapters. Methods of Radioactive Decay Rather than considering what happens to individual nuclei it is perhaps easier to consider a hypothetical nucleus that can undergo many of the major forms of radioactive decay. This hypothetical nucleus is shown below: Firstly we can see two protons and two neutrons being emitted together in a process called alpha-decay. Secondly, we can see that a proton can release a positron in a process called beta-plus decay, and that a neutron can emit an electron in a process called beta-minus decay. We can also see an electron being captured by a proton. Thirdly we can see some energy (a photon) being emitted which results from a process called gamma-decay as well as an electron being attracted into the nucleus and being ejected again. Finally there is the rather catastrophic process where the nucleus cracks in half called spontaneous fission. We will now describe each of these decay processes in turn. Spontaneous Fission This is a very destructive process which occurs in some heavy nuclei which split into 2 or 3 fragments plus some neutrons. These fragments form new nuclei which are usually radioactive. Nuclear reactors exploit this phenomenon for the production of radioisotopes. Its also used for nuclear power generation and in nuclear weaponry. 
The process is not of great interest to us here and we will say no more about it for the time being. Alpha Decay In this decay process two protons and two neutrons leave the nucleus together in an assembly known as an alpha particle. Note that an alpha particle is really a helium-4 nucleus. So why not call it a helium nucleus? Why give it another name? The answer to this question lies in the history of the discovery of radioactivity. At the time when these radiations were discovered we didn't know what they really were. We found out that one type of these radiations had a double positive charge and it was not until sometime later that we learned that they were in fact nuclei of helium-4. In the initial period of their discovery this form of radiation was given the name alpha rays (and the other two were called beta and gamma rays), these terms being the first three letters of the Greek alphabet. We still call this form of radiation by the name alpha particle for historical purposes. Calling it by this name also contributes to the specific jargon of the field and leads outsiders to think that the subject is quite specialized! But notice that the radiation really consists of a helium-4 nucleus emitted from an unstable larger nucleus. There is nothing strange about helium since it is quite an abundant element on our planet. So why is this radiation dangerous to humans? The answer to this question lies with the energy with which they are emitted and the fact that they are quite massive and have a double positive charge. So when they interact with living matter they can cause substantial destruction to molecules which they encounter in their attempt to slow down and to attract two electrons to become a neutral helium atom. An example of this form of decay occurs in the uranium-238 nucleus. The equation which represents what occurs is: Here the uranium-238 nucleus emits a helium-4 nucleus (the alpha particle) and the parent nucleus becomes thorium-234. Note that the Mass Number of the parent nucleus has been reduced by 4 and the Atomic Number is reduced by 2 which is a characteristic of alpha decay for any nucleus in which it occurs. Beta Decay There are three common forms of beta decay: (a) Electron Emission - Certain nuclei which have an excess of neutrons may attempt to reach stability by converting a neutron into a proton with the emission of an electron. The electron is called a beta-minus particle - the minus indicating that the particle is negatively charged. - We can represent what occurs as follows: - where a neutron converts into a proton and an electron. Notice that the total electrical charge is the same on both sides of this equation. We say that the electric charge is conserved. - We can consider that the electron cannot exist inside the nucleus and therefore is ejected. - Once again there is nothing strange or mysterious about an electron. What is important though from a radiation safety point of view is the energy with which it is emitted and the chemical damage it can cause when it interacts with living matter. - An example of this type of decay occurs in the iodine-131 nucleus which decays into xenon-131 with the emission of an electron, that is - The electron is what is called a beta-minus particle. Note that the Mass Number in the above equation remains the same and that the Atomic Number increases by 1 which is characteristic of this type of decay. 
- You may be wondering how an electron can be produced inside a nucleus given that the simple atomic description we gave in the previous chapter indicated that the nucleus consists of protons and neutrons only. This is one of the limitations of the simple treatment presented so far and can be explained by considering that the two particles which we call protons and neutrons are themselves formed of smaller particles called quarks. We are not going to consider these in any way here other than to note that some combinations of different types of quark produce protons and another combination produces neutrons. The message here is to appreciate that a simple picture is the best way to start in an introductory text such as this and that the real situation is a lot more complex than what has been described. The same can be said about the treatment of beta-decay given above as we will see in subsequent chapters. (b) Positron Emission - When the number of protons in a nucleus is too large for the nucleus to be stable it may attempt to reach stability by converting a proton into a neutron with the emission of a positively-charged electron. - That is not a typographical error! An electron with a positive charge also called a positron is emitted. The positron is the beta-plus particle. - The history here is quite interesting. A brilliant Italian physicist, Enrico Fermi developed a theory of beta decay and his theory predicted that positively-charged as well as negatively-charged electrons could be emitted by unstable nuclei. These particles could be called pieces of anti-matter and they were subsequently discovered by experiment. They do not exist for very long as they quickly combine with a normal electron and the subsequent reaction called annihilation gives rise to the emission of two gamma rays. - Science fiction writers had a great time following the discovery of anti-matter and speculated along with many scientists that parts of our universe may contain negatively-charged protons forming nuclei which are orbited by positively-charged electrons. But this is taking us too far away from the topic at hand! - The reaction in our unstable nucleus which contains one too many protons can be represented as follows: - Notice, once again, that electric charge is conserved on each side of this equation. - An example of this type of decay occurs in sodium-22 which decays into neon-22 with the emission of a positron: - Note that the Mass Number remains the same and that the Atomic Number decreases by 1. (c) Electron Capture - In this third form of beta decay an inner orbiting electron is attracted into an unstable nucleus where it combines with a proton to form a neutron. The reaction can be represented as: - This process is also known as K-capture since the electron is often attracted from the K-shell of the atom. - How do we know that a process like this occurs given that no radiation is emitted? In other words the event occurs within the atom itself and no information about it leaves the atom. Or does it? The signature of this type of decay can be obtained from effects in the electron cloud surrounding the nucleus when the vacant site left in the K-shell is filled by an electron from an outer shell. The filling of the vacancy is associated with the emission of an X-ray from the electron cloud and it is this X-ray which provides a signature for this type of beta decay. - This form of decay can also be recognised by the emission of gamma-rays from the new nucleus. 
- An example of this type of radioactive decay occurs in iron-55 which decays into manganese-55 following the capture of an electron. The reaction can be represented as follows: - Note that the Mass Number once again is unchanged in this form of decay and that the Atomic Number is decreased by 1. Gamma Decay Gamma decay involves the emission of energy from an unstable nucleus in the form of electromagnetic radiation. You should remember from your high school physics that electromagnetic radiation is the biggest physical phenomenon we have so far discovered. The radiation can be characterised in terms of its frequency, its wavelength and its energy. Thinking about it in terms of the energy of the radiation we have very low energy electromagnetic radiation called radio waves, infra-red radiation at a slightly higher energy, visible light at a higher energy still, then ultra-violet radiation and the higher energy forms of this radiation are called X-rays and gamma-rays. You should also remember that these radiations form what is called the Electromagnetic Spectrum. Before proceeding it is useful to pause for a moment to consider the difference between X-rays and gamma-rays. These two forms of radiation are high energy electromagnetic rays and are therefore virtually the same. The difference between them is not what they consist of but where they come from. In general we can say that if the radiation emerges from a nucleus it is called a gamma-ray and if it emerges from outside the nucleus from the electron cloud for example, it is called an X-ray. One final point is of relevance before we consider the different forms of gamma-decay and that is what such a high energy ray really is. It has been found in experiments that gamma-rays (and X-rays for that matter!) sometimes manifest themselves as waves and other times as particles. This wave-particle duality can be explained using the equivalence of mass and energy at the atomic level. When we describe a gamma ray as a wave it has been found useful to use terms such as frequency and wavelength just like any other wave. In addition when we describe a gamma ray as a particle we use terms such as mass and electric charge. Furthermore the term electromagnetic photon is used for these particles. The interesting feature about these photons however is that they have neither mass nor charge! There are two common forms of gamma decay: (a) Isomeric Transition - A nucleus in an excited state may reach its ground or unexcited state by the emission of a gamma-ray. - An example of this type of decay is that of technetium-99m - which by the way is the most common radioisotope used for diagnostic purposes today in medicine. The reaction can be expressed as: - Here a nucleus of technetium-99 is in an excited state, that is, it has excess energy. The excited state in this case is called a metastable state and the nucleus is therefore called technetium-99m (m for metastable). The excited nucleus looses its excess energy by emitting a gamma-ray to become technetium-99. (b) Internal Conversion - Here the excess energy of an excited nucleus is given to an atomic electron, e.g. a K-shell electron. Decay Schemes Decay schemes are widely used to give a visual representation of radioactive decay. A scheme for a relatively straight-forward decay is shown below: This scheme is for hydrogen-3 which decays to helium-3 with a half-life of 12.3 years through the emission of a beta-minus particle with an energy of 0.0057 MeV. 
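Before moving on to a more complicated scheme, it is worth writing out the decay equations referred to in the sections above, since the displayed equations did not survive the conversion of this page. The following is a reconstruction in LaTeX, consistent with the isotopes and mass/atomic numbers quoted in the text; as in the original treatment, the neutrinos emitted in beta decay are omitted here and are discussed in a later chapter:

\begin{align*}
\text{Alpha decay:}\quad & {}^{238}_{92}\mathrm{U} \rightarrow {}^{234}_{90}\mathrm{Th} + {}^{4}_{2}\mathrm{He} \\
\text{Beta-minus decay:}\quad & \mathrm{n} \rightarrow \mathrm{p} + \mathrm{e}^{-}, \qquad {}^{131}_{53}\mathrm{I} \rightarrow {}^{131}_{54}\mathrm{Xe} + \mathrm{e}^{-}, \qquad {}^{3}_{1}\mathrm{H} \rightarrow {}^{3}_{2}\mathrm{He} + \mathrm{e}^{-} \\
\text{Beta-plus decay:}\quad & \mathrm{p} \rightarrow \mathrm{n} + \mathrm{e}^{+}, \qquad {}^{22}_{11}\mathrm{Na} \rightarrow {}^{22}_{10}\mathrm{Ne} + \mathrm{e}^{+} \\
\text{Electron capture:}\quad & \mathrm{p} + \mathrm{e}^{-} \rightarrow \mathrm{n}, \qquad {}^{55}_{26}\mathrm{Fe} + \mathrm{e}^{-} \rightarrow {}^{55}_{25}\mathrm{Mn} \\
\text{Isomeric transition:}\quad & {}^{99\mathrm{m}}_{43}\mathrm{Tc} \rightarrow {}^{99}_{43}\mathrm{Tc} + \gamma
\end{align*}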
A scheme for a more complicated decay is that of caesium-137. This isotope can decay through two beta-minus processes. In one, which occurs in 5% of disintegrations, a beta-minus particle is emitted with an energy of 1.17 MeV to produce barium-137. In the second, which occurs more frequently (in the remaining 95% of disintegrations), a beta-minus particle of energy 0.51 MeV is emitted to produce barium-137m - in other words a barium-137 nucleus in a metastable state. The barium-137m then decays via isomeric transition with the emission of a gamma-ray of energy 0.662 MeV. The general method used for decay schemes is illustrated in the diagram on the right. The energy is plotted on the vertical axis and the atomic number on the horizontal axis - although these axes are rarely displayed in actual schemes. The isotope from which the scheme originates is displayed at the top - X in the case above. This isotope is referred to as the parent. The parent loses energy when it decays and hence the products of the decay, referred to as daughters, are plotted at a lower energy level. The diagram illustrates the situation for common forms of radioactive decay. Alpha-decay is illustrated on the left, where the mass number is reduced by 4 and the atomic number is reduced by 2 to produce daughter A. To its right the scheme for beta-plus decay is shown to produce daughter B. The situation for beta-minus decay followed by gamma-decay is shown on the right side of the diagram, where daughters C and D respectively are produced.

Multiple Choice Questions

Click here to access multiple choice questions on radioactive decay.

External Links

- Basics about Radiation - overview of the different types of ionising radiation from the Radiation Effects Research Foundation - a cooperative Japan-United States Research Organization which conducts research for peaceful purposes.
- Radiation and Life - from the World Nuclear Association website.
- Radiation and Radioactivity - a self-paced lesson developed by the University of Michigan's Student Chapter of the Health Physics Society, with sections on radiation, radioactivity, the atom, alpha radiation, beta radiation and gamma radiation.

The Radioactive Decay Law

We covered radioactive decay from a phenomenological perspective in the last chapter. In this chapter we consider the topic from a more general analytical perspective. The reason for doing this is so that we can develop a form of thinking which will help us to understand what is going on in a quantitative, mathematical sense. We will be introduced to concepts such as the Decay Constant and the Half Life as well as units used for the measurement of radioactivity. You will also have a chance to develop your understanding by being brought through three questions on this subject.

The usual starting point in most forms of analysis in physics is to make some assumptions which simplify the situation. By simplifying the situation we can dispose of irrelevant effects which tend to complicate matters, but in doing so we sometimes make the situation so simple that it becomes a bit too abstract and apparently hard to understand. For this reason we will try here to relate the subject of radioactive decay to a more common situation which we will use as an analogy, and hopefully we will be able to overcome the abstract feature of the subject matter. The analogy we will use here is that of making popcorn. So think about putting some oil in a pot, adding the corn, heating the pot on the cooker and watching what happens.
You might also like to try this out while considering the situation! For our radioactive decay situation we first of all consider that we have a sample containing a large number of radioactive nuclei all of the same kind. This is our unpopped corn in the pot for example. Secondly we assume that all of the radioactive nuclei decay by the same process be it alpha, beta or gamma-decay. In other words our unpopped corn goes pop at some stage during the heating process. Thirdly take a few moments to ponder on the fact that we can only really consider what is going on from a statistical perspective. If you look at an individual piece of corn, can you figure out when it is going to pop? Not really. You can however figure out that a large number of them will have popped after a period of time. But its rather more difficult to figure out the situation for an individual piece of corn. So instead of dealing with individual entities we consider what happens on a larger scale and this is where statistics comes in. We can say that the radioactive decay is a statistical one-shot process, that is when a nucleus has decayed it cannot repeat the process again. In other words when a piece of corn has popped it cannot repeat the process. Simple! In addition as long as a radioactive nucleus has not decayed the probability for it doing so in the next moment remains the same. In other words if a piece of corn has not popped at a certain time the chance of it popping in the next second is the same as in the previous second. The bets are even! Let us not push this popcorn analogy too far though in that we know that we can control the rate of popping by the heat we apply to the pot for example. However as far as our radioactive nuclei are concerned there is nothing we can do to control what is going on. The rate at which nuclei go pop (or decay, in other words) cannot be influenced by heating up the sample. Nor by cooling it for that matter or by putting it under greater pressures, by changing the gravitational environment by taking it out into space for instance, or by changing any other aspect of its physical environment. The only thing that determines whether an individual nucleus will decay seems to be the nucleus itself. But on the average we can say that it will decay at some stage. The Radioactive Decay Law Let us now use some symbols to reduce the amount of writing we have to do to describe what is going on and to avail ourselves of some mathematical techniques to simplify the situation even further than we have been able to do so far. Let us say that in the sample of radioactive material there are N nuclei which have not decayed at a certain time, t. So what happens in the next brief period of time? Some nuclei will decay for sure. But how many? On the basis of our reasoning above we can say that the number which will decay will depend on overall number of nuclei, N, and also on the length of the brief period of time. In other words the more nuclei there are the more will decay and the longer the time period the more nuclei will decay. Let us denote the number which will have decayed as dN and the small time interval as dt. So we have reasoned that the number of radioactive nuclei which will decay during the time interval from t to t+dt must be proportional to N and to dt. In symbols therefore: the minus sign indicating that N is decreasing. Turning the proportionality in this equation into an equality we can write: where the constant of proportionality, λ, is called the Decay Constant. 
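The two expressions referred to in the paragraph above can be reconstructed as follows (the displayed equations did not survive extraction):

\[ dN \propto -N\,dt \qquad\Longrightarrow\qquad dN = -\lambda\,N\,dt \]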
Dividing across by N we can rewrite this equation as: So this equation describes the situation for any brief time interval, dt. To find out what happens for all periods of time we simply add up what happens in each brief time interval. In other words we integrate the above equation. Expressing this more formally we can say that for the period of time from t = 0 to any later time t, the number of radioactive nuclei will decrease from N0 to Nt, so that: This final expression is known as the Radioactive Decay Law. It tells us that the number of radioactive nuclei will decrease in an exponential fashion with time with the rate of decrease being controlled by the Decay Constant. Before looking at this expression in further detail let us review the mathematics which we used above. First of all we used integral calculus to figure out what was happening over a period of time by integrating what we knew would occur in a brief interval of time. Secondly we used a calculus relationship that the where ln x represents the natural logarithm of x. And thirdly we used the definition of logarithms that when Now, to return to the Radioactive Decay Law. The Law tells us that the number of radioactive nuclei will decrease with time in an exponential fashion with the rate of decrease being controlled by the Decay Constant. The Law is shown in graphical form in the figure below: The graph plots the number of radioactive nuclei at any time, Nt, against time, t. We can see that the number of radioactive nuclei decreases from N0 that is the number at t = 0 in a rapid fashion initially and then more slowly in the classic exponential manner. The influence of the Decay Constant can be seen in the following figure: All three curves here are exponential in nature, only the Decay Constant is different. Notice that when the Decay Constant has a low value the curve decreases relatively slowly and when the Decay Constant is large the curve decreases very quickly. The Decay Constant is characteristic of individual radionuclides. Some like uranium-238 have a small value and the material therefore decays quite slowly over a long period of time. Other nuclei such as technetium-99m have a relatively large Decay Constant and they decay far more quickly. It is also possible to consider the Radioactive Decay Law from another perspective by plotting the logarithm of Nt against time. In other words from our analysis above by plotting the expression: in the form Notice that this expression is simply an equation of the form y = mx + c where m = -l and c = ln N0. As a result it is the equation of a straight line of slope -l as shown in the following figure. Such a plot is sometimes useful when we wish to consider a situation without the complication of the direct exponential behaviour. Most of us have not been taught to think instinctively in terms of logarithmic or exponential terms even though many natural phenomena display exponential behaviours. Most of the forms of thinking which we have been taught in school are based on linear changes and as a result it is rather difficult for us to grasp the Radioactive Decay Law intuitively. For this reason an indicator is usually derived from the law which helps us think more clearly about what is going on. This indicator is called the Half Life and it expresses the length of time it takes for the radioactivity of a radioisotope to decrease by a factor of two. 
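A reconstruction of the displayed equations referenced in the passage above, following the steps described there:

\begin{align*}
\frac{dN}{N} &= -\lambda\,dt \\
\int_{N_0}^{N_t}\frac{dN}{N} &= -\lambda\int_{0}^{t}dt
\;\;\Longrightarrow\;\; \ln\frac{N_t}{N_0} = -\lambda t
\;\;\Longrightarrow\;\; N_t = N_0\,e^{-\lambda t} \\
\ln N_t &= \ln N_0 - \lambda t
\end{align*}

The last line is the logarithmic form plotted in the figure mentioned above: a straight line of slope -λ and intercept ln N₀.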
From a graphical point of view we can say that when: the time taken is the Half Life: Note that the half-life does not express how long a material will remain radioactive but simply the length of time for its radioactivity to halve. Examples of the half lives of some radioisotopes are given in the following table. Notice that some of these have a relatively short half life. These tend to be the ones used for medical diagnostic purposes because they do not remain radioactive for very long following administration to a patient and hence result in a relatively low radiation dose. |Radioisotope||Half Life (approx.)| |238U||4.51 x 109 years| But they do present a logistical problem when we wish to use them when there may not be a radioisotope production facility nearby. For example suppose we wish to use 99mTc for a patient study and the nearest nuclear facility for making this isotope is 5,000 km away. The production facility could be in Sydney and the patient could be in Perth for instance. After making the isotope at the nuclear plant it would be decaying with a half life of 6 hours. So we put the material on a truck and drive it to Sydney airport. The isotope would be decaying as the truck sits in Sydney traffic then decaying still more as it waits for a plane to take it to Perth. Then decaying more as it is flown across to Perth and so on. By the time it gets to our patient it will have substantially reduced in radioactivity possibly to the point of being useless for the patient's investigation. And what about the problem if we needed to use 81mKr instead of 99mTc for our patient? You will see in another chapter of this book that logistical challenges such as this have given rise to quite innovative solutions. More about that later! You can appreciate from the table above that other isotopes have a very long half lives. For example 226Ra has a half life of over 1,500 years. This isotope has been used in the past for therapeutic applications in medicine. Think about the logistical problems here. They obviously do not relate to transporting the material from the point of production to the point of use. But they relate to how the material is kept following its arrival at the point of use. We must have a storage facility so that the material can be kept safely for a long period of time. But for how long? A general rule of thumb for the quantities of radioactivity used in medicine is that the radioactivity will remain significant for about 10 half lives. So we would have to have a safe environment for storage of the 226Ra for about 16,000 years! This storage facility would have to be secure from many unforeseeable events such as earthquakes, bombing etc. and be kept in a manner which our children's, children's children can understand. A very serious undertaking indeed! Relationship between the Decay Constant and the Half Life On the basis of the above you should be able to appreciate that there is a relationship between the Decay Constant and the Half Life. For example when the Decay Constant is small the Half Life should be long and correspondingly when the Decay Constant is large the Half Life should be short. But what exactly is the nature of this relationship? We can easily answer this question by using the definition of Half Life and applying it to the Radioactive Decay Law. 
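That substitution, whose displayed equations in the next paragraph did not survive extraction, works out as follows:

\[ \frac{N_0}{2} = N_0\,e^{-\lambda t_{1/2}} \;\;\Longrightarrow\;\; \lambda\,t_{1/2} = \ln 2 \approx 0.693 \;\;\Longrightarrow\;\; t_{1/2} = \frac{0.693}{\lambda} \]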
Once again the law tells us that at any time, t: and the definition of Half Life tells us that: We can therefore re-write the Radioactive Decay Law by substituting for Nt and t as follows: These last two equations express the relationship between the Decay Constant and the Half Life. They are very useful as you will see when solving numerical questions relating to radioactivity and usually form the first step in solving a numerical problem. Units of Radioactivity The SI or metric unit of radioactivity is named after Henri Becquerel, in honour of his discovery of radioactivity, and is called the becquerel with the symbol Bq. The becquerel is defined as the quantity of radioactive substance that gives rise to a decay rate of 1 decay per second. In medical diagnostic work 1 Bq is a rather small amount of radioactivity. Indeed it is easy to remember its definition if you think of it as a buggerall amount of radioactivity. For this reason the kilobecquerel (kBq) and megabecquerel (MBq) are more frequently used. The traditional unit of radioactivity is named after Marie Curie and is called the curie, with the symbol Ci. The curie is defined as the amount of radioactive substance which gives rise to a decay rate of 3.7 x 1010 decays per second. In other words 37 thousand, million decays per second which as you might appreciate is a substantial amount of radioactivity. For medical diagnostic work the millicurie (mCi) and the microcurie (µCi) are therefore more frequently used. Why two units? It in essence like all other units of measurement depends on what part of the world you are in. For example the kilometer is widely used in Europe and Australia as a unit of distance and the mile is used in the USA. So if you are reading an American textbook you are likely to find the curie used as the unit of radioactivity, if you are reading an Australian book it will most likely refer to becquerels and both units might be used if you are reading a European book. You will therefore find it necessary to know and understand both units. Multiple Choice Questions Click here to access an MCQ on the Radioactive Decay Law. Three questions are given below to help you develop your understanding of the material presented in this chapter. The first one is relatively straight-forward and will exercise your application of the Radioactive Decay Law as well as your understanding of the concept of Half Life. The second question is a lot more challenging and will help you relate the Radioactive Decay Law to the number of radioactive nuclei which are decaying in a sample of radioactive material. The third question will help you understand the approach used in the second question by asking a similar question from a slightly different perspective. (a) The half-life of 99mTc is 6 hours. After how much time will 1/16th of the radioisotope remain? (b) Verify your answer by another means. - (a) Starting with the relationship we established earlier between the Decay Constant and the Half Life we can calculate the Decay Constant as follows: - Now applying the Radioactive Decay Law, - we can re-write it in the form: - The question tells us that N0 has reduced to 1/16th of its value, that is: - which we need to solve for t. One way of doing this is as follows: - So it will take 24 hours until 1/16th of the radioactivity remains. - (b) A way in which this answer can be verified is by using the definition of Half Life. We are told that the Half Life of 99mTc is 6 hours. Therefore after six hours half of the radioactivity remains. 
- Therefore after 12 hours a quarter remains; after 18 hours an eighth remains and after 24 hours one sixteenth remains. And we arrive at the same answer as in part (a). So we must be right! - Note that this second approach is useful if we are dealing with relatively simple situations where the radioactivity is halved, quartered and so on. But supposing the question asked how long would it take for the radioactivity to decrease to a tenth of its initial value. Deduction from the definition of half life is rather more difficult in this case and the mathematical approach used for part (a) above will yield the answer more readily. Find the radioactivity of a 1 g sample of 226Ra given that t1/2: 1620 years and Avogadro's Number: 6.023 x 1023. - We can start the answer like we did with Question 1(a) by calculating the Decay Constant from the Half Life using the following equation: - Note that the length of a year used in converting from 'per year' to 'per second' above is 365.25 days to account for leap years. In addition the reason for converting to units of 'per second' is because the unit of radioactivity is expressed as the number of nuclei decaying per second. - Secondly we can calculate that 1 g of 226Ra contains: - Thirdly we need to express the Radioactive Decay Law in terms of the number of nuclei decaying per unit time. We can do this by differentiating the equation as follows: - The reason for expressing the result above in absolute terms is to remove the minus sign in that we already know that the number is decreasing. - We can now enter the data we derived above for λ and N: - So the radioactivity of our 1 g sample of radium-226 is approximately 1 Ci. - This is not a surprising answer since the definition of the curie was originally conceived as the radioactivity of 1 g of radium-226! What is the minimum mass of 99mTc that can have a radioactivity of 1 MBq? Assume the half-life is 6 hours and that Avogadro's Number is 6.023 x 1023. - Starting again with the relationship between the Decay Constant and the Half Life: - Secondly the question tells us that the radioactivity is 1 MBq. Therefore since 1 MBq = 1 x 106 decays per second, - Finally the mass of these nuclei can be calculated as follows: - In other words a mass of just over five picograms of 99mTc can emit one million gamma-rays per second. The result reinforces an important point that you will learn about radiation protection which is that you should treat radioactive materials just like you would handle pathogenic bacteria! Units of Radiation Measurement A Typical Radiation Situation A typical radiation set-up is shown in the figure below. Firstly there is a source of radiation, secondly a radiation beam and thirdly some material which absorbs the radiation. So the quantities which can be measured are associated with the source, the radiation beam and the absorber. This type of environment could be one where the radiation from the source is used to irradiate a patient (that is the absorber) for diagnostic purposes where we would place a device behind the patient for producing an image or for therapeutic purposes where the radiation is intended to cause damage to a specific region of a patient. It is also a situation where we as an absorber may be working with a source of radiation. The Radiation Source When the radiation source is a radioactive one the quantity that is typically measured is the radioactivity of the source. 
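Since the radioactivity of the source is the quantity of interest here, the arithmetic of the three worked questions in the previous chapter can be gathered into a short script. This is a hypothetical Python sketch, not part of the original text; it simply applies the A = λN relation with the half-lives and Avogadro's number quoted in the questions:

import math

AVOGADRO = 6.023e23                      # value used in the text
SECONDS_PER_YEAR = 365.25 * 24 * 3600    # 365.25 days to allow for leap years

def decay_constant(half_life_seconds):
    """Decay constant, lambda = ln(2) / half-life, in per-second units."""
    return math.log(2) / half_life_seconds

# Question 1: time for Tc-99m (half-life 6 hours) to fall to 1/16th of its activity
lam_tc = decay_constant(6 * 3600)
t_sixteenth = math.log(16) / lam_tc
print(f"Q1: {t_sixteenth / 3600:.0f} hours")

# Question 2: radioactivity of 1 g of Ra-226 (half-life 1620 years, mass number 226)
lam_ra = decay_constant(1620 * SECONDS_PER_YEAR)
n_ra = (1.0 / 226) * AVOGADRO                    # nuclei in 1 g
activity_ra = lam_ra * n_ra                      # A = lambda * N, in Bq
print(f"Q2: {activity_ra:.2e} Bq = {activity_ra / 3.7e10:.2f} Ci")

# Question 3: minimum mass of Tc-99m (mass number 99) with an activity of 1 MBq
n_tc = 1e6 / lam_tc                              # N = A / lambda
mass_tc = n_tc / AVOGADRO * 99                   # grams
print(f"Q3: {mass_tc:.1e} g (~{mass_tc * 1e12:.1f} picograms)")

Running it gives roughly 24 hours, 3.6 x 10¹⁰ Bq (about 1 Ci) and about 5 picograms, matching the answers worked out above.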
We saw in the previous chapter that the units used to express radioactivity are the becquerel (SI unit) and the curie (traditional unit). The Radiation Beam The characteristic of a radiation beam that is typically measured is called the Radiation Exposure. This quantity expresses how much ionisation the beam causes in the air through which it travels. We will see in the following chapter that one of the major things that happens when radiation encounters matter is that ions are formed - air being the form of matter it encounters in this case. So the radiation exposure produced by a radiation beam is expressed in terms of the amount of ionisation which occurs in air. A straight-forward way of measuring such ionisation is to determine the amount of electric charge which is produced. You will remember from your high school physics that the SI unit of electric charge is the coulomb. The SI unit of radiation exposure is the coulomb per kilogram - and is given the symbol C kg-1. It is defined as the quantity of X- or gamma-rays such that the associated electrons emitted per kilogram of air at standard temperature and pressure (STP) produce ions carrying 1 coulomb of electric charge. The traditional unit of radiation exposure is the roentgen, named in honour of Wilhelm Roentgen (who discovered X-rays) and is given the symbol R. The roentgen is defined as the quantity of X- or gamma-rays such that the associated electrons emitted per kilogram of air at STP produce ions carrying 2.58 x 10-4 coulombs of electric charge. So 1 R is a small exposure relative to 1 C kg-1 - in fact it is 3,876 times smaller. Note that this unit is confined to radiation beams consisting of X-rays or gamma-rays. Often it is not simply the exposure that is of interest but the exposure rate, that is the exposure per unit time. The units which tend to be used in this case are the C kg-1 s-1 and the R hr-1. The Absorber Energy is deposited in the absorber when radiation interacts with it. It is usually quite a small amount of energy but energy nonetheless. The quantity that is measured is called the Absorbed Dose and it is of relevance to all types of radiation be they X- or gamma-rays, alpha- or beta-particles. The SI unit of absorbed dose is called the gray, named after a famous radiobiologist, LH Gray, and is given the symbol Gy. The gray is defined as the absorption of 1 joule of radiation energy per kilogram of material. So when 1 joule of radiation energy is absorbed by a kilogram of the absorber material we say that the absorbed dose is 1 Gy. The traditional unit of absorbed dose is called the rad, which supposedly stands for Radiation Absorbed Dose. It is defined as the absorption of 10-2 joules of radiation energy per kilogram of material. As you can figure out 1 Gy is equal to 100 rad. There are other quantities derived from the gray and the rad which express the biological effects of such absorbed radiation energy when the absorber is living matter - human tissue for example. These quantities include the Equivalent Dose, H, and the Effective Dose, E. The Equivalent Dose is based on estimates of the ionization capability of the different types of radiation which are called Radiation Weighting Factors, wR, such that where D is the absorbed dose. The Effective Dose includes wR as well as estimates of the sensitivity of different tissues called Tissue Weighting Factors, wT, such that where the summation, Σ, is over all the tissue types involved. 
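The two relations referred to above, whose displayed forms did not survive extraction, can be written out as:

\[ H = w_R\,D, \qquad E = \sum_T w_T\,H_T \]

where D is the absorbed dose, H the equivalent dose and the sum runs over the exposed tissue types T. Note also, from the definitions above, that 1 Gy = 1 J kg⁻¹ = 100 rad.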
Both the Equivalent Dose and the Effective Dose are measured in derived SI units called sieverts (Sv). Let us pause here for a bit to ponder on the use of the term dose. It usually has a medical connotation in that we can say that someone had a dose of the 'flu, or that the doctor prescribed a certain dose of a drug. What has it to do with the deposition of energy by a beam of radiation in an absorber? It could have something to do with the initial applications of radiation in the early part of the 20th century when it was used to treat numerous diseases. As a result we can speculate that the term has stayed in the vernacular of the field. It would be much easier to use a term like absorbed radiation energy since we are talking about the deposition of energy in an absorber. But this might make the subject just a little too simple! Specific Gamma Ray Constant A final quantity is worth mentioning with regard to radiation units. This is the Specific Gamma-Ray Constant for a radioisotope. This quantity is an amalgam of the quantities we have already covered and expresses the exposure rate produced by the gamma-rays emitted from a radioisotope. It is quite a useful quantity from a practical viewpoint when we are dealing with a radioactive source which emits gamma-rays. Supposing you are using a gamma-emitting radioactive source (for example 99mTc or 137Cs) and you will be standing at a certain distance from this source while you are working. You most likely will be interested in the exposure rate produced by the source from a radiation safety point of view. This is where the Specific Gamma-Ray Constant comes in. It is defined as the exposure rate per unit activity at a certain distance from a source. The SI unit is therefore the and the traditional unit is the These units of measurement are quite cumbersome and a bit of a mouthful. It might have been better if they were named after some famous scientist so that we could call the SI unit 1 smith and the traditional unit 1 jones for example. But again things are not that simple! The Inverse Square Law Before we finish this chapter we are going to consider what happens as we move our absorber away from the radiation source. In other words we are going to think about the influence of distance on the intensity of the radiation beam. You will find that a useful result emerges from this that has a very important impact on radiation safety. The radiation produced in a radioactive source is emitted in all directions. We can consider that spheres of equal radiation intensity exist around the source with the number of photons/particles spreading out as we move away from the source. Consider an area on the surface of one of these spheres and assume that there are a certain number of photons/particles passing though it. If we now consider a sphere at a greater distance from the source the same number of photons/particles will now be spread out over a bigger area. Following this line of thought it is easy to appreciate that the radiation intensity, I will decrease with the square of the distance, r from the source, i.e. This effect is known as the Inverse Square Law. As a result if we double the distance from a source, we reduce the intensity by a factor of two squared, that is 4. If we triple the distance the intensity is reduced by a factor of 9, that is three squared, and so on. This is a very useful piece of information if you are working with a source of radiation and are interested in minimising the dose of radiation you will receive. 
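The Inverse Square Law described above can be stated compactly (a restatement of the relation given in the text):

\[ I \propto \frac{1}{r^2} \qquad\Longrightarrow\qquad \frac{I_2}{I_1} = \left(\frac{r_1}{r_2}\right)^2 \]

so that, for example, moving from 1 m to 2 m from a source reduces the intensity to one quarter, and moving to 3 m reduces it to one ninth, as noted above.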
External Links - Radiation and Risk - covers the effect of radiation, how risks are determined, comparison of radiation with other risks and radiation doses. - Radiation Effects Overview - results of studies of victims of nuclear bombs including early effects on survivors, effects on the in utero exposed, and late effects on the survivors - from the Radiation Effects Research Foundation, a cooperative Japan-United States Research Organization. - The Radiation and Health Physics Home Page - all you ever wanted to know about radiation but were afraid to ask....with hundreds of WWW links - from the Student Chapter of the Health Physics Society, University of Michigan containing sections on general information, regulatory Information, professional organizations and societies, radiation specialties, health physics research and education. - What You Need to Know about Radiation - to protect yourself to protect your family to make reasonable social and political choices - covers sources of radiation and radiation protection - by Lauriston S. Taylor. Interaction of Radiation with Matter We have focussed in previous chapters on the source of radiation and the types of radiation. We are now in a position to consider what happens when this radiation interacts with matter. Our main reason for doing this is to find out what happens to the radiation as it passes through matter and also to set ourselves up for considering how it interacts with living tissue and how to detect radiation. Since all radiation detectors are made from some form of matter it is useful to first of all know how radiation interacts so that we can exploit the effects in the design of such detectors in subsequent chapters of this wikibook. Before we do this let us first remind ourselves of the physical characteristics of the major types of radiation. We have covered this information in some detail earlier and it is summarised in the table below for convenience. We will now consider the passage of each type of radiation through matter with most attention given to gamma-rays because they are the most common type used in nuclear medicine. One of the main effects that you will notice irrespective of the type of radiation is that ions are produced when radiation interacts with matter. It is for this reason that it is called ionizing radiation. Before we start though you might find an analogy useful to help you with your thinking. This analogy works on the basis of thinking about matter as an enormous mass of atoms (that is nuclei with orbiting electrons) and that the radiation is a particle/photon passing through this type of environment. So the analogy to think about is a spaceship passing through a meteor storm like you might see in a science-fiction movie where the spaceship represents the radiation and the meteors represent the atoms of the material through which the radiation is passing. One added feature to bring on board however is that our spaceship sometimes has an electric charge depending on the type of radiation it represents. Alpha Particles We can see from the table above that alpha-particles have a double positive charge and we can therefore easily appreciate that they will exert considerable electrostatic attraction on the outer orbital electrons of atoms near which they pass. The result is that some electrons will be attracted away from their parent atoms and that ions will be produced. In other words ionizations occur. 
We can also appreciate from the table that alpha-particles are quite massive relative to the other types of radiation and also to the electrons of atoms of the material through which they are passing. As a result they travel in straight lines through matter except for rare direct collisions with nuclei of atoms along their path. A third feature of relevance here is the energy with which they are emitted. This energy in the case of alpha-particles is always distinct. For example 221Ra emits an alpha-particle with an energy of 6.71 MeV. Every alpha-particle emitted from this radionuclide has this energy. Another example is 230U which emits three alpha-particles with energies of 5.66, 5.82, 5.89 MeV. Finally it is useful to note that alpha-particles are very damaging biologically and this is one reason why they are not used for in-vivo diagnostic studies. We will therefore not be considering them in any great detail in this wikibook. Beta Particles We can see from the table that beta-particles have a negative electric charge. Notice that positrons are not considered here since as we noted in chapter 2 these particles do not last for very long in matter before they are annihilated. Beta-minus particles last considerably longer and are therefore the focus of our attention here. Because of their negative charge they are attracted by nuclei and repelled by electron clouds as they pass through matter. The result once again without going into great detail is ionization. The path of beta-particles in matter is often described as being tortuous, since they tend to ricochet from atom to atom. A final and important point to note is that the energy of beta-particles is never found to be distinct in contrast to the alpha-particles above. The energies of the beta-particles from a radioactive source forms a spectrum up to a maximum energy - see figure below. Notice from the figure that a range of energies is present and features such as the mean energy, Emean, or the maximum energy, Emax, are quoted. The question we will consider here is: why should a spectrum of energies be seen? Surely if a beta-particle is produced inside a nucleus when a neutron is converted into a proton, a single distinct energy should result. The answer lies in the fact that two particles are actually produced in beta-decay. We did not cover this in our treatment in chapter 2 for fear of complicating things too much at that stage of this wikibook. But we will cover it here briefly for the sake of completeness. The second particle produced in beta-decay is called a neutrino and was named by Enrico Fermi. It is quite a mysterious particle possessing virtually no mass and carrying no charge, though we are still researching its properties today. The difficulty with them is that they are very hard to detect and this has greatly limited our knowledge about them so far. The beta-particle energy spectrum can be explained by considering that the energy produced when a neutron is converted to a proton is shared between the beta-particle and the anti-neutrino. Sometimes all the energy is given to the beta-particle and it receives the maximum energy, Emax. But more often the energy is shared between them so that for example the beta-particle has the mean energy, Emean and the neutrino has the remainder of the energy. Finally it is useful to note that beta-particles are quite damaging biologically and this is one reason why they are not used for in-vivo diagnostic studies. We will therefore not consider them in any great detail in this wikibook. 
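The energy sharing described above can be summarised in a single expression (a restatement of the text, writing Emax for the total energy released in the transition):

\[ E_{\beta} + E_{\bar{\nu}} = E_{\text{max}}, \qquad 0 \le E_{\beta} \le E_{\text{max}} \]

so the beta-particle can carry anywhere from almost none to all of the available energy, which is why a spectrum rather than a single line is observed.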
Gamma Rays

Since we have been talking about energies above, let us first note that the energies of gamma-rays emitted from a radioactive source are always distinct. For example 99mTc emits gamma-rays which all have an energy of 140 keV and 51Cr emits gamma-rays which have an energy of 320 keV. Gamma-rays have many modes of interaction with matter. Those which have little or no relevance to nuclear medicine imaging will not be described here. Those which are very important to nuclear medicine imaging are the Photoelectric Effect and the Compton Effect. We will consider each of these in turn below. Note that the effects described here are also of relevance to the interaction of X-rays with matter since, as we have noted before, X-rays and gamma-rays are essentially the same entities. So the treatment below is also of relevance to radiography.

Photoelectric Effect

- When a gamma-ray collides with an orbital electron of an atom of the material through which it is passing it can transfer all its energy to the electron and cease to exist - see figure below. On the basis of the Principle of Conservation of Energy we can deduce that the electron will leave the atom with a kinetic energy equal to the energy of the gamma-ray less the orbital binding energy. This electron is called a photoelectron.
- Note that an ion results when the photoelectron leaves the atom. Also note that the gamma-ray energy is totally absorbed in the process.
- Two subsequent points should also be noted. Firstly the photoelectron can cause ionisations along its track in a similar manner to a beta-particle. Secondly X-ray emission can occur when the vacancy left by the photoelectron is filled by an electron from an outer shell of the atom. Remember that we came across this type of feature before when we dealt with Electron Capture in chapter 2.

Compton Effect

- This type of effect is somewhat akin to a cue ball hitting a coloured ball on a pool table. Here a gamma-ray transfers only part of its energy to a valence electron which is essentially free - see figure below. Notice that the electron leaves the atom and may act like a beta-particle, and that the gamma-ray is deflected off in a direction different from that in which it approached the atom. This deflected or scattered gamma-ray can undergo further Compton Effects within the material.
- Note that this effect is sometimes called Compton Scattering.

The two effects we have just described give rise to both absorption and scattering of the radiation beam. The overall effect is referred to as attenuation of gamma-rays. We will investigate this feature from an analytical perspective in the following chapter. Before we do so, we'll briefly consider the interaction of radiation with living matter.

Radiation Biology

It is well known that exposure to ionizing radiation can result in damage to living tissue. We've already described the initial atomic interactions. What's important in radiation biology is that these interactions may trigger complex chains of biomolecular events and consequent biological damage. We've seen above that the primary means by which ionizing radiations lose their energy in matter is by ejection of orbital electrons. The loss of orbital electrons from the atom leaves it positively charged. Other interaction processes lead to excitation of the atom rather than ionization.
Here, an outer valence electron receives sufficient energy to overcome the binding energy of its shell and moves further away from the nucleus to an orbit that is not normally occupied. This type of effect alters the chemical force that binds atoms into molecules and a regrouping of the affected atoms into different molecular structures can result. That is, excitation is an indirect method of inducing chemical change through the modification of individual atomic bonds. Ionizations and excitations can give rise to unstable chemical species called free radicals. These are atoms and molecules in which there are unpaired electrons. They are chemically very reactive and seek stability by bonding with other atoms and molecules. Changes to nearby molecules can arise because of their production. But, let's go back to the interactions themselves for the moment..... In the case of X- and gamma-ray interactions, the energy of the photons is usually transferred by collisions with orbital electrons, e.g. via photoelectric and Compton effects. These radiations are capable of penetrating deeply into tissue since their interactions depend on chance collisions with electrons. Indeed, nuclear medicine imaging is only possible when the energy of the gamma-rays is sufficient for complete emission from the body, but low enough to be detected. The interaction of charged particles (e.g. alpha and beta particles), on the other hand, can be by collisions with atomic electrons and also via attractive and repulsive electrostatic forces. The rate at which energy is lost along the track of a charged particle depends therefore on the square of the charge on that particle. That is, the greater the particle charge, the greater the probability of it generating ion pairs along its track. In addition, a longer period of time is available for electrostatic forces to act when a charged particle is moving slowly and the ionization probability is therefore increased as a result. The situation is illustrated in the following figure where tracks of charged particles in water are depicted. Notice that the track of the relatively massive α-particle is a straight line, as we've discussed earlier in this chapter, with a large number of interactions (indicated by the asterisks) per unit length. Notice also that the tracks for electrons are tortuous, as we've also discussed earlier, and that the number of interactions per unit length is considerably less. The Linear Energy Transfer (LET) is defined as the energy released per unit length of the track of an ionizing particle. A slowly moving, highly charged particle therefore has a substantially higher LET than a fast, singly charged particle. An alpha particle of 5 MeV energy and an electron of 1 MeV energy have LETs, for instance, of 95 and 0.25 keV/μm, respectively. The ionization density and hence the energy deposition pattern associated with the heavier charged particle is very much greater than that arising from electrons, as illustrated in the figure above. The energy transferred along the track of a charged particle will vary because the velocity of the particle is likely to be continuously decreasing. Each interaction removes a small amount of energy from the particle so that the LET gradually increases along a particle track with a dramatic increase (called the Bragg Peak) occurring just before the particle comes to rest. 
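To get a feel for the LET figures just quoted, a minimal Python sketch can compare the energy each particle would deposit while crossing a single cell. The LET values are those given above; the 10 μm cell diameter is an assumed, illustrative figure, and the LET is treated as constant over that short path (ignoring the Bragg peak).

```python
# Rough comparison of the energy deposited in one cell by an alpha particle and an electron,
# using the LET values quoted in the text. The 10 micrometre cell diameter is assumed,
# and the LET is taken as constant over the crossing (the Bragg peak is ignored).

CELL_DIAMETER_UM = 10.0          # assumed cell diameter in micrometres
LET_KEV_PER_UM = {
    "5 MeV alpha particle": 95.0,   # keV per micrometre, from the text
    "1 MeV electron": 0.25,         # keV per micrometre, from the text
}

for particle, let in LET_KEV_PER_UM.items():
    deposited_kev = let * CELL_DIAMETER_UM
    print(f"{particle}: ~{deposited_kev:.1f} keV deposited while crossing the cell")
# The alpha particle deposits several hundred times more energy in the same cell,
# which is one reason densely ionizing radiation is so damaging biologically.
```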
The International Commission on Radiation Units and Measurements (ICRU) suggests that lineal energy is a better indicator of relative biological effectiveness (RBE). Although lineal energy has the same units as LET (e.g. keV/μm), it is defined as the energy imparted to a small volume of tissue divided by the mean chord length of that volume. Since the microscopic deposition of energy may be quite anisotropic, lineal energy should be a more appropriate measure of potential damage than LET. The ICRU and the ICRP have accordingly recommended that the radiation effectiveness of a particular radiation type should be based on lineal energy in a 1 μm diameter sphere of tissue. The lineal energy can be calculated for any given radiation type and energy and a Radiation Weighting Factor (wR) can then be determined based on the integrated values of lineal energy along the radiation track.

All living things on this planet have been exposed to ionizing radiation since the dawn of time. The current situation for humans is summarized in the following table:

[Table: sources of exposure (for example radon and other gases) with their typical effective doses in mSv/year and a comment on each.]

The sum total of this Natural Background Radiation is about 2.5 mSv per year, with large variations depending on altitude and dietary intake as well as geological and geographical location.

It's generally considered that repair mechanisms exist in living matter and that these can be invoked following radiation damage at the biomolecular level. These mechanisms are likely to have an evolutionary basis, arising as a response to the radiation fluxes generated by natural background sources over the aeons. It's also known that considerable damage to tissues can arise at substantially higher radiation fluxes, including those encountered in some medical exposures. Cell death and transformations to malignant states can result, leading to latent periods of many years before clinical signs of cancer or leukemia, for instance, become manifest. Further treatment of this vast field of radiation biology is however beyond our scope here.

Practical Radiation Safety

Since nuclear medicine involves handling substantial quantities of radioactive materials, a radiation hazard arises for the user. Although this risk may be small, it remains important to keep occupational exposures as low as reasonably achievable (i.e. the ALARA principle). There are a number of practices that can be adopted to aid in achieving this aim, which include:

- Maintaining a comprehensive record of all radioactive source purchases, usage, movement and storage.
- Storage of radioactive sources in a secure shielded environment. Specially dedicated facilities are required for the storage, safe handling, manipulation and dispensing of unsealed radioactive sources. Storage areas should be designed for both bulk radioisotope and radioactive waste. Furthermore, radioactive patients should be regarded as unsealed sources.
- Adequate ventilation of any work area. This is particularly important to minimize the inhalation of Technegas and potentially volatile radioisotopes such as I-125 and I-131. It is preferable to use fume hoods when working with volatile materials.
- Ensuring that any Codes of Safe Practice are adhered to, and developing sensible written protocols and working rules for handling radioisotopes.
- Benches should be manufactured with smooth, hard, impervious surfaces with appropriate splash-backs to allow ready decontamination following any spillage of radioisotopes. Laboratory work should be performed in stainless steel trays lined with absorbent paper.
- Protocols for dealing with minor contamination incidents of the environment or of staff members must be established. Remember that no matter how good work practices are, minor accidents or incidents involving spillage of radioisotopes can take place.
- Excretion of radioactive materials by patients may be via faeces, urine, saliva, blood, exhaled breath or the skin. Provision to deal with any or all of these potential pathways for contamination must be made.
- Provision for collection and possible storage of both liquid and solid radioactive waste may be necessary in some circumstances. Most short-lived, water-soluble liquid waste can be flushed into the sewers, but longer-lived isotopes such as I-131 may have to be stored for decay. Such waste must be adequately contained and labelled during storage.
- Ensure that appropriate survey monitors are available to determine if any contamination has occurred and to assist in decontamination procedures. Routine monitoring of potentially contaminated areas must be performed.
- Ensure that all potentially exposed staff are issued with individual personnel monitors.
- Protective clothing such as gowns, smocks, overboots and gloves should be provided and worn to prevent contamination of the personnel handling the radioactivity. In particular, gloves must be worn when administering radioactive materials orally or intravenously to patients. It should be noted that penetration of gloves may occur when handling some iodine compounds, so wearing a second pair of gloves is recommended. In any event, gloves should be changed frequently and discarded ones treated as radioactive waste.
- Eating and drinking of food, smoking, and the application of cosmetics are prohibited in laboratories in which unsealed sources are utilized.
- Mouth pipetting of any radioactive substance is totally prohibited.
- Precautions should be taken to avoid punctures, cuts, abrasions and any other open skin wounds which otherwise might allow entry of radiopharmaceuticals into the blood stream.
- Always ensure that there is a net benefit resulting from the patient procedure. Can the diagnosis or treatment be made by recourse to an alternative means using non-ionizing radiation?
- A useful administrative practice is to implement a program of routine and/or random laboratory audits to establish that safe work practices are being adhered to.
- Ensure that all staff, including physicians, technologists, nurses, interns and other students, who are involved in the practice of nuclear medicine receive the relevant level of training and education appropriate to their assigned tasks. The training program could be in the form of seminars, refresher courses and informal tutorials.
- A substantive Quality Assurance (QA) program should be implemented to ensure that the function of the Dose Calibrator, Gamma Camera, computer and other ancillary equipment is optimized.
- Finally, it is worth reiterating that attempts should be made to minimize the activity used in all procedures. Remember that the more activity you use, the more radiation exposure you will receive.

The potential hazards to staff in nuclear medicine include:

- Milking the 99mTc/99Mo generator, drawing up and measuring the amount of radioisotope prior to administration.
- Delivering the activity to the patient by injection or other means and positioning the now radioactive patient in the imaging device.
- Removing the patients from the imaging device and returning them to the ward, where they may continue to represent a radiation hazard for some time. For Tc-99m, a short-lived radionuclide, the hazard period will be only a few hours, but for therapeutic isotopes the hazardous period may be several days.
- Disposal of radioactive waste including body fluids, such as blood and urine, but also swabs, syringes, needles, paper towels, etc.
- Cleaning up the imaging area after the procedure.

The table below lists the dose rates from patients having nuclear medicine examinations. In general, the hazards from handling or dealing with radioactive patients arise in two forms:

- External hazard: This will be the case when the radioisotope emits penetrating γ-rays. Usually, this hazard can be minimised by employing shielding and sensible work practices.
- Radioactive contamination: This is potentially of more concern as it may lead to the inhalation or ingestion of radioactive material by staff. Possible sources of contamination are radioactive blood, urine and saliva emanating from a patient, or airborne radioactive vapour. Again, sensible work practices, which involve high levels of personal hygiene, should ensure that contamination is not a major issue.

One of the most common nuclear medicine diagnostic procedures is the bone scan using the isotope Tc-99m. The exposure rate at 1 metre from a typical patient will peak at approximately 3 μSv per hour immediately after injection, dropping steadily because of radioactive decay and excretion so that after 2 hours it will be about 1.5 μSv per hour. Neglecting any further excretion, the total exposure received by an individual, should that person stand one metre from the patient for the whole of the first 24 hours, would be ~17 μSv. For a person at 3 metres from the patient this figure would reduce to ~1.9 μSv, and for a distance of 5 metres it would be ~0.7 μSv. These values have been estimated on the basis of the inverse square law. One point to make is that these figures assume no reduction in exposure from intervening attenuating materials such as walls or other building materials. Also note that excretion is in fact quite important in terms of patients clearing radioactivity from their body. Patients should be encouraged to drink substantial quantities of liquid following their scan, as this will improve excretion and aid in minimizing not only their radiation dose but also that of nursing staff.

In the context of the above discussion it is informative to digress and address the question of whether it is safe to radiograph a patient who has just been injected for a bone scan. Whilst this practice is not recommended in view of the ALARA principle, there are situations when such an eventuality may occur through no fault of any particular person. Two quite separate concerns are frequently raised. Firstly, claims are made that the γ-rays emanating from the radioactive patient will degrade image quality and, secondly, that these same γ-rays represent a substantial risk to the radiographer. From a consideration of the numbers above, these concerns are not warranted, as in the few minutes that the patient may be close to the imaging device and/or the radiographer, the total absorbed dose received by either is going to be a fraction of a μGy.
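The distance dependence used in the bone-scan example above can be reproduced with a short calculation. This is a minimal Python sketch: the 17 μSv figure at 1 metre is the value quoted in the text, while the chosen distances are simply illustrative.

```python
# Inverse-square scaling of the integrated exposure from a bone-scan patient.
# The 17 uSv value at 1 m over the first 24 hours is the figure quoted in the text;
# intervening attenuation (walls, etc.) is neglected, as in the text.

def exposure_at_distance(exposure_at_1m_uSv, distance_m):
    """Scale an exposure estimated at 1 m to another distance using the inverse square law."""
    return exposure_at_1m_uSv / distance_m ** 2

if __name__ == "__main__":
    exposure_1m = 17.0  # uSv over the first 24 hours, from the text
    for d in (1.0, 3.0, 5.0):
        print(f"{d:.0f} m: ~{exposure_at_distance(exposure_1m, d):.1f} uSv")
    # Expected output: ~17.0, ~1.9 and ~0.7 uSv respectively.
```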
Even in the potentially more serious scenario where a sonographer may be required to perform an occasional abdominal ultrasound examination on a radioactive patient, the radiation dose received by the sonographer is still negligible in the context of the natural background dose he or she receives.

Contamination is one of the more insidious sources of accidental exposure as it can arise from innocuous events such as handling a tap, a telephone or a power switch with contaminated gloves. However, the major source of hazard in nuclear medicine or laboratory departments relates to the potential for the ingestion or inhalation of radionuclides such as I-131 that will target specific organs. Any ingested or inhaled iodine will readily accumulate in the thyroid, leading to an unnecessary and avoidable thyroid absorbed dose. Although the immediate administration of stable iodine or an equivalent (KI, KClO4 or Lugol's solution) may help minimize this uptake after a suspected intake or misadventure, the real concern in such instances is when there is no awareness or knowledge of the uptake. Accordingly, high levels of personal hygiene must be practised in radioisotope laboratories. This means wearing gloves, aprons or smocks, and masks when handling radionuclides. Hands should be washed thoroughly and often. Eating or drinking in clinical and laboratory areas is strictly forbidden. Decontamination facilities such as an emergency shower must be available, and bench-top and personal monitoring for contamination are also indicated. Decontamination usually involves little more than thorough washing of the skin surface with an appropriate soap or detergent. Care must be taken to avoid abrading the skin as this may allow direct entry of radioactivity into the blood stream.

The radioisotopes used are invariably highly penetrating, so that it is rarely practicable in nuclear medicine departments for workers to protect themselves by continuously wearing very thick lead aprons for hours on end. When milking the generator, handling high specific activity sources or conducting some patient examinations it may be practicable to do so, but this would not be the normal situation. Note that the HVL for the 140 keV γ-ray of Tc-99m is ~0.25 mm of lead, so that the commercially available lead aprons are of limited usefulness. Nevertheless, a number of safety measures based on shielding can be implemented, such as:

- Some dose reduction, most particularly to the fingers, can be achieved by using lead syringe shields during the drawing up and injection phase.
- Radioisotopes should be stored in lead pots (except ß emitters). Specifically, high activity sources such as 99mTc/99Mo generators should be located behind lead brick walls or stored in lead safes. Known beta emitters should be shielded in the first instance by low atomic number materials such as perspex to minimize the production of bremsstrahlung radiation. Bremsstrahlung radiation arises when highly energetic charged particles are stopped in high atomic number materials. It is in fact X-radiation and as such is relatively penetrating, but a thin outer shield of lead will protect against the minimal levels of bremsstrahlung produced in the perspex. Access to areas where such sources are stored should be restricted.

Finally, the figure below encapsulates the practice of radiation safety in a nutshell.
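To put the lead-apron comment above in perspective (and anticipating the attenuation formula developed in the next chapter), a short sketch can estimate how much of the 140 keV radiation a typical apron transmits. The 0.25 mm HVL is the value quoted above; the 0.5 mm apron thickness is an assumed, illustrative figure.

```python
# Transmission of Tc-99m 140 keV gamma-rays through a lead apron,
# using the half value layer (HVL) quoted in the text (~0.25 mm of lead).
# The apron thickness below is an assumed, illustrative value.

def transmission(thickness_mm, hvl_mm):
    """Fraction of the beam transmitted: each HVL of material halves the intensity."""
    return 0.5 ** (thickness_mm / hvl_mm)

if __name__ == "__main__":
    hvl_pb_140keV = 0.25   # mm of lead, from the text
    apron = 0.5            # mm lead equivalent (assumed)
    print(f"Transmission through {apron} mm Pb: {transmission(apron, hvl_pb_140keV):.0%}")
    # About 25% of the 140 keV photons still get through - a reduction of only a factor of ~4.
```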
Attenuation of Gamma-Rays

We covered the interaction of gamma-rays with matter from a descriptive viewpoint in the previous chapter and we saw that the Compton and Photoelectric Effects were the major mechanisms. We will consider the subject again here, but this time from an analytical perspective. This will allow us to develop a more general understanding of the phenomenon. Note that the treatment here also refers to the attenuation of X-rays since, as we noted before, gamma-rays and X-rays are essentially the same physical entities. Our treatment begins with a description of a simple radiation experiment which can be performed easily in the laboratory and which many of the early pioneers in this field performed. We will then build on the information obtained from such an experiment to develop a simple equation and some simple concepts which will allow us to generalise to any attenuation situation.

Attenuation Experiment

The experiment is quite simple. It involves firing a narrow beam of gamma-rays at a material and measuring how much of the radiation gets through. We can vary the energy of the gamma-rays we use and the type of absorbing material as well as its thickness and density. The experimental set-up is illustrated in the figure below. We refer to the intensity of the radiation which strikes the absorber as the incident intensity, I0, and the intensity of the radiation which gets through the absorber as the transmitted intensity, Ix. Notice also that the thickness of the absorber is denoted by x. From what we covered in the previous chapter we can appreciate that some of the gamma-rays will be subjected to interactions such as the Photoelectric Effect and the Compton Effect as they pass through the absorber. The transmitted gamma-rays will in the main be those which pass through without any interactions at all. We can therefore expect to find that the transmitted intensity will be less than the incident intensity, that is:

Ix < I0

But by how much, you might ask. Before we consider this let us denote the difference between I0 and Ix as ∆I, that is:

∆I = I0 - Ix

Effect of Atomic Number

- Let us start exploring the magnitude of ∆I by placing different absorbers in turn in the radiation beam. What we would find is that the magnitude of ∆I is highly dependent on the atomic number of the absorbing material. For example we would find that ∆I would be quite low in the case of an absorber made from carbon (Z=6) and very large in the case of lead (Z=82).
- We can gain an appreciation of why this is so from the following figure:
- The figure illustrates a high atomic number absorber by the large circles which represent individual atoms and a low atomic number material by smaller circles. The incident radiation beam is represented by the arrows entering each absorber from the left. Notice that the atoms of the high atomic number absorber present larger targets for the radiation to strike and hence the chances for interactions via the Photoelectric and Compton Effects are relatively high. The attenuation should therefore be relatively large.
- In the case of the low atomic number absorber however the individual atoms are smaller and hence the chances of interactions are reduced. In other words the radiation has a greater probability of being transmitted through the absorber and the attenuation is consequently lower than in the high atomic number case.
- With respect to our spaceship analogy used in the previous chapter, the atomic number can be thought of as the size of individual meteors in the meteor cloud.
- If we were to precisely control our experimental set-up and carefully analyse our results we would find that:

∆I ∝ Z^3

- Therefore if we were to double the atomic number of our absorber we would increase the attenuation by a factor of two cubed, that is 8; if we were to triple the atomic number we would increase the attenuation by a factor of three cubed, that is 27; and so on.
- It is for this reason that high atomic number materials (e.g. Pb) are used for radiation protection.

Effect of Density

- A second approach to exploring the magnitude of ∆I is to see what happens when we change the density of the absorber. We can see from the following figure that a low density absorber will give rise to less attenuation than a high density absorber, since the chances of an interaction between the radiation and the atoms of the absorber are relatively lower.
- So in our analogy of the spaceship entering a meteor cloud, think of meteor clouds of different density and the chances of the spaceship colliding with a meteor.

Effect of Thickness

- A third factor which we could vary is the thickness of the absorber. As you should be able to predict at this stage, the thicker the absorber the greater the attenuation.

Effect of Gamma-Ray Energy

- Finally in our experiment we could vary the energy of the gamma-ray beam. We would find, without going into it in any great detail, that the greater the energy of the gamma-rays the less the attenuation. You might like to think of it in terms of the energy with which the spaceship approaches the meteor cloud and the greater likelihood of a fast spaceship getting through compared with a slow one.

Mathematical Model

We will consider a mathematical model here which will help us to express our experimental observations in more general terms. You will find that the mathematical approach adopted and the result obtained are quite similar to what we encountered earlier with Radioactive Decay. So you will not have to plod your way through any new maths below, just a different application of the same form of mathematical analysis!

Let us start quite simply and assume that we vary only the thickness of the absorber. In other words we use an absorber of the same material (i.e. same atomic number) and the same density and use gamma-rays of the same energy for the experiment. Only the thickness of the absorber is changed. From our reasoning above it is easy to appreciate that the magnitude of ∆I should be dependent on the radiation intensity as well as the thickness of the absorber, that is for an infinitesimally small change in absorber thickness:

dI ∝ -I dx

the minus sign indicating that the intensity is reduced by the absorber. Turning the proportionality in this equation into an equality, we can write:

dI = -μ I dx

where the constant of proportionality, μ, is called the Linear Attenuation Coefficient. Dividing across by I we can rewrite this equation as:

dI / I = -μ dx

So this equation describes the situation for any tiny change in absorber thickness, dx. To find out what happens for the complete thickness of an absorber we simply add up what happens in each small thickness. In other words we integrate the above equation. Expressing this more formally we can say that for thicknesses from x = 0 to any other thickness x, the radiation intensity will decrease from I0 to Ix, so that:

∫ dI / I (from I0 to Ix) = -μ ∫ dx (from 0 to x)

which gives:

ln(Ix / I0) = -μ x

that is:

Ix = I0 exp(-μ x)

This final expression tells us that the radiation intensity will decrease in an exponential fashion with the thickness of the absorber, with the rate of decrease being controlled by the Linear Attenuation Coefficient.
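As a quick numerical check of this expression, the following minimal Python sketch evaluates the transmitted intensity for a few thicknesses. The attenuation coefficient used is an arbitrary illustrative value, not one taken from the tables that follow.

```python
import math

# Exponential attenuation: Ix = I0 * exp(-mu * x)
# mu below is an arbitrary illustrative value (units of cm^-1), not a tabulated one.

def transmitted_intensity(i0, mu_per_cm, x_cm):
    """Intensity transmitted through a thickness x of absorber with linear attenuation coefficient mu."""
    return i0 * math.exp(-mu_per_cm * x_cm)

if __name__ == "__main__":
    i0 = 1000.0        # incident intensity (arbitrary units)
    mu = 0.5           # cm^-1, assumed for illustration
    for x in (0.0, 1.0, 2.0, 5.0):
        print(f"x = {x} cm: Ix = {transmitted_intensity(i0, mu, x):.1f}")
    # The intensity falls exponentially: 1000.0, 606.5, 367.9, 82.1
```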
The expression is shown in graphical form below. The graph plots the intensity against thickness, x. We can see that the intensity decreases from I0, that is its value at x = 0, in a rapid fashion initially and then more slowly in the classic exponential manner.

The influence of the Linear Attenuation Coefficient can be seen in the next figure. All three curves here are exponential in nature; only the Linear Attenuation Coefficient is different. Notice that when the Linear Attenuation Coefficient has a low value the curve decreases relatively slowly and when the Linear Attenuation Coefficient is large the curve decreases very quickly.

The Linear Attenuation Coefficient is characteristic of individual absorbing materials. Some, like carbon, have a small value and are easily penetrated by gamma-rays. Other materials such as lead have a relatively large Linear Attenuation Coefficient and are relatively good absorbers of radiation:

[Table: Linear Attenuation Coefficients (in cm-1) of a range of absorbers at gamma-ray energies of 100, 200 and 500 keV.]

The materials listed in the table above are air, water and a range of elements from carbon (Z=6) through to lead (Z=82) and their Linear Attenuation Coefficients are given for three gamma-ray energies. The first point to note is that the Linear Attenuation Coefficient increases as the atomic number of the absorber increases. For example it increases from a very small value of 0.000195 cm-1 for air at 100 keV to almost 60 cm-1 for lead. The second point to note is that the Linear Attenuation Coefficient for all materials decreases with the energy of the gamma-rays. For example the value for copper decreases from about 3.8 cm-1 at 100 keV to 0.73 cm-1 at 500 keV. The third point to note is that the trends in the table are consistent with the analysis presented earlier. Finally it is important to appreciate that our analysis above is only strictly true when we are dealing with narrow radiation beams. Other factors need to be taken into account when broad radiation beams are involved.

Half Value Layer

As with using the Half Life to describe the Radioactive Decay Law, an indicator is usually derived from the exponential attenuation equation above which helps us think more clearly about what is going on. This indicator is called the Half Value Layer and it expresses the thickness of absorbing material which is needed to reduce the incident radiation intensity by a factor of two. From a graphical point of view we can say that when:

Ix = I0 / 2

the thickness of absorber is the Half Value Layer:

x = HVL

The Half Value Layer for a range of absorbers is listed in the following table for three gamma-ray energies:

[Table: Half Value Layers of the same absorbers at 100, 200 and 500 keV.]

The first point to note is that the Half Value Layer decreases as the atomic number increases. For example the value for air at 100 keV is about 35 metres and it decreases to just 0.12 mm for lead at this energy. In other words 35 m of air is needed to reduce the intensity of a 100 keV gamma-ray beam by a factor of two whereas just 0.12 mm of lead can do the same thing. The second thing to note is that the Half Value Layer increases with increasing gamma-ray energy. For example from 0.18 cm for copper at 100 keV to about 1 cm at 500 keV. Thirdly note that relative to the data in the previous table there is a reciprocal relationship between the Half Value Layer and the Linear Attenuation Coefficient, which we will now investigate.
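This reciprocal relationship (derived formally in the next section as HVL = 0.693/μ) can be checked against the values just quoted. A minimal Python sketch, using the 100 keV attenuation coefficients given above:

```python
import math

# Check that HVL = ln(2) / mu reproduces the half value layers quoted in the text.
# The linear attenuation coefficients at 100 keV are the values quoted above.

mu_100keV_per_cm = {
    "air": 0.000195,   # cm^-1
    "copper": 3.8,     # cm^-1 (approximate)
    "lead": 60.0,      # cm^-1 (approximate)
}

for material, mu in mu_100keV_per_cm.items():
    hvl_cm = math.log(2) / mu
    print(f"{material}: HVL ~ {hvl_cm:.3g} cm")
# air:    HVL ~ 3.55e+03 cm  (about 35 m)
# copper: HVL ~ 0.182 cm     (about 0.18 cm)
# lead:   HVL ~ 0.0116 cm    (about 0.12 mm)
```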
Relationship between μ and the HVL

As was the case with the Radioactive Decay Law, where we explored the relationship between the Half Life and the Decay Constant, a relationship can be derived between the Half Value Layer and the Linear Attenuation Coefficient. We can do this by using the definition of the Half Value Layer:

Ix = I0 / 2 when x = HVL

and inserting it in the exponential attenuation equation, that is:

I0 / 2 = I0 exp(-μ · HVL)

so that ln 2 = μ · HVL, and therefore:

HVL = 0.693 / μ   and   μ = 0.693 / HVL

These last two equations express the relationship between the Linear Attenuation Coefficient and the Half Value Layer. They are very useful as you will see when solving numerical questions relating to attenuation and frequently form the first step in solving a numerical problem.

Mass Attenuation Coefficient

We implied above that the Linear Attenuation Coefficient was useful when we were considering an absorbing material of the same density but of different thicknesses. A related coefficient can be of value when we wish to include the density, ρ, of the absorber in our analysis. This is the Mass Attenuation Coefficient, which is defined as the Linear Attenuation Coefficient divided by the density of the absorber:

μ / ρ

The measurement unit used for the Linear Attenuation Coefficient in the table above is cm-1, and a common unit of density is the g cm-3. You might like to derive for yourself on this basis that the cm2 g-1 is the equivalent unit of the Mass Attenuation Coefficient.

Two questions are given below to help you develop your understanding of the material presented in this chapter. The first one is relatively straight-forward and will exercise your application of the exponential attenuation equation. The second question is a lot more challenging and will help you relate exponential attenuation to radioactivity and radiation exposure.

How much aluminium is required to reduce the intensity of a 200 keV gamma-ray beam to 10% of its incident intensity? Assume that the Half Value Layer for 200 keV gamma-rays in Al is 2.14 cm.

- The question phrased in terms of the symbols used above is: what thickness x gives Ix = 0.1 I0?
- We are told that the Half Value Layer is 2.14 cm. Therefore the Linear Attenuation Coefficient is μ = 0.693 / 2.14 cm = 0.324 cm-1.
- Now combining all this with the exponential attenuation equation:

Ix = I0 exp(-μ x)

- we can write:

0.1 I0 = I0 exp(-0.324 x), so that x = ln(10) / 0.324 = 7.1 cm

- So the thickness of aluminium required to reduce these gamma-rays by a factor of ten is about 7 cm. This relatively large thickness is the reason why aluminium is not generally used in radiation protection - its atomic number is not high enough for efficient and significant attenuation of gamma-rays.
- You might like to try this question for the case when Pb is the absorber - but you will need to find out the Half Value Layer for the 200 keV gamma-rays yourself!
- Here's a hint though: have a look at one of the tables above.
- And here's the answer for you to check when you've finished: 2.2 mm.
- In other words a relatively small thickness of Pb is required to do the same job as 7 cm of aluminium.

A 10^5 MBq source of 137Cs is to be contained in a Pb box so that the exposure rate 1 m away from the source is less than 0.5 mR/hour. If the Half Value Layer for 137Cs gamma-rays in Pb is 0.6 cm, what thickness of Pb is required? The Specific Gamma Ray Constant for 137Cs is 3.3 R hr-1 mCi-1 at 1 cm.

- This is a fairly typical question which arises when someone is using radioactive materials. We wish to use a certain quantity of the material and we wish to store it in a lead container so that the exposure rate when we are working a certain distance away is below some level for safety reasons. We know the radioactivity of the material we will be using. But it's quoted in SI units.
We look up a reference book to find out the exposure rate for this radioisotope and find that the Specific Gamma Ray Constant is quoted in traditional units. Just as in our question!

- So let us start by getting our units right. The Specific Gamma Ray Constant is given as: 3.3 R hr-1 mCi-1 at 1 cm.
- This is equal to: 3,300 mR hr-1 mCi-1 at 1 cm,
- which is equal to: 3,300 / (100)^2 = 0.33 mR hr-1 mCi-1 at 1 m,
- on the basis of the Inverse Square Law. This result expressed per becquerel is: 0.33 / (3.7 x 10^7) mR hr-1 Bq-1 at 1 m,
- since 1 mCi = 3.7 x 10^7 Bq. And therefore for 10^5 MBq, that is 10^11 Bq, the exposure rate is: 0.33 x 10^11 / (3.7 x 10^7) = 891.9 mR hr-1.
- That is, the exposure rate 1 metre from our source is 891.9 mR hr-1.
- We wish to reduce this exposure rate according to the question to less than 0.5 mR hr-1 using Pb.
- You should be able at this stage to use the exponential attenuation equation along with the Half Value Layer for these gamma-rays in Pb to calculate that the thickness of Pb required is about 6.5 cm.

External Links

- Mucal on the Web - an online program which calculates x-ray absorption coefficients - by Pathikrit Bandyopadhyay, The Center for Synchrotron Radiation Research and Instrumentation at the Illinois Institute of Technology.
- Tables of X-Ray Mass Attenuation Coefficients - a vast amount of data for all elements from the National Institute of Standards and Technology, USA.

Gas-Filled Radiation Detectors

We have learned in the last two chapters about how radiation interacts with matter and we are now in a position to apply our understanding to the detection of radiation. One of the major outcomes of the interaction of radiation with matter is the creation of ions, as we saw in Chapter 5. This outcome is exploited in gas-filled detectors as you will see in this chapter. The detector in this case is essentially a gas, in that it is the atoms of a gas which are ionised by the radiation. We will see in the next chapter that solids can also be used as radiation detectors but for now we will deal with gases and be introduced to detectors such as the Ionization Chamber and the Geiger Counter. Before considering these specific types of gas-filled detectors we will first of all consider the situation from a very general perspective.

Gas-Filled Detectors

As we noted above the radiation interacts with gas atoms in this form of detector and causes ions to be produced. On the basis of what we covered in Chapter 5 it is easy to appreciate that it is the Photoelectric and Compton Effects that cause the ionisations when the radiation consists of gamma-rays with energies useful for diagnostic purposes. There are actually two particles generated when an ion is produced - the positive ion itself and an electron. These two particles are collectively called an ion pair. The detection of the production of ion pairs in the gas is the basis upon which gas detectors operate. The manner in which this is done is by using an electric field to sweep the electrons away to a positively charged electrode and the ions to a negatively charged electrode. Let us consider a very simple arrangement as shown in the following figure: Here we have two electrodes with the gas between them. Something like a capacitor with a gas dielectric. The gas which is used is typically an inert gas, for example argon or xenon. The reason for using an inert gas is so that chemical reactions will not occur within the gas following the ionisations, which could change the characteristics of our detector. A dc voltage is placed between the two electrodes.
As a result when the radiation interacts with a gas atom the electron will move towards the positive electrode and the ion will move towards the negative electrode. But will these charges reach their respective electrodes? The answer is obviously dependent on the magnitude of the dc voltage. For example if at one extreme we had a dc voltage of a microvolt (that is, one millionth of a volt) the resultant electric field may be insufficient to move the ion pair very far and the two particles may recombine to reform the gas atom. At the other extreme suppose we applied a million volts between the two electrodes. In this case we are likely to get sparks flying between the two electrodes - a lightning bolt if you like - and our detector might act something like a neon sign. Somewhere in between these two extremes though we should be able to provide a sufficient attractive force for the ion and electron to move to their respective electrodes without recombination or sparking occurring. We will look at this subject in more detail below. Before we do let us see how the concept of the simple detector illustrated above is applied in practice.

The gas-filled chamber is generally cylindrical in shape in real detectors. This shape has been found to be more efficient than the parallel electrode arrangement shown above. A cross-sectional view through this cylinder is shown in the following figure: The positive electrode consists of a thin wire running through the centre of the cylinder and the negative electrode consists of the wall of the cylinder. In principle we could make such a detector by getting a section of a metal pipe, mounting a wire through its centre, filling it with an inert gas and sealing the ends of the pipe. Actual detectors are a little more complex however, but let us not get side-tracked at this stage.

We apply a dc voltage via a battery or via a dc voltage supply and connect it as shown in the figure using a resistor, R. Now, assume that a gamma-ray enters the detector. Ion pairs will be produced in the gas - the ions heading towards the outer wall and the electrons heading towards the centre wire. Let us think about the electrons for a moment. When they hit the centre wire we can simply think of them as entering the wire and flowing through the resistor to get to the positive terminal of the dc voltage supply. These electrons flowing through the resistor constitute an electric current and as a result of Ohm's Law a voltage is generated across the resistor. This voltage is amplified by an amplifier and some type of device is used to register the amplified voltage. A loud-speaker is a fairly simple device to use for this purpose and the generation of a voltage pulse is manifested by a click from the loud-speaker. Other display devices include a ratemeter which displays the number of voltage pulses generated per unit time - something like a speedometer in a car - and a pulse counter (or scaler) which counts the number of voltage pulses generated in a set period of time. A voltage pulse is frequently referred to in practice as a count and the number of voltage pulses generated per unit time is frequently called the count rate.

DC Voltage Dependence

If we were to build a detector and electronic circuit as shown in the figure above we could conduct an experiment that would allow us to explore the effect of the dc voltage on the magnitude of the voltage pulses produced across the resistor, R.
Note that the term pulse height is frequently used in this field to refer to the magnitude of voltage pulses. Ideally, we could generate a result similar to that illustrated in the following figure: The graph illustrates the dependence of the pulse height on the dc voltage. Note that the vertical axis representing the pulse height is on a logarithmic scale for the sake of compressing a large linear scale onto a reasonably-sized graph. The experimental results can be divided into five regions as shown. We will now consider each region in turn.

- Region A Here Vdc is relatively low so that recombination of positive ions and electrons occurs. As a result not all ion pairs are collected and the voltage pulse height is relatively low. It does increase as the dc voltage increases however, as the amount of recombination reduces.
- Region B Vdc is sufficiently high in this region so that only a negligible amount of recombination occurs. This is the region where a type of detector called the Ionization Chamber operates.
- Region C Vdc is sufficiently high in this region so that electrons approaching the centre wire attain sufficient energy between collisions with the electrons of gas atoms to produce new ion pairs. Thus the number of electrons is increased, so that the electric charge passing through the resistor, R, may be up to a thousand times greater than the charge produced initially by the radiation interaction. This is the region where a type of detector called the Proportional Counter operates.
- Region D Vdc is so high that even a minimally-ionizing particle will produce a very large voltage pulse. The initial ionization produced by the radiation triggers a complete gas breakdown as an avalanche of electrons heads towards and spreads along the centre wire. This region is called the Geiger-Müller Region, and is exploited in the Geiger Counter.
- Region E Here Vdc is high enough for the gas to break down completely and it cannot be used to detect radiation.

We will now consider features of the Ionisation Chamber and the Geiger Counter in more detail.

Ionisation Chamber

The ionisation chamber consists of a gas-filled detector energised by a relatively low dc voltage. We will first of all make an estimate of the voltage pulse height generated by this type of detector. We will then consider some applications of ionisation chambers.

When a beta-particle interacts with the gas the energy required to produce one ion pair is about 30 eV. Therefore when a beta-particle of energy 1 MeV is completely absorbed in the gas the number of ion pairs produced is:

10^6 eV / 30 eV ≈ 3.3 x 10^4 ion pairs

The electric charge produced in the gas is therefore:

3.3 x 10^4 x 1.6 x 10^-19 C ≈ 5.3 x 10^-15 C

If the capacitance of the ionisation chamber (remember that we compared a gas-filled detector to a capacitor above) is 100 pF then the amplitude of the voltage pulse generated is:

V = Q / C = 5.3 x 10^-15 C / 100 x 10^-12 F ≈ 53 μV

Because such a small voltage is generated it is necessary to use a very sensitive amplifier in the electronic circuitry connected to the chamber (a short coded version of this estimate is given below). We will now learn about two applications of ionisation chambers. The first one is for the measurement of radiation exposures. You will remember from Chapter 4 that the unit of radiation exposure (be it the SI or the traditional unit) is defined in terms of the amount of electric charge produced in a unit mass of air. An ionization chamber filled with air is the natural instrument to use for such measurements. The second application is the measurement of radioactivity.
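The pulse-height estimate above can be written out as a minimal Python sketch; the 1 MeV particle energy, 30 eV per ion pair and 100 pF capacitance are the values used in the text.

```python
# Estimate of the voltage pulse from an ionisation chamber, using the figures in the text.

E_BETA_EV = 1.0e6         # energy of the absorbed beta-particle, eV
W_PER_ION_PAIR_EV = 30.0  # average energy needed to create one ion pair, eV
ELECTRON_CHARGE_C = 1.602e-19
CHAMBER_CAPACITANCE_F = 100e-12  # 100 pF

n_ion_pairs = E_BETA_EV / W_PER_ION_PAIR_EV   # ~3.3e4 ion pairs
charge_c = n_ion_pairs * ELECTRON_CHARGE_C    # ~5.3e-15 C
pulse_v = charge_c / CHAMBER_CAPACITANCE_F    # V = Q / C

print(f"{n_ion_pairs:.3g} ion pairs, {charge_c:.2g} C, pulse ~ {pulse_v*1e6:.0f} microvolts")
# ~53 microvolts - hence the need for a very sensitive amplifier.
```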
For the second of these applications - the measurement of radioactivity - the ionisation chamber is configured in what is called a re-entrant arrangement (see figure below) so that the sample of radioactive material can be placed within the detector using a holder and hence most of the emitted radiation can be detected. The instrument is widely referred to as an Isotope Calibrator and the small electric current generated by such a detector is calibrated so that a reading in units of radioactivity (for example MBq or mCi) can be obtained. Most well-run Nuclear Medicine Departments will have at least one of these devices so that doses of radioactivity can be checked prior to administration to patients. Here are some photographs of ionisation chambers designed for various applications:

Geiger Counter

We saw earlier that the Geiger Counter operates at relatively high dc voltages (for example 400-900 volts) and that an avalanche of electrons is generated following the absorption of radiation in the gas. The voltage pulses produced by this detector are relatively large since the gas effectively acts as an amplifier of the electric charge produced. There are four features of this detector which we will discuss.

The first is that a sensitive amplifier (as was the case with the Ionization Chamber) is not required for this detector because of the gas amplification noted above.

The second feature results from the fact that the generation of the electron avalanche must be stopped in order to restore the detector to a sensitive state. In other words when a radiation particle/photon is absorbed by the gas a complete gas breakdown occurs, which implies that the gas is incapable of detecting the next particle/photon which enters the detector. So in the extreme case one minute we have a radiation detector and the following moment we do not. A means of stopping the electron avalanche is therefore required - a process called Quenching. One means of doing this is by electronically lowering the dc voltage following an avalanche. A more widely used method of quenching is to add a small amount of a quenching gas to the inert gas. For example the gas could be argon with ethyl alcohol added. The ethyl alcohol is in vapour form and, since it consists of relatively large molecules, it absorbs energy which would otherwise sustain the electron avalanche. The large molecules act like a brake in effect.

Irrespective of the type of quenching used, the detector is insensitive for a small period of time following absorption of a radiation particle/photon. This period of time is called the Dead Time and this is the third feature of this detector which we will consider. Dead times are relatively short but nevertheless significant - being typically of the order of 200-400 µs. As a result the reading obtained with this detector is less than it should be. The true reading, without going into detail, can be obtained using the following equation:

T = A / (1 - A τ)

where T is the true reading, A is the actual reading and τ is the dead time. Some instruments perform this calculation automatically (a short worked example is given below).

The fourth feature to note about this detector is the dependence of its performance on the dc voltage. The Geiger-Müller Region of our figure above is shown in more detail below: Notice that it contains a plateau where the count rate obtained is independent of the dc voltage. The centre of this plateau is where most detectors are operated. It is clear that the count rate from the detector is not affected if the dc voltage fluctuates about the operating voltage.
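Returning briefly to the dead time correction above, here is a minimal Python sketch. The measured count rate is an assumed, illustrative value; the dead time is taken from the 200-400 µs range quoted in the text.

```python
# Dead time correction for a Geiger counter: T = A / (1 - A * tau)

def true_count_rate(actual_rate_per_s, dead_time_s):
    """Correct a measured (actual) count rate for the counts lost during the dead time."""
    return actual_rate_per_s / (1.0 - actual_rate_per_s * dead_time_s)

if __name__ == "__main__":
    A = 1000.0      # measured count rate in counts per second (assumed value)
    tau = 300e-6    # dead time of 300 microseconds, within the range quoted in the text
    print(f"Measured: {A:.0f} counts/s, corrected: {true_count_rate(A, tau):.0f} counts/s")
    # About 1429 counts/s - the detector missed roughly 30% of the events.
```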
The flatness of this plateau also implies that a relatively straight-forward dc voltage supply can be used. This feature, coupled with the fact that a sensitive amplifier is not needed, translates in practice to a relatively inexpensive radiation detector.

External Links

- Inside a smoke detector - about the ion chamber used in smoke detectors - from the How Stuff Works website.
- Ionisation Chambers - a brief description from the Triumf Safety Group.
- Radiation and Radioactivity - a self-paced lesson developed by the University of Michigan's Student Chapter of the Health Physics Society, with a section on gas filled detectors.
- The Geiger Counter - a brief overview from the NASA Goddard Space Flight Center, USA.

Scintillation Detectors

The second type of radiation detector we will discuss is called the scintillation detector. Scintillations are minute flashes of light which are produced by certain materials when they absorb radiation. These materials are variously called fluorescent materials, fluors, scintillators or phosphors. If we had a radioactive source and a scintillator in the lab we could darken the room, move the scintillator close to the source and see the scintillations. These small flashes of light might be green or blue or some other colour depending on the scintillator. We could also count the number of flashes produced to gain an estimate of the radioactivity of the source, that is the more flashes of light seen the more radiation present.

The scintillation detector was possibly the first radiation detector discovered. You might have heard the story of the discovery of X-rays by Wilhelm Roentgen in 1895. He was working one evening in his laboratory in Würzburg, Germany with a device which fired a beam of electrons at a target inside an evacuated glass tube. While working with this device he noticed that some barium platinocyanide crystals, which he just happened to have close by, began to glow - and that they stopped glowing when he switched the device off. Roentgen had accidentally discovered a new form of radiation. He had also accidentally discovered a scintillation detector.

Although scintillations can be seen, we have a more sophisticated way of counting and measuring them today by using some form of photodetector. We will learn about the construction and mode of operation of this type of detector in this chapter. In addition, we will see how it can be used not just for detecting the presence of ionizing radiation but also for measuring the energy of that radiation.

Before we do however it is useful to note that scintillators are very widely used in the medical radiations field. For example the X-ray cassette used in radiography contains a scintillator (called an intensifying screen) in close contact with a photographic film. A second example is the X-ray Image Intensifier used in fluoroscopy which contains scintillators called phosphors. Scintillators are also used in some CT Scanners and, as we will see in the next chapter, in the Gamma Camera and PET Scanner. Their application is not limited to the medical radiations field in that scintillators are also used as screens in television sets and computer monitors and for generating light in fluorescent tubes - to mention just two common applications. What other applications can you think of? So scintillators are a lot more common than you might initially think and you will therefore find the information presented here useful to you not just for your studies of nuclear medicine.
Fluorescent Materials

Some fluorescent materials are listed in the following table. Thallium-activated sodium iodide, NaI(Tl), is a crystalline material which is widely used for the detection of gamma-rays in scintillation detectors. We will be looking at this in more detail below. Another crystalline material, sodium-activated caesium iodide, CsI(Na), is widely used for X-ray detection in devices such as the X-ray image intensifier. Another one called calcium tungstate, CaWO4, has been widely used in X-ray cassettes, although this substance has been replaced by other scintillators such as lanthanum oxybromide in many modern cassettes.

[Table: fluorescent materials and the forms in which they are used - for example NaI(Tl) and CsI(Na) crystals, ZnS(Ag) powder, p-terphenyl in toluene (a liquid) and p-terphenyl in polystyrene (a plastic).]

Notice that some scintillation materials are activated with certain elements. What this means is that the base material has a small amount of the activation element present. The term doped is sometimes used instead of activated. This activating element is used to influence the wavelength (colour) of the light produced by the scintillator. Silver-activated zinc sulphide is a scintillator in powder form and p-terphenyl in toluene is a liquid scintillator. The advantage of such forms of scintillators is that the radioactive material can be placed in close contact with the scintillating material. For example if a radioactive sample happened to be in liquid form we could mix it with a liquid scintillator so as to optimise the chances of detection of the emitted radiation and hence have a very sensitive detector. A final example is p-terphenyl in polystyrene which is a scintillator in the form of a plastic. This form can be easily made into different shapes like most plastics and is therefore useful when detectors of particular shapes are required.

Photomultiplier Tube

A scintillation crystal coupled to a photomultiplier tube (PMT) is illustrated in the following figure. The overall device is typically cylindrical in shape and the figure shows a cross-section through this cylinder: The scintillation crystal, NaI(Tl), is very delicate and this is one of the reasons it is housed in an aluminium casing. The inside wall of the casing is designed so that any light which strikes it is reflected downwards towards the PMT.

The PMT itself consists of a photocathode, a focussing grid, an array of dynodes and an anode housed in an evacuated glass tube. The function of the photocathode is to convert the light flashes produced by radiation attenuation in the scintillation crystal into electrons. The grid focuses these electrons onto the first dynode and the dynode array is used for electron multiplication. We will consider this process in more detail below. Finally the anode collects the electrons produced by the array of dynodes.

The electrical circuitry which is typically attached to a PMT is shown in the next figure: It consists of a high voltage supply, a resistor divider chain and a load resistor, RL. The high voltage supply generates a dc voltage, Vdc, which can be up to 1,000 volts. It is applied to the resistor divider chain which consists of an array of resistors, each of which has the same resistance, R. The function of this chain of resistors is to divide up Vdc into equal voltages which are supplied to the dynodes. As a result voltages which increase in equal steps are applied to the array of dynodes. The load resistor is used so that an output voltage, Vout, can be generated.
Finally the operation of the device is illustrated in the figure below: The ionizing radiation produces flashes of light in the scintillation crystal. This light strikes the photocathode and is converted into electrons. The electrons are directed by the grid onto the first dynode. Dynodes are made from certain alloys which emit electrons when their surface is struck by electrons, with the advantage that more electrons are emitted than are absorbed. A dynode used in a PMT typically emits between two and five electrons for each electron which strikes it. So when an electron from the photocathode strikes the first dynode between two and five electrons are emitted and are directed towards the second dynode in the array (three are illustrated in the figure). This electron multiplication process is repeated at the second dynode so that we end up with nine electrons, for example, heading towards the third dynode. An electron avalanche therefore develops so that a sizeable number of electrons eventually hits the anode at the bottom of the dynode chain. These electrons flow through the load resistor, RL, and constitute an electric current which according to Ohm's Law generates a voltage, Vout, which is measured by electronic circuitry (which we will describe later). A number of photographs of devices based on scintillation detection are shown below:

The important feature of the scintillation detector is that this output voltage, Vout, is directly proportional to the energy deposited by the radiation in the crystal. We will see what a useful feature this is below. Before we do so we will briefly analyze the operation of this device.

Mathematical Model

A simple mathematical model will be presented below which will help us get a better handle on the performance of a scintillation detector. We will do this by quantifying the performance of the scintillator, the photocathode and the dynodes. Let's use the following symbols to characterize each stage of the detection process:

- m: number of light photons produced in crystal
- k: optical efficiency of the crystal, that is the efficiency with which the crystal transmits light
- l: quantum efficiency of the photocathode, that is the efficiency with which the photocathode converts light photons to electrons
- n: number of dynodes
- R: dynode multiplication factor, that is the number of secondary electrons emitted by a dynode per primary electron absorbed.

Therefore the charge collected at the anode is given by the following equation:

Q = m k l R^n e

where e is the electronic charge. For example suppose a 100 keV gamma-ray is absorbed in the crystal. The number of light photons produced, m, might be about 1,000 for a typical scintillation crystal. A typical crystal might have an optical efficiency, k, of 0.5 - in other words 50% of the light produced reaches the photocathode - which might have a quantum efficiency of 0.15. A typical PMT has ten dynodes and let us assume that the dynode multiplication factor is 4.5. The charge collected at the anode is then:

Q = 1,000 x 0.5 x 0.15 x 4.5^10 x 1.6 x 10^-19 C ≈ 4 x 10^-11 C

This amount of charge is very small. Even though we have used a sophisticated photodetector like a PMT we still end up with quite a small electrical signal. A very sensitive amplifier is therefore needed to amplify this signal. This type of amplifier is generally called a pre-amplifier and we will refer to it again later.

Output Voltage

We noted above that the voltage measured across the resistor, RL, is proportional to the energy deposited in the scintillation crystal by the radiation. Let us consider how the radiation might deposit its energy in the crystal.
Let us consider a situation where gamma-rays are detected by the crystal. We learnt in Chapter 5 that there were two interaction mechanisms involved in gamma-ray attenuation - the Photoelectric Effect and the Compton Effect. You will remember that the Photoelectric Effect involves the total absorption of the energy of a gamma-ray, while the Compton Effect involves just partial absorption of this energy. Since the output voltage of a scintillation detector is proportional to the energy deposited by the gamma-rays, it is reasonable to expect that Photoelectric Effects in the crystal will generate distinct and relatively large output voltages and that Compton Effects will result in lower output voltages.

The usual way of presenting this information is by plotting a graph of the count rate versus the output voltage pulse height, as shown in the following figure:

This plot illustrates what is obtained for a monoenergetic gamma-emitting radioisotope, for example 99mTc - which, as we have noted before, emits a single gamma-ray with an energy of 140 keV. Before we look at it in detail, remember that we noted above that the output voltage from this detector is proportional to the energy deposited by the radiation in the crystal. The horizontal axis can therefore be used to represent the output voltage or the gamma-ray energy. Both of these quantities are shown in the figure to help with this discussion. In addition, note that this plot is often called a Gamma-Ray Energy Spectrum.

The figure above contains two regions. One is called the Photopeak and the other the Compton Smear. The Photopeak results because of Photoelectric absorption of the gamma-rays from the radioactive source - remember that we are dealing with a monoenergetic emitter in this example. It consists of a peak representing the gamma-ray energy (140 keV in our example). If our radioisotope emitted gamma-rays of two energies we would have two photopeaks in our spectrum, and so on. Notice that the peak has a statistical spread. This has to do with how good our detector is, and we will not get into any detail about it here other than to note that the extent of this spread is a measure of the quality of our detector. A high quality (and more expensive!) detector will have a narrower statistical spread in the photopeaks which it measures.

The other component of our spectrum is the Compton Smear. It represents a range of output voltages which are lower than that for the Photopeak. It is therefore indicative of the partial absorption of the energy of gamma-rays in the crystal. In some Compton Effects a substantial scattering with a valence electron can occur, which gives rise to relatively large voltage pulses. In other Compton Effects the gamma-ray just grazes off a valence electron with minimal energy transfer and hence a relatively small voltage pulse is generated. In between these two extremes are a range of scattering events involving a range of energy transfers and hence a range of voltage pulse heights. A 'smear' therefore manifests itself on the gamma-ray energy spectrum.

It is important to note that the spectrum illustrated in the figure is simplified for the sake of this introductory discussion and that actual spectra are a little more complex - see figure below for an example:

You will find though that your understanding of actual spectra can easily develop on the basis of the simple picture we have painted here.
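To make the shape of such a spectrum a little more concrete, here is a hedged Python sketch (not from the text) that builds a toy pulse-height spectrum for a 140 keV emitter: full-energy depositions form the photopeak, while partial depositions up to the Compton edge form the smear. The event numbers, the 7 keV energy blur and the 70/30 split between full and partial absorptions are arbitrary choices for illustration; only the Compton edge is computed from the standard kinematics.

```python
# Toy gamma-ray energy spectrum for a 140 keV emitter (illustrative only).
import random

E_GAMMA = 140.0                      # keV, e.g. 99mTc
ALPHA = E_GAMMA / 511.0              # gamma energy / electron rest energy
COMPTON_EDGE = E_GAMMA * 2 * ALPHA / (1 + 2 * ALPHA)   # about 49.6 keV

def simulate_pulses(n_events=100_000, resolution_kev=7.0, photo_fraction=0.7):
    """Return a list of deposited energies (keV).

    photo_fraction of events deposit the full gamma-ray energy (photopeak);
    the rest deposit a partial energy up to the Compton edge (Compton smear).
    A Gaussian blur stands in for the finite energy resolution of the
    detector. All of these numbers are assumptions, not measured values."""
    pulses = []
    for _ in range(n_events):
        if random.random() < photo_fraction:
            deposited = E_GAMMA                          # photoelectric absorption
        else:
            deposited = random.uniform(0, COMPTON_EDGE)  # single Compton scatter
        pulses.append(random.gauss(deposited, resolution_kev))
    return pulses

def histogram(pulses, bin_width=2.0):
    """Crude count-versus-pulse-height histogram of the simulated events."""
    counts = {}
    for e in pulses:
        b = int(e // bin_width)
        counts[b] = counts.get(b, 0) + 1
    return counts
```

Plotting the histogram would show a narrow peak around 140 keV sitting beside a broad band of lower pulse heights, which is the simplified photopeak-plus-smear picture described above.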
It is also important to appreciate the additional information which this type of radiation detector provides relative to a gas-filled detector. In essence, gas-filled detectors can be used to tell us if any radiation is present as well as the amount of that radiation. Scintillation detectors also give us this information, but they tell us about the energy of this radiation as well. This additional information can be used for many diverse applications such as the identification of unknown radioisotopes and the production of nuclear medicine images. Let us stay a little bit longer though with the fundamental features of how scintillation detectors work.

The photopeak of the Gamma-Ray Energy Spectrum is generally of interest in nuclear medicine. This peak is the main signature of the radioisotope being used and its isolation from the Compton Smear is normally achieved using a technique called Pulse Height Analysis.

Pulse Height Analysis

This is an electronic technique which allows a spectrum to be acquired using two types of circuitry. One circuit is called a Lower Level Discriminator, which only allows voltage pulses through it which are higher than its setting. The other is called an Upper Level Discriminator, which only allows voltage pulses through which are (you guessed it!) lower than its setting. The result of using both these circuits in combination is a variable-width window which can be placed anywhere along a spectrum. For example, if we wished to obtain information from the photopeak only of our simplified spectrum we would place the discrimination controls as shown in the following figure:

A final point to note here is that since the scintillation detector is widely used to obtain information about the energies of the radiation emitted from a radioactive source it is frequently referred to as a Scintillation Spectrometer.

Scintillation Spectrometer

Types of scintillation spectrometer fall into two basic categories - the relatively straightforward Single Channel Analyser and the more sophisticated Multi-Channel Analyser.

The Single Channel Analyser is the type of instrument we have been describing so far in this discussion. A block diagram of the instrument is shown below:

It consists of a scintillation crystal coupled to a photomultiplier tube which is powered by a high voltage circuit (H.V.). The output voltages are initially amplified by a sensitive pre-amplifier (Pre-Amp), as we noted above, before being amplified further and conditioned by the amplifier (Amp). The voltage pulses are then in a suitable form for the pulse height analyser (P.H.A.) - the output pulses from which can be fed to a Scaler and a Ratemeter for display of the information about the portion of the spectrum we have allowed to pass through the PHA. The Ratemeter is a display device, just like the speedometer in a car, and indicates the number of pulses generated per unit time. The Scaler on the other hand usually consists of a digital display which shows the number of voltage pulses produced in a specified period of time.

We can illustrate the operation of this circuitry by considering how it might be used to generate a Gamma-Ray Energy Spectrum. What we would do is set up the LLD and ULD so as to define a narrow window and place this to pass the lowest voltage pulses produced by the detector through to the Scaler and Ratemeter. In other words we would place a narrow window at the extreme left of the spectrum and acquire information about the lowest energy gamma-ray interactions in the crystal.
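A minimal sketch of this discrimination logic is given below (a Python illustration, not part of the original text). A pulse is counted only if its height falls above the LLD setting and below the ULD setting; the 126-154 window is simply an assumed setting chosen to bracket a 140 keV photopeak.

```python
# Sketch: pulse height analysis with a lower and an upper level discriminator.
# The window settings are assumptions chosen to bracket a 140 keV photopeak.

LLD = 126.0   # lower level discriminator setting (keV-equivalent pulse height)
ULD = 154.0   # upper level discriminator setting

def in_window(pulse_height: float, lld: float = LLD, uld: float = ULD) -> bool:
    """True if the pulse passes both discriminators, i.e. lies inside the window."""
    return lld < pulse_height < uld

def count_in_window(pulses, lld: float = LLD, uld: float = ULD) -> int:
    """What a scaler would register for a given list of pulse heights."""
    return sum(1 for p in pulses if in_window(p, lld, uld))
```

Stepping such a window along the pulse-height axis, as described in the next paragraph, is how a single channel analyser builds up a whole spectrum.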
We would then adjust the LLD and ULD settings to acquire information about the interactions of the next highest energy. We would proceed in this fashion to scan the whole spectrum.

A more sophisticated detector circuit is illustrated in the following figure:

It is quite similar to that in the previous figure, with the exception that the PHA, Scaler and Ratemeter are replaced by a Multi-Channel Analyser and a computer. The Multi-Channel Analyser (MCA) is a circuit which is capable of setting up a large number of individual windows to look at a complete spectrum in one go. The MCA might consist of 1024 individual windows, for example, and the computer might consist of a personal computer which can acquire information simultaneously from each window and display it as an energy spectrum. The computer generally contains software which allows us to manipulate the resultant information in a variety of ways. Indeed the 137Cs spectrum shown above was generated using this approach.

External Links

- Radiation and Radioactivity - a self-paced lesson developed by the University of Michigan's Student Chapter of the Health Physics Society, with a section on sodium iodide detectors.

Nuclear Medicine Imaging Systems

Topics we have covered in this wikibook have included radioactivity, the interaction of gamma-rays with matter and radiation detection. The main reason for following this pathway was to bring us to the subject of this chapter: nuclear medicine imaging systems. These are devices which produce pictures of the distribution of radioactive material following administration to a patient.

The radioactivity is generally administered to the patient in the form of a radiopharmaceutical - the term radiotracer is also used. This follows some physiological pathway to accumulate for a short period of time in some part of the body. A good example is 99mTc-tin colloid, which following intravenous injection accumulates mainly in the patient's liver. The substance emits gamma-rays while it is in the patient's liver and we can produce an image of its distribution using a nuclear medicine imaging system. This image can tell us whether the function of the liver is normal or abnormal or if sections of it are damaged from some form of disease.

Different radiopharmaceuticals are used to produce images from almost every region of the body:

| Part of the Body | Example Radiotracer |
|---|---|
| Liver | 99mTc-tin colloid |

Note that the form of information obtained using this imaging method is mainly related to the physiological functioning of an organ, as opposed to the mainly anatomical information which is obtained using X-ray imaging systems. Nuclear medicine therefore provides a different perspective on a disease condition and generates additional information to that obtained from X-ray images. Our purpose here is to concentrate on the imaging systems used to produce the images.

Early forms of imaging system used in this field consisted of a radiation detector (a scintillation detector for example) which was scanned slowly over a region of the patient in order to measure the radiation intensity emitted from individual points within the region. One such device was called the Rectilinear Scanner. Such imaging systems have been replaced since the 1970s by more sophisticated devices which produce images much more rapidly. The most common of these modern devices is called the Gamma Camera and we will consider its construction and mode of operation below. A review of recent developments in this technology for cardiac applications can be found in Slomka et al (2009).
Gamma Camera

The basic design of the most common type of gamma camera used today was developed by an American physicist, Hal Anger, and is therefore sometimes called the Anger Camera. It consists of a large diameter NaI(Tl) scintillation crystal which is viewed by a large number of photomultiplier tubes.

A block diagram of the basic components of a gamma camera is shown below:

The crystal and PM tubes are housed in a cylindrically shaped housing commonly called the camera head, and a cross-sectional view of this is shown in the figure. The crystal can be between about 25 cm and 40 cm in diameter and about 1 cm thick. The diameter is dependent on the application of the device. For example, a 25 cm diameter crystal might be used for a camera designed for cardiac applications, while a larger 40 cm crystal would be used for producing images of the lungs. The thickness of the crystal is chosen so that it provides good detection for the 140 keV gamma-rays emitted from 99mTc - which is the most common radioisotope used today.

Scintillations produced in the crystal are detected by a large number of PM tubes which are arranged in a two-dimensional array. There are typically between 37 and 91 PM tubes in modern gamma cameras. The output voltages generated by these PM tubes are fed to a position circuit which produces four output signals called ±X and ±Y. These position signals contain information about where the scintillations were produced within the crystal. In the most basic gamma camera design they are fed to a cathode ray oscilloscope (CRO). We will describe the operation of the CRO in more detail below.

Before we do so we should note that the position signals also contain information about the intensity of each scintillation. This intensity information can be derived from the position signals by feeding them to a summation circuit (marked ∑ in the figure) which adds up the four position signals to generate a voltage pulse which represents the intensity of a scintillation. This voltage pulse is commonly called the Z-pulse (or zee-pulse in American English!), which following pulse height analysis (PHA) is fed as the unblank pulse to the CRO. So we end up with four position signals and an unblank pulse sent to the CRO.

Let us briefly review the operation of a CRO before we continue. The core of a CRO consists of an evacuated tube with an electron gun at one end and a phosphor-coated screen at the other end. The electron gun generates an electron beam which is directed at the screen, and the screen emits light at those points struck by the electron beam. The position of the electron beam can be controlled by vertical and horizontal deflection plates, and with the appropriate voltages fed to these plates the electron beam can be positioned at any point on the screen. The normal mode of operation of an oscilloscope is for the electron beam to remain switched on. In the case of the gamma camera the electron beam of the CRO is normally switched off - it is said to be blanked. When an unblank pulse is generated by the PHA circuit the electron beam of the CRO is switched on for a brief period of time so as to display a flash of light on the screen. In other words the voltage pulse from the PHA circuit is used to unblank the electron beam of the CRO.

So where does this flash of light occur on the screen of the CRO? The position of the flash of light is dictated by the ±X and ±Y signals generated by the position circuit.
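The arithmetic behind the position and Z signals can be sketched as follows. This is a simplified Python illustration rather than anything specified in the text: the normalisation of the position estimate by the Z-pulse is how the classic Anger design behaves, but the function names and the scaling constant here are assumptions for illustration only.

```python
# Sketch of Anger-style position arithmetic (simplified; names are illustrative).

def z_pulse(x_plus: float, x_minus: float, y_plus: float, y_minus: float) -> float:
    """Summation circuit: the Z-pulse represents the scintillation intensity."""
    return x_plus + x_minus + y_plus + y_minus

def position(x_plus, x_minus, y_plus, y_minus, scale: float = 1.0):
    """Estimate the (X, Y) coordinates of a scintillation.

    In the classic Anger design the difference signals are normalised by the
    Z-pulse, so that the computed position does not depend on how bright the
    scintillation was; 'scale' is just an arbitrary display scaling factor."""
    z = z_pulse(x_plus, x_minus, y_plus, y_minus)
    x = scale * (x_plus - x_minus) / z
    y = scale * (y_plus - y_minus) / z
    return x, y
```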
These signals, as you might have guessed, are fed to the deflection plates of the CRO so as to cause the unblanked electron beam to strike the screen at a point related to where the scintillation was originally produced in the NaI(Tl) crystal. Simple!

The gamma camera can therefore be considered to be a sophisticated arrangement of electronic circuits used to translate the position of a flash of light in a scintillation crystal to a flash of light at a related point on the screen of an oscilloscope. In addition, the use of a pulse height analyser in the circuitry allows us to display only those scintillations related to photoelectric events in the crystal, by rejecting all voltage pulses except those occurring within the photopeak of the gamma-ray energy spectrum.

Let us summarise where we have got to before we proceed. A radiopharmaceutical is administered to the patient and it accumulates in the organ of interest. Gamma-rays are emitted in all directions from the organ, and those heading in the direction of the gamma camera enter the crystal and produce scintillations (note that there is a device in front of the crystal called a collimator which we will discuss later). The scintillations are detected by an array of PM tubes whose outputs are fed to a position circuit which generates four voltage pulses related to the position of a scintillation within the crystal. These voltage pulses are fed to the deflection circuitry of the CRO. They are also fed to a summation circuit whose output (the Z-pulse) is fed to the PHA, and the output of the PHA is used to switch on (that is, unblank) the electron beam of the CRO. A flash of light appears on the screen of the CRO at a point related to where the scintillation occurred within the NaI(Tl) crystal. An image of the distribution of the radiopharmaceutical within the organ is therefore formed on the screen of the CRO when the gamma-rays emitted from the organ are detected by the crystal.

What we have described above is the operation of a fairly traditional gamma camera. Modern designs are a good deal more complex, but the basic design has remained much the same as has been described. One area where major design improvements have occurred is the area of image formation and display. The most basic approach to image formation is to photograph the screen of the CRO over a period of time to allow integration of the light flashes to form an image on photographic film. A stage up from this is to use a storage oscilloscope which allows each flash of light to remain on the screen for a reasonable period of time.

The most modern approach is to feed the position and energy signals into the memory circuitry of a computer for storage. The memory contents can therefore be displayed on a computer monitor and can also be manipulated (that is, processed) in many ways. For example, various colours can be used to represent different concentrations of a radiopharmaceutical within an organ. The use of digital image processing is now widespread in nuclear medicine, in that it can be used to rapidly and conveniently control image acquisition and display, as well as to analyse an image or sequences of images, to annotate images with the patient's name and examination details, to store the images for subsequent retrieval and to communicate the image data to other computers over a network.

The essential elements of a modern gamma camera are shown in the next figure.
Gamma rays emitted by the patient pass through the collimator and are detected within the camera head, which generates data related to the location of scintillations in the crystal as well as to the energy of the gamma rays. This data is then processed on the fly by electronic hardware which corrects for technical factors such as spatial linearity, PM tube drift and energy response, so as to produce an imaging system with a spatially-uniform sensitivity and distortion-free performance. A multichannel analyzer (MCA) is used to display the energy spectrum of gamma rays which interact inside the crystal.

Since these gamma rays originate from within the patient, some of them will have an energy lower than the photopeak as a result of being scattered as they travel through the patient's tissues - and by other components such as the patient table and structures of the imaging system. Some of these scattering events may involve just glancing interactions with free electrons, so that the gamma rays lose only a small amount of energy. These gamma rays may have an energy just below that of the photopeak, so that their spectrum merges with the photopeak. The photopeak for a gamma camera imaging a patient therefore contains information from spatially-correlated, unattenuated gamma rays (which is the information we want) and from spatially-uncorrelated, scattered gamma rays. The scattered gamma rays act like a variable background within the true photopeak data and the effect is that of a background haze in gamma camera images. While scatter may not be a significant problem in planar scintigraphy, it has a strong bearing on the fidelity of quantitative information derived from gamma camera images and is a vital consideration for accurate image reconstruction in emission tomography. It is the unattenuated gamma rays (also called the primary radiation) that contain the desired information, because of their direct dependence on radioactivity.

The scatter situation is illustrated in more detail in the figure below, which shows estimates of the primary and scatter spectra for 99mTc in patient imaging conditions. Such spectral estimates can be generated using Monte Carlo methods. It is seen in the figure that the energy of the scattered radiation forms a broad band, similar to the Compton Smear described previously, which merges into and contributes substantially to the detected photopeak. The detected photopeak is therefore an overestimate of the primary radiation. The extent of this overestimate is likely to be dependent on the specific imaging situation because of the different thicknesses of tissues involved. It is clear however that the scatter contribution within the detected photopeak needs to be accounted for if an accurate measure of radioactivity is required.

One method of compensating for the scatter contribution is illustrated in the figure below and involves using data from a lower energy window as an estimate for subtraction from the photopeak, i.e.

Primary counts ≈ Photopeak window counts - k x (Lower energy window counts)

where k is a scaling factor to account for the extent of the scatter contribution. This approach to scatter compensation is referred to as the Dual-Energy Window (DEW) method. It can be implemented in practice by acquiring two images, one for each energy window, and subtracting a fraction (k) of the scatter image from the photopeak image. For the spectrum shown above, it can be seen that the scaling factor, k, is about 0.5, but it should be appreciated that its exact value is dependent on the scattering conditions.
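A hedged sketch of this dual-energy window subtraction is given below, written in Python and operating on per-pixel counts. The default k of 0.5 comes from the spectrum discussed above but, as noted, must be adjusted to the scattering conditions; the clamping of negative results to zero is simply a practical assumption, not something specified in the text.

```python
# Sketch: Dual-Energy Window (DEW) scatter compensation on a per-pixel basis.
# k = 0.5 is taken from the example in the text; in practice it is adjusted
# to the imaging conditions. Clamping negative results to zero is an assumption.

def dew_correct(photopeak_image, scatter_image, k: float = 0.5):
    """Subtract a fraction k of the scatter-window image from the
    photopeak-window image, pixel by pixel (images given as nested lists)."""
    corrected = []
    for pp_row, sc_row in zip(photopeak_image, scatter_image):
        corrected.append([max(pp - k * sc, 0.0) for pp, sc in zip(pp_row, sc_row)])
    return corrected
```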
Gamma cameras which use the DEW method therefore generally provide the capability of adjusting k for different imaging situations. Some systems use a narrower scatter window than that illustrated, e.g. 114-126 keV, with a consequent increase in k to about 1.0. A host of other methods of scatter compensation have also been developed. These include more complex forms of energy analysis, such as the Dual-Photopeak and the Triple-Energy Window techniques, as well as approaches based on deconvolution and models of photon attenuation. An excellent review of these developments is provided in Zaidi & Koral (2004).

Some photographs of gamma cameras and related devices are shown below:

We will continue with our description of the gamma camera by considering the construction and purpose of the collimator. The collimator is a device which is attached to the front of the gamma camera head. It functions something like a lens used in a photographic camera, but this analogy is not quite correct because it is rather difficult to focus gamma-rays. Nevertheless, in its simplest form it is used to block out all gamma rays which are heading towards the crystal except those which are travelling at right angles to the plane of the crystal:

The figure illustrates a magnified view of a parallel-hole collimator attached to a crystal. The collimator simply consists of a large number of small holes drilled in a lead plate. Notice that gamma-rays entering at an angle to the crystal get absorbed by the lead, and that only those entering along the direction of the holes get through to cause scintillations in the crystal. If the collimator were not in place these obliquely incident gamma-rays would blur the images produced by the gamma camera. In other words the images would not be very clear.

Most gamma cameras have a number of collimators which can be fitted depending on the examination. The basic design of these collimators is the same, except that they vary in terms of the diameter of each hole, the depth of each hole and the thickness of lead between each hole (commonly called the septum thickness). The choice of a specific collimator is dependent on the amount of radiation absorption that occurs (which influences the sensitivity of the gamma camera), and the clarity of images (that is, the spatial resolution) it produces. Unfortunately these two factors are inversely related, in that the use of a collimator which produces images of good spatial resolution generally implies that the instrument is not very sensitive to radiation.

Other collimator designs besides the parallel hole type are also in use. For example, a diverging hole collimator produces a minified image, and converging hole and pin-hole collimators produce a magnified image. The pin-hole collimator is illustrated in the following figure:

It is typically a cone-shaped device with its walls made from lead. A cross-section through this cone is shown in the figure. It operates in a similar fashion to a pin-hole photographic camera and produces an inverted image of an object - an arrow is used in the figure to illustrate this inversion. This type of collimator has been found useful for imaging small objects such as the thyroid gland.

Example Images

A representative selection of nuclear medicine images is shown below:

Emission Tomography

The form of imaging which we have been describing is called Planar Imaging. It produces a two-dimensional image of a three-dimensional object.
As a result, images contain no depth information and some details can be superimposed on top of each other, becoming obscured or partially obscured. Note that this is also a feature of conventional X-ray imaging. The usual way of trying to overcome this limitation is to take at least two views of the patient, one from the front and one from the side for example. So in chest radiography a postero-anterior (PA) and a lateral view can be taken, and in a nuclear medicine liver scan an antero-posterior (AP) and lateral scan are acquired.

This limitation of planar X-ray imaging was overcome by the development of the CAT Scanner around 1970. CAT stands for Computerized Axial Tomography or Computer Assisted Tomography, and today the term is often shortened to Computed Tomography or CT scanning (the term tomography comes from the Greek word tomos meaning slice). Irrespective of its exact name, the technique allows images of slices through the body to be produced using a computer. It does this in essence by taking X-ray images at a number of angles around the patient. These slice images show the third dimension which is missing from planar images and thus eliminate the problem of superimposed details. Furthermore, images of a number of successive slices through a region of the patient can be stacked on top of each other using the computer to produce a three-dimensional image. Clearly CT scanning is a very powerful imaging technique relative to planar imaging.

The equivalent nuclear medicine imaging technique is called Emission Computed Tomography. We will consider two implementations of this technique below.

Single Photon Emission Computed Tomography (SPECT)

- This SPECT technique uses a gamma camera to record images at a series of angles around the patient. These images are then subjected to a form of digital image processing called Image Reconstruction in order to compute images of slices through the patient.
- The Back Projection reconstruction process is illustrated below. Let us assume for simplicity that the slice through the patient actually consists of a 2x2 voxel array with the radioactivity in each voxel given by A1...A4:
- The first projection, P1, is imaged from the right and the second projection, P2, from the right oblique and so on. The back projection process involves firstly adding the projections to each other as shown below:
- and then normalising the summed (or superimposed) projections to generate an estimate of the radioactivity in each voxel. Since this process can generate streaking artefacts in reconstructed images, the projections are generally filtered prior to back projection, as described in a later chapter, with the overall process referred to as Filtered Back Projection (FBP):
- An alternative image reconstruction technique is called Iterative Reconstruction. This is a successive approximation technique as illustrated below:

(Illustration panels: Projection, Patient, Additive Iterative Reconstruction)

- The first estimate of the image matrix is made by distributing the first projection, P1, evenly through an empty pixel matrix. The second projection, P2, is compared to the same projection from the estimated matrix and the difference between actual and estimated projections is added to the estimated matrix. The process is repeated for all other projections.
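- As a rough illustration of simple (unfiltered) back projection for the 2x2 example above, the Python sketch below smears each projection value back over the voxels it passed through and then normalises the superimposed result. The projection geometry used here (one projection along the rows and one along the columns, with the oblique projections of the figure omitted) is an assumption made to keep the example short, not the exact arrangement in the figure:

```python
# Sketch: simple (unfiltered) back projection for a 2x2 voxel slice.
# Illustrative geometry only: one projection along the rows and one along
# the columns; the oblique projections shown in the figure are omitted.

def back_project_2x2(row_sums, col_sums):
    """row_sums = [A1 + A2, A3 + A4]; col_sums = [A1 + A3, A2 + A4].

    Each projection value is smeared back evenly over the two voxels it
    passed through, the smeared projections are superimposed, and the
    result is normalised by the number of projections. The estimate is a
    blurred version of the true slice, which is why the projections are
    filtered before back projection in practice (FBP)."""
    n_proj = 2
    estimate = [[0.0, 0.0], [0.0, 0.0]]
    for i in range(2):
        for j in range(2):
            estimate[i][j] += row_sums[i] / 2.0 + col_sums[j] / 2.0
    return [[v / n_proj for v in row] for row in estimate]

# Example: a slice [[1, 3], [2, 4]] gives row_sums [4, 6] and col_sums [3, 7];
# back_project_2x2([4, 6], [3, 7]) returns the blurred estimate
# [[1.75, 2.75], [2.25, 3.25]], whose total (10) matches the original slice.
```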
- The Maximum-Likelihood Expectation-Maximisation (ML-EM) algorithm is a refinement to this iterative approach where a division process is used to compare the actual and estimated projections, as shown below:
- One cycle of data through this processing chain is referred to as one iteration. Sixteen or more iterations can be required in order to generate an adequate reconstruction and, as a result, computation times can be rather long. The Ordered-Subsets Expectation-Maximisation (OS-EM) algorithm can be used to substantially reduce the computation time by utilising a limited number of projections (called subsets) in a sequential fashion within the iterative process. Noise generated during the reconstruction process can be reduced, for example, using a Gaussian filter built into the reconstruction calculations or applied as a post-filter:
- A comparison of these image reconstruction techniques is shown below for a slice through a ventilation scan of a patient's lungs:
- The gamma camera is typically rotated around the patient in order to acquire the images. Modern gamma cameras which are designed specifically for SPECT scanning can consist of two camera heads mounted parallel to each other with the patient in between. The time required to produce images is therefore reduced by a factor of about two. In addition, some SPECT gamma cameras designed for brain scanning have three camera heads mounted in a triangular arrangement.
- A wide variety of strategies can be used for the acquisition and processing of SPECT images.

Positron Emission Tomography (PET)

- You will remember from chapter 2 that positrons can be emitted from radioactive nuclei which have too many protons for stability. You will also remember that positrons do not last for very long in matter, since they will quickly encounter an electron and a process called annihilation results. In the process the positron and electron vanish and their energy is converted into two gamma-rays which are emitted at roughly 180° to each other. The emission is often referred to as two back-to-back gamma-rays and they each have a discrete energy of 0.51 MeV.
- So if we administer a positron-emitting radiopharmaceutical to a patient, an emitted positron can annihilate with a nearby electron and two gamma-rays will be emitted in opposite directions. These gamma-rays can be detected using a ring of radiation detectors encircling the patient, and tomographic images can be generated using a computer system. The detectors are typically specialised scintillation devices which are optimised for detection of the 0.51 MeV gamma-rays. This ring of detectors, associated apparatus and computer system are called a PET Scanner:
- The locations of positron decays within the patient are highlighted by the solid circles in the above diagram. In addition, only a few detectors are shown in the diagram for reasons of clarity. Each detector around the ring is operated in coincidence with a bank of opposing detectors and the annihilation gamma-rays thus detected are used to build up a single profile.
- It has also been found that gamma cameras fitted with thick crystals and special collimators can be used for PET scanning.
- The radioisotopes used for PET scanning include 11C, 13N, 15O and 18F. These isotopes are usually produced using an instrument called a cyclotron. In addition, these isotopes have relatively short half lives. PET scanning therefore needs a cyclotron and associated radiopharmaceutical production facilities located close by.
We will consider cyclotrons in the next chapter of this wikibook.

- Standardized Uptake Value (SUV) is a semi-quantitative index used in PET to express the uptake of a radiopharmaceutical in a region of interest of a patient's scan. It is typically calculated as the ratio of the radioactivity in the region to the injected dose, corrected for body weight. It should be noted that the SUV is influenced by several major sources of variability and it therefore should not be used as a quantitative measure.
- A number of photographs of a PET scanner are shown below:
- Slomka PJ, Patton JA, Berman DS & Germano G, 2009. Advances in technical aspects of myocardial perfusion SPECT imaging. Journal of Nuclear Cardiology, 16(2), 255-76.

External Links

- Centre for Positron Emission Tomography at the Austin & Repatriation Medical Centre, Melbourne, with sections on what PET is, current facilities, projects & research and a PET image library.
- Online Learning Tools - an advanced treatment from the Department of Radiology, Brigham and Women's Hospital, USA, containing nuclear medicine teaching files, an atlas of myocardial perfusion SPECT, an atlas of brain perfusion SPECT and the physical characteristics of nuclear medicine images.

Production of Radioisotopes

Most of the radioisotopes found in nature have relatively long half lives. They also belong to elements which are not handled well by the human body. As a result, medical applications generally require the use of radioisotopes which are produced artificially. We have looked at the subject of radioactivity in earlier chapters of this wikibook and have then progressed to cover the interaction of radiation with matter, radiation detectors and imaging systems. We return to sources of radioactivity in this chapter in order to learn about methods which are used to make radioisotopes.

The type of radioisotope of value to nuclear medicine imaging should have characteristics which keep the radiation dose to the patient as low as possible. For this reason they generally have a short half life and emit only gamma-rays - that is, no alpha-particle or beta-particle emissions. From an energy point of view, the gamma-ray energy should not be so low that the radiation gets completely absorbed before emerging from the patient's body, nor so high that it is difficult to detect. For this reason most of the radioisotopes used emit gamma-rays of medium energy, that is between about 100 and 200 keV. Finally, since the radioisotope needs to be incorporated into some form of radiopharmaceutical, it should also be capable of being produced in a form which is amenable to chemical, pharmaceutical and sterile processing.

The production methods we will consider are nuclear fission, nuclear bombardment and the radioisotope generator.

Nuclear Fission

We were introduced to spontaneous fission in chapter 2, where we saw that a heavy nucleus can break into a number of fragments. This disintegration process can be induced to occur when certain heavy nuclei absorb neutrons. Following absorption of a neutron, such nuclei break into smaller fragments with atomic numbers between about 30 and 65. Some of these new nuclei are of value to nuclear medicine and can be separated from other fission fragments using chemical processes.

Nuclear Bombardment

In this method of radioisotope production, charged particles are accelerated up to very high energies and caused to collide with a target material. Examples of such charged particles are protons, alpha particles and deuterons.
New nuclei can be formed when these particles collide with nuclei in the target material. Some of these nuclei are of value to nuclear medicine. An example of this method is the production of 22Na, where a target of 24Mg is bombarded with deuterons, that is:

24Mg + 2H → 22Na + 4He.

A deuteron, you will remember from chapter 1, is the second most common isotope of hydrogen, that is 2H. When it collides with a 24Mg nucleus, a 22Na nucleus plus an alpha particle is produced. The target is exposed to the deuterons for a period of time and is subsequently processed chemically in order to separate out the 22Na nuclei.

The type of device commonly used for this method of radioisotope production is called a cyclotron. It consists of an ion gun for producing the charged particles, electrodes for accelerating them to high energies and a magnet for steering them towards the target material, all arranged in a circular structure.

Radioisotope Generator

This method is widely used to produce certain short-lived radioisotopes in a hospital or clinic. It involves obtaining a relatively long-lived radioisotope which decays into the short-lived isotope of interest. A good example is 99mTc, which as we have noted before is the most widely used radioisotope in nuclear medicine today. This isotope has a half-life of six hours, which is rather short if we wish to have it delivered directly from a nuclear facility. Instead the nuclear facility supplies the isotope 99Mo, which decays into 99mTc with a half-life of about 2.75 days. The 99Mo is called the parent isotope and 99mTc is called the daughter isotope.

So the nuclear facility produces the parent isotope, which decays relatively slowly into the daughter isotope, and the daughter is separated chemically from the parent at the hospital/clinic. The chemical separation device is called, in this example, a 99mTc Generator:

It consists of a ceramic column with 99Mo adsorbed onto its top surface. A solution called an eluent is passed through the column, reacts chemically with any 99mTc and emerges in a chemical form which is suitable for combining with a pharmaceutical to produce a radiopharmaceutical. The arrangement shown in the figure on the right is called a Positive Pressure system, where the eluent is forced through the ceramic column by a pressure, slightly above atmospheric pressure, in the eluent vial.

The ceramic column and collection vials need to be surrounded by lead shielding for radiation protection purposes. In addition, all components are produced and need to be maintained in a sterile condition, since the collected solution will be administered to patients. Finally, an Isotope Calibrator is needed when a 99mTc Generator is used, to determine the radioactivity for preparation of patient doses and to check whether any 99Mo is present in the collected solution.

Operation of a 99m-Tc Generator

Suppose we have a sample of 99Mo and suppose that at time t = 0 there are N(0) nuclei in our sample and nothing else. The number of 99Mo nuclei decreases with time according to the radioactive decay law as discussed in Chapter 3:

N_Mo(t') = N(0) e^(-λ_Mo t')

where λ_Mo is the decay constant for 99Mo. Thus the number of 99Mo nuclei that decay during a small time interval dt' is given by

dN = λ_Mo N_Mo(t') dt'

Since 99Mo decays into 99mTc, the same number of 99mTc nuclei are formed during the time period dt'. At a later time t, only a fraction of these nuclei will still be around, since the 99mTc is also decaying. The time available for the 99mTc to decay is t - t'. Plugging this into the radioactive decay law, we arrive at the contribution which the interval dt' makes to the number of 99mTc nuclei present at time t:

dN_Tc = λ_Mo N(0) e^(-λ_Mo t') e^(-λ_Tc (t - t')) dt'

Now we sum up the little contributions dN_Tc.
In other words, we integrate over t' in order to find the number N_Tc(t), that is the number of all 99mTc nuclei present at the time t:

N_Tc(t) = ∫ (from t' = 0 to t) λ_Mo N(0) e^(-λ_Mo t') e^(-λ_Tc (t - t')) dt'

Finally, solving this integral we find:

N_Tc(t) = N(0) [λ_Mo / (λ_Tc - λ_Mo)] (e^(-λ_Mo t) - e^(-λ_Tc t))

The figure below illustrates the outcome of this calculation. The horizontal axis represents time (in days), while the vertical one represents the number of nuclei present (in arbitrary units). The green curve illustrates the exponential decay of a sample of pure 99mTc. The red curve shows the number of 99mTc nuclei present in a 99mTc generator that is never eluted. Finally, the blue curve shows the situation for a 99mTc generator that is eluted every 12 hours.

Photographs taken in a nuclear medicine hot lab are shown below:

External Links

- Concerns over Molybdenum Supplies - news from 2008 compiled by the British Nuclear Medicine Society.
- Cyclotron Java Applet - a Java-based interactive demonstration of the operation of a cyclotron from GFu-Kwun Hwang, Dept. of Physics, National Taiwan Normal University, Virtual Physics Laboratory.
- Nuclear Power Plant Demonstration - a Java-based interactive demonstration of controlling a nuclear reactor. Also contains nuclear power information links.
- ANSTO - information about Australia's nuclear organization.
- Medical Valley - contains information on what nuclear medicine is, production of nuclear pharmaceuticals, molybdenum and technetium - from The Netherlands Energy Research Foundation Petten.

Chapter Review

Chapter Review: Atomic & Nuclear Structure

- The atom consists of two components - a nucleus (positively charged) and an electron cloud (negatively charged);
- The radius of the nucleus is about 10,000 times smaller than that of the atom;
- The nucleus can have two component particles - neutrons (no charge) and protons (positively charged) - collectively called nucleons;
- The mass of a proton is about equal to that of a neutron - and is about 1,840 times that of an electron;
- The number of protons equals the number of electrons in an isolated atom;
- The Atomic Number specifies the number of protons in a nucleus;
- The Mass Number specifies the number of nucleons in a nucleus;
- Isotopes of elements have the same atomic number but different mass numbers;
- Isotopes are classified by specifying the element's chemical symbol preceded by a superscript giving the mass number and a subscript giving the atomic number;
- The atomic mass unit is defined as 1/12th the mass of the stable, most commonly occurring isotope of carbon (i.e. C-12);
- Binding energy is the energy which holds the nucleons together in a nucleus and is measured in electron volts (eV);
- To combat the effect of the increase in electrostatic repulsion as the number of protons increases, the number of neutrons increases more rapidly - giving rise to the Nuclear Stability Curve;
- There are ~2450 isotopes of ~100 elements and the unstable isotopes lie above or below the Nuclear Stability Curve;
- Unstable isotopes attempt to reach the stability curve by splitting into fragments (fission) or by emitting particles/energy (radioactivity);
- Unstable isotopes <=> radioactive isotopes <=> radioisotopes <=> radionuclides;
- ~300 of the ~2450 isotopes are found in nature - the rest are produced artificially.

Chapter Review: Radioactive Decay

- Fission: Some heavy nuclei decay by splitting into 2 or 3 fragments plus some neutrons.
These fragments form new nuclei which are usually radioactive;
- Alpha Decay: Two protons and two neutrons leave the nucleus together in an assembly known as an alpha-particle;
- An alpha-particle is a He-4 nucleus;
- Beta Decay - Electron Emission: Certain nuclei with an excess of neutrons may reach stability by converting a neutron into a proton with the emission of a beta-minus particle;
- A beta-minus particle is an electron;
- Beta Decay - Positron Emission: When the number of protons in a nucleus is in excess, the nucleus may reach stability by converting a proton into a neutron with the emission of a beta-plus particle;
- A beta-plus particle is a positron;
- Positrons annihilate with electrons to produce two back-to-back gamma-rays;
- Beta Decay - Electron Capture: An inner orbital electron is attracted into the nucleus where it combines with a proton to form a neutron;
- Electron capture is also known as K-capture;
- Following electron capture, the excited nucleus may give off some gamma-rays. In addition, as the vacant electron site is filled, an X-ray is emitted;
- Gamma Decay - Isomeric Transition: A nucleus in an excited state may reach its ground state by the emission of a gamma-ray;
- A gamma-ray is an electromagnetic photon of high energy;
- Gamma Decay - Internal Conversion: the excitation energy of an excited nucleus is given to an atomic electron.

Chapter Review: The Radioactive Decay Law

- The radioactive decay law in equation form: N(t) = N(0) e^(-λt);
- Radioactivity is the number of radioactive decays per unit time;
- The decay constant is defined as the fraction of the initial number of radioactive nuclei which decay in unit time;
- Half Life: The time taken for the number of radioactive nuclei in the sample to reduce by a factor of two;
- Half Life = (0.693)/(Decay Constant);
- The SI Unit of radioactivity is the becquerel (Bq) - 1 Bq = one radioactive decay per second;
- The traditional unit of radioactivity is the curie (Ci);
- 1 Ci = 3.7 x 10^10 radioactive decays per second.

Chapter Review: Units of Radiation Measurement

- Exposure expresses the intensity of an X- or gamma-ray beam;
- The SI unit of exposure is the coulomb per kilogram (C/kg);
- 1 C/kg = The quantity of X- or gamma-rays such that the associated electrons emitted per kg of air at STP produce in air ions carrying 1 coulomb of electric charge;
- The traditional unit of exposure is the roentgen (R);
- 1 R = The quantity of X- or gamma-rays such that the associated electrons emitted per kg of air at STP produce in air ions carrying 2.58 x 10^-4 coulombs of electric charge;
- The exposure rate is the exposure per unit time, e.g. C/kg/s;
- Absorbed dose is the radiation energy absorbed per unit mass of absorbing material;
- The SI unit of absorbed dose is the gray (Gy);
- 1 Gy = The absorption of 1 joule of radiation energy per kilogram of material;
- The traditional unit of absorbed dose is the rad;
- 1 rad = The absorption of 10^-2 joules of radiation energy per kilogram of material;
- The Specific Gamma-Ray Constant expresses the exposure rate produced by the gamma-rays from a radioisotope;
- The Specific Gamma-Ray Constant is expressed in SI units in C/kg/s/Bq at 1 m;
- Exposure from an X- or gamma-ray source follows the Inverse Square Law and decreases with the square of the distance from the source.
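As a small worked illustration of these decay-law relationships, the Python sketch below converts a half-life to a decay constant and computes the activity remaining after a given time. It is not part of the original text; the 100 kBq starting activity and the 6-hour half-life of 99mTc are used purely as example inputs.

```python
# Sketch: decay constant, half-life and remaining activity.
# The example inputs (100 kBq initial activity, 6 h half-life) are illustrative.
import math

def decay_constant(half_life: float) -> float:
    """lambda = ln(2) / half-life  (approximately 0.693 / half-life)."""
    return math.log(2) / half_life

def activity(initial_activity: float, half_life: float, t: float) -> float:
    """Radioactive decay law: A(t) = A(0) * exp(-lambda * t)."""
    return initial_activity * math.exp(-decay_constant(half_life) * t)

if __name__ == "__main__":
    a0 = 100.0          # kBq
    t_half = 6.0        # hours (e.g. 99mTc)
    print(activity(a0, t_half, 24.0))   # about 6.25 kBq after four half-lives
```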
Chapter Review: Interaction of Radiation with Matter

- Alpha Particles:
- exert considerable electrostatic attraction on the outer orbital electrons of atoms near which they pass and cause ionisations;
- travel in straight lines - except for rare direct collisions with nuclei of atoms in their path;
- energy is always discrete.
- Beta-Minus Particles:
- attracted by nuclei and repelled by electron clouds as they pass through matter and cause ionisations;
- have a tortuous path;
- have a range of energies;
- range of energies results because two particles are emitted - a beta-particle and a neutrino.
- Gamma-Rays:
- energy is always discrete;
- have many modes of interaction with matter;
- important interactions for nuclear medicine imaging (and radiography) are the Photoelectric Effect and the Compton Effect.
- Photoelectric Effect:
- when a gamma-ray collides with an orbital electron, it may transfer all its energy to the electron and cease to exist;
- the electron can leave the atom with a kinetic energy equal to the energy of the gamma-ray less the orbital binding energy;
- a positive ion is formed when the electron leaves the atom;
- the electron is called a photoelectron;
- the photoelectron can cause further ionisations;
- subsequent X-ray emission as the orbital vacancy is filled.
- Compton Effect:
- a gamma-ray may transfer only part of its energy to a valence electron which is essentially free;
- gives rise to a scattered gamma-ray;
- is sometimes called Compton Scatter;
- a positive ion results;
- Attenuation is a term used to describe both absorption and scattering of radiation.

Chapter Review: Attenuation of Gamma-Rays

- Attenuation of a narrow beam of gamma-rays increases as the thickness, the density and the atomic number of the absorber increases;
- Attenuation of a narrow beam of gamma-rays decreases as the energy of the gamma-rays increases;
- Attenuation of a narrow beam is described by an exponential equation: I = I(0) e^(-μx);
- the Linear Attenuation Coefficient (μ) is defined as the fraction of the incident intensity absorbed in a unit distance of the absorber;
- Linear attenuation coefficients are usually expressed in units of cm^-1;
- the Half Value Layer is the thickness of absorber required to reduce the intensity of a radiation beam by a factor of 2;
- Half Value Layer = (0.693)/(Linear Attenuation Coefficient);
- the Mass Attenuation Coefficient is given by the linear attenuation coefficient divided by the density of the absorber;
- Mass attenuation coefficients are usually expressed in units of cm^2 g^-1.
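To make the attenuation relationships above concrete, here is a brief Python sketch (not from the text; the 0.15 cm^-1 attenuation coefficient is an arbitrary example value) computing the transmitted fraction of a narrow beam and the half value layer from a linear attenuation coefficient.

```python
# Sketch: narrow-beam attenuation and half value layer.
# The attenuation coefficient used in the example is an arbitrary value.
import math

def transmitted_fraction(mu: float, thickness: float) -> float:
    """I/I0 = exp(-mu * x) for a narrow beam of thickness x."""
    return math.exp(-mu * thickness)

def half_value_layer(mu: float) -> float:
    """HVL = 0.693 / mu (the same form as half-life and decay constant)."""
    return math.log(2) / mu

if __name__ == "__main__":
    mu = 0.15                              # cm^-1, example value
    print(transmitted_fraction(mu, 5.0))   # fraction remaining after 5 cm
    print(half_value_layer(mu))            # about 4.6 cm
```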
Chapter Review: Gas-Filled Detectors

- Gas-filled detectors include the ionisation chamber, the proportional counter and the Geiger counter;
- They operate on the basis of ionisation of gas atoms by the incident radiation, where the positive ions and electrons produced are collected by electrodes;
- An ion pair is the term used to describe a positive ion and an electron;
- The operation of gas-filled detectors is critically dependent on the magnitude of the applied dc voltage;
- The output voltage of an ionisation chamber can be calculated on the basis of the capacitance of the chamber;
- A very sensitive amplifier is required to measure voltage pulses produced by an ionisation chamber;
- The gas in ionisation chambers is usually air;
- Ionisation chambers are typically used to measure radiation exposure (in a device called an Exposure Meter) and radioactivity (in a device called an Isotope Calibrator);
- The total charge collected in a proportional counter may be up to 1000 times the charge produced initially by the radiation;
- The initial ionisation triggers a complete gas breakdown in a Geiger counter;
- The gas in a Geiger counter is usually an inert gas;
- The gas breakdown must be stopped in order to prepare the Geiger counter for a new event by a process called quenching;
- Two types of quenching are possible: electronic quenching and the use of a quenching gas;
- Geiger counters suffer from dead time, a small period of time following the gas breakdown when the counter is inoperative;
- The true count rate can be determined from the actual count rate and the dead time using an equation;
- The value of the applied dc voltage in a Geiger counter is critical, but high stability is not required.

Chapter Review: Scintillation Detectors

- NaI(Tl) is a scintillation crystal widely used in nuclear medicine;
- The crystal is coupled to a photomultiplier tube to generate a voltage pulse representing the energy deposited in the crystal by the radiation;
- A very sensitive amplifier is needed to measure such voltage pulses;
- The voltage pulses range in amplitude depending on how the radiation interacts with the crystal, i.e. the pulses form a spectrum whose shape depends on the interaction mechanisms involved, e.g. for medium-energy gamma-rays used in in-vivo nuclear medicine: the Compton effect and the Photoelectric effect;
- A Gamma-Ray Energy Spectrum for a medium-energy, monoenergetic gamma-ray emitter consists (simply) of a Compton Smear and a Photopeak;
- Pulse Height Analysis is used to discriminate the amplitude of voltage pulses;
- A pulse height analyser (PHA) consists of a lower level discriminator (which passes voltage pulses which are higher than its setting) and an upper level discriminator (which passes voltage pulses lower than its setting);
- The result is a variable width window which can be placed anywhere along a spectrum, or used to scan a spectrum;
- A single channel analyser (SCA) consists of a single PHA with a scaler and a ratemeter;
- A multi-channel analyser (MCA) is a computer-controlled device which can acquire data from many windows simultaneously.
Chapter Review: Nuclear Medicine Imaging Systems

- A gamma camera consists of a large diameter (25-40 cm) NaI(Tl) crystal, ~1 cm thick;
- The crystal is viewed by an array of 37-91 PM tubes;
- PM tube signals are processed by a position circuit which generates +/- X and +/- Y signals;
- These position signals are summed to form a Z signal which is fed to a pulse height analyser;
- The +/- X, +/- Y and discriminated Z signals are sent to a computer for digital image processing;
- A collimator is used to improve the spatial resolution of a gamma-camera;
- Collimators typically consist of a Pb plate containing a large number of small holes;
- The most common type is a parallel multi-hole collimator;
- The most resolvable area is directly in front of a collimator;
- Parallel-hole collimators vary in terms of the number of holes, the hole diameter, the length of each hole and the septum thickness - the combination of which affect the sensitivity and spatial resolution of the imaging system;
- Other types include the diverging-hole collimator (which generates minified images), the converging-hole collimator (which generates magnified images) and the pin-hole collimator (which generates magnified inverted images);
- Conventional imaging with a gamma camera is referred to as Planar Imaging, i.e. a 2D image portraying a 3D object giving superimposed details and no depth information;
- Single Photon Emission Computed Tomography (SPECT) produces images of slices through the body;
- SPECT uses a gamma camera to record images at a series of angles around the patient;
- The resultant data can be processed using Filtered Back Projection and Iterative Reconstruction;
- SPECT gamma-cameras can have one, two or three camera heads;
- Positron Emission Tomography (PET) also produces images of slices through the body;
- PET exploits the positron annihilation process where two 0.51 MeV back-to-back gamma-rays are produced;
- If these gamma-rays are detected, their origin will lie on a line joining two of the detectors of the ring of detectors which encircles the patient;
- A Time-of-Flight method can be used to localise their origin;
- PET systems require an on-site or nearby cyclotron to produce short-lived radioisotopes, such as C-11, N-13, O-15 and F-18.

Chapter Review: Production of Radioisotopes

- Naturally-occurring radioisotopes generally have long half lives and belong to relatively heavy elements - and are therefore unsuitable for medical diagnostic applications;
- Medical diagnostic radioisotopes are generally produced artificially;
- The fission process can be exploited so that radioisotopes of interest can be separated chemically from fission products;
- A cyclotron can be used to accelerate charged particles up to high energies so that they collide with a target of the material to be activated;
- A radioisotope generator is generally used in hospitals to produce short-lived radioisotopes;
- A technetium-99m generator consists of an alumina column containing Mo-99, which decays into Tc-99m;
- Saline is passed through the generator to elute the Tc-99m - the resulting solution is called sodium pertechnetate;
- Both positive pressure and negative pressure generators are in use;
- An isotope calibrator is needed when a Tc-99m generator is used in order to determine the activity for preparation of patient doses and to test whether any Mo-99 is present in the collected solution.

Exercise Questions

1. Discuss the process of radioactive decay from the perspective of the nuclear stability curve.
2. Describe in detail FOUR common forms of radioactive decay.

3. Give the equation which expresses the Radioactive Decay Law, and explain the meaning of each of its terms.

4. Define each of the following:
- (a) Half life;
- (b) Decay Constant;
- (c) Becquerel.

5. A sample of radioactive substance is found to have an activity of 100 kBq. Its radioactivity is measured again 82 days later and is found to be 15 kBq. Calculate:
- (a) the half-life;
- (b) the decay constant.

6. Define each of the following radiation units:
- (a) Roentgen;
- (b) Becquerel;
- (c) Gray.

7. Estimate the exposure rate at 1 metre from a 100 MBq source of radioactivity which has a Specific Gamma Ray Constant of 50 mR per hour per MBq at 1 cm.

8. Briefly describe the basic principle of operation of gas-filled radiation detectors.

9. Illustrate using a graph how the magnitude of the voltage pulses from a gas-filled radiation detector varies with applied voltage, and identify on the graph the regions associated with the operation of Ionisation Chambers and Geiger Counters.

10. Describe the construction and principles of operation of a scintillation spectrometer.

11. Discuss the components of the energy spectrum from a monoenergetic, medium energy gamma-emitting radioisotope obtained using a scintillation spectrometer on the basis of how the gamma-rays interact with the scintillation crystal.

12. Describe the construction and principles of operation of a Gamma Camera.

13. Compare features of three types of collimator which can be used with a Gamma Camera.

Further Information

Nuclear Medicine is a fascinating application of nuclear physics. The first ten chapters of this wikibook are intended to support a basic introductory course in an early semester of an undergraduate program. They assume that students have completed decent high school programs in maths and physics and are concurrently taking subjects in the medical sciences. Additional chapters cover more advanced topics in this field. Our focus in this wikibook is the diagnostic application of Nuclear Medicine. Therapeutic applications are considered in a separate wikibook, "Radiation Oncology".

- Atomic & Nuclear Structure
- Radioactive Decay
- The Radioactive Decay Law
- Units of Radiation Measurement
- Interaction of Radiation with Matter
- Attenuation of Gamma-Rays
- Gas-Filled Radiation Detectors
- Scintillation Detectors
- Nuclear Medicine Imaging Systems
- Computers in Nuclear Medicine
- Fourier Methods
- X-Ray CT in Nuclear Medicine
- PACS and Advanced Image Processing
- Three-Dimensional Visualization Techniques
- Patient Dosimetry
- Production of Radioisotopes
- Chapter Review
- Dynamic Studies in Nuclear Medicine
- Deconvolution Analysis
- Sonography & Nuclear Medicine
- MRI & Nuclear Medicine
- Dual-Energy Absorptiometry

The principal author of this text is KieranMaher, who is very grateful for the expert editorial assistance of Dirk Hünniger during his German translation of the text and his contribution to the section on the Operation of a 99m-Tc Generator. You can send an e-mail message if you'd like to provide any feedback, criticism, correction, additions/improvement suggestions etc. regarding this wikibook.
http://en.wikibooks.org/wiki/Basic_Physics_of_Nuclear_Medicine/Print_version
g-force (with g from gravitational) is a term for accelerations felt as weight and measurable by accelerometers. It is not a force, but a force per unit mass. Since such a force is perceived as a weight, any g-force can be described as a "weight per unit mass" (see the synonym specific weight). The g-force acceleration acts as a multiplier of weight-like forces for every unit of an object's mass, and (save for certain electromagnetic force influences) is the cause of an object's acceleration in relation to free-fall.12 This acceleration experienced by an object is due to the vector sum of non-gravitational forces acting on an object free to move. The accelerations that are not produced by gravity are termed proper accelerations, and it is only these that are measured in g-force units. They cause stresses and strains on objects. Because of these strains, large g-forces may be destructive. Gravitation acting alone does not produce a g-force, even though g-forces are expressed in multiples of the acceleration of a standard gravity. Thus, the standard gravitational acceleration at the Earth's surface produces g-force only indirectly, as a result of resistance to it by mechanical forces. The 1 g-force on an object sitting on the Earth's surface is caused by mechanical force exerted in the upward direction by the ground, keeping the object from going into free-fall. The upward force from the ground ensures that an object at rest on the Earth's surface is accelerating relative to the free-fall condition, which is the path that the object would follow when falling freely toward the Earth's center. Objects allowed to free-fall in an inertial trajectory under the influence of gravitation-only, feel no g-force. This is demonstrated by the "zero-g" conditions inside a freely falling elevator falling toward the Earth's center (in vacuum), or (to good approximation) conditions inside a spacecraft in Earth orbit. These are examples of coordinate acceleration (a change in velocity) without a sensation of weight. The experience of no g-force (zero-g), however it is produced, is synonymous with weightlessness. In the absence of gravitational fields, or in directions at right angles to them, proper and coordinate accelerations are the same, and any coordinate acceleration must be produced by a corresponding g-force acceleration. An example here is a rocket in free space, in which simple changes in velocity are produced by the engines, and produce g-forces on the rocket and passengers. The unit of measure of acceleration in the International System of Units (SI) is m/s2. However, to distinguish acceleration relative to free-fall from simple acceleration (rate of change of velocity), the unit g (or g) is often used. One g is the acceleration due to gravity at the Earth's surface and is the standard gravity (symbol: gn), defined as 9.80665 metres per second squared,3 or equivalently 9.80665 newtons of force per kilogram of mass.4 Measurement of g-force is typically achieved using an accelerometer (see discussion below in Measuring g-force using an accelerometer). In certain cases, g-forces may be measured using suitably calibrated scales. Specific force is another name that has been used for g-force. The term g-force is technically incorrect as it is a measure of acceleration, not force. 
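As a small illustration of the unit relationships described above (this sketch is ours, not from the source), the standard-gravity constant lets a proper acceleration be expressed interchangeably in m/s², N/kg or multiples of g:

```python
# Minimal sketch: converting a proper acceleration between m/s^2 (equivalently N/kg) and g.
# The constant comes from the text; the function names are ours.
G_N = 9.80665  # standard gravity, m/s^2

def to_g(accel_m_s2: float) -> float:
    """Express a proper acceleration (m/s^2) in multiples of standard gravity."""
    return accel_m_s2 / G_N

def to_m_s2(g_force: float) -> float:
    """Express a g-force (in g) as a proper acceleration in m/s^2."""
    return g_force * G_N

print(to_g(9.80665))   # 1.0 -> an object at rest on the ground
print(to_g(0.0))       # 0.0 -> free fall or orbit
print(to_m_s2(3.0))    # ~29.4 m/s^2 -> e.g. a 3 g launch acceleration
```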
While acceleration is a vector quantity, g-force accelerations ("g-forces" for short) are often expressed as a scalar, with positive g-forces pointing upward (indicating upward acceleration), and negative g-forces pointing downward. Thus, a g-force is a vector acceleration. It is an acceleration that must be produced by a mechanical force, and cannot be produced by simple gravitation. Objects acted upon only by gravitation, experience (or "feel") no g-force, and are weightless. G-forces, when multiplied by a mass upon which they act, are associated with a certain type of mechanical force in the correct sense of the term force, and this force produces compressive stress and tensile stress. Such forces result in the operational sensation of weight, but the equation carries a sign change due to the definition of positive weight in the direction downward, so the direction of weight-force is opposite to the direction of g-force acceleration: Weight = −mass times (g-force acceleration) The reason for the minus sign is that the actual force (i.e., measured weight) on an object produced by a g-force is in the opposite direction to the sign of the g-force, since in physics, weight is not the force that produces the acceleration, but rather the equal-and-opposite reaction force to it. If the direction upward is taken as positive (the normal cartesian convention) then positive g-force (an acceleration vector that points upward) produces a force/weight on any mass, that acts downward (an example is positive-g acceleration of a rocket launch, producing downward weight). In the same way, a negative-g force is an acceleration vector downward (the negative direction on the y axis), and this acceleration downward produces a weight-force in a direction upward (thus pulling a pilot upward out of the seat, and forcing blood toward the head of a normally oriented pilot). If a g-force (acceleration) is vertically upward and is applied by the ground (which is accelerating through space-time) or applied by the floor of an elevator to a standing person, most of the body experiences compressive stress which at any height, if multiplied by the area, is the related mechanical force, which is the product of the g-force and the supported mass (the mass above the level of support, including arms hanging down from above that level). At the same time, the arms themselves experience a tensile stress, which at any height, if multiplied by the area, is again the related mechanical force, which is the product of the g-force and the mass hanging below the point of mechanical support. The mechanical resistive force spreads from points of contact with the floor or supporting structure, and gradually decreases toward zero at the unsupported ends (the top in the case of support from below, such as a seat or the floor, the bottom for a hanging part of the body or object). With compressive force counted as negative tensile force, the rate of change of the tensile force in the direction of the g-force, per unit mass (the change between parts of the object such that the slice of the object between them has unit mass), is equal to the g-force plus the non-gravitational external forces on the slice, if any (counted positive in the direction opposite to the g-force). For a given g-force the stresses are the same, regardless of whether this g-force is caused by mechanical resistance to gravity, or by a coordinate-acceleration (change in velocity) caused by a mechanical force, or by a combination of these. 
Hence, for people, all mechanical forces feel exactly the same whether they cause coordinate acceleration or not. For objects likewise, the question of whether they can withstand the mechanical g-force without damage is the same for any type of g-force. For example, upward acceleration (e.g., increase of speed when going up or decrease of speed when going down) on Earth feels the same as being stationary on a celestial body with a higher surface gravity. Again, note that gravitation acting alone does not produce any g-force; g-force is only produced by mechanical pushes and pulls. For a free body (one that is free to move in space) such g-forces only arise as the "inertial" path that is the natural effect of gravitation, or the natural effect of the inertia of mass, is modified. Such modification may only arise from influences other than gravitation.
Examples of important situations involving g-forces include:
- The g-force acting on a stationary object resting on the Earth's surface is 1 g (upwards). It results from the resisting reaction of the Earth's surface bearing upwards with an acceleration of 1 g, and is equal and opposite to gravity. The number 1 is approximate, depending on location.
- The g-force acting on an object in any weightless environment, such as free-fall in a vacuum, is 0 g.
- The g-force acting on an object under acceleration can be much greater than 1 g; for example, a dragster can exert a horizontal g-force of 5.3 when accelerating.
- The g-force acting on an object under acceleration may be downwards, for example when cresting a sharp hill on a roller coaster.
- If there are no external forces other than gravity, the g-force in a rocket is the thrust per unit mass. Its magnitude is equal to the thrust-to-weight ratio times g, and to the consumption of delta-v per unit time.
- In the case of a shock, e.g., a collision, the g-force can be very large during a short time.
A classic example of negative g-force is in a fully inverted roller coaster which is accelerating (changing velocity) toward the ground. In this case, the roller coaster riders are accelerated toward the ground faster than gravity would accelerate them, and are thus pinned upside down in their seats: the mechanical force exerted by the seat causes the g-force by altering the path of the passenger downward in a way that differs from gravitational acceleration. The difference in downward motion, now faster than gravity would provide, is caused by the push of the seat, and it results in a g-force toward the ground.
All "coordinate accelerations" (or lack of them) are described by Newton's laws of motion as follows. The Second Law of Motion, the law of acceleration, states that F = ma, meaning that a force F acting on a body is equal to the mass m of the body times its acceleration a. The Third Law of Motion, the law of reciprocal actions, states that all forces occur in pairs, and these two forces are equal in magnitude and opposite in direction. Newton's third law means that not only does gravity behave as a force acting downwards on, say, a rock held in your hand, but also that the rock exerts a force on the Earth, equal in magnitude and opposite in direction. In an airplane, the pilot's seat can be thought of as the hand holding the rock, and the pilot as the rock. When flying straight and level at 1 g, the pilot is acted upon by the force of gravity. His weight (a downward force) is 725 newtons (163 lbf).
In accordance with Newton’s third law, the plane and the seat underneath the pilot provides an equal and opposite force pushing upwards with a force of 725 N (163 lbf). This mechanical force provides the 1.0 g-force upward proper acceleration on the pilot, even though this velocity in the upward direction does not change (this is similar to the situation of a person standing on the ground, where the ground provides this force and this g-force). If the pilot were suddenly to pull back on the stick and make his plane accelerate upwards at 9.8 m/s2, the total g‑force on his body is 2 g, half of which comes from the seat pushing the pilot to resist gravity, and half from the seat pushing the pilot to cause his upward acceleration—a change in velocity which also is a proper acceleration because it also differs from a free fall trajectory. Considered in the frame of reference of the plane his body is now generating a force of 1,450 N (330 lbf) downwards into his seat and the seat is simultaneously pushing upwards with an equal force of 1,450 N (330 lbf). Unopposed acceleration due to mechanical forces, and consequentially g-force, is experienced whenever anyone rides in a vehicle because it always causes a proper acceleration, and (in the absence of gravity) also always a coordinate acceleration (where velocity changes). Whenever the vehicle changes either direction or speed, the occupants feel lateral (side to side) or longitudinal (forward and backwards) forces produced by the mechanical push of their seats. The expression "1 g = 9.80665 m/s2" means that for every second that elapses, velocity changes 9.80665 meters per second (≡35.30394 km/h). This rate of change in velocity can also be denoted as 9.80665 (meter per second) per second, or 9.80665 m/s2. For example: An acceleration of 1 g equates to a rate of change in velocity of approximately 35 kilometres per hour (22 mph) for each second that elapses. Therefore, if an automobile is capable of braking at 1 g and is traveling at 35 kilometres per hour (22 mph) it can brake to a standstill in one second and the driver will experience a deceleration of 1 g. The automobile traveling at three times this speed, 105 km/h (65 mph), can brake to a standstill in three seconds. In the case of an increase in speed from 0 to v with constant acceleration within a distance of s this acceleration is v2/(2s). The human body is flexible and deformable, particularly the softer tissues. A hard slap on the face may briefly impose hundreds of g locally but not produce any real damage; a constant 16 g for a minute, however, may be deadly. When vibration is experienced, relatively low peak g levels can be severely damaging if they are at the resonance frequency of organs and connective tissues. To some degree, g-tolerance can be trainable, and there is also considerable variation in innate ability between individuals. In addition, some illnesses, particularly cardiovascular problems, reduce g-tolerance. Aircraft pilots (in particular) exert g-forces along the axis aligned with the spine. This causes significant variation in blood pressure along the length of the subject's body, which limits the maximum g-forces that can be tolerated. Positive, or "upward" g, drives blood downward to the feet of a seated or standing person (more naturally, the feet and body may be seen as being driven by the upward force of the floor and seat, upward around the blood). Resistance to positive g varies. 
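The braking example above is easy to verify numerically. The following sketch is our own illustration (not from the source) of the relations a = Δv/Δt and a = v²/(2s) quoted in the text:

```python
# Sketch: checking the 1 g braking example from the text.
G_N = 9.80665  # standard gravity, m/s^2

def stop_time(v_kmh: float, decel_g: float = 1.0) -> float:
    """Time (s) to brake from v_kmh to a standstill at a constant deceleration of decel_g."""
    v = v_kmh / 3.6                 # km/h -> m/s
    return v / (decel_g * G_N)

def stop_distance(v_kmh: float, decel_g: float = 1.0) -> float:
    """Distance (m) covered while braking, from a = v^2 / (2*s) rearranged for s."""
    v = v_kmh / 3.6
    return v ** 2 / (2 * decel_g * G_N)

print(stop_time(35))       # ~0.99 s  (the "about one second" figure)
print(stop_time(105))      # ~2.97 s  (three times the speed -> about three seconds)
print(stop_distance(105))  # ~43.4 m
```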
A typical person can handle about 5 g (49 m/s²) before losing consciousness, but through the combination of special g-suits and efforts to strain muscles (both of which act to force blood back into the brain), modern pilots can typically handle a sustained 9 g (88 m/s²) (see High-G training). In aircraft particularly, vertical g-forces are often positive (forcing blood towards the feet and away from the head); this causes problems with the eyes and brain in particular. As positive vertical g-force is progressively increased (such as in a centrifuge) the following symptoms may be experienced:
- Grey-out, where the vision loses hue, easily reversible on levelling out;
- Tunnel vision, where peripheral vision is progressively lost;
- Blackout, a loss of vision while consciousness is maintained, caused by a lack of blood to the head;
- G-LOC, a loss of consciousness ("LOC" stands for "Loss Of Consciousness");9
- Death, if g-forces are not quickly reduced.10
Resistance to "negative" or "downward" g, which drives blood to the head, is much lower. This limit is typically in the −2 to −3 g (about −20 m/s² to −30 m/s²) range. The condition is sometimes referred to as "redout", where vision is literally reddened11 due to the blood-laden lower eyelid being pulled into the field of vision.12 Negative g is generally unpleasant and can cause damage. Blood vessels in the eyes or brain may swell or burst under the increased blood pressure, resulting in degraded sight or even blindness.
The human body is better at surviving g-forces that are perpendicular to the spine. In general, when the acceleration is forwards (subject essentially lying on their back, colloquially known as "eyeballs in"13), a much higher tolerance is shown than when the acceleration is backwards (lying on their front, "eyeballs out"), since blood vessels in the retina appear more sensitive in the latter direction. Early experiments showed that untrained humans were able to tolerate a range of accelerations depending on the time of exposure. This ranged from as much as 20 g for less than 10 seconds, to 10 g for 1 minute, and 6 g for 10 minutes, for both eyeballs in and out.14 These forces were endured with cognitive faculties intact, as subjects were able to perform simple physical and communication tasks. The tests were determined not to cause long- or short-term harm, although tolerance was quite subjective, with only the most motivated non-pilots capable of completing the tests.15
The record for peak experimental horizontal g-force tolerance is held by acceleration pioneer John Stapp, in a series of rocket sled deceleration experiments culminating in a late 1954 test in which he was brought to a stop, in a little over a second, from a land speed of Mach 0.9. He survived a peak "eyeballs-out" force of 46.2 times the force of gravity, and more than 25 g for 1.1 s, proving that the human body is capable of withstanding such forces. Stapp lived another 45 years to age 89, but suffered lifelong damage to his vision from this last test.16
Short-term shocks may be caused by impacts, drops, earthquakes, or explosions. Shock is a short-term transient excitation and is often measured as an acceleration. Very short duration shocks of 100 g have been survivable in racing car crashes.17 Jerk is the rate of change of acceleration; in SI units, jerk is expressed as m/s³.
Recent research carried out on extremophiles in Japan involved a variety of bacteria, including E. coli and Paracoccus denitrificans, being subjected to conditions of extreme gravity.
The bacteria were cultivated while being rotated in an ultracentrifuge at high speeds corresponding to 403,627 g. Paracoccus denitrificans was one of the bacteria which displayed not only survival but also robust cellular growth under these conditions of hyperacceleration, which are usually only to be found in cosmic environments, such as on very massive stars or in the shock waves of supernovas. Analysis showed that the small size of prokaryotic cells is essential for successful growth under hypergravity. The research has implications for the feasibility of panspermia.1819
Typical examples of g-force values:
- The gyro rotors in Gravity Probe B and the free-floating proof masses in the TRIAD I navigation satellite20: 0 g
- A ride in the Vomit Comet: ≈ 0 g
- Standing on the Moon at its equator: 0.1654 g
- Standing on the Earth at sea level (standard): 1 g
- Saturn V moon rocket just after launch: 1.14 g
- Bugatti Veyron from 0 to 100 km/h in 2.4 s: 1.55 g†
- Space Shuttle, maximum during launch and reentry: 3 g
- High-g roller coasters (ref. 8, p. 340): 3.5–6.3 g
- Top Fuel drag racing world record of 4.4 s over 1/4 mile: 4.2 g
- World War One aircraft (Sopwith Pup, Sopwith Triplane, Fokker D.VII, Fokker Dr.1, SPAD S.VII, SPAD S.XIII, Nieuport 17) in a steep dive or in a back or front loop: 4.5–7 g
- Formula One car, peak lateral in turns21: 5–6 g
- Luge, maximum expected at the Whistler Sliding Centre: 5.2 g
- Standard, fully aerobatics-certified glider: +7/−5 g
- Apollo 16 on reentry22: 7.19 g
- Typical maximum turn in an aerobatic plane or fighter jet: 9–12 g
- Maximum for a human on a rocket sled: 46.2 g
- Death or serious injury likely: > 25 g
- Sprint missile: 100 g
- Brief human exposure survived in a crash1723: > 100 g
- Space gun with a barrel length of 1 km and a muzzle velocity of 6 km/s, as proposed by Quicklaunch (assuming constant acceleration): 1,800 g
- Shock capability of mechanical wrist watches24: > 5,000 g
- Current Formula One engines, maximum piston acceleration25: 8,600 g
- Rating of electronics built into military artillery shells26: 15,500 g
- 9 × 19 mm Parabellum handgun bullet (average along the length of the barrel)27: 31,000 g
- 9 × 19 mm Parabellum handgun bullet, peak28: 190,000 g
- Analytical ultracentrifuge spinning at 60,000 rpm, at the bottom of the analysis cell (7.2 cm)29: 298,786 g
- Mean acceleration of a proton in the Large Hadron Collider30: 190,000,000 g
- Acceleration from a Wakefield plasma accelerator31: 8.9×10^20 g
* Including contribution from resistance to gravity. † Directed 40 degrees from horizontal.
Accelerometers are often calibrated to measure g-force along one or more axes. If a stationary, single-axis accelerometer is oriented so that its measuring axis is horizontal, its output will be 0 g, and it will continue to be 0 g if mounted in an automobile traveling at a constant velocity on a level road. When the driver presses on the brake or gas pedal, the accelerometer will register positive or negative acceleration. If the accelerometer is rotated by 90° so that it is vertical, it will read +1 g upwards even though stationary. In that situation, the accelerometer is subject to two forces: the gravitational force and the ground reaction force of the surface it is resting on. Only the latter force can be measured by the accelerometer, due to mechanical interaction between the accelerometer and the ground. The reading is the acceleration the instrument would have if it were exclusively subject to that force.
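As a rough illustration of the accelerometer behaviour described above (our own sketch, not from the source), the reading along a vertical axis can be modelled as the proper acceleration, i.e. the coordinate acceleration minus the gravitational acceleration:

```python
# Sketch: what a single vertical-axis accelerometer reads in a few situations.
# Reading (proper acceleration) = coordinate acceleration - gravitational acceleration,
# taking "up" as positive; gravity contributes -G_N. Output is expressed in g.
G_N = 9.80665  # m/s^2

def reading_in_g(coord_accel_up: float) -> float:
    """Vertical accelerometer output in g for a given upward coordinate acceleration (m/s^2)."""
    return (coord_accel_up - (-G_N)) / G_N   # subtract gravity, which points downward

print(reading_in_g(0.0))      # 1.0 g -> at rest on the ground (or constant-velocity cruise)
print(reading_in_g(-G_N))     # 0.0 g -> free fall / ballistic trajectory
print(reading_in_g(+G_N))     # 2.0 g -> pulling up at 9.8 m/s^2, as in the pilot example
```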
A three-axis accelerometer will output zero‑g on all three axes if it is dropped or otherwise put into a ballistic trajectory (also known as an inertial trajectory), so that it experiences "free fall," as do astronauts in orbit (astronauts experience small tidal accelerations called microgravity, which are neglected for the sake of discussion here). Some amusement park rides can provide several seconds at near-zero g. Riding NASA's "Vomit Comet" provides near-zero g for about 25 seconds at a time. A single-axis accelerometer mounted in an airplane with its measurement axis oriented vertically reads +1 g when the plane is parked. This is the g-force exerted by the ground. When flying at a stable altitude (or at a constant rate of climb or descent), the accelerometer will continue to indicate 1 g, as the g-force is provided by the aerodynamic lift, which now acts in place of the ground to keep the plane from free-falling. Under such conditions, the upward force acting upon the pilot's body (which keeps him from falling) is the normal value of about 9.8 newtons per kilogram (N/kg), and it is provided by his seat, which in turn is supported by the lift of the wings. If the pilot pulls back on the stick until the accelerometer indicates 2 g, the g-force acting upwards on him through the seat doubles to 19.6 N/kg. - Artificial gravity - Earth's gravity - Euthanasia Coaster - Load factor (aeronautics) - Peak ground acceleration – g-force of earthquakes - Relation between g-force and apparent weight - Shock and vibration data logger - G Force. Newton.dep.anl.gov. Retrieved on 2011-10-14. - Sircar, Sabyasachi (2007-12-12). Principles of Medical Physiology. ISBN 978-1-58890-572-7. - BIPM: Declaration on the unit of mass and on the definition of weight; conventional value of gn - Note that the unit does not vary with location- the g-force when standing on the moon is about 0.181g - Symbol g: ESA: GOCE, Basic Measurement Units, NASA: Multiple G, Astronautix: Stapp, Honeywell: Accelerometersdead link, Sensr LLC: GP1 Programmable Accelerometerdead link, Farnell: accelometers, Delphi: Accident Data Recorder 3 (ADR3) MS0148dead link, NASA: Constants and Equations for Calculationsdead link, Jet Propulsion Laboratory: A Discussion of Various Measures of Altitudedead link, Vehicle Safety Research Centre Loughborough: Use of smart technologies to collect and retain crash information, National Highway Traffic Safety Administration: Recording Automotive Crash Event Data Symbol G: Lyndon B. Johnson Space Center: ENVIRONMENTAL FACTORS: BIOMEDICAL RESULTS OF APOLLO, Section II, Chapter 5, Honywell: Model JTF, General Purpose Accelerometerdead link - The Ejection Site: The Story of John Paul Stapp - Balldin, Ulf I (2002). "33". Acceleration effects on fighter pilots. In: Medical conditions of Harsh Environments 2. Washington, DC. Retrieved 2009-04-06. More than one of - George Bibel. Beyond the Black Box: the Forensics of Airplane Crashes. Johns Hopkins University Press, 2008. ISBN 0-8018-8631-7. - Burton RR (1988). "G-induced loss of consciousness: definition, history, current status". Aviation, Space, and Environmental Medicine 59 (1): 2–5. PMID 3281645. - The Science of G Force- Joshua Davis - Brown, Robert G (1999). On the edge: Personal flying experiences during the Second World War. ISBN 978-1-896182-87-2. - DeHart, Roy L. (2002). Fundamentals of Aerospace Medicine: 3rd Edition. Lippincott Williams & Wilkins. - "NASA Physiological Acceleration Systems". Web.archive.org. 2008-05-20. 
Archived from the original on 2008-05-20. Retrieved 2012-12-25. - NASA Technical note D-337, Centrifuge Study of Pilot Tolerance to Acceleration and the Effects of Acceleration on Pilot Performance, by Brent Y. Creer, Captain Harald A. Smedal, USN (MC), and Rodney C. Vtlfngrove, figure 10 - NASA Technical note D-337, Centrifuge Study of Pilot Tolerance to Acceleration and the Effects of Acceleration on Pilot Performance, by Brent Y. Creer, Captain Harald A. Smedal, USN (MC), and Rodney C. Vtlfngrove - Fastest Man on Earth- John Paul Stapp. Ejection Site. Retrieved on 2011-10-14. - "Several Indy car drivers have withstood impacts in excess of 100 G without serious injuries." Dennis F. Shanahan, M.D., M.P.H.: "Human Tolerance and Crash Survivability, citing Society of Automotive Engineers. Indy racecar crash analysis. Automotive Engineering International, June 1999, 87–90. And National Highway Traffic Safety Administration: Recording Automotive Crash Event Data - Than, Ker (25 April 2011). "Bacteria Grow Under 400,000 Times Earth's Gravity". National Geographic- Daily News. National Geographic Society. Retrieved 28 April 2011. - Deguchi, Shigeru; Hirokazu Shimoshige, Mikiko Tsudome, Sada-atsu Mukai, Robert W. Corkery, Susumu Ito, and Koki Horikoshi (2011). "Microbial growth at hyperaccelerations up to 403,627 xg". Proceedings of the National Academy of Sciences 108 (19): 7997. Bibcode:2011PNAS..108.7997D. doi:10.1073/pnas.1018027108. Retrieved 28 April 2011. - Stanford University: Gravity Probe B, Payload & Spacecraft, and NASA: Investigation of Drag-Free Control Technology for Earth Science Constellation Missions. The TRIAD 1 satellite was a later, more advanced navigation satellite that was part of the U.S. Navy’s Transit, or NAVSAT system. - 6 g has been recorded in the 130R turn at Suzuka circuit, Japan. Many turns have 5 g peak values, like turn 8 at Istanbul or Eau Rouge at Spa - NASA: Table 2: Apollo Manned Space Flight Reentry G Levels - David Purley - Omega , Ball Watch Technology - Cosworth V8 engine ; Up to 10,000 g before rev limits - "L-3 Communication's IEC Awarded Contract with Raytheon for Common Air Launched Navigation System". - Assuming a 8.04 gram bullet, a muzzle velocity of 350 metre per second (1,100 ft/s), and a 102 mm barrel. - Assuming a 8.04 gram bullet, a peak pressure of 240 MPa (35,000 psi) and 440 N of friction. - (7 TeV / (20 minutes * c))/proton mass - (42 GeV / 85 cm)/electron mass Faller, James E. (November–December 2005). "The Measurement of Little g: A Fertile Ground for Precision Measurement Science". Journal of Research of the National Institutes of Standards and Technology 110 (6): 559–581. - "How Many Gs Can a Flyer Take?", October 1944, Popular Science one of the first detailed public articles explaining this subject - Wired article about enduring a human centrifuge at the NASA Ames Research Center
http://www.bioscience.ws/encyclopedia/index.php?title=G-force
Functions have an Independent Variable and a Dependent Variable When we look at a function such as we call the variable that we are changing--in this case --the independent variable. We assign the value of the function to a variable we call the dependent variable. The reason that we say that is independent is because we can pick any value for which the function is defined--in this case real is implied--as an input into the function. Once we pick the value of the independent variable the same result will always come out of the function. We say the result is assigned to the dependent variable, since it depends on what value we placed into the function. Equating with our function then then then The independent variable is now and the dependent variable - Note: this is a very unusual case where the ordered pair is reverse mapped and corresponding reverses (dependent, independent), (range, domain), and now must be singular for each and every corresponding to an horizontal line test of function! It would be less desireable to rotate or swap the positions of axes, the order of coordinate pairs and (abscissa, ordinate). Have we used Algebra to change the nature of the function? Let's look at the results for three functions If we look at the table above we can see that the independent variable for gives the same results as the dependent variable of We can see what this means when we look at the values for The function is the same as the function but when we switch which variable we use as the independent variable between and we see that we have discovered that and are inverse functions. Let's take a look at how we can draw functions in and and then come back and look at this idea of independent and dependent variables again. Explicit and Implicit Functions Variables like and formulate a 'relation' using simple algebra. and commonly denote functions. Function notation read "eff of ex", denotes a function with 'explicit' dependence on the independent variable By assigning variable to is now an 'implicit' function of using equation notation. If is then [ would denote an 'explicit' function of ]. A relation is also a function when the dependent variable has one and only one value for each and every independent variable value. The Cartesian Coordinate System The Cartesian Coordinate System is a uniform retangular grid used for plane graph plots. It's named after pioneer of analytic geometry, 17th century French mathematician René Descartes, whom's Latinized name was Renatus Cartesius. Recall that each point has a unique location, different from every other point. We know that a line is a collection of points. If we pick a direction of travel for the line that starts at a point then all of the other points can be thought of as either behind our starting point or ahead of it. Finally, a plane can be thought of as a collection of lines that are parallel to each other. We can draw another line that is composed of one point from each of the lines that we chose to fill our plane. If we do this then we can locate the other lines as behind or ahead of the line with the point we chose to start on. Descartes decided to pick a line and call it the -axis, and to then pick a line perpendicular to this line and call it the -axis. He then labeled this intersection point and origin O. 
The points to the left (or behind) of this point each represent a negative number that we label as The points to the right (or ahead) of this point each represent a positive number that we label as The points on the -axis that are above are labeled as positive and the points on the -axis below are labeled as negative A point is plotted as a location on the plane using its coordinates from the grid formed by the and -axes. If you draw a line perpendicular to the -axis from a point you pick then that point has the same -coordinate as the point where that line crosses the -axis. If you draw a line perpendicular to the -axis from your point then it has the same -coordinate as the point where that line crosses the -axis. If you need to sharpen your knowledge in this area, this link/section should help: The Coordinate (Cartesian) Plane An equation and its graph can be referred to as equal. This is true since a graph is a representation of a specific equation. This is because an equation is a group of one or more variables along with one or more numbers and an equal sign ( and are all examples of equations). Since variables were introduced as way of representing the many possible numbers that could be plugged into the equation. A graph of an equation is a way of drawing the relationship between the numbers that can be input (the independent variable) and the possible outputs that would be produced. For example, in the equation: we could choose to make the the independent variable and the output number would be two more than the input number every time. The graph of this equation would be a picture showing this relationship. On the graph, each -value (the vertical axis) would be two higher than the (horizontal) -value that is plugged in because of the in the equation. Linear Equations and Functions This section shows the different ways we can algebraically write a linear function. We will spend some time looking at a way called the "slope intercept form" that has the equation Unless a domain for is otherwise stated, the domain for linear functions will be assumed to be all real numbers and so the lines in graphs of all linear functions extend infinitely in both directions. Also in linear functions with all real number domains, the range of a linear function may cover the entire set of real numbers for one exception is when the slope and the function equals a constant. In such cases, the range is simply the constant. Another would be a squaring function where the range would be non-negative when The y-intercept constant b It was shown that has infinite solutions (in the UK, also common and ). Points will be mapped with independent variable assuming the horizontal axis and vertical on a Cartesian grid. By assigning to a value and evaluating a (single) point coordinate solution is found. When then by zero-product property term , and by additive identity terms The point is the unique member of the line (linear equation's solution) where the y-axis is 'intercepted'. More about intercepts link: The and Intercepts What does the m tell us when we have the equation ? is a constant called the slope of the line. Slope indicates the steepness of the line. Two separate points fixed anywhere defines a unique straight line containing the points. Confining this study to plane geometry () and fixing coordinates for unique points at and a straight line is defined relating two variables in a linear-equation mappable on a graph-plot. When the two points are identical, infinite lines result, even in a single plane. 
When then a vertical-line mere relation is defined, not a function. Functions are equation-relations evaluating to singularly unique dependent values. Only when (iff) then is the line containing the points a linear 'function' of For a linear function, the slope can be determined from any two known points of the line. The slope corresponds to an increment or change in the vertical direction divided by a corresponding increment or change in the horizontal direction between any different points of the straight line. Let increment or change in the -direction (vertical) and Let increment or change in the -direction (horizontal). For two points and the slope of the function line m is given by: - This formula is called the formula for slope measure but is sometimes referred to as the slope formula. For a linear function, fixing two unique points of the line or fixing the slope and any one point of the line is enough to determine the line and identify it by an equation. There is an equation form for a linear function called the point-slope form of a line2 which uses the slope and any one point to determine a valid equation for the function's line: Other forms of linear equations? Intercept Form of a Line There is one more general form of a linear function we will cover. This is the intercept form of a line, where the constants a and b are such that (a,0) is the x-intercept point and (0,b) is the Neither constant a nor b can equal 0 because division by 0 is not allowed. The intercept form of a line cannot be applied when the linear function has the simplified form y = m x because the y-intercept ordinate cannot equal 0. Multiplying the intercept form of a line by the constants a and b will give which then becomes equivalent to the general linear equation form A x + B y + C where A = b, B = a, and C = ab. We now see that neither A nor B can be 0, therefore the intercept form cannot represent horizontal or vertical lines. Multiplying the intercept form of a line by just b gives if we subtract which can, in turn, be rearranged to: which becomes equivalent to the slope-intercept form where the slope m = -b/a. Example: A graphed line crosses the x-axis at -3 and crosses the y-axis at -6. What equation can represent this line? What is the slope? Solution: intercept form: Multiplying by -6 gives so we see the slope m = -2. The line can also be written as Example: Can the equation be transformed into an intercept form of a line, (x/a) + (y/b) =1, to find the intercepts? Solution: No, no amount of valid mathematical manipulation can transform it into the intercept form. Instead multiplying by 4, then subtracting 2x gives which is of the form y = m x where m = -2. The line intersects the axes at (0,0). Since the intercepts are both 0, the general intercept form of a line cannot be used. Example: Find the slope and function of the line connecting the points (2,1) and (4,4). Solution: When calculating the slope of a straight line from two points with the preceding formula, it does not matter which is point 1 and which is point 2. Let's set (x1,y1) as (2,1) and (x2,y2) as (4,4). Then using the two-point formula for the slope m: Using the point-slope form: One substitutes the coordinates for either point into the point-slope form as x1 and y1. For simplicity, we will use x1=2 and y1=1. Using the slope-intercept form: Alternatively, one can solve for b, the y-intercept ordinate, in the general form of a linear function of one variable, y = m x + b. 
Knowing the slope m, take any known point on the line and substitute the point coordinates and m into this form of a linear function and calculate b. In this example, (x1,y1) is used. Now the constants m and b are both known and the function is written as __________end of example__________ For another explanation of slope look here: Example: Graph the equation 5x + 2y = 10 and calculate the slope. Solution: This fits the general form of a linear equation, so finding two different points are enough to determine the line. To find the x-intercept, set y = 0 and solve for x. so the x-intercept point is (2,0). To find the y-intercept, set x = 0 and solve for y. so the y-intercept point is (0,5). Drawing a line through (2,0) and (0,5) would produce the following graph. To determine the slope m from the two points, one can set (x1,y1) as (2,0) and (x2,y2) as (0,5), or vice versa and calculate as follows: __________end of example__________ Summary of General Equation Forms of a Line The most general form applicable to all lines on a two-dimensional Cartesian graph is with three constants, A, B, and C. These constants are not unique to the line because multiplying the whole equation by a constant factor gives a new set of valid constants for the same line. When B = 0, the rest of the equation represents a vertical line, which is not a function. If B ≠ 0, then the line is a function. Such a linear function can be represented by the slope-intercept form which has two constants. The two constants, m and b, used together are unique to the line. In other words, a certain line can have only one pair of values for m and b in this form. The point-slope form given here uses three constants; m is unique for a given line; x1 and y1 are not unique and can be from any point on the line. The point-slope cannot represent a vertical line. The intercept form of a line, given here, uses two unique constants which are the x and y intercepts, but cannot be made to represent horizontal or vertical lines or lines crossing through (0,0). It is the least applicable of the general forms in this summary. Of the last three general forms of a linear function, the slope-intercept form is the most useful because it uses only constants unique to a given line and can represent any linear function. All of the problems in this book and in mathematics in general can be solved without using the point-slope form or the intercept form unless they are specifically called for in a problem. Generally, problems involving linear functions can be solved using the slope-intercept form (y = m x + b) and the formula for slope. Discontinuity in Otherwise Linear Equations Let variable y be dependent upon a function of independent variable x y is also the function f, and x is also the argument ( ). Let y be the expressed quotient function The graph of y's solution plots a continuous straight line set of points except for the point where x would be 1. Evaluation of the denominator with results in division by zero, an undefined condition not a member element of R and outside algebraic closure. y has a discontinuity (break) and no solution at point 1,-1. It becomes important to treat each side of a break separately in advanced studies. y's otherwise linear form can be expressed by an equation removed of its discontinuity. Factor from the numerator (use synthetic division). - (for all x except 1) . Reducing its (x-1) multiplicative inverse factors (reciprocals) to multiplicative identity (unity) leaves the factor (with implied universal-factor 1/1). 
Limiting this simpler function's domain; 'all except , where x is undefined' or simply 'and x ≠ 1' (implying 'and R2 '); equates it to the original function. This expression is a linear function of x, with slope m = 2 and a y-intercept ordinate of -3. The expression evaluates to -1 at x = 1, but function y is undefined (division by zero) at that point. There is a discontinuity for function y at x = 1. Practically the function has a sort of one-point hole (a skip), shown on the graph as a small hollow circle around that point. Lines, rays and line segments (and arcs, chords and curves) are shown discontinuous by dashed or dotted lines. Note: non-linear equations may also be discontinuous--see the subsequent graph plot of the reciprocal function y = 1/x, in which y is discontinuous at x = 0 not just for a point, but over a 'double' asymptotic extreemum pole along the y-axis. As x is evaluated at smaller magnitudes (both - and +) closer to zero, y approaches no definition in both the - and + mappings of the function. Example: What would the graph of the following function look like? Reduce the reciprocal (x + 2) factors to unity. This makes y = x - 2 for all x except x = -2, where there is a discontinuity. The line y = x - 2 would have a slope m = 1 and a y-intercept ordinate of -2. So for the final answer , we graph a line with a slope of 1 and a y-intercept of -2, and we show a discontinuity at x = -2, where y would otherwise have been equal to -4. Example: Write a function which would be graphed as a line the same as y = 2 x - 3 except with two discontinuities, one at x = 0 and another at x = 1. Solution: The function must have a denominator with the factors to have 'zeros' at the two x values. The function's numerator also gets the factors preserving an overall factor of unity, the expressions are multiplied out: __________end of example__________
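To tie the slope and intercept machinery above together, here is a small Python sketch (the function name is ours) that recovers the slope-intercept form y = mx + b from two points, using the (2,1) and (4,4) example worked earlier; it also refuses a vertical line, which, as noted above, is not a function of x:

```python
# Sketch: slope-intercept form y = m*x + b from two points, as in the worked example
# with (2, 1) and (4, 4). A vertical line (equal x-values) has no such form.

def line_through(p1, p2):
    """Return (m, b) for the line y = m*x + b through two points with distinct x-values."""
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2:
        raise ValueError("vertical line: slope undefined, not a function of x")
    m = (y2 - y1) / (x2 - x1)   # rise over run
    b = y1 - m * x1             # solve y1 = m*x1 + b for the y-intercept
    return m, b

m, b = line_through((2, 1), (4, 4))
print(m, b)   # 1.5 -2.0  ->  y = 1.5*x - 2
```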
http://en.wikibooks.org/wiki/Algebra/Function_Graphing
Solar sails (also called light sails or photon sails) are a form of spacecraft propulsion that uses the radiation pressure (also called solar pressure) of light from a star to push large ultra-thin mirrors to high speeds. Light sails could also be driven by energy beams to extend their range of operations, which is strictly beam sailing rather than solar sailing. Solar sail craft offer the possibility of low-cost operations combined with long operating lifetimes. Since they have few moving parts and use no propellant, they can potentially be used numerous times for delivery of payloads.
Solar sails use a phenomenon that has a proven, measured effect on spacecraft. Solar pressure affects all spacecraft, whether in interplanetary space or in orbit around a planet or small body. A typical spacecraft going to Mars, for example, will be displaced by more than 1,000 km by solar pressure, so the effects must be accounted for in trajectory planning, which has been done since the time of the earliest interplanetary spacecraft of the 1960s. Solar pressure also affects the attitude of a craft, a factor that must be included in spacecraft design. The total force exerted on a solar sail may be around 1 newton or less, making it a low-thrust spacecraft, along with spacecraft propelled by electric engines.
History of concept
Johannes Kepler observed that comet tails point away from the Sun and suggested that the sun caused the effect. In a letter to Galileo in 1610, he wrote, "Provide ships or sails adapted to the heavenly breezes, and there will be some who will brave even that void." He might have had the comet tail phenomenon in mind when he wrote those words, although his publications on comet tails came several years later. James Clerk Maxwell, in 1861–64, published his theory of electromagnetic fields and radiation, which shows that light has momentum and thus can exert pressure on objects. Maxwell's equations provide the theoretical foundation for sailing with light pressure. So by 1864, the physics community and beyond knew sunlight carried momentum that would exert a pressure on objects. Jules Verne, in From the Earth to the Moon, published in 1865, wrote "there will some day appear velocities far greater than these [of the planets and the projectile], of which light or electricity will probably be the mechanical agent ... we shall one day travel to the moon, the planets, and the stars." This is possibly the first published recognition that light could move ships through space. Given the date of his publication and the widespread, permanent distribution of his work, it appears that he should be regarded as the originator of the concept of space sailing by light pressure, although he did not develop the concept further. Verne probably got the idea directly and immediately from Maxwell's 1864 theory (although it cannot be ruled out that Maxwell or an intermediary recognized the sailing potential and became the source for Verne).
Pyotr Lebedev was first to successfully demonstrate light pressure, which he did in 1899 with a torsional balance; Ernest Nichols and Gordon Hull conducted a similar independent experiment in 1901 using a Nichols radiometer. Svante Arrhenius predicted in 1908 the possibility of solar radiation pressure distributing life spores across interstellar distances, the concept of panspermia. He apparently was the first scientist to state that light could move objects between stars. Friedrich Zander (Tsander) published a technical paper that included an analysis of solar sailing. Zander wrote of "using tremendous mirrors of very thin sheets" and "using the pressure of sunlight to attain cosmic velocities". J.D. Bernal wrote in 1929, "A form of space sailing might be developed which used the repulsive effect of the sun's rays instead of wind. A space vessel spreading its large, metallic wings, acres in extent, to the full, might be blown to the limit of Neptune's orbit. Then, to increase its speed, it would tack, close-hauled, down the gravitational field, spreading full sail again as it rushed past the sun."
Solar radiation pressure
Solar radiation exerts a pressure on the sail due to reflection and a small fraction that is absorbed. The absorbed energy heats the sail, which re-radiates that energy from the front and rear surfaces. The momentum of a photon or an entire flux is given by p = E/c, where E is the photon or flux energy, p is the momentum, and c is the speed of light. At 1 AU the solar power flux density is about 1370 W/m², resulting in a pressure of:
- 4.57 μN per square metre (4.57 μPa) under perfect absorbance;
- 9.13 μN per square metre (9.13 μPa) under perfect reflectance.
A perfect sail is flat and has 100% specular reflection. An actual sail will have an overall efficiency of about 90%, about 8.22 μN/m², due to curvature (billow), wrinkles, absorbance, re-radiation from front and back, non-specular effects, and other factors. The force on a sail and the actual acceleration of the craft vary by the inverse square of distance from the sun (unless close to the sun), and by the square of the cosine of the angle between the sail force vector and the radial from the sun, so
F = F₀ cos²θ / R² (ideal sail)
where R is the distance from the sun in AU. An actual square sail can be modelled as:
F = F₀ (0.349 + 0.662 cos 2θ − 0.011 cos 4θ) / R²
Note that the force and acceleration approach zero generally around θ = 60° rather than 90°, as one might expect with an ideal sail. Solar wind, the flux of charged particles blown out from the sun, exerts a nominal dynamic pressure of about 3 to 4 nPa, three orders of magnitude less than solar radiation pressure on a reflective sail.
Sail loading (areal density) is an important parameter, which is the total mass divided by the sail area, expressed in g/m². It is represented by the Greek letter σ. A sail craft has a characteristic acceleration, ac, which it would experience at 1 AU when facing the sun. It is related to areal density by:
ac = 8.22 / σ, in mm/s²
The lightness number, λ, is the dimensionless ratio of the maximum vehicle acceleration divided by the sun's local gravity; using the values at 1 AU:
λ = ac / 5.93
The table below presents some example values. Payloads are not included. The first two are from the detailed design effort at JPL in the 1970s. The third, the lattice sailer, might represent about the best possible performance level.
- Square sail: σ = 5.27 g/m², ac = 1.56 mm/s², λ = 0.26, size 820 m
- Lattice sailer: σ = 0.07 g/m², ac = 117 mm/s², λ = 20, size 1 km
Sailing operations are simplest in interplanetary orbits, where attitude changes are done at low rates. For outward bound trajectories, the sail force vector is oriented forward of the sun line, which increases orbital energy and angular momentum, resulting in the craft moving farther from the sun. For inward trajectories, the sail force vector is oriented behind the sun line, which decreases orbital energy and angular momentum, resulting in the craft moving in toward the sun. To change orbital inclination, the force vector is turned out of the plane of the velocity vector. In orbits around planets or other bodies, the sail is oriented so that its force vector has a component along the velocity vector, either in the direction of motion for an outward spiral, or against the direction of motion for an inward spiral.
An active attitude control system (ACS) is essential for a sail craft to achieve and maintain a desired orientation. The required sail orientation changes slowly, often less than 1 degree per day, in interplanetary space, but much more rapidly in a planetary orbit. The ACS must be capable of meeting these orientation requirements. Control is achieved by a relative shift between the craft's center of pressure and its center of mass. This can be achieved with control vanes, movement of individual sails, movement of a control mass, or altering reflectivity. Holding a constant attitude requires that the ACS maintain a net torque of zero on the craft. The total force and torque on a sail, or set of sails, is not constant along a trajectory. The force changes with solar distance and sail angle, which changes the billow in the sail and deflects some elements of the supporting structure, resulting in changes in the sail force and torque. Sail temperature also changes with solar distance and sail angle, which changes sail dimensions. The radiant heat from the sail changes the temperature of the supporting structure. Both factors affect total force and torque. The ACS must compensate for all of these changes for it to hold the desired attitude.
In Earth orbit, solar pressure and drag pressure are typically equal at an altitude of about 800 km, which means that a sail craft would have to operate above that altitude. Sail craft must operate in orbits where their turn rates are compatible with the orbits, which is generally a concern only for spinning disk configurations. Sail operating temperatures are a function of solar distance, sail angle, reflectivity, and front and back emissivities. A sail can be used only where its temperature is kept within its material limits. Generally, a sail can be used rather close to the sun, around 0.25 AU, or even closer if carefully designed for those conditions.
Robert L. Forward pointed out that a solar sail could be used to modify the orbit of a satellite around the Earth. In the limit, a sail could be used to "hover" a satellite above one pole of the Earth. Spacecraft fitted with solar sails could also be placed in close orbits about the Sun that are stationary with respect to either the Sun or the Earth, a type of satellite named by Forward a statite. This is possible because the propulsion provided by the sail offsets the gravitational potential of the Sun. Such an orbit could be useful for studying the properties of the Sun over long durations. Such a spacecraft could conceivably be placed directly over a pole of the Sun, and remain at that station for lengthy durations.
Likewise a solar sail-equipped spacecraft could also remain on station nearly above the polar terminator of a planet such as the Earth by tilting the sail at the appropriate angle needed to just counteract the planet's gravity. In his book The Case for Mars, Robert Zubrin points out that the reflected sunlight from a large statite placed near the polar terminator of the planet Mars could be focussed on one of the Martian polar ice caps to significantly warm the planet's atmosphere. Such a statite could be made from asteroid material.
The MESSENGER probe orbiting Mercury used light pressure on its solar panels to perform fine trajectory corrections on the way to Mercury. By changing the angle of the solar panels relative to the Sun, the amount of solar radiation pressure was varied to adjust the spacecraft trajectory more delicately than possible with thrusters. Minor errors are greatly amplified by gravity assist maneuvers, so using radiation pressure to make very small corrections saved large amounts of propellant.
In The Flight of the Dragonfly, Forward described a light sail propelled by superlasers. As the starship neared its destination, the outer portion of the sail would detach. The outer sail would then refocus and reflect the lasers back onto a smaller, inner sail. This would provide braking thrust to stop the ship in the destination star system. Both methods pose monumental engineering challenges. The lasers would have to operate for years continuously at gigawatt strength. Forward's solution to this requires enormous solar panel arrays to be built at or near the planet Mercury. A planet-sized mirror or Fresnel lens would be needed several dozen astronomical units from the Sun to keep the lasers focused on the sail. The giant braking sail would have to act as a precision mirror to focus the braking beam onto the inner "deceleration" sail.
A potentially easier approach would be to use a maser to drive a "solar sail" composed of a mesh of wires with the same spacing as the wavelength of the microwaves, since the manipulation of microwave radiation is somewhat easier than the manipulation of visible light. The hypothetical "Starwisp" interstellar probe design would use a maser to drive it. Masers spread out more rapidly than optical lasers owing to their longer wavelength, and so would not have as long an effective range. Masers could also be used to power a painted solar sail, a conventional sail coated with a layer of chemicals designed to evaporate when struck by microwave radiation. The momentum generated by this evaporation could significantly increase the thrust generated by solar sails, as a form of lightweight ablative laser propulsion.
To further focus the energy on a distant solar sail, designs have considered the use of a large zone plate. This would be placed at a location between the laser or maser and the spacecraft. The plate could then be propelled outward using the same energy source, thus maintaining its position so as to focus the energy on the solar sail. Additionally, it has been theorized by da Vinci Project contributor T. Pesando that solar sail-utilizing spacecraft successful in interstellar travel could be used to carry their own zone plates or perhaps even masers to be deployed during flybys at nearby stars.
Such an endeavor could allow future solar-sailed craft to effectively utilize focused energy from other stars rather than from the Earth or Sun, thus propelling them more swiftly through space and perhaps even to more distant stars. However, the potential of such a theory remains uncertain if not dubious due to the high-speed precision involved and possible payloads required. Another more physically realistic approach would be to use the light from the home star to accelerate. The ship would first orbit continuously away around the home star until the appropriate starting velocity is reached, then the ship would begin its trip away from the system using the light from the star to keep accelerating. Beyond some distance, the ship would no longer receive enough light to accelerate it significantly, but would maintain its course due to inertia. When nearing the target star, the ship could turn its sails toward it and begin to orbit inward to decelerate. Additional forward and reverse thrust could be achieved with more conventional means of propulsion such as rockets. Similar solar sailing, such launch and capture were suggested for directed panspermia to expand life in other solar systems. Velocities of 0.0005 c could be obtained by solar sails carrying 10 kg payloads, using thin solar sail vehicles with effective areal densities 0.0001 kg/m2 with thin sails of thickness of 0.1 microns and sizes on the order of one square km. Alternatively, swarms of 1 mm capsules can be launched on solar sails with radii of 42 cm, each carrying 10,000 capsules of a hundred million extremophile microorganism to seed life in diverse target environments. Parachutes have very low mass, but a parachute is not a workable configuration for a solar sail. Analysis shows that a parachute configuration would collapse from the forces exerted by shroud lines, since radiation pressure does not behave like aerodynamic pressure, and would not act to keep the parachute open. Eric Drexler proposed very high thrust-to-mass solar sails, and made prototypes of the sail material. His sail would use panels of thin aluminium film (30 to 100 nanometres thick) supported by a tensile structure. The sail would rotate and would have to be continually under thrust. He made and handled samples of the film in the laboratory, but the material was too delicate to survive folding, launch, and deployment. The design planned to rely on space-based production of the film panels, joining them to a deployable tension structure. Sails in this class would offer high area per unit mass and hence accelerations up to "fifty times higher" than designs based on deployable plastic films. The highest thrust-to-mass designs for ground-assembled deployable structures are square sails with the masts and guy lines on the dark side of the sail. Usually there are four masts that spread the corners of the sail, and a mast in the center to hold guy-wires. One of the largest advantages is that there are no hot spots in the rigging from wrinkling or bagging, and the sail protects the structure from the Sun. This form can therefore go close to the Sun for maximum thrust. Most designs steer with small sails on the ends of the spars. In the 1970s JPL studied many rotating blade and ring sails for a mission to rendezvous with Halley's Comet. The intention was to stiffen the structures using angular momentum, eliminating the need for struts, and saving mass. In all cases, surprisingly large amounts of tensile strength were needed to cope with dynamic loads. 
Weaker sails would ripple or oscillate when the sail's attitude changed, and the oscillations would add and cause structural failure. The difference in the thrust-to-mass ratio between practical designs was almost nil, and the static designs were easier to control. JPL's reference design was called the "heliogyro". It had plastic-film blades deployed from rollers and held out by centrifugal forces as it rotated. The spacecraft's attitude and direction were to be completely controlled by changing the angle of the blades in various ways, similar to the cyclic and collective pitch of a helicopter. Although the design had no mass advantage over a square sail, it remained attractive because the method of deploying the sail was simpler than a strut-based design. JPL also investigated "ring sails" (Spinning Disk Sail in the above diagram), panels attached to the edge of a rotating spacecraft. The panels would have slight gaps, about one to five percent of the total area. Lines would connect the edge of one sail to the other. Masses at the midpoints of these lines would pull the sails taut against the coning caused by the radiation pressure. JPL researchers said that this might be an attractive sail design for large manned structures. The inner ring, in particular, might be made to have artificial gravity roughly equal to the gravity on the surface of Mars. A solar sail can serve a dual function as a high-gain antenna. Designs differ, but most modify the metallization pattern to create a holographic monochromatic lens or mirror in the radio frequencies of interest, including visible light. Pekka Janhunen from FMI has invented a type of solar sail called the electric solar wind sail. Mechanically it has little in common with the traditional solar sail design. The sails are replaced with straightened conducting tethers (wires) placed radially around the host ship. The wires are electrically charged to create an electric field around them. The electric field extends a few tens of metres into the plasma of the surrounding solar wind. Solar wind protons are deflected by this electric field, transferring momentum to the craft (much as photons are reflected by a traditional solar sail). The effective radius of the sail is set by the electric field rather than by the wires themselves, making the sail very light for its effective area. The craft can also be steered by regulating the electric charge of the wires. A practical electric sail would have 50–100 straightened wires with a length of about 20 km each. A magnetic sail would also employ the solar wind. However, the magnetic field deflects the electrically charged particles in the wind. It uses wire loops, and runs a static current through them instead of applying a static voltage. All these designs maneuver, though the mechanisms are different. Magnetic sails bend the path of the charged protons that are in the solar wind. By changing the sails' attitudes, and the size of the magnetic fields, they can change the amount and direction of the thrust. Electric solar wind sails can adjust their electrostatic fields and sail attitudes. The material developed for the Drexler solar sail was a thin aluminium film with a baseline thickness of 0.1 micrometres, to be fabricated by vapor deposition in a space-based system. Drexler used a similar process to prepare films on the ground. As anticipated, these films demonstrated adequate strength and robustness for handling in the laboratory and for use in space, but not for folding, launch, and deployment. The most common material in current designs is aluminized 2 µm Kapton film. 
It resists the heat of a pass close to the Sun and still remains reasonably strong. The aluminium reflecting film is on the Sun side. The sails of Cosmos 1 were made of aluminized PET film (Mylar). Research by Dr. Geoffrey Landis in 1998–9, funded by the NASA Institute for Advanced Concepts, showed that various materials such as alumina for laser lightsails and carbon fiber for microwave-pushed lightsails were superior sail materials to the previously standard aluminium or Kapton films. In 2000, Energy Science Laboratories developed a new carbon fiber material that might be useful for solar sails. The material is over 200 times thicker than conventional solar sail designs, but it is so porous that it has the same areal mass. The rigidity and durability of this material could make solar sails that are significantly sturdier than plastic films. The material could self-deploy and should withstand higher temperatures. There has been some theoretical speculation about using molecular manufacturing techniques to create advanced, strong, hyper-light sail material, based on nanotube mesh weaves, where the weave "spaces" are less than half the wavelength of light impinging on the sail. While such materials have so far only been produced in laboratory conditions, and the means for manufacturing such material on an industrial scale are not yet available, such materials could mass less than 0.1 g/m², making them lighter than any current sail material by a factor of at least 30. For comparison, 5 micrometre thick Mylar sail material masses 7 g/m², aluminized Kapton films have a mass of as much as 12 g/m², and Energy Science Laboratories' new carbon fiber material masses 3 g/m². Sail testing in space Japan's JAXA successfully tested IKAROS in 2010. The goals were to deploy and control the sail and, for the first time, to determine the minute orbit perturbations caused by light pressure. Orbit determination was done by the nearby AKATSUKI probe, from which IKAROS detached after both had been brought into a transfer orbit to Venus. The total effect over the six-month flight was 100 m/s. Until 2010, no solar sails had been successfully used in space as primary propulsion systems. On 21 May 2010, the Japan Aerospace Exploration Agency (JAXA) launched the IKAROS (Interplanetary Kite-craft Accelerated by Radiation Of the Sun) spacecraft, which deployed a 200 m² polyimide experimental solar sail on June 10. In July, the next phase of the demonstration, acceleration by radiation, began. On 9 July 2010, it was verified that IKAROS was collecting radiation from the Sun and undergoing photon acceleration: a newly calculated range-and-range-rate (RARR) orbit determination was combined with Doppler-based measurements of the spacecraft's speed relative to the Earth that had been collected since before the sail was deployed. The data showed that IKAROS appears to have been solar-sailing since 3 June, when it deployed the sail. IKAROS has a spinning square sail measuring 20 m (66 ft) along the diagonal, made of a 7.5-micrometre (0.0075 mm) thick sheet of polyimide. A thin-film solar array is embedded in the sail. Eight LCD panels, whose reflectance can be adjusted for attitude control, are also embedded in the sail. IKAROS spent six months traveling to Venus, and then began a three-year journey to the far side of the Sun. 
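The 100 m/s figure is consistent with a simple photon-pressure estimate. The sketch below is an order-of-magnitude check only, not mission data: it assumes an ideal, perfectly reflecting 200 m² sail held face-on at 1 AU and a round-number spacecraft mass of roughly 310 kg (an assumed value, not one given in the text); the real thrust varied with sail angle, reflectivity, and the shrinking distance to the Sun on the way to Venus.

```python
# Rough order-of-magnitude check (not mission data): photon thrust on a flat,
# perfectly reflecting 200 m^2 sail at 1 AU, and the delta-v accumulated over
# six months at that constant thrust.  The 310 kg mass is an assumed figure.

S_1AU = 1361.0        # solar irradiance at 1 AU, W/m^2
C = 2.998e8           # speed of light, m/s
AREA = 200.0          # sail area from the article, m^2
MASS = 310.0          # assumed spacecraft mass, kg
HALF_YEAR = 0.5 * 365.25 * 86400   # seconds

thrust = 2 * S_1AU * AREA / C          # ideal normal-incidence photon force, N
accel = thrust / MASS
delta_v = accel * HALF_YEAR
print(f"thrust ~ {thrust*1e3:.2f} mN, delta-v over 6 months ~ {delta_v:.0f} m/s")
# about 1.8 mN and roughly 90 m/s: the same order as the ~100 m/s reported.
```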
Attitude (orientation) control Both the Mariner 10 mission, which flew by the planets Mercury and Venus, and the MESSENGER mission to Mercury demonstrated the use of solar pressure as a method of attitude control in order to conserve attitude-control propellant. Sail deployment tests NASA has successfully tested deployment technologies on small scale sails in vacuum chambers. On February 4, 1993, the Znamya 2, a 20-meter wide aluminized-mylar reflector, was successfully deployed from the Russian Mir space station. Although the deployment succeeded, propulsion was not demonstrated. A second test, Znamya 2.5, failed to deploy properly. In 1999, a full-scale deployment of a solar sail was tested on the ground at DLR/ESA in Cologne. On August 9, 2004, the Japanese ISAS successfully deployed two prototype solar sails from a sounding rocket. A clover-shaped sail was deployed at 122 km altitude and a fan-shaped sail was deployed at 169 km altitude. Both sails used 7.5-micrometer film. The experiment purely tested the deployment mechanisms, not propulsion. Solar sail propulsion attempts A joint private project between Planetary Society, Cosmos Studios and Russian Academy of Science made two sail testing attempts: in 2001 a suborbital prototype test failed because of rocket failure; and in June 21, 2005, Cosmos 1 launched from a submarine in the Barents Sea, but the Volna rocket failed, and the spacecraft failed to reach orbit. They intended to use the sail to gradually raise the spacecraft to a higher Earth orbit over a mission duration of one month. On Carl Sagan's 75th birthday (November 9, 2009) the same group announced plans to make three further attempts, dubbed LightSail-1, -2, and -3. The new design will use a 32-square-meter Mylar sail, deployed in four triangular segments like NanoSail-D. The launch configuration is that of three adjacent CubeSats, and as of 2011 was waiting for a piggyback launch opportunity. A 15-meter-diameter solar sail (SSP, solar sail sub payload, soraseiru sabupeiro-do) was launched together with ASTRO-F on a M-V rocket on February 21, 2006, and made it to orbit. It deployed from the stage, but opened incompletely. A team from the NASA Marshall Space Flight Center (Marshall), along with a team from the NASA Ames Research Center, developed a solar sail mission called NanoSail-D, which was lost in a launch failure aboard a Falcon 1 rocket on 3 August 2008. The second backup version, NanoSail-D2, also sometimes called simply NanoSail-D, was launched with FASTSAT on a Minotaur IV on November 19, 2010, becoming NASA's first solar sail deployed in low earth orbit. The objectives of the mission were to test sail deployment technologies, and to gather data about the use of solar sails as a simple, "passive" means of de-orbiting dead satellites and space debris. The NanoSail-D structure was made of aluminium and plastic, with the spacecraft massing less than 10 pounds (4.5 kg). The sail has about 100 square feet (9.3 m2) of light-catching surface. After some initial problems with deployment, the solar sail was deployed and over the course of its 240 day mission reportedly produced a "wealth of data" concerning the use of solar sails as passive deorbit devices. Future solar sail propulsion tests NASA researchers are developing a technology demonstration mission known as "In-Space Demonstration of a Mission-Capable Solar Sail", later dubbed the "Sunjammer", with the intent to prove the viability and value of the technology. 
The "Sunjammer" is a square, 124 feet (38 meters) wide on each side (total area 13,000 sq ft or 1,208 sq m), and will travel from the Sun-Earth L1 Lagrangian point 900,000 miles from Earth (1.5 million km) to a distance of 1,864,114 miles (3 million kilometers). The demonstration will launch on a Falcon 9 as early as November 2014. It will be a secondary payload, released after the placement of the DSCOVR climate satellite at the L1 point. Despite the losses of Cosmos 1 and NanoSail-D (which were due to failure of their launchers), scientists and engineers around the world remain encouraged and continue to work on solar sails. While most direct applications created so far intend to use the sails as inexpensive modes of cargo transport, some scientists are investigating the possibility of using solar sails as a means of transporting humans. This goal is strongly tied to the ability to manage very large sail surfaces in space (i.e. well above 1 km²) and to advances in sail making. Thus, in the near/medium term, solar sail propulsion is aimed chiefly at accomplishing a very high number of non-crewed missions in any part of the solar system and beyond. Manned space flight utilizing solar sails is still in its infancy. Solar sail launching projects in 2010 and 2011 On 21 May 2010, the Japan Aerospace Exploration Agency (JAXA) launched the world's first interplanetary solar sail spacecraft "IKAROS" (Interplanetary Kite-craft Accelerated by Radiation Of the Sun) to Venus. NASA launched the second NanoSail-D unit stowed inside the FASTSAT satellite on the Minotaur IV on November 19, 2010. The ejection date from the FASTSAT microsatellite was planned for December 6, 2010, but deployment only occurred on January 20, 2011. The Planetary Society of the United States planned to launch an artificial satellite, "LightSail-1", into Earth orbit in 2011. Solar sail vessels are classified by their lightness number, which is the ratio of the acceleration due to the light force on the sail to the local gravitational acceleration of the Sun. (These both vary with the inverse square of distance, so the ratio is constant for any vehicle.) A typical reflective surface needs to provide about 4 square meters of reflective area for every 5 grams of vehicle weight to have a lightness factor of 1. The light force can be separated into a normal force (directed away from the light source) and a tangential force, both of which are functions of the angle A of the sail face to the light. Extended heliocentric reference frame - In 1991–92 the classical equations of solar sail motion in the solar gravitational field were written using a different mathematical formalism, namely the lightness vector, fully characterizing the sailcraft dynamics. In addition, it was proposed that a solar sail spacecraft could reverse its motion (in the solar system) provided that its sail is sufficiently light, with a sail loading (σ) not higher than 2.1 g/m². This value entails a very high-performance technology, but one probably within the capabilities of emerging technologies. - For describing the concept of fast sailing and some related items, we need to define two frames of reference. The first is an inertial Cartesian coordinate system centred on the Sun, or heliocentric inertial frame (HIF, for short). 
For instance, the plane of reference, or the XY plane, of HIF can be the mean ecliptic at some standard epoch such as J2000. The second Cartesian reference frame is the so-called heliocentric orbital frame (HOF, for short) with the origin in the sailcraft barycenter. The x-axis of HOF is the direction of the Sun-to-sailcraft vector, or position vector, the z-axis is along the sailcraft orbital angular momentum, whereas the y-axis completes the counterclockwise triad. Such a definition can be extended to sailcraft trajectories, including both counterclockwise and clockwise arcs of motion, in such a way that HOF is always a continuous positively oriented triad. The sail orientation unit vector (defined in sailcraft), say, n can be specified in HOF by a pair of angles, e.g. the azimuth α and the elevation δ. Elevation is the angle that n forms with the xy-plane of HOF (−90° ≤ δ ≤ 90°). Azimuth is the angle that the projection of n onto the HOF xy-plane forms with the HOF x-axis (0 ≤ α < 360°). In HOF, azimuth and elevation are equivalent to longitude and latitude, respectively. - The sailcraft lightness vector L = [λr, λt, λn] depends on α and δ (non-linearly) and the thermo-optical parameters of the sail materials (linearly). Neglecting a small contribution coming from the aberration of light, one has the following particular cases (irrespective of the sail material): - α = 0, δ = 0 ⇔ [λr, 0, 0] ⇔ λ=|L|=λr - α ≠ 0, δ = 0 ⇔ [λr, λt, 0] - α = 0, δ ≠ 0 ⇔ [λr, 0, λn]. - Suppose a sailcraft is built with an all-metal sail of aluminium and chromium such that σ = 2 g/m². A launcher delivers the (packed) sailcraft at some million kilometers from the Earth. There, the whole sailcraft is deployed and begins its flight in the solar system (here, for the sake of simplicity, any gravitational perturbation from planets is neglected). A conventional spacecraft would move approximately in a circular orbit at about 1 AU from the Sun. In contrast, a sailcraft like this one is sufficiently light to be able to escape the solar system or to point to some distant object in the heliosphere. If the direction that sail's surface faces, represented by surface normal vector n, is parallel to the local sunlight direction (i.e. the sail faces toward the Sun), then λr = λ = 0.725 (i.e. 1/2 < λ < 1); as a result, this sailcraft moves on a hyperbolic orbit. Its speed at infinity is equal to 20 km/s. Strictly speaking, this potential solar sail mission would be faster than the current record speed for missions beyond the planetary range, that of Voyager 1, which is 17 km/s or about 3.6 AU/yr (1 AU/yr = 4.7404 km/s). However, three kilometers per second are not meaningful in the context of very deep space missions. - As a consequence, one has to resort to some L having more than one component different from zero. The classical way to gain speed is to tilt the sail at some suitable positive α. If α= +21°, then the sailcraft begins by accelerating; after about two months, it achieves 32 km/s. However, this is a speed peak inasmuch as its subsequent motion is characterized by a monotonic speed decrease towards an asymptotic value, or the cruise speed, of 26 km/s. After 18 years, the sailcraft is 100 AU away from the Sun. This would mean a pretty fast mission. However, considering that a sailcraft with 2 g/m² is technologically advanced, is there any other way to increase its speed significantly? Yes, there is. Let us try to explain this effect of non-linear dynamics. 
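The 20 km/s figure just quoted can be checked with a short energy argument. For a Sun-facing sail with constant lightness number λ, the Sun's gravity is effectively scaled by (1 − λ), so a craft that starts with the 1 AU circular orbital speed escapes with v∞ = v_circ·√(2λ − 1) whenever λ > 1/2. The sketch below is a minimal check of that relation, assuming the sailcraft begins with Earth's heliocentric circular speed and keeps its sail face-on to the Sun; for an ideal sail λ is roughly the critical loading of about 1.5 g/m² divided by the actual sail loading, so the quoted λ = 0.725 for σ = 2 g/m² already folds in realistic, less-than-perfect reflectivity.

```python
# Minimal check of the quoted escape speed for a Sun-facing sailcraft with
# lightness number lambda = 0.725, released with the 1 AU circular speed.
from math import sqrt

GM_SUN = 1.327e20     # Sun's gravitational parameter, m^3/s^2
AU = 1.496e11         # astronomical unit, m
lam = 0.725           # lightness number quoted in the text

v_circ = sqrt(GM_SUN / AU)            # heliocentric circular speed at 1 AU
v_inf = v_circ * sqrt(2 * lam - 1)    # effective mu = (1 - lam) * GM_SUN
print(f"v_circ = {v_circ/1e3:.1f} km/s, v_infinity = {v_inf/1e3:.1f} km/s")
# prints ~29.8 and ~20.0 km/s, matching the 20 km/s figure in the text
```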
- The above figures show that spiralling out from a circular orbit is not a convenient mode for a sailcraft to be sent away from the Sun since it would not have a high enough excess speed. On the other hand, it is known from astrodynamics that a conventional Earth satellite has to perform a rocket maneuver at/around its perigee for maximizing its speed at "infinity". Similarly, one can think of delivering a sailcraft close to the Sun to get much more energy from the solar photon pressure that scales as 1/R2. (Inverse-square law) For instance, suppose one starts from a point at 1 AU on the ecliptic and achieves a perihelion distance of 0.2 AU in the same plane by a two-dimensional trajectory. In general, there are three ways to deliver a sailcraft, initially at R0 from the Sun, to some distance R < R0: - using an additional propulsion system to send the folded-sail sailcraft to the perihelion of an elliptical orbit; there, the sail is deployed with its axis parallel to the sunlight for getting the maximum solar flux at the chosen distance; - spiralling in by α slightly negative, namely, via a slow deceleration; - strongly decelerating by a "sufficiently large" sail-axis angle negative in HOF. - The first way—although usable as a good reference mode—requires another high-performance propulsion system. - The second way is ruled out in the present case of σ = 2 g/m2; as a matter of fact, a small α < 0 entails a λr too high and a negative λt too low in absolute value: the sailcraft would go far from the Sun with a decreasing speed (as discussed above). - In the third way, there is a critical negative sail-axis angle in HOF, say, αcr such that for sail orientation angles α < αcr the sailcraft trajectory is characterized as follows: - the distance (from the Sun) first increases, achieves a local maximum at some point M, then decreases. The orbital angular momentum (per unit mass), say, H of the sailcraft decreases in magnitude. It is suitable to define the scalar H = H•k, where k is the unit vector of the HIF Z-axis; - after a short time (few weeks or less, in general), the sailcraft speed V = |V| achieves a local minimum at a point P. H continues to decrease; - past P, the sailcraft speed increases because the total vector acceleration, say, A begins by forming an acute angle with the vector velocity V; in mathematical terms, dV / dt = A • V / V > 0. This is the first key-point to realize: the orbital velocity having been largely neutralized, the sailcraft is falling nearly straight toward the Sun under the influence of its gravity, gaining velocity in that direction; - eventually, the sailcraft achieves a point Q where H = 0; here, the sailcraft's total energy (per unit mass), say, E (including the contribution of the solar pressure on the sail) shows a (negative) local minimum. This is the second key-point: in the absence of continued light pressure, the craft would fall directly into the Sun; - past Q, the sailcraft—keeping the negative value of the sail orientation—regains angular momentum by reversing its original motion (that is H is oriented downward in the diagram and H < 0 means the trajectory is now clockwise or retrograde motion, the opposite of normal planetary orbits). R (distance from Sun) decreases rapidly while dV/dt (acceleration) increases. This is the third key-point; - the sailcraft energy continues to increase and a point S is reached where E=0, namely, the escape condition is satisfied (V is greater than solar escape velocity); the sailcraft continues accelerating. 
S is located before the perihelion. The (negative) H becomes increasingly negative (retrograde); - if the sail attitude α has been chosen appropriately (about −25.9° in this example), the sailcraft flies-by the Sun at the desired (0.2 AU) perihelion, say, U; however, differently from a Keplerian orbit (for which the perihelion is the point of maximum speed), past the perihelion, V increases further while the sailcraft rapidly accelerates away from the Sun due to the much stronger photon pressure at small distances from the Sun; - past U, the sailcraft is very fast and passes through a point, say, W of local maximum for the speed, since λ < 1. Thus, speed decreases but, at a few AU from the Sun (about 2.7 AU in this example), both the (positive) E and the (negative) H begin a plateau or cruise phase; V becomes practically constant and, the most important thing, takes on a cruise value considerably higher than the speed of the circular orbit of the departure planet (the Earth, in this case). This example shows a cruise speed of 14.75 AU/yr or 69.9 km/s. At 100 AU, the sailcraft speed is still 69.6 km/s. The net effect is a powered slingshot maneuver utilizing the Oberth effect to achieve a much higher final velocity than would otherwise be possible. H-reversal Sun flyby trajectory - The figure below shows the optimal sailcraft trajectory mentioned above. Only the initial arc around the Sun has been plotted. The remaining part is rectilinear, in practice, and represents the cruise phase of the spacecraft. The sail is represented by a short segment with a central arrow that indicates its direction of thrust. Note that the complicated change of sail direction in HIF is very simply achieved by a constant attitude in HOF. That brings about a net non-Keplerian feature to the whole trajectory. - As mentioned in point-3, past P the strong sailcraft speed increase is due to both the solar-light thrust (reducing the residual orbital angular momentum) and gravity (towards the Sun) acceleration vectors. In particular, dV / dt, or the along-track (sunward) component of the total acceleration, is positive and particularly high from the point-Q to the point-U. This suggests that if a quick sail attitude maneuver is performed just before H vanishes, α → −α, the sailcraft motion continues to be a direct motion with a final cruise velocity equal in magnitude to the total velocity reversal (because the above maneuver keeps the perihelion value unchanged). The basic principle may be summarised as follows: a sufficiently light sailcraft, by losing most of its initial energy, subsequently achieves the absolute maximum of energy compliant with its given technology. - The above 2D class of new trajectories represents an ideal case. The realistic 3D fast sailcraft trajectories are considerably more complicated than the 2D cases. However, the general feature of producing a fast cruise speed can be further enhanced. Some of the enclosed references[clarification needed] contain strict mathematical algorithms for dealing with this topic. Recently (July 2005), in an international symposium an evolution of the above concept of fast solar sailing has been discussed.[clarification needed] A sailcraft with σ = 1 g/m² could achieve over 30 AU/yr (0.000474 c) in cruise (by keeping the perihelion at 0.2 AU), namely, well beyond the cruise speed of any nuclear-electric spacecraft (at least as conceived today). Such paper was published in the Journal of the British Interplanetary Society (JBIS) in 2006. 
(See Bibliography.) The complete (2D and 3D) theory of sailcraft reverse motion has been developed only in the last five years, and can be found in a very recent book from Springer (August 2012) (see Bibliography). In science fiction The earliest reference to solar sailing was in Jules Verne's 1865 novel From the Earth to the Moon, coming only a year after Maxwell's equations were published. The next known publication came more than 20 years later when Georges Le Faure and Henri De Graffigny published a four-volume science fiction novel in 1889, The Extraordinary Adventures of a Russian Scientist, which included a spacecraft propelled by solar pressure. B. Krasnogorskii published On the Waves of the Ether in 1913. In his story, backed by technical calculations, a small, bullet-shaped capsule is surrounded by a circular mirror 35 meters in diameter. It travels through space by means of solar pressure on the mirror. One of the earliest American stories about light sails is "The Lady Who Sailed the Soul" by Cordwainer Smith, which was published in 1960. In it, a tragedy results from the slowness of interstellar travel by this method. Another example is the 1962 story "Gateway to Strangeness" (also known as "Sail 25") by Jack Vance, in which the outward direction of propulsion poses a life-threatening dilemma. Pierre Boulle's 1963 novel Planet of the Apes likewise starts with a couple floating in space on a ship propelled and maneuvered by light sails. In Larry Niven and Jerry Pournelle's The Mote in God's Eye, a sail is used as a brake and a weapon. Author and scientist Arthur C. Clarke depicted a "yacht race" between solar sail spacecraft in the 1964 short story "Sunjammer". In The Flight of the Dragonfly, Robert Forward (who also proposed the microwave-pushed Starwisp design) described an interstellar journey using a light-driven propulsion system, wherein a part of the sail was broken off and used as a reflector to slow the main spacecraft as it approached its destination. In the 1982 film Tron, a "Solar Sailer" was a craft within the computer world with butterfly-like sails that moved along a focused beam of light. The 1983 episode "Enlightenment" of Doctor Who featured sailing ships in space that used solar wind to fly. In the episode "Explorers" of Star Trek: Deep Space Nine that aired in 1995, a "light ship" was featured. It was designed to use solar wind to fly out of a solar system with no engine. In the film Star Wars Episode II: Attack of the Clones, Count Dooku uses a solar sail to propel himself across space. A solar sail was also used in James Cameron's Avatar. In the Disney film Treasure Planet, solar sails are used literally as sails for interstellar travel of a steampunk-styled masted sailing ship capable of traveling through space. - Yarkovsky effect - Poynting–Robertson effect - Nichols radiometer - Cosmos 1 - Spacecraft propulsion - Beam-powered propulsion - Magnetic sail - Electric sail - Optical lift - Optical tweezers - R. M. Georgevic (1973) "The Solar Radiation Pressure Forces and Torques Model", The Journal of the Astronautical Sciences, Vol. 27, No. 1, Jan–Feb. First known publication describing how solar radiation pressure creates forces and torques that affect spacecraft. - Jerome Wright (1992), Space Sailing, Gordon and Breach Science Publishers - Johannes Kepler (1604) Ad Vitellionem Paralipomena, Frankfurt; (1619) De cometis libelli tres, Augsburg - Jules Verne (1865) De la Terre à la Lune (From the Earth to the Moon) - P. 
Lebedev, 1901, "Untersuchungen über die Druckkräfte des Lichtes", Annalen der Physik, 1901 - Lee, Dillon (2008). "A Celebration of the Legacy of Physics at Dartmouth". Dartmouth Undergraduate Journal of Science. Dartmouth College. Retrieved 2009-06-11. - Svante Arrhenius (1908) Worlds in the Making - Friedrich Zander's 1925 paper, "Problems of flight by jet propulsion: interplanetary flights", was translated by NASA. See NASA Technical Translation F-147 (1964) - J. D. Bernal (1929) The World, the Flesh & the Devil: An Enquiry into the Future of the Three Enemies of the Rational Soul - http://hyperphysics.phy-astr.gsu.edu/hbase/relativ/relmom.html - Wright, Appendix A - Wright, ibid., Appendix A - McInnes, C. R. and Brown, J. C. (1989) Solar Sail Dynamics with an Extended Source of Radiation Pressure, International Astronautical Federation, IAF-89-350, October. - Wright, Appendix B. - http://www.swpc.noaa.gov/SWN/index.html - Wright, ibid., Ch 6 and Appendix B. - "MESSENGER Sails on Sun's Fire for Second Flyby of Mercury". 2008-09-05. "On September 4, the MESSENGER team announced that it would not need to implement a scheduled maneuver to adjust the probe's trajectory. This is the fourth time this year that such a maneuver has been called off. The reason? A recently implemented navigational technique that makes use of solar-radiation pressure (SRP) to guide the probe has been extremely successful at maintaining MESSENGER on a trajectory that will carry it over the cratered surface of Mercury for a second time on October 6." - Forward, R.L. (1984). "Roundtrip Interstellar Travel Using Laser-Pushed Lightsails". J Spacecraft 21 (2): 187–195. Bibcode:1984JSpRo..21..187F. doi:10.2514/3.8632. - "Earth To Mars in a Month With Painted Solar Sail". SPACE.com. 2005-02-11. Retrieved 2011-01-18. - Gregory L., Michael N.; Matloff (1979). "Directed panspermia: A technical and ethical evaluation of seeding nearby solar systems". Journal of the British Interplanetary Society 32: 419–423.[dead link] - Mautner, Michael N. (1995). "Directed panspermia. 2. Technological advances toward seeding other solar systems, and the foundations of panbiotic ethics". Journal of the British Interplanetary Society 48: 435–440. - Wright, ibid., p. 71, last paragraph - Drexler, K. E. (1977). "Design of a High Performance Solar Sail System, MS Thesis,". Dept. of Aeronautics and Astronautics, Massachusetts Institute of Techniology, Boston. - "Design & Construction". NASA JPL. Archived from the original on 2005-03-11. - Khayatian, Rahmatsamii, Porgorzelski, UCLA and JPL. "An Antenna Concept Integrated with Future Solar Sails". - NASA. "Solar Sails Could Send Spacecraft 'Sailing' Through Space". - Zubrin & Andrew's presentation in a pdf. - Geoffrey A. Landis, Ohio Aerospace Institute (1999). "Advanced Solar- and Laser-pushed Lightsail Concepts". - SPACE.com Exclusive: Breakthrough In Solar Sail Technology[dead link] - "Researchers produce strong, transparent carbon nanotube sheets". Physorg.com. 2005-08-18. Retrieved 2011-01-18. - Tsuda, Yuichi (2011). "Solar Sail Navigation Technology of IKAROS". JAXA. - "Small Solar Power Sail Demonstrator 'IKAROS' Successful Solar Sail Deployment". JAXA website press release. Japan Aerospace Exploration Agency. 2010-06-11. Retrieved 2010-06-17. - "News briefing: 27 May 2010". NatureNEWS. 26 May 2010. Retrieved 2 June 2010. - Samantha Harvey (21 May 2010). "Solar System Exploration: Missions: By Target: Venus: Future: Akatsuki". NASA. Retrieved 2010-05-21. 
- "About the confirmation of photon acceleration of "IKAROS" the small solar-sail demonstrating craft (There is not English press release yet)". JAXA website press release. Japan Aerospace Exploration Agency. 2010-07-09. Retrieved 2010-07-10. - "Small Solar Power Sail Demonstrator". JAXA. 11 March 2010. Retrieved 2010-05-07. - "IKAROS Project". JAXA. 2008. Retrieved 30 March 2010. - McCurry, Justin (2010-05-17). "Space yacht Ikaros ready to cast off for far side of the Sun". London: The Guardian Weekly. Retrieved 2010-05-18. - "Solar Sails Could Send Spacecraft 'Sailing' Through Space". - "Full-scale deployment test of the DLR/ESA Solar Sail". 1999. - "Cosmos 1 - Solar Sail (2004) Japanese Researchers Successfully Test Unfurling of Solar Sail on Rocket Flight". 2004. - OVERBYE, DENNIS (November 9, 2009). "Setting Sail Into Space, Propelled by Sunshine". Retrieved 18 May 2012. "Planetary Society, ... the next three years, ... series of solar-sail spacecraft dubbed LightSails" - "LightSail Mission FAQ". The Planetary Society. Retrieved 18 May 2012. - "LightSail-1 on NASA Short List for Upcoming Launch". planetary.org. 2011-02-09. Retrieved 2012-05-18. - "SSSat 1, 2". Space.skyrocket.de. Retrieved 2011-01-18. - NASASpaceflight.com - SpaceX Falcon I FAILS during first stage flight[dead link] - "NASA to Attempt Historic Solar Sail Deployment". NASA. 2008-06-26. - "NASA Chat: First Solar Sail Deploys in Low-Earth Orbit". NASA. 2011-01-27. Retrieved 18 May 2012. "Sometimes the satellite is called NanoSail-D and sometimes NanoSail-D2. ... Dean: The project is just NanoSail-D. NanoSail-D2 is the serial #2 version." - Nasa report on mission - Nasa report on mission - "Nasa Solar Sail Demonstration". www.nasa.gov. - Leonard David (January 31, 2013). "NASA to Launch World's Largest Solar Sail in 2014". Space.com. Retrieved June 13, 2013. - Mike Wall (June 13, 2013). "World's Largest Solar Sail to Launch in November 2014". Space.com. Retrieved June 13, 2013. - "IKAROS Project|JAXA Space Exploration Center". Jspec.jaxa.jp. 2010-05-21. Retrieved 2011-01-18. - "NASA - NanoSail-D Home Page". Nasa.gov. 2011-01-21. Retrieved 2011-01-24. - "LightSail-1- A Solar Sail Missio no fThe Planetary Society". Planetary.org. Retrieved 2011-01-18. - G. Vulpetti, Fast Solar Sailing: Astrodynamics of Special Sailcraft Trajectories, ;;Space Technology Library Vol. 30, Springer, August 2012, (Hardcover) http://www.springer.com/engineering/mechanical+engineering/book/978-94-007-4776-0, (Kindle-edition), ASIN: B00A9YGY4I - G. Vulpetti, L. Johnson, G. L. Matloff, Solar Sails: A Novel Approach to Interplanetary Flight, Springer, August 2008, ISBN 978-0-387-34404-1 - J. L. Wright, Space Sailing, Gordon and Breach Science Publishers, London, 1992; Wright was involved with JPL's effort to use a solar sail for a rendezvous with Halley's comet. - NASA/CR 2002-211730, the chapter IV—presents the theory and the optimal NASA-ISP trajectory via the H-reversal sailing mode - G. Vulpetti, The Sailcraft Splitting Concept, JBIS, Vol. 59, pp. 48–53, February 2006 - G. L. Matloff, Deep-Space Probes: To the Outer Solar System and Beyond, 2nd ed., Springer-Praxis, UK, 2005, ISBN 978-3-540-24772-2 - T. Taylor, D. Robinson, T. Moton, T. C. Powell, G. Matloff, and J. Hall, "Solar Sail Propulsion Systems Integration and Analysis (for Option Period)", Final Report for NASA/MSFC, Contract No. H-35191D Option Period, Teledyne Brown Engineering Inc., Huntsville, AL, May 11, 2004 - G. 
Vulpetti, "Sailcraft Trajectory Options for the Interstellar Probe: Mathematical Theory and Numerical Results", the Chapter IV of NASA/CR-2002-211730, The Interstellar Probe (ISP): Pre-Perihelion Trajectories and Application of Holography, June 2002 - G. Vulpetti, Sailcraft-Based Mission to The Solar Gravitational Lens, STAIF-2000, Albuquerque (New Mexico, USA), 30 January – 3 February 2000 - G. Vulpetti, "General 3D H-Reversal Trajectories for High-Speed Sailcraft", Acta Astronautica, Vol. 44, No. 1, pp. 67–73, 1999 - C. R. McInnes, Solar Sailing: Technology, Dynamics, and Mission Applications, Springer-Praxis Publishing Ltd, Chichester, UK, 1999, ISBN 978-3-540-21062-7 - Genta, G., and Brusa, E., "The AURORA Project: a New Sail Layout", Acta Astronautica, 44, No. 2–4, pp. 141–146 (1999) - S. Scaglione and G. Vulpetti, "The Aurora Project: Removal of Plastic Substrate to Obtain an All-Metal Solar Sail", special issue of Acta Astronautica, vol. 44, No. 2–4, pp. 147–150, 1999 |Wikimedia Commons has media related to: Solar sails| - "Deflecting Asteroids" by Gregory L. Matloff, IEEE Spectrum, April 2012 - Planetary Society site for solar sailing projects - The Solar Photon Sail Comes of Age by Gregory L. Matloff - NASA Mission Site for NanoSail-D - NanoSail-D mission: Dana Coulter, "NASA to Attempt Historic Solar Sail Deployment", NASA, June 28, 2008 - Far-out Pathways to Space: Solar Sails from NASA - Solar Sails Comprehensive collection of solar sail information and references, maintained by Benjamin Diedrich. Good diagrams showing how light sailors must tack. - U3P Multilingual site with news and flight simulators - ISAS Deployed Solar Sail Film in Space - Suggestion of a solar sail with roller reefing, hybrid propulsion and a central docking and payload station. - Interview with NASA's JPL about solar sail technology and missions - Website with technical pdf-files about solar-sailing, including NASA report and lectures at Aerospace Engineering School of Rome University - Advanced Solar- and Laser-pushed Lightsail Concepts - Andrews, D. G. (2003). "Interstellar Transportation using Today’s Physics". AIAA Paper 2003-4691 (American Institute of Aeronautics and Astronautics). - www.aibep.org: Official site of American Institute of Beamed Energy Propulsion - Space Sailing Sailing ship concepts, operations, and history of concept
http://en.wikipedia.org/wiki/Solar_sail
The geometry and trigonometry strand of Core-Plus Mathematics has several goals. One important goal is to provide mathematical experiences that convey to students the usefulness of knowledge about shapes, shape properties, and relationships between shapes. A second goal is to provide mathematical experiences that allow students to become familiar with a substantial portion of elementary Euclidean geometry of the plane and, to a lesser extent, of space. A third goal is to provide mathematical experiences that allow students to experience both synthetic-graphic and algebraic-symbolic approaches to studying geometric topics. A fourth goal is to introduce students to axiomatic organizations of small parts of Euclidean geometry and to develop reasoning skills in those contexts. The initial work in geometry comes in Course 1. It is synthetic, begins with three-dimensional shapes, and includes volumes, areas and perimeters of many common shapes. It then considers polygons and their properties. The Pythagorean Theorem is introduced and applied. Symmetry of plane shapes, both bilateral and rotational, is extended to include translational symmetry of infinite strip patterns and infinite plane patterns. All of the content is developed in the context of real-world situations and An early unit in Course 2, Patterns of Location, Shape, and Size, focuses on the goal of algebraic representation of geometric ideas by introducing coordinate representations of points. Coordinates are used to quantify distance, slope of lines, and to express the numeric representation of the relation of the slopes of two perpendicular lines. Coordinates are further used to model isometries and size transformations and their compositions. Coordinate models of points allow matrices to be introduced as another way to represent a polygon and a transformation that leaves the origin fixed. Matrix representation of shapes and transformations are used to create animations. The second geometry and trigonometry unit in Course 2, Geometric Form and Its Function, returns to study of three basic plane figures: triangles, quadrilaterals and circles. The fact that a quadrilateral is not rigid is used to introduce linkages and their properties in a variety of contexts. Similar figures are related to a special linkage - the pantograph. Triangles with one side that can vary in length are studied and used to introduce the trigonometric ratios of sine, cosine, and tangent. The sine and cosine functions are developed further in the study of circles and circular motion. Trigonometric concepts and methods are interwoven and extended in each of the three Course 3 algebra and Course 3 includes one unit whose primary focus is geometry. Its goal is to consolidate and organize the geometric knowledge of the students more logically and formally. To accomplish this, students learn to reason logically in geometric contexts. Inductive and deductive reasoning patterns are contrasted and the simple geometry of plane angles is presented in a local axiomatic system. Necessary and sufficient conditions (NASC) for parallelism of lines are introduced and applied. The similarity and congruence of triangles is developed and used. The NASC for a quadrilateral to be a parallelogram are included as well as NASC for a few other more specialized parallelograms. Reasoning synthetically and analytically Geometry, trigonometry, and algebra become increasingly intertwined in the Course 4 units. 
In Modeling Motion, two-dimensional vectors are introduced and used to model linear, circular, and other nonlinear motions. Inverse trigonometric functions are introduced and methods for solving trigonometric equations and proving identities are developed in the Functions and Symbolic Reasoning unit. The Space Geometry unit provides college-bound students further work with visualization and representations of three-dimensional shapes and surfaces. An overview of the sequence and contents of the geometry and trigonometry units in the CPMP four-year curriculum follows. Unit 5 - Patterns in Space and Visualization develops student visualization skills and an understanding of properties of space-shapes including symmetry, area, and volume. Topics include: Two-and three-dimensional shapes, spatial visualization, perimeter, area, surface area, volume, the Pythagorean Theorem, angle properties, symmetry, isometric transformations (reflections, rotations, translations, glide reflections), one-dimensional strip patterns, tilings of the plane, and the regular (Platonic) solids. Unit 2 - Patterns of Location, Shape, and Size develops student understanding of coordinate methods for representing, and analyzing relations among, geometric shapes and for describing geometric change. Topics include: Modeling situations with coordinates, including computer-generated graphics, distance, midpoint of a segment, slope, designing and programming algorithms, matrices, systems of equations, coordinate models of isometric transformations (reflections, rotations, translations, glide reflections) and of size transformations, and similarity. Unit 6 - Geometric Form and Its Function develops student ability to model and analyze physical phenomena with triangles, quadrilaterals, and circles and to use these shapes to investigate trigonometric functions, angular velocity, and periodic change. Topics include: Parallelogram linkages, pantographs, similarity, triangular linkages (with one side that can change length), sine, cosine, and tangent ratios, indirect measurement, angular velocity, transmission factor, linear velocity, periodic change, radian measure, period, amplitude, and graphs of functions of the form y = A sin Bx, y = A cos Bx. Unit 4 - Shapes and Geometric Reasoning introduces students to formal reasoning and deduction in geometric settings. Topics include: Inductive and deductive reasoning, counterexamples, the role of assumptions in proof, conclusions concerning supplementary and vertical angles and the angles formed by parallel lines and transversals, conditions insuring similarity and congruence of triangles and their application to quadrilaterals and other shapes, and necessary and sufficient conditions for parallelograms. Unit 2 - Modeling Motion develops student understanding of two-dimensional vectors and their use in modeling linear, circular, and other nonlinear Topics include: Concept of vector as a mathematical object used to model situations defined by magnitude and direction; equality of vectors, scalar multiples, opposite vectors, sum and difference vectors, position vectors and coordinates; and parametric equations for motion along a line and for motion of projectiles and rotating objects. 
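As a small illustration of the parametric modeling of motion described for the Modeling Motion unit above, the sketch below evaluates the standard parametric equations for projectile motion; the launch speed and angle are made-up example values.

```python
# Illustrative only: parametric equations for projectile motion, the kind of
# model introduced in the Modeling Motion unit.  Launch speed and angle are
# made-up example values.
from math import sin, cos, radians

g = 9.8              # m/s^2
v0 = 20.0            # launch speed, m/s (example value)
theta = radians(35)  # launch angle (example value)

def position(t):
    """Position (x, y) in metres at time t seconds after launch."""
    x = v0 * cos(theta) * t
    y = v0 * sin(theta) * t - 0.5 * g * t * t
    return x, y

# time of flight from y(t) = 0 (other than t = 0), then sample the path
t_flight = 2 * v0 * sin(theta) / g
for t in [0.0, t_flight / 2, t_flight]:
    x, y = position(t)
    print(f"t = {t:4.2f} s  ->  x = {x:5.1f} m, y = {y:4.1f} m")
```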
Unit 7 - Functions and Symbolic Reasoning extends student ability to manipulate symbolic representations of exponential, logarithmic, and trigonometric functions; to solve exponential and logarithmic equations; to prove or disprove that two trigonometric expressions are identical and to solve trigonometric equations; to reason with complex numbers and complex number operations using geometric representations and to find roots of complex numbers. Topics include: Equivalent forms of exponential expressions, definition of e and natural logarithms, solving equations using logarithms and solving logarithmic equations; the tangent, cotangent, secant, and cosecant functions; fundamental trigonometric identities, sum and difference identities, double-angle identities; solving trigonometric equations and expression of periodic solutions; rectangular and polar representations of complex numbers, absolute value, DeMoivre's Theorem, and the roots of a complex Unit 8 - Space Geometry extends student ability to visualize and represent nonregular three-dimensional shapes using contours, cross sections and reliefs; to visualize and represent surfaces defined by algebraic equations; to visualize and represent lines in space; and to sketch three-dimensional shapes. Topics include: Using contours to represent three-dimensional surfaces and developing contour maps from data; conics as planar sections of right circular cones; sketching surfaces from sets of cross sections; three-dimensional rectangular coordinate systems, sketching surfaces using traces, intercepts and cross sections derived from algebraically defined surfaces; cylinders, surfaces of revolution; and describing planes and lines in space.
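As a concrete instance of the Unit 7 complex-number topics listed above (polar form and DeMoivre's Theorem), the sketch below computes the n distinct n-th roots of a complex number; the sample number and the choice n = 3 are arbitrary.

```python
# Illustrative sketch: the n distinct n-th roots of a complex number via
# DeMoivre's Theorem, using the polar form z = r(cos t + i sin t).
import cmath

def nth_roots(z, n):
    """Return the n complex n-th roots of z."""
    r, theta = cmath.polar(z)          # modulus and argument
    root_r = r ** (1.0 / n)
    return [cmath.rect(root_r, (theta + 2 * cmath.pi * k) / n) for k in range(n)]

# example: the three cube roots of 8i (arbitrary choice)
for w in nth_roots(8j, 3):
    print(f"{w:.3f}  ->  w**3 = {w**3:.3f}")
```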
http://www.wmich.edu/cpmp/parentresource/geometry.html
What is a circle? In a third grader’s language, a circle is a closed figure with a curved boundary and no straight sides. A circle is defined by its centre and its radius. The centre of a circle is the point exactly in the middle of the circle, such that every point on the circle is at the same distance from the centre. The radius of the circle is the distance between the centre of the circle and any point on the circle. Since all points on a circle are at the same distance from the centre, a circle has infinitely many radii (the plural of radius). What is the diameter of a circle? When we draw two radii from the centre of a circle to any two different points on the circle, an angle is formed whose rays are the two radii. This angle can be an acute angle, an obtuse angle, a right angle, a straight angle or even a reflex angle. When this angle is a straight angle, that is, when the angle at the centre subtended by the two radii is 180 degrees, the two radii form a straight line that passes through the centre of the circle and meets the circle at two points. Such a line is called a diameter of the circle. In other words, a diameter of a circle is a line that passes through the centre of the circle and meets the circle at two points. The diameter of a circle can also be defined in terms of a chord of the circle. A chord of a circle is a line segment that joins any two points on the circle. If a chord passes through the centre of the circle, then it is called a diameter of the circle. Just as with radii, a circle has infinitely many diameters. By the definition of a diameter stated above, we also see that all the diameters pass through the centre of the circle. Therefore we can say that all diameters of a circle are concurrent: they all meet at the centre. How to find the diameter of a circle? From the definition of a diameter above, we saw that the length of the diameter is two times that of the radius. Therefore the diameter of a circle formula can be written like this: d = 2r, where d is the diameter of the circle and r is the radius of the circle.
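A tiny sketch of the formula just stated, with an arbitrary sample radius:

```python
# Minimal sketch of the relationship described above: the diameter of a
# circle is twice its radius (d = 2r).  The sample radius is arbitrary.
def diameter(radius):
    """Return the diameter of a circle with the given radius."""
    return 2 * radius

r = 3.5                                             # example radius
print(f"radius = {r}, diameter = {diameter(r)}")    # radius = 3.5, diameter = 7.0
```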
http://mathmarvelous.blogspot.com/2012/12/what-to-understand-by-diameter-of-circle.html
The imaging geometry of a radar system is different from the framing and scanning systems commonly employed for optical remote sensing described in Chapter 2. Similar to optical systems, the platform travels forward in the flight direction (A) with the nadir (B) directly beneath the platform. The microwave beam is transmitted obliquely at right angles to the direction of flight illuminating a swath (C) which is offset from nadir. Range (D) refers to the across-track dimension perpendicular to the flight direction, while azimuth (E) refers to the along-track dimension parallel to the flight direction. This side-looking viewing geometry is typical of imaging radar systems (airborne or spaceborne). The portion of the image swath closest to the nadir track of the radar platform is called the near range (A) while the portion of the swath farthest from the nadir is called the far range (B). The incidence angle is the angle between the radar beam and ground surface (A) which increases, moving across the swath from near to far range. The look angle (B) is the angle at which the radar "looks" at the surface. In the near range, the viewing geometry may be referred to as being steep, relative to the far range, where the viewing geometry is shallow. At all ranges the radar antenna measures the radial line of sight distance between the radar and each target on the surface. This is the slant range distance (C). The ground range distance (D) is the true horizontal distance along the ground corresponding to each point measured in slant range. Unlike optical systems, a radar's spatial resolution is a function of the specific properties of the microwave radiation and geometrical effects. If a Real Aperture Radar (RAR) is used for image formation (as in Side-Looking Airborne Radar) a single transmit pulse and the backscattered signal are used to form the image. In this case, the resolution is dependent on the effective length of the pulse in the slant range direction and on the width of the illumination in the azimuth direction. The range or across-track resolution is dependent on the length of the pulse (P). Two distinct targets on the surface will be resolved in the range dimension if their separation is greater than half the pulse length. For example, targets 1 and 2 will not be separable while targets 3 and 4 will. Slant range resolution remains constant, independent of range. However, when projected into ground range coordinates, the resolution in ground range will be dependent of the incidence angle. Thus, for fixed slant range resolution, the ground range resolution will decrease with increasing range. The azimuth or along-track resolution is determined by the angular width of the radiated microwave beam and the slant range distance. This beamwidth (A) is a measure of the width of the illumination pattern. As the radar illumination propagates to increasing distance from the sensor, the azimuth resolution increases (becomes coarser). In this illustration, targets 1 and 2 in the near range would be separable, but targets 3 and 4 at further range would not. The radar beamwidth is inversely proportional to the antenna length (also referred to as the aperture) which means that a longer antenna (or aperture) will produce a narrower beam and finer resolution. Finer range resolution can be achieved by using a shorter pulse length, which can be done within certain engineering design restrictions. Finer azimuth resolution can be achieved by increasing the antenna length. 
However, the actual length of the antenna is limited by what can be carried on an airborne or spaceborne platform. For airborne radars, antennas are usually limited to one to two metres; for satellites they can be 10 to 15 metres in length. To overcome this size limitation, the forward motion of the platform and special recording and processing of the backscattered echoes are used to simulate a very long antenna and thus increase azimuth resolution. This figure illustrates how this is achieved. As a target (A) first enters the radar beam (1), the backscattered echoes from each transmitted pulse begin to be recorded. As the platform continues to move forward, all echoes from the target for each pulse are recorded during the entire time that the target is within the beam. The point at which the target leaves the view of the radar beam (2) some time later, determines the length of the simulated or synthesized antenna (B). Targets at far range, where the beam is widest will be illuminated for a longer period of time than objects at near range. The expanding beamwidth, combined with the increased time a target is within the beam as ground range increases, balance each other, such that the resolution remains constant across the entire swath. This method of achieving uniform, fine azimuth resolution across the entire imaging swath is called synthetic aperture radar, or SAR. Most airborne and spaceborne radars employ this type of radar. Explain why the use of a synthetic aperture radar (SAR) is the only practical option for radar remote sensing from space. The answer is ... The high altitudes of spaceborne platforms (i.e. hundreds of kilometres) preclude the use of real aperture radar (RAR) because the azimuth resolution, which is a function of the range distance, would be too coarse to be useful. In a spaceborne RAR, the only way to achieve fine resolution would be to have a very, very narrow beam which would require an extremely long physical antenna. However, an antenna of several kilometres in length is physically impossible to build, let alone fly on a spacecraft. Therefore, we need to use synthetic aperture radar to synthesize a long antenna to achieve fine azimuth resolution.
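The resolution relationships described above can be summarized in a few formulas: slant-range resolution is set by the pulse length (c·τ/2), ground-range resolution follows by projecting through the incidence angle, and azimuth resolution for a real aperture is roughly the range times the beamwidth (λ/L), while a synthetic aperture achieves roughly L/2 independent of range. The sketch below uses representative, made-up values for a C-band spaceborne system to show why a RAR from orbit would be uselessly coarse in azimuth.

```python
# Minimal sketch of the resolution relations described in the text.
# Numbers are representative, made-up values for a C-band spaceborne radar.
from math import sin, radians

c = 3.0e8                # speed of light, m/s
tau = 40e-9              # pulse length, s (example)
wavelength = 0.056       # C-band wavelength, m
antenna_len = 10.0       # physical antenna length, m (example)
slant_range = 850e3      # distance to target, m (example)
incidence = radians(35)  # incidence angle from vertical (example)

slant_res = c * tau / 2                      # across-track, slant range
ground_res = slant_res / sin(incidence)      # across-track, projected to ground
rar_azimuth = slant_range * wavelength / antenna_len   # real aperture
sar_azimuth = antenna_len / 2                          # synthetic aperture

print(f"slant-range resolution : {slant_res:6.1f} m")
print(f"ground-range resolution: {ground_res:6.1f} m")
print(f"RAR azimuth resolution : {rar_azimuth/1000:6.1f} km")
print(f"SAR azimuth resolution : {sar_azimuth:6.1f} m")
```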
http://www.nrcan.gc.ca/earth-sciences/geography-boundary/remote-sensing/fundamentals/1742
In fluid dynamics, Bernoulli's principle states that for an inviscid flow, an increase in the speed of the fluid occurs simultaneously with a decrease in pressure or a decrease in the fluid's potential energy. Bernoulli's principle is named after the Swiss scientist Daniel Bernoulli who published his principle in his book Hydrodynamica in 1738. Bernoulli's principle can be applied to various types of fluid flow, resulting in what is loosely denoted as Bernoulli's equation. In fact, there are different forms of the Bernoulli equation for different types of flow. The simple form of Bernoulli's principle is valid for incompressible flows (e.g. most liquid flows) and also for compressible flows (e.g. gases) moving at low Mach numbers (usually less than 0.3). More advanced forms may in some cases be applied to compressible flows at higher Mach numbers (see the derivations of the Bernoulli equation).

Bernoulli's principle can be derived from the principle of conservation of energy. This states that, in a steady flow, the sum of all forms of mechanical energy in a fluid along a streamline is the same at all points on that streamline. This requires that the sum of kinetic energy and potential energy remain constant. Thus an increase in the speed of the fluid occurs proportionately with an increase in both its dynamic pressure and kinetic energy, and a decrease in its static pressure and potential energy. If the fluid is flowing out of a reservoir, the sum of all forms of energy is the same on all streamlines because in a reservoir the energy per unit volume (the sum of pressure and gravitational potential ρ g h) is the same everywhere.

Bernoulli's principle can also be derived directly from Newton's 2nd law. If a small volume of fluid is flowing horizontally from a region of high pressure to a region of low pressure, then there is more pressure behind than in front. This gives a net force on the volume, accelerating it along the streamline. Fluid particles are subject only to pressure and their own weight. If a fluid is flowing horizontally and along a section of a streamline, where the speed increases it can only be because the fluid on that section has moved from a region of higher pressure to a region of lower pressure; and if its speed decreases, it can only be because it has moved from a region of lower pressure to a region of higher pressure. Consequently, within a fluid flowing horizontally, the highest speed occurs where the pressure is lowest, and the lowest speed occurs where the pressure is highest.

Incompressible flow equation

In most flows of liquids, and of gases at low Mach number, the density of a fluid parcel can be considered to be constant, regardless of pressure variations in the flow. Therefore, the fluid can be considered to be incompressible and these flows are called incompressible flow. Bernoulli performed his experiments on liquids, so his equation in its original form is valid only for incompressible flow. A common form of Bernoulli's equation, valid at any arbitrary point along a streamline, is:

v²/2 + g z + p/ρ = constant   (A)

where:
- v is the fluid flow speed at a point on a streamline,
- g is the acceleration due to gravity,
- z is the elevation of the point above a reference plane, with the positive z-direction pointing upward – so in the direction opposite to the gravitational acceleration,
- p is the pressure at the chosen point, and
- ρ is the density of the fluid at all points in the fluid.

For conservative force fields, Bernoulli's equation can be generalised as:

v²/2 + Ψ + p/ρ = constant

where Ψ is the force potential at the point considered on the streamline. E.g. for the Earth's gravity Ψ = gz.
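A quick sanity check of equation (A) with made-up numbers makes the trade-off between the three terms visible. The sketch below assumes water and a single streamline running from a higher, slower point to a lower, faster point; all of the input values are illustrative, not taken from the text.

```python
rho, g = 1000.0, 9.81          # water density (kg/m^3) and gravity (m/s^2), illustrative

# Point 1: top of a pipe; Point 2: the bottom, 5 m lower, where the pipe has narrowed.
z1, v1, p1 = 5.0, 1.0, 200e3   # elevation (m), speed (m/s), pressure (Pa) -- hypothetical
z2, v2 = 0.0, 3.0              # elevation and speed at the lower point

# Equation (A): v^2/2 + g*z + p/rho takes the same value at both points, so solve for p2.
const = v1**2 / 2 + g * z1 + p1 / rho
p2 = rho * (const - v2**2 / 2 - g * z2)
print(f"Bernoulli constant = {const:.1f} J/kg, p2 = {p2/1e3:.1f} kPa")
```

With these numbers the pressure at the lower point comes out to about 245 kPa: the gain from the elevation drop outweighs the loss from the higher speed, exactly the bookkeeping the equation expresses.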
The following two assumptions must be met for this Bernoulli equation to apply:
- the flow must be incompressible – even though pressure varies, the density must remain constant along a streamline;
- friction by viscous forces has to be negligible.

By multiplying with the fluid density ρ, equation (A) can be rewritten as:

ρ v²/2 + ρ g z + p = constant

or:

q + ρ g h = p0 + ρ g z = constant

where:
- q = ρ v²/2 is dynamic pressure,
- h = z + p/(ρ g) is the piezometric head or hydraulic head (the sum of the elevation z and the pressure head), and
- p0 = p + q is the total pressure (the sum of the static pressure p and the dynamic pressure q).

The constant in the Bernoulli equation can be normalised. A common approach is in terms of total head or energy head H:

H = z + p/(ρ g) + v²/(2 g) = h + v²/(2 g)

The above equations suggest there is a flow speed at which pressure is zero, and at even higher speeds the pressure is negative. Most often, gases and liquids are not capable of negative absolute pressure, or even zero pressure, so clearly Bernoulli's equation ceases to be valid before zero pressure is reached. In liquids – when the pressure becomes too low – cavitation occurs. The above equations use a linear relationship between flow speed squared and pressure. At higher flow speeds in gases, or for sound waves in liquid, the changes in mass density become significant so that the assumption of constant density is invalid.

Simplified form

In many applications of Bernoulli's equation, the change in the ρ g z term along the streamline is so small compared with the other terms that it can be ignored. For example, in the case of aircraft in flight, the change in height z along a streamline is so small the ρ g z term can be omitted. This allows the above equation to be presented in the following simplified form:

p + q = p0

where p0 is called 'total pressure', and q is 'dynamic pressure'. Many authors refer to the pressure p as static pressure to distinguish it from total pressure p0 and dynamic pressure q. In Aerodynamics, L.J. Clancy writes: "To distinguish it from the total and dynamic pressures, the actual pressure of the fluid, which is associated not with its motion but with its state, is often referred to as the static pressure, but where the term pressure alone is used it refers to this static pressure."

The simplified form of Bernoulli's equation can be summarized in the following memorable word equation:

static pressure + dynamic pressure = total pressure

Every point in a steadily flowing fluid, regardless of the fluid speed at that point, has its own unique static pressure p and dynamic pressure q. Their sum p + q is defined to be the total pressure p0. The significance of Bernoulli's principle can now be summarized as total pressure is constant along a streamline. If the fluid flow is irrotational, the total pressure on every streamline is the same and Bernoulli's principle can be summarized as total pressure is constant everywhere in the fluid flow. It is reasonable to assume that irrotational flow exists in any situation where a large body of fluid is flowing past a solid body. Examples are aircraft in flight, and ships moving in open bodies of water. However, it is important to remember that Bernoulli's principle does not apply in the boundary layer or in fluid flow through long pipes. If the fluid flow at some point along a streamline is brought to rest, this point is called a stagnation point, and at this point the total pressure is equal to the stagnation pressure.

Applicability of incompressible flow equation to flow of gases

Bernoulli's equation is sometimes valid for the flow of gases, provided that there is no transfer of kinetic or potential energy from the gas flow to the compression or expansion of the gas.
If both the gas pressure and volume change simultaneously, then work will be done on or by the gas. In this case, Bernoulli's equation – in its incompressible flow form – cannot be assumed to be valid. However, if the gas process is entirely isobaric, or isochoric, then no work is done on or by the gas (so the simple energy balance is not upset). According to the gas law, an isobaric or isochoric process is ordinarily the only way to ensure constant density in a gas. Also, the gas density will be proportional to the ratio of pressure and absolute temperature; however, this ratio will vary upon compression or expansion, no matter what non-zero quantity of heat is added or removed. The only exception is if the net heat transfer is zero, as in a complete thermodynamic cycle, or in an individual isentropic (frictionless adiabatic) process, and even then this reversible process must be reversed, to restore the gas to the original pressure and specific volume, and thus density. Only then is the original, unmodified Bernoulli equation applicable. In this case the equation can be used if the flow speed of the gas is sufficiently below the speed of sound, such that the variation in density of the gas (due to this effect) along each streamline can be ignored. Adiabatic flow at less than Mach 0.3 is generally considered to be slow enough.

Unsteady potential flow

For an irrotational flow, the flow velocity can be described as the gradient ∇φ of a velocity potential φ. In that case, and for a constant density ρ, the momentum equation of the Euler equations can be integrated to:

∂φ/∂t + v²/2 + p/ρ + g z = f(t)

which is a Bernoulli equation valid also for unsteady—or time dependent—flows. Here ∂φ/∂t denotes the partial derivative of the velocity potential φ with respect to time t, and v = |∇φ| is the flow speed. The function f(t) depends only on time and not on position in the fluid. As a result, the Bernoulli equation at some moment t does not only apply along a certain streamline, but in the whole fluid domain. This is also true for the special case of a steady irrotational flow, in which case f is a constant. Further, f(t) can be made equal to zero by incorporating it into the velocity potential using the transformation

Φ = φ − ∫ f(τ) dτ,  with the integral taken from 0 to t.

Note that the relation of the potential to the flow velocity is unaffected by this transformation: ∇Φ = ∇φ. The Bernoulli equation for unsteady potential flow also appears to play a central role in Luke's variational principle, a variational description of free-surface flows using the Lagrangian (not to be confused with Lagrangian coordinates).

Compressible flow equation

Bernoulli developed his principle from his observations on liquids, and his equation is applicable only to incompressible fluids, and compressible fluids up to approximately Mach number 0.3. It is possible to use the fundamental principles of physics to develop similar equations applicable to compressible fluids. There are numerous equations, each tailored for a particular application, but all are analogous to Bernoulli's equation and all rely on nothing more than the fundamental principles of physics such as Newton's laws of motion or the first law of thermodynamics.
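The Mach 0.3 rule of thumb mentioned above can be checked with the isentropic relations for a perfect gas, which are standard results rather than something derived in this article. The sketch below estimates how much the density would vary between static and stagnation conditions at several Mach numbers, assuming γ = 1.4 for air.

```python
def density_ratio(mach, gamma=1.4):
    """Stagnation-to-static density ratio for isentropic flow of a perfect gas."""
    return (1.0 + 0.5 * (gamma - 1.0) * mach**2) ** (1.0 / (gamma - 1.0))

for m in (0.1, 0.3, 0.5, 0.8):
    change = (density_ratio(m) - 1.0) * 100.0
    print(f"Mach {m}: density varies by about {change:.1f}% along the streamline")
```

At Mach 0.3 the density change is only around 5%, which is why that speed is usually taken as the limit below which the constant-density form of Bernoulli's equation remains a reasonable approximation; by Mach 0.8 the variation is tens of percent and the compressible forms below are needed.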
Compressible flow in fluid dynamics

For a compressible fluid in barotropic flow, a form of Bernoulli's equation valid along a streamline is:

v²/2 + ∫ dp/ρ + Ψ = constant   (constant along a streamline)

where:
- p is the pressure
- ρ is the density
- v is the flow speed
- Ψ is the potential associated with the conservative force field, often the gravitational potential

In engineering situations, elevations are generally small compared to the size of the Earth, and the time scales of fluid flow are small enough to consider the equation of state as adiabatic. In this case, the above equation becomes

(γ/(γ − 1)) p/ρ + v²/2 + g z = constant   (constant along a streamline)

where, in addition to the terms listed above:
- γ is the ratio of the specific heats of the fluid
- g is the acceleration due to gravity
- z is the elevation of the point above a reference plane

In many applications of compressible flow, changes in elevation are negligible compared to the other terms, so the term g z can be omitted. A very useful form of the equation is then:

(γ/(γ − 1)) p/ρ + v²/2 = (γ/(γ − 1)) p0/ρ0

where:
- p0 is the total pressure
- ρ0 is the total density

Compressible flow in thermodynamics

Another useful form, suitable for use in thermodynamics in the case of (quasi) steady flow, is:

v²/2 + Ψ + w = constant

Here w is the enthalpy per unit mass, which is also often written as h (not to be confused with "head" or "height"). The constant on the right hand side is often called the Bernoulli constant and denoted b. For steady inviscid adiabatic flow with no additional sources or sinks of energy, b is constant along any given streamline. More generally, when b may vary along streamlines, it still proves a useful parameter, related to the "head" of the fluid (see below). When the change in Ψ can be ignored, a very useful form of this equation is:

v²/2 + w = w0

where w0 is total enthalpy. For a calorically perfect gas such as an ideal gas, the enthalpy is directly proportional to the temperature, and this leads to the concept of the total (or stagnation) temperature. When shock waves are present, in a reference frame in which the shock is stationary and the flow is steady, many of the parameters in the Bernoulli equation suffer abrupt changes in passing through the shock. The Bernoulli parameter itself, however, remains unaffected. An exception to this rule is radiative shocks, which violate the assumptions leading to the Bernoulli equation, namely the lack of additional sinks or sources of energy.

Derivations of Bernoulli equation

Bernoulli equation for incompressible fluids

The Bernoulli equation for incompressible fluids can be derived by integrating Newton's Second Law of Motion, or by applying the law of conservation of energy in two sections along a streamline, ignoring viscosity, compressibility, and thermal effects. The simplest derivation is to first ignore gravity and consider constrictions and expansions in pipes that are otherwise straight, as seen in the Venturi effect. Let the x axis be directed down the axis of the pipe. Define a parcel of fluid moving through a pipe with cross-sectional area A; the length of the parcel is dx, and the volume of the parcel is A dx. If the mass density is ρ, the mass of the parcel is density multiplied by its volume, m = ρ A dx. The change in pressure over distance dx is dp and the flow velocity is v = dx/dt. Apply Newton's Second Law of Motion (force = mass × acceleration) and recognize that the effective force on the parcel of fluid is −A dp. If the pressure decreases along the length of the pipe, dp is negative but the force resulting in flow is positive along the x axis. In steady flow the velocity field is constant with respect to time, v = v(x) = v(x(t)), so v itself is not directly a function of time t.
It is only when the parcel moves through x that the cross-sectional area changes: v depends on t only through the cross-sectional position x(t). With density ρ constant, the equation of motion can be written as

ρ A dx (dv/dt) = −A dp

and, since dv/dt = v dv/dx in steady flow, as

d(ρ v²/2 + p)/dx = 0.

By integrating with respect to x,

v²/2 + p/ρ = C

where C is a constant, sometimes referred to as the Bernoulli constant. It is not a universal constant, but rather a constant of a particular fluid system. The deduction is: where the speed is large, pressure is low, and vice versa. In the above derivation, no external work-energy principle is invoked. Rather, Bernoulli's principle was inherently derived by a simple manipulation of the momentum equation.

Another way to derive Bernoulli's principle for an incompressible flow is by applying conservation of energy, in the form of the work-energy theorem:
- the change in the kinetic energy Ekin of the system equals the net work W done on the system.

The system consists of the volume of fluid, initially between the cross-sections A1 and A2. In the time interval Δt fluid elements initially at the inflow cross-section A1 move over a distance s1 = v1 Δt, while at the outflow cross-section the fluid moves away from cross-section A2 over a distance s2 = v2 Δt. The displaced fluid volumes at the inflow and outflow are respectively A1 s1 and A2 s2. The associated displaced fluid masses are – when ρ is the fluid's mass density – equal to density times volume, so ρ A1 s1 and ρ A2 s2. By mass conservation, these two masses displaced in the time interval Δt have to be equal, and this displaced mass is denoted by Δm:

Δm = ρ A1 s1 = ρ A1 v1 Δt = ρ A2 s2 = ρ A2 v2 Δt.

The work done by the forces consists of two parts:
- The work done by the pressure acting on the areas A1 and A2: Wpressure = p1 A1 v1 Δt − p2 A2 v2 Δt.
- The work done by gravity: the gravitational potential energy in the volume A1 s1 is lost, and at the outflow in the volume A2 s2 is gained. So, the change in gravitational potential energy ΔEpot,gravity in the time interval Δt is ΔEpot,gravity = Δm g z2 − Δm g z1. Now, the work by the force of gravity is opposite to the change in potential energy, Wgravity = −ΔEpot,gravity: while the force of gravity is in the negative z-direction, the work—gravity force times change in elevation—will be negative for a positive elevation change Δz = z2 − z1, while the corresponding potential energy change is positive. So: Wgravity = −ΔEpot,gravity = Δm g z1 − Δm g z2.

And the total work done in this time interval is

W = Wpressure + Wgravity = p1 A1 v1 Δt − p2 A2 v2 Δt + Δm g z1 − Δm g z2.

The increase in kinetic energy is

ΔEkin = Δm v2²/2 − Δm v1²/2.

Putting these together, the work-kinetic energy theorem W = ΔEkin gives:

Δm v2²/2 − Δm v1²/2 = p1 A1 v1 Δt − p2 A2 v2 Δt + Δm g z1 − Δm g z2.

After dividing by the mass Δm = ρ A1 v1 Δt = ρ A2 v2 Δt the result is:

v2²/2 + g z2 + p2/ρ = v1²/2 + g z1 + p1/ρ

or, as stated in the first paragraph:

v²/2 + g z + p/ρ = C   (Eqn. 1)

which is also Equation (A). Further division by g produces the following equation. Note that each term can be described in the length dimension (such as metres). This is the head equation derived from Bernoulli's principle:

v²/(2 g) + z + p/(ρ g) = C   (Eqn. 2a)

The middle term, z, represents the potential energy of the fluid due to its elevation with respect to a reference plane. Now, z is called the elevation head and given the designation zelevation. A free-falling mass from an elevation z > 0 (in a vacuum) will reach a speed v = √(2 g z) when arriving at elevation z = 0; rearranged as a head, this is v²/(2 g). The term v²/(2 g) is called the velocity head, expressed as a length measurement. It represents the energy of the fluid due to its motion. The hydrostatic pressure p is defined as p = p0 − ρ g z, with p0 some reference pressure. The term p/(ρ g) is also called the pressure head, expressed as a length measurement. It represents the internal energy of the fluid due to the pressure exerted on the container. When we combine the head due to the flow speed and the head due to static pressure with the elevation above a reference plane, we obtain a simple relationship useful for incompressible fluids using the velocity head, elevation head, and pressure head:

v²/(2 g) + zelevation + p/(ρ g) = C   (Eqn. 2b)
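A minimal numerical check of the result just derived, applied to a horizontal pipe constriction so that the elevation terms drop out: the density, areas, inlet speed, and inlet pressure below are made-up example values, and continuity (A1 v1 = A2 v2) supplies the downstream speed.

```python
rho = 1000.0         # kg/m^3, water (illustrative)
A1, A2 = 0.05, 0.02  # pipe cross-sections in m^2 (hypothetical)
v1, p1 = 2.0, 150e3  # inlet speed (m/s) and static pressure (Pa)

v2 = v1 * A1 / A2                       # continuity: A1*v1 = A2*v2
p2 = p1 + 0.5 * rho * (v1**2 - v2**2)   # Bernoulli along a horizontal streamline

total_1 = p1 + 0.5 * rho * v1**2
total_2 = p2 + 0.5 * rho * v2**2
print(f"v2 = {v2:.2f} m/s, p2 = {p2/1e3:.1f} kPa")
print(f"total pressure upstream {total_1/1e3:.1f} kPa == downstream {total_2/1e3:.1f} kPa")
```

The downstream static pressure drops as the speed rises, while the total pressure (static plus dynamic) is the same at both sections, which is exactly what Eqn. 1 asserts.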
If we were to multiply Eqn. 1 by the density of the fluid, we would get an equation with three pressure terms:

ρ v²/2 + ρ g z + p = constant   (Eqn. 3)

We note that the pressure of the system is constant in this form of the Bernoulli equation. If the static pressure of the system (the far right term) increases, and if the pressure due to elevation (the middle term) is constant, then we know that the dynamic pressure (the left term) must have decreased. In other words, if the speed of a fluid decreases and it is not due to an elevation difference, we know it must be due to an increase in the static pressure that is resisting the flow. All three equations are merely simplified versions of an energy balance on a system.

Bernoulli equation for compressible fluids

The derivation for compressible fluids is similar. Again, the derivation depends upon (1) conservation of mass, and (2) conservation of energy. Conservation of mass implies that in the above figure, in the interval of time Δt, the amount of mass passing through the boundary defined by the area A1 is equal to the amount of mass passing outwards through the boundary defined by the area A2:

ρ1 A1 v1 Δt = ρ2 A2 v2 Δt = Δm.

Conservation of energy is applied in a similar manner: it is assumed that the change in energy of the volume of the streamtube bounded by A1 and A2 is due entirely to energy entering or leaving through one or the other of these two boundaries. Clearly, in a more complicated situation such as a fluid flow coupled with radiation, such conditions are not met. Nevertheless, assuming this to be the case and assuming the flow is steady so that the net change in the energy is zero,

ΔE1 − ΔE2 = 0

where ΔE1 and ΔE2 are the energy entering through A1 and leaving through A2, respectively. The energy entering through A1 is the sum of the kinetic energy entering, the energy entering in the form of potential gravitational energy of the fluid, the fluid thermodynamic energy entering, and the energy entering in the form of mechanical p dV work:

ΔE1 = (v1²/2 + Ψ1 + ε1 + p1/ρ1) ρ1 A1 v1 Δt

where Ψ = g z is the gravitational potential per unit mass and ε is the internal energy per unit mass. A similar expression for ΔE2 may easily be constructed. So now setting ΔE1 − ΔE2 = 0:

(v1²/2 + Ψ1 + ε1 + p1/ρ1) ρ1 A1 v1 Δt = (v2²/2 + Ψ2 + ε2 + p2/ρ2) ρ2 A2 v2 Δt

which, using the previously obtained result from conservation of mass (ρ1 A1 v1 Δt = ρ2 A2 v2 Δt = Δm), may be simplified to obtain

v²/2 + Ψ + ε + p/ρ = constant ≡ b

which is the Bernoulli equation for compressible flow.

In modern everyday life there are many observations that can be successfully explained by application of Bernoulli's principle, even though no real fluid is entirely inviscid and a small viscosity often has a large effect on the flow.
- Bernoulli's principle can be used to calculate the lift force on an airfoil if the behaviour of the fluid flow in the vicinity of the foil is known. For example, if the air flowing past the top surface of an aircraft wing is moving faster than the air flowing past the bottom surface, then Bernoulli's principle implies that the pressure on the surfaces of the wing will be lower above than below. This pressure difference results in an upwards lifting force.[nb 1] Whenever the distribution of speed past the top and bottom surfaces of a wing is known, the lift forces can be calculated (to a good approximation) using Bernoulli's equations – established by Bernoulli over a century before the first man-made wings were used for the purpose of flight. Bernoulli's principle does not explain why the air flows faster past the top of the wing and slower past the underside. To understand why, it is helpful to understand circulation, the Kutta condition, and the Kutta–Joukowski theorem. (A rough numerical sketch of this pressure-difference estimate follows the list of applications below.)
- The Dyson Bladeless Fan (or Air Multiplier) is an implementation that takes advantage of the Venturi effect, the Coandă effect and Bernoulli's principle.
- The carburetor used in many reciprocating engines contains a venturi to create a region of low pressure to draw fuel into the carburetor and mix it thoroughly with the incoming air. The low pressure in the throat of a venturi can be explained by Bernoulli's principle; in the narrow throat, the air is moving at its fastest speed and therefore it is at its lowest pressure.
- The Pitot tube and static port on an aircraft are used to determine the airspeed of the aircraft. These two devices are connected to the airspeed indicator, which determines the dynamic pressure of the airflow past the aircraft. Dynamic pressure is the difference between stagnation pressure and static pressure. Bernoulli's principle is used to calibrate the airspeed indicator so that it displays the indicated airspeed appropriate to the dynamic pressure.
- The flow speed of a fluid can be measured using a device such as a Venturi meter or an orifice plate, which can be placed into a pipeline to reduce the diameter of the flow. For a horizontal device, the continuity equation shows that for an incompressible fluid, the reduction in diameter will cause an increase in the fluid flow speed. Bernoulli's principle then shows that there must be a decrease in the pressure in the reduced diameter region. This phenomenon is known as the Venturi effect.
- The maximum possible drain rate for a tank with a hole or tap at the base can be calculated directly from Bernoulli's equation, and is found to be proportional to the square root of the height of the fluid in the tank. This is Torricelli's law, showing that Torricelli's law is compatible with Bernoulli's principle. Viscosity lowers this drain rate; this is reflected in the discharge coefficient, which is a function of the Reynolds number and the shape of the orifice.
- In open-channel hydraulics, a detailed analysis of the Bernoulli theorem and its extension were recently (2009) developed. It was proved that the depth-averaged specific energy reaches a minimum in converging accelerating free-surface flow over weirs and flumes. Further, in general, a channel control with minimum specific energy in curvilinear flow is not isolated from water waves, as is customarily stated in open-channel hydraulics.
- The Bernoulli grip relies on this principle to create a non-contact adhesive force between a surface and the gripper.

Misunderstandings about the generation of lift

Many explanations for the generation of lift (on airfoils, propeller blades, etc.) can be found; some of these explanations can be misleading, and some are false. This has been a source of heated discussion over the years. In particular, there has been debate about whether lift is best explained by Bernoulli's principle or Newton's laws of motion. Modern writings agree that both Bernoulli's principle and Newton's laws are relevant and either can be used to correctly describe lift. Several of these explanations use the Bernoulli principle to connect the flow kinematics to the flow-induced pressures. In cases of incorrect (or partially correct) explanations relying on the Bernoulli principle, the errors generally occur in the assumptions on the flow kinematics and how these are produced. It is not the Bernoulli principle itself that is questioned, because this principle is well established.
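Three of the items above, the wing-surface pressure difference, the Pitot tube, and Torricelli's law, reduce to one- or two-line calculations. The sketch below uses entirely hypothetical input values (surface speeds, measured dynamic pressure, tank height) and, for the wing, the crude simplification of uniform speeds over each surface; it is meant only to show the arithmetic, not a realistic design calculation.

```python
import math

rho_air, g = 1.225, 9.81   # illustrative sea-level air density (kg/m^3) and gravity (m/s^2)

# 1. Wing-surface pressure difference from Bernoulli, assuming (hypothetically) known
#    average speeds over the upper and lower surfaces and a common freestream total pressure.
v_upper, v_lower, wing_area = 66.0, 57.0, 16.0
dp = 0.5 * rho_air * (v_upper**2 - v_lower**2)   # pressure is lower on the faster (upper) side
lift_estimate = dp * wing_area                   # crude: uniform pressure over each surface
print(f"wing: delta-p ≈ {dp:.0f} Pa, lift ≈ {lift_estimate/1e3:.1f} kN")

# 2. Pitot tube: dynamic pressure q = p0 - p, so v = sqrt(2 q / rho)
q_measured = 3000.0                              # Pa, hypothetical pitot-static difference
v_air = math.sqrt(2.0 * q_measured / rho_air)
print(f"pitot: indicated airspeed ≈ {v_air:.1f} m/s")

# 3. Torricelli's law: ideal drain speed from fluid height h is v = sqrt(2 g h)
h = 1.5                                          # metres of fluid above the hole
v_drain = math.sqrt(2.0 * g * h)
print(f"tank: ideal drain speed ≈ {v_drain:.2f} m/s")
```

In a real lift calculation the surface pressure distribution is integrated over the whole airfoil rather than assumed uniform, and the drain speed would be reduced by the discharge coefficient mentioned above.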
Misapplications of Bernoulli's principle in common classroom demonstrations There are several common classroom demonstrations that are sometimes incorrectly explained using Bernoulli's principle. One involves holding a piece of paper horizontally so that it droops downward and then blowing over the top of it. As the demonstrator blows over the paper, the paper rises. It is then asserted that this is because "faster moving air has lower pressure". One problem with this explanation can be seen by blowing along the bottom of the paper - were the deflection due simply to faster moving air one would expect the paper to deflect downward, but the paper deflects upward regardless of whether the faster moving air is on the top or the bottom. Another problem is that when the air leaves the demonstrator's mouth it has the same pressure as the surrounding air; the air does not have lower pressure just because it is moving; in the demonstration, the static pressure of the air leaving the demonstrator's mouth is equal to the pressure of the surrounding air. A third problem is that it is false to make a connection between the flow on the two sides of the paper using Bernoulli’s equation since the air above and below are different flow fields and Bernoulli's principle only applies within a flow field. As the wording of the principle can change its implications, stating the principle correctly is important. What Bernoulli's principle actually says is that within a flow of constant energy, when fluid flows through a region of lower pressure it speeds up and vice versa. Thus, Bernoulli's principle concerns itself with changes in speed and changes in pressure within a flow field. It cannot be used to compare different flow fields. A correct explanation of why the paper rises would observe that the plume follows the curve of the paper and that a curved streamline will develop a pressure gradient perpendicular to the direction of flow, with the lower pressure on the inside of the curve. Bernoulli's principle predicts that the decrease in pressure is associated with an increase in speed, i.e. that as the air passes over the paper it speeds up and moves faster than it was moving when it left the demonstrator's mouth. But this is not apparent from the demonstration. Other common classroom demonstrations, such as blowing between two suspended spheres, or suspending a ball in an airstream are sometimes explained in a similarly misleading manner by saying "faster moving air has lower pressure". See also - Terminology in fluid dynamics - Navier–Stokes equations – for the flow of a viscous fluid - Euler equations – for the flow of an inviscid fluid - Hydraulics – applied fluid mechanics for liquids - Venturi effect - Inviscid flow - Clancy, L.J., Aerodynamics, Chapter 3. - Batchelor, G.K. (1967), Section 3.5, pp. 156–64. - "Hydrodynamica". Britannica Online Encyclopedia. Retrieved 2008-10-30. - Streeter, V.L., Fluid Mechanics, Example 3.5, McGraw–Hill Inc. (1966), New York. - "If the particle is in a region of varying pressure (a non-vanishing pressure gradient in the x-direction) and if the particle has a finite size l, then the front of the particle will be ‘seeing’ a different pressure from the rear. More precisely, if the pressure drops in the x-direction (dp/dx < 0) the pressure at the rear is higher than at the front and the particle experiences a (positive) net force. According to Newton’s second law, this force causes an acceleration and the particle’s velocity increases as it moves along the streamline... 
Bernoulli’s equation describes this mathematically (see the complete derivation in the appendix)."Babinsky, Holger (November 2003), "How do wings work?", Physics Education - "Acceleration of air is caused by pressure gradients. Air is accelerated in direction of the velocity if the pressure goes down. Thus the decrease of pressure is the cause of a higher velocity." Weltner, Klaus; Ingelman-Sundberg, Martin, Misinterpretations of Bernoulli's Law - " The idea is that as the parcel moves along, following a streamline, as it moves into an area of higher pressure there will be higher pressure ahead (higher than the pressure behind) and this will exert a force on the parcel, slowing it down. Conversely if the parcel is moving into a region of lower pressure, there will be an higher pressure behind it (higher than the pressure ahead), speeding it up. As always, any unbalanced force will cause a change in momentum (and velocity), as required by Newton’s laws of motion." See How It Flies John S. Denker http://www.av8n.com/how/htm/airfoils.html - Batchelor, G.K. (1967), §5.1, p. 265. - Mulley, Raymond (2004). Flow of Industrial Fluids: Theory and Equations. CRC Press. ISBN 0-8493-2767-9., 410 pages. See pp. 43–44. - Chanson, Hubert (2004). Hydraulics of Open Channel Flow: An Introduction. Butterworth-Heinemann. ISBN 0-7506-5978-5., 650 pages. See p. 22. - Oertel, Herbert; Prandtl, Ludwig; Böhle, M.; Mayes, Katherine (2004). Prandtl's Essentials of Fluid Mechanics. Springer. pp. 70–71. ISBN 0-387-40437-6. - "Bernoulli's Equation". NASA Glenn Research Center. Retrieved 2009-03-04. - Clancy, L.J., Aerodynamics, Section 3.5. - Clancy, L.J. Aerodynamics, Equation 3.12 - Batchelor, G.K. (1967), p. 383 - White, Frank M. Fluid Mechanics, 6e. McGraw-Hill International Edition. p. 602. - Clarke C. and Carswell B., Astrophysical Fluid Dynamics - Clancy, L.J., Aerodynamics, Section 3.11 - Landau & Lifshitz (1987, §5) - Van Wylen, G.J., and Sonntag, R.E., (1965), Fundamentals of Classical Thermodynamics, Section 5.9, John Wiley and Sons Inc., New York - Feynman, R.P.; Leighton, R.B.; Sands, M. (1963). The Feynman Lectures on Physics. ISBN 0-201-02116-1., Vol. 2, §40–3, pp. 40–6 – 40–9. - Tipler, Paul (1991). Physics for Scientists and Engineers: Mechanics (3rd extended ed.). W. H. Freeman. ISBN 0-87901-432-6., p. 138. - Feynman, R.P.; Leighton, R.B.; Sands, M. (1963). The Feynman Lectures on Physics. ISBN 0-201-02116-1., Vol. 1, §14–3, p. 14–4. - Physics Today, May 1010, "The Nearly Perfect Fermi Gas", by John E. Thomas, p 34. - Resnick, R. and Halliday, D. (1960), Physics, Section 18–5, John Wiley & Sons, Inc., New York ("[streamlines] are closer together above the wing than they are below so that Bernoulli's principle predicts the observed upward dynamic lift.") - Eastlake, Charles N. (March 2002). "An Aerodynamicist’s View of Lift, Bernoulli, and Newton". The Physics Teacher 40. "The resultant force is determined by integrating the surface-pressure distribution over the surface area of the airfoil." - Hua, M., Khaitan, D. and Kintner, P. (2011). University of Rochester, NY. Studying Near-Surface Effects of the Dyson Air-Multiplier Airfoil (2.7MB file) Retrieved 2012-07-19 - Clancy, L.J., Aerodynamics, Section 3.8 - Mechanical Engineering Reference Manual Ninth Edition - Castro-Orgaz, O. & Chanson, H. (2009). "Bernoulli Theorem, Minimum Specific Energy and Water Wave Celerity in Open Channel Flow". Journal of Irrigation and Drainage Engineering, ASCE, 135 (6): 773–778. doi:10.1061/(ASCE)IR.1943-4774.0000084. 
- Chanson, H. (2009). "Transcritical Flow due to Channel Contraction". Journal of Hydraulic Engineering, ASCE 135 (12): 1113–1114. - Chanson, H. (2006). "Minimum Specific Energy and Critical Flow Conditions in Open Channels". Journal of Irrigation and Drainage Engineering, ASCE 132 (5): 498–502. doi:10.1061/(ASCE)0733-9437(2006)132:5(498). - Glenn Research Center (2006-03-15). "Incorrect Lift Theory". NASA. Retrieved 2010-08-12. - Chanson, H. (2009). Applied Hydrodynamics: An Introduction to Ideal and Real Fluid Flows. CRC Press, Taylor & Francis Group, Leiden, The Netherlands, 478 pages. ISBN 978-0-415-49271-3. - "Newton vs Bernoulli". - Ison, David. Bernoulli Or Newton: Who's Right About Lift? Retrieved on 2009-11-26 - Phillips, O.M. (1977). The dynamics of the upper ocean (2nd ed.). Cambridge University Press. ISBN 0-521-29801-6. Section 2.4. - Batchelor, G.K. (1967). Sections 3.5 and 5.1 - Lamb, H. (1994) §17–§29 - Weltner, Klaus; Ingelman-Sundberg, Martin. "Physics of Flight – reviewed". "The conventional explanation of aerodynamical lift based on Bernoulli’s law and velocity differences mixes up cause and effect. The faster flow at the upper side of the wing is the consequence of low pressure and not its cause." - "Bernoulli's law and experiments attributed to it are fascinating. Unfortunately some of these experiments are explained erroneously..." Misinterpretations of Bernoulli's Law Weltner, Klaus and Ingelman-Sundberg, Martin Department of Physics, University Frankfurt http://www-stud.rbi.informatik.uni-frankfurt.de/~plass/MIS/mis6.html - "This occurs because of Bernoulli’s principle — fast-moving air has lower pressure than non-moving air." Make Magazine http://makeprojects.com/Project/Origami-Flying-Disk/327/1 - " Faster-moving fluid, lower pressure. ... When the demonstrator holds the paper in front of his mouth and blows across the top, he is creating an area of faster-moving air." University of Minnesota School of Physics and Astronomy http://www.physics.umn.edu/outreach/pforce/circus/Bernoulli.html - "Bernoulli's Principle states that faster moving air has lower pressure... You can demonstrate Bernoulli's Principle by blowing over a piece of paper held horizontally across your lips." http://www.tallshipschannelislands.com/PDFs/Educational_Packet.pdf - "If the lift in figure A were caused by "Bernoulli principle," then the paper in figure B should droop further when air is blown beneath it. However, as shown, it raises when the upward pressure gradient in downward-curving flow adds to atmospheric pressure at the paper lower surface." Gale M. Craig PHYSICAL PRINCIPLES OF WINGED FLIGHT http://www.regenpress.com/aerodynamics.pdf - "In fact, the pressure in the air blown out of the lungs is equal to that of the surrounding air..." Babinsky http://iopscience.iop.org/0031-9120/38/6/001/pdf/pe3_6_001.pdf - "...air does not have a reduced lateral pressure (or static pressure...) simply because it is caused to move, the static pressure of free air does not decrease as the speed of the air increases, it misunderstanding Bernoulli's principle to suggest that this is what it tells us, and the behavior of the curved paper is explained by other reasoning than Bernoulli's principle." Peter Eastwell Bernoulli? Perhaps, but What About Viscosity? The Science Education Review, 6(1) 2007 http://www.scienceeducationreview.com/open_access/eastwell-bernoulli.pdf - "Make a strip of writing paper about 5 cm X 25 cm. 
Hold it in front of your lips so that it hangs out and down making a convex upward surface. When you blow across the top of the paper, it rises. Many books attribute this to the lowering of the air pressure on top solely to the Bernoulli effect. Now use your fingers to form the paper into a curve that it is slightly concave upward along its whole length and again blow along the top of this strip. The paper now bends downward...an often-cited experiment, which is usually taken as demonstrating the common explanation of lift, does not do so..." Jef Raskin Coanda Effect: Understanding Why Wings Work http://karmak.org/archive/2003/02/coanda_effect.html - "Blowing over a piece of paper does not demonstrate Bernoulli’s equation. While it is true that a curved paper lifts when flow is applied on one side, this is not because air is moving at different speeds on the two sides... It is false to make a connection between the flow on the two sides of the paper using Bernoulli’s equation." Holger Babinsky How Do Wings Work Physics Education 38(6) http://iopscience.iop.org/0031-9120/38/6/001/pdf/pe3_6_001.pdf - "An explanation based on Bernoulli’s principle is not applicable to this situation, because this principle has nothing to say about the interaction of air masses having different speeds... Also, while Bernoulli’s principle allows us to compare fluid speeds and pressures along a single streamline and... along two different streamlines that originate under identical fluid conditions, using Bernoulli’s principle to compare the air above and below the curved paper in Figure 1 is nonsensical; in this case, there aren’t any streamlines at all below the paper!" Peter Eastwell Bernoulli? Perhaps, but What About Viscosity? The Science Education Review 6(1) 2007 http://www.scienceeducationreview.com/open_access/eastwell-bernoulli.pdf - "The well-known demonstration of the phenomenon of lift by means of lifting a page cantilevered in one’s hand by blowing horizontally along it is probably more a demonstration of the forces inherent in the Coanda effect than a demonstration of Bernoulli’s law; for, here, an air jet issues from the mouth and attaches to a curved (and, in this case pliable) surface. The upper edge is a complicated vortex-laden mixing layer and the distant flow is quiescent, so that Bernoulli’s law is hardly applicable." David Auerbach Why Aircreft Fly European Journal of Physics Vol 21 p 289 http://iopscience.iop.org/0143-0807/21/4/302/pdf/0143-0807_21_4_302.pdf - "Millions of children in science classes are being asked to blow over curved pieces of paper and observe that the paper "lifts"... They are then asked to believe that Bernoulli's theorem is responsible... Unfortunately, the "dynamic lift" involved...is not properly explained by Bernoulli's theorem." Norman F. Smith "Bernoulli and Newton in Fluid Mechanics" The Physics Teacher Nov 1972 - "Bernoulli’s principle is very easy to understand provided the principle is correctly stated. However, we must be careful, because seemingly-small changes in the wording can lead to completely wrong conclusions." See How It Flies John S. 
Denker http://www.av8n.com/how/htm/airfoils.html#sec-bernoulli - "A complete statement of Bernoulli's Theorem is as follows: "In a flow where no energy is being added or taken away, the sum of its various energies is a constant: consequently where the velocity increasees the pressure decreases and vice versa."" Norman F Smith Bernoulli, Newton and Dynamic Lift Part I School Science and Mathematics Vol 73 Issue 3 http://onlinelibrary.wiley.com/doi/10.1111/j.1949-8594.1973.tb08998.x/pdf - "...if a streamline is curved, there must be a pressure gradient across the streamline, with the pressure increasing in the direction away from the centre of curvature." Babinsky http://iopscience.iop.org/0031-9120/38/6/001/pdf/pe3_6_001.pdf - "The curved paper turns the stream of air downward, and this action produces the lift reaction that lifts the paper." Norman F. Smith Bernoulli, Newton, and Dynamic Lift Part II School Science and Mathematics vol 73 Issue 4 pg 333 http://onlinelibrary.wiley.com/doi/10.1111/j.1949-8594.1973.tb09040.x/pdf - "The curved surface of the tongue creates unequal air pressure and a lifting action. ... Lift is caused by air moving over a curved surface." AERONAUTICS An Educator’s Guide with Activities in Science, Mathematics, and Technology Education by NASA pg 26 http://www.nasa.gov/pdf/58152main_Aeronautics.Educator.pdf - "Viscosity causes the breath to follow the curved surface, Newton's first law says there a force on the air and Newton’s third law says there is an equal and opposite force on the paper. Momentum transfer lifts the strip. The reduction in pressure acting on the top surface of the piece of paper causes the paper to rise." The Newtonian Description of Lift of a Wing-Revised David F. Anderson & Scott Eberhardt http://home.comcast.net/~clipper-108/Lift_AAPT.pdf - '"Demonstrations" of Bernoulli's principle are often given as demonstrations of the physics of lift. They are truly demonstrations of lift, but certainly not of Bernoulli's principle.' David F Anderson & Scott Eberhardt Understanding Flight pg 229 http://books.google.com/books?id=52Hfn7uEGSoC&pg=PA229 - "As an example, take the misleading experiment most often used to "demonstrate" Bernoulli's principle. Hold a piece of piece of paper so that it curves over your finger, then blow across the top. The paper will rise. However most people do not realize that the paper would not rise if it were flat, even though you are blowing air across the top of it at a furious rate. Bernoulli's principle does not apply directly in this case. This is because the air on the two sides of the paper did not start out from the same source. The air on the bottom is ambient air from the room, but the air on the top came from your mouth where you actually increased its speed without decreasing its pressure by forcing it out of your mouth. As a result the air on both sides of the flat paper actually has the same pressure, even though the air on the top is moving faster. The reason that a curved piece of paper does rise is that the air from your mouth speeds up even more as it follows the curve of the paper, which in turn lowers the pressure according to Bernoulli." From The Aeronautics File By Max Feil http://webcache.googleusercontent.com/search?q=cache:nutfrrTXLkMJ:www.mat.uc.pt/~pedro/ncientificos/artigos/aeronauticsfile1.ps+&cd=29&hl=en&ct=clnk&gl=us - "Some people blow over a sheet of paper to demonstrate that the accelerated air over the sheet results in a lower pressure. They are wrong with their explanation. 
The sheet of paper goes up because it deflects the air, by the Coanda effect, and that deflection is the cause of the force lifting the sheet. To prove they are wrong I use the following experiment: If the sheet of paper is pre bend the other way by first rolling it, and if you blow over it than, it goes down. This is because the air is deflected the other way. Airspeed is still higher above the sheet, so that is not causing the lower pressure." Pim Geurts. sailtheory.com http://www.sailtheory.com/experiments.html - "Finally, let’s go back to the initial example of a ball levitating in a jet of air. The naive explanation for the stability of the ball in the air stream, 'because pressure in the jet is lower than pressure in the surrounding atmosphere,' is clearly incorrect. The static pressure in the free air jet is the same as the pressure in the surrounding atmosphere..." Martin Kamela Thinking About Bernoulli The Physics Teacher Vol. 45, September 2007 http://tpt.aapt.org/resource/1/phteah/v45/i6/p379_s1 - "Aysmmetrical flow (not Bernoulli's theorem) also explains lift on the ping-pong ball or beach ball that floats so mysteriously in the tilted vacuum cleaner exhaust..." Norman F. Smith, Bernoulli and Newton in Fluid Mechanics" The Physics Teacher Nov 1972 p 455 - "Bernoulli’s theorem is often obscured by demonstrations involving non-Bernoulli forces. For example, a ball may be supported on an upward jet of air or water, because any fluid (the air and water) has viscosity, which retards the slippage of one part of the fluid moving past another part of the fluid." The Bernoulli Conundrum Robert P. Bauman Professor of Physics Emeritus University of Alabama at Birmingham http://www.introphysics.info/Papers/BernoulliConundrumWS.pdf - "In a demonstration sometimes wrongly described as showing lift due to pressure reduction in moving air or pressure reduction due to flow path restriction, a ball or balloon is suspended by a jet of air." Gale M. Craig PHYSICAL PRINCIPLES OF WINGED FLIGHT http://www.regenpress.com/aerodynamics.pdf - "A second example is the confinement of a ping-pong ball in the vertical exhaust from a hair dryer. We are told that this is a demonstration of Bernoulli's principle. But, we now know that the exhaust does not have a lower value of ps. Again, it is momentum transfer that keeps the ball in the airflow. When the ball gets near the edge of the exhaust there is an asymmetric flow around the ball, which pushes it away from the edge of the flow. The same is true when one blows between two ping-pong balls hanging on strings." Anderson & Eberhardt The Newtonian Description of Lift on a Wing http://lss.fnal.gov/archive/2001/pub/Pub-01-036-E.pdf - "This demonstration is often incorrectly explained using the Bernoulli principle. According to the INCORRECT explanation, the air flow is faster in the region between the sheets, thus creating a lower pressure compared with the quiet air on the outside of the sheets. UNIVERSITY OF MARYLAND PHYSICS LECTURE-DEMONSTRATION FACILITY http://www.physics.umd.edu/lecdem/services/demos/demosf5/f5-03.htm - "Although the Bernoulli effect is often used to explain this demonstration, and one manufacturer sells the material for this demonstration as "Bernoulli bags," it cannot be explained by the Bernoulli effect, but rather by the process of entrainment." 
UNIVERSITY OF MARYLAND PHYSICS LECTURE-DEMONSTRATION FACILITY http://www.physics.umd.edu/lecdem/outreach/QOTW/arch13/a256.htm - Clancy, L.J., Aerodynamics, Section 5.5 ("When a stream of air flows past an airfoil, there are local changes in flow speed round the airfoil, and consequently changes in static pressure, in accordance with Bernoulli's Theorem. The distribution of pressure determines the lift, pitching moment and form drag of the airfoil, and the position of its centre of pressure.") Further reading - Batchelor, G.K. (1967). An Introduction to Fluid Dynamics. Cambridge University Press. ISBN 0-521-66396-2. - Clancy, L.J. (1975). Aerodynamics. Pitman Publishing, London. ISBN 0-273-01120-0. - Lamb, H. (1993). Hydrodynamics (6th ed.). Cambridge University Press. ISBN 978-0-521-45868-9. Originally published in 1879; the 6th extended edition appeared first in 1932. - Landau, L.D.; Lifshitz, E.M. (1987). Fluid Mechanics. Course of Theoretical Physics (2nd ed.). Pergamon Press. ISBN 0-7506-2767-0. - Chanson, H. (2009). Applied Hydrodynamics: An Introduction to Ideal and Real Fluid Flows. CRC Press, Taylor & Francis Group. ISBN 978-0-415-49271-3. |Wikimedia Commons has media related to: Bernoulli's principle| - Head and Energy of Fluid Flow - Interactive animation demonstrating Bernoulli's principle - Denver University – Bernoulli's equation and pressure measurement - Millersville University – Applications of Euler's equation - NASA – Beginner's guide to aerodynamics - Misinterpretations of Bernoulli's equation – Weltner and Ingelman-Sundberg
http://en.wikipedia.org/wiki/Bernoulli's_principle
This file is also available in Adobe Acrobat PDF format HISTORY AND NATURAL HISTORY The scientific name of the African wild dog (Lycaon pictus) means painted wolf, a reference to their patchwork coats of brown, black, and white, which Angier (1996) aptly called "a furred version of combat fatigues." Their shape follows the general canid body plan, with modifications accumulated over 3 million years of divergence from the rest of the dog family. For example, wild dogs have only four toes, having lost the fifth toe that persists as a vestigial dewclaw in most canids. Compared to wolves or coyotes, they are lean and tall, with outsized ears that complement their quiet vocalizations. Altogether, the wild dog is a unique and beautiful animal (Figure 1.1). Wild dogs stand 65 to 75 cm at the shoulder, and weigh from 18 to 28 kilograms (Smithers 1983). Though they have been described as sexually monomorphic (Malcolm 1979; Girman et al. 1993), males are from 3-7 percent larger than females in linear measures of body size (Table 2.3). The original suggestion that wild dogs are monomorphic was probably based on measurements of body mass, which is extremely variable, because a hungry wild dog can consume 8-9 kg of meat (about 1/3 of its own weight). Wild dogs have sparse hair, though there is variation among individuals. Part of this variation is related to age--yearlings have more hair than adult dogs, and old dogs can become almost hairless. Hair is particularly lost on the head, which begins to appear gray as the skin shows through. Captive wild dogs in cold climates also tend to have more hair. The color patterns of wild dogs are extraordinarily variable, and they appear to recognize one another individually at distances of 50 to 100 meters, suggesting that they make use of the information that coat variation provides. For example, when two packs encounter one another, dogs chase members of the other pack. The scene rapidly becomes chaotic, but we never saw dogs pursuing members of their own pack. Chases are often initiated from distances of 50 to 100 meters, so it seems likely that individuals are recognized by sight, though olfaction may also be involved. Most of the variation in color is on the trunk and legs. Patterns on the face are relatively invariant, with a black muzzle shading to brown on the cheeks and forehead, a black line extending up the forehead, and blackish-brown on the backs of the ears. Some dogs have a brown teardrop on the muzzle below the eyes. There is never white on the head, and the posterior part of the head and the dorsal surface of the neck are consistently brown or yellow. Colors on the body and legs are unpredictable. There is often a white patch just behind the forelegs, and dogs with little or no white elsewhere may have white on their forelegs or on the ventral surface of their neck or chest. The tail is almost always tricolored, with brown at the root, a black band, and a white tip. Some dogs have two black tail-bands, or black dots, or a black tip below the white, and a few have no white at all. Coat patterns are not bilaterally symmetrical. The asymmetry is great enough that photographs of a dog's right side cannot be matched to photographs of its left side without additional information. Wild dogs have only four toes on their forelegs, while members of the genus Canis have a vestigial fifth toe. The pads of the middle two toes are usually fused at the posterior edge, although in Selous we observed several individuals with unfused toes. 
The dental formula is 3 1 4 2 (upper), 3 1 4 3 (lower). The last lower molar is vestigial. The canine teeth are narrow for their length, in comparison to other carnivores (Van Valkenburgh 1989). In a set of 23 canids, felids, and hyenids, wild dogs had the largest premolars (relative to body mass) of all carnivores other than hyenas (Van Valkenburgh 1989). This suggests that wild dogs eat bone regularly, although they have a reputation for eating meat almost exclusively. In Selous, wild dogs often eat leg bones, ribs, vertebrae, and skulls. The droppings of wild dogs sometimes turn white with age due to a high proportion of digested bone, similar to the droppings of spotted hyenas.

1.1 Taxonomy and Phylogeny

Fossil evidence does not resolve the origin of African wild dogs. Undisputed Lycaon fossils come from the mid-Pleistocene (about 1 million years ago), and are very similar to modern wild dogs (Savage 1978). There is some debate over the geographic range for fossils of Lycaon. Kurten (1968) suggests that skull fragments from the genus are found in late Pleistocene sites in Europe, but Thenius (1972) and Malcolm (1979) believe that these fragments came from wolves (Canis). If so, Lycaon may always have been restricted to Africa. Within Africa, identification of the oldest Lycaon is complicated by the difficulty of distinguishing Lycaon fossils from those of an early Pleistocene wolf, Canis africanus. The current view of fossil evidence is that wild dogs arose 2-3 million years ago, in Africa (Savage & Russell 1983).

The first taxonomic description of a wild dog was by Temminck (1820), who considered it to be a type of hyena (and named it Hyena picta). Matthew (1930) placed wild dogs in a subfamily of the Canidae, the Simocyoninae, together with the dhole (Cuon alpinus) and the bush dog (Speothos venaticus). This group was proposed on the basis of the shape of the lower carnassial molar, which in these three species has a short blade and no basined cusp (Van Valkenburgh 1989). Lycaon, Cuon, and Speothos are not particularly similar in other respects. Bush dogs look nothing like dholes and wild dogs. Wild dogs and dholes are similar in morphology, behavior, and ecology (Johnsingh 1982; Venkataraman 1995), but Thenius (1954) described an Asian fossil lineage that leads from a jackal of the early Pleistocene to the dhole. Today, similar carnassial molars within the Simocyoninae are considered analogous rather than homologous, and the subfamily is no longer recognized (Wozencraft 1989).

The wild dog has the same number of chromosomes as the domestic dog (Canis familiaris) and similar neuroanatomy (Radinsky 1973). The myoglobins of wild and domestic dogs differ by one amino acid, compatible with a single-point mutation (Romero-Herrera et al. 1976). Girman et al. (1993) sequenced 736 base-pairs of the cytochrome b gene in wild dogs and other canids. These sequence data suggest that wild dogs are phylogenetically distinct from the other wolflike canids (wolves, jackals, and coyotes), justifying their current placement in a monotypic genus. Wild dogs showed 11.3-13.7% sequence divergence from the other species, and the single most parsimonious phylogenetic tree placed the divergence of the wild dog just basal to the radiation of the Canis clade. Girman et al. (1993) also noted 1% sequence divergence within the species, and proposed that two geographically isolated subspecies occupy eastern and southern Africa.
This suggestion was based on samples from three widely separated locations (Kruger, Hwange, and Serengeti National Parks, respectively located in South Africa, Zimbabwe, and Tanzania). With samples from more locations, and with the addition of data on nuclear microsatellite genotypes and mtDNA control region sequences, the picture has changed (Girman et al. 1997). There are no geographically distinct subspecies, though there is substantial genetic variation among populations. Parsimony analysis of mtDNA control region haplotypes suggests that there are two clades of wild dogs, but the clades are geographically mingled. Unique mtDNA haplotypes are found at the northern and southern extremes of the sampled range (Serengeti and Kruger), but the genetic affinities of intervening populations are not clearly related to geography. In Selous, for example, the predominant mtDNA haplotype is most similar to a haplotype found only in Kruger, but not in the intervening populations of Zimbabwe and Botswana (Figure 1.2). Nuclear microsatellites also reveal gene flow among populations, but the patterns from nuclear and mitochondrial DNA do not match. For example, dogs from the Selous and Serengeti ecosystems share microsatellite alleles that are not found elsewhere, but mtDNA places these populations in different clades (Girman & Wayne 1997). The data, though extensive, leave open some questions about genetic divergence among wild dog populations. In general, continentwide genetic patterns are consistent with a history of radiations north and south from the miombo woodland belt (extending from the latitude of southern Tanzania in the north, to the latitude of northern Zimbabwe and northern Botswana in the south).

1.2 Social Organization

Wild dogs live in permanent packs of 2 to 27 adults and yearlings, though packs of 5 to 15 adults and yearlings are most common. Excluding yearlings, packs held 6.6 ± 0.8 adults, taking the average for six populations (Table 3.9). Mean pack size varies from 4-5 adults in Kruger N.P. and Masai Mara N.R. to 8-9 adults in Moremi and Selous (Reich 1981; Fuller et al. 1992a; McNutt 1996; Mills & Gorman 1997). Within a pack, there is a clear dominance hierarchy among males, and another among females. The dominant female is usually the oldest in the pack, but old males often lose their rank to prime-aged males, so many packs include one or more old, formerly dominant males (Chapter 7). Only the dominant female is assured of breeding, though subordinate females do occasionally become pregnant. Reproduction is also largely monopolized by the alpha male, but the pups of a single litter can have more than one father, as in most carnivores (Girman et al. 1997; Chapter 8).

The simplest pack structure is a set of related males and a set of related females, with no genetic relationship (or a distant relationship) between the males and females (Frame et al. 1979; Malcolm & Marten 1982; Girman et al. 1997). This structure becomes more complicated if offspring born within the pack are recruited. Individuals of either sex may stay in their natal pack well beyond the age of maturity. When this occurs, some individuals are related to pack members of the opposite sex. Pack structure can also be complicated by immigration. Generally, successful immigrants evict same-sexed residents. These pack takeovers replace the lineage of one sex, but do not alter the basic structure of unrelated male and female lineages.
Occasionally, unrelated immigrants join a pack without evicting all of the same-sexed residents, and this dilutes relatedness within that sex. In Selous, unfamiliar and apparently unrelated individuals of both sexes have immigrated successfully without evicting residents. In short, variations in the patterns of immigration, emigration, and breeder turnover may produce a complex web of genetic relatedness within packs. The coefficient of relatedness between packmates averages 0.25-0.35, but for a specific pair of individuals can range from 0 to 0.5 or above in the case of mild inbreeding (Frame et al. 1979; Reich 1981; Girman & Wayne 1997; Chapters 8, 10). Short-distance dispersal can also create genetic ties between neighboring packs (McNutt 1996; Girman et al. 1997). Females are more likely than males to disperse in some populations, including Selous (Frame & Frame 1976; Chapter 8). In other populations, dispersal is not sex-biased, or is male-biased (Reich 1981; McNutt 1996). Emigrants of both sexes are likely to disperse as yearlings or two-year-olds, and usually disperse as a single-sex group of littermates, or as a group composed of two cohorts born one year apart (McNutt 1996; Chapter 8). Many populations, including Selous, have an adult sex ratio biased in favor of males. For populations in which dispersal is female biased, the biased adult sex ratio may result from mortality during dispersal. In captivity, however, the sex ratio of a large sample of pups was also male-biased (Malcolm 1979), and pup sex ratios are male-biased in some wild populations (Fuller et al. 1992a). The sex ratio of pups is 1:1 for some populations, including Selous and Kruger (Maddock & Mills 1994; Chapter 7). Differences among populations in pup sex ratios might be related to rates of alpha female turnover, because primiparous females produce a high proportion of sons, while multiparous females produce a high proportion of daughters (Creel et al. 1998). No unaided pair of wild dogs has been observed to raise pups, and in Selous no pack smaller than five adults raised pups to independence (Chapters 7, 10). Subordinates of both sexes help to raise the pack's young, which, as we mentioned, are normally produced by the dominant pair. The most important help comes in the form of food. For the first three months after they are born, pups cannot move quickly enough to follow a hunting pack, and are confined to a den. Most of the pack leaves to hunt twice a day, but one or more dogs remain behind as guards (Malcolm & Marten 1982). The alpha female normally guards the pups by herself, but in some cases another dog (usually a female) will remain with her. When the hunters return, both the pups and the mother solicit food, and dogs of both sexes (and all ages) respond by regurgitating meat (Malcolm & Marten 1982). Less often, dogs will carry a portion of a carcass to the den, usually a leg, to gnaw on. In addition to feeding pups, nonbreeding helpers take part in protecting the pups, from lions, leopards, and spotted hyenas. When the pups begin moving with the pack at about three months of age, they are often bivouacked during a hunt--left behind and later recovered (as with wolves; Mech 1970). One or more dogs of either sex may remain with the pups under these circumstances. If no adults remain with the bivouacked pups, dogs of either sex may go back to retrieve them and lead them to the kill. 
Pups are allowed to eat first at carcasses (though adults sometimes eat hastily until the pups arrive), followed by yearlings and then adults (Malcolm 1979; unpublished observations). Wild dogs rely almost exclusively on mammalian prey that they have killed for themselves. They hunt prey as small as hares (1-2 kg), and as large as adult zebra or juvenile buffalo and eland (about 200 kg), but concentrate on prey between 10 and 120 kg, with larger packs taking larger prey (Chapter 4). Impala and wildebeest are an important part of their diet in most ecosystems. The remainder of the diet is made up of species smaller than wildebeest that are locally abundant, such as greater kudu, warthogs, and duikers. Wild dogs rarely scavenge, probably to avoid risky encounters with larger carnivores (Kruuk & Turner 1967; Creel & Creel 1996). Where the density of spotted hyenas is high or visibility is good, kleptoparasitism by hyenas at wild dog kills is common (Estes & Goddard 1967; Malcolm 1979; Fanshawe & Fitzgibbon 1993). Predation on wild dogs by lions has been seen in most populations, and lion predation is the most common known cause of death in some populations (Ginsberg et al. 1995; Mills & Gorman 1997). Altogether, interference competition with larger carnivores is an important force shaping the behavior, number, and distribution of wild dogs (Creel & Creel 1996; Mills & Gorman 1997). 1.4 Conservation Issues CONflICT WITH HUMANS Like most large carnivores, the single most important conservation problem for wild dogs is conflict with an expanding human population. Wild dogs formerly had a wide distribution across sub-Saharan Africa, excepting only rainforest (Smithers 1983). Like many species, wild dogs have become patchily distributed as the human population has expanded (Figure 1.2). Wild dogs now live mainly in protected areas, and few areas are known to hold more than a hundred individuals (Fanshawe et al. 1991). As a landscape is settled and moves into agricultural use, prey populations are depleted so that carnivore populations cannot maintain themselves. If carnivores persist, they are often killed to remove threats to livestock and people. It is occasionally suggested that wild dogs kill people (Leakey 1983), but we know of no documented cases. Wild dogs are wary of people unless they have been habituated to tourism, and, in our experience, villagers near protected areas do not fear them. Wild dogs will kill unattended sheep and goats (Rasmussen 1996), but do not attack livestock that are attended by a shepherd. Wild dogs in our study area often moved out of the reserve through areas with scattered rice farms and small dirt tracks. In these areas, they skirted around people, and we never saw a direct interaction other than the dogs running from a person who had approached them without being detected. Like some other carnivores, notably spotted hyenas, wild dogs were actively persecuted by wildlife managers for much of the 20th century. In general, wildlife managers shot them whenever possible. In Zimbabwe, 3,404 wild dogs were shot for "vermin control" between 1956 and 1975 (Childes 1988). In Namibia, 156 wild dogs were killed over 19 months in 1965-1966 (Anonymous 1967). Most game scouts in Selous recall shooting wild dogs up to the mid 1980s, and it is likely that hundreds were shot, though there are no accurate records. In 1977, the South African Red Data Book stated "[wild dogs are] still considered vermin and are shot on sight even on nature reserves. . . 
[They are] likely to get little sympathy from farmers" (Skinner et al. 1977, p. 11). Dislike of wild dogs can easily be seen in writings from the 1900s through the 1970s. Some examples: It will be an excellent day for African game and its preservation when means can be devised for [wild dogs'] complete extermination.--Maugham (1914) Although wild dogs, when present in large numbers, are a scourge to the game, killing, terrifying, and scattering it all over the country, they still find a useful place in Nature's economy, and the Kruger National Park would certainly be the better for a considerably larger number than exists.--Stevenson-Hamilton (1947) The wild dog is the only animal of the veldt that is always feared. The lion is not. Many a hunter has watched a full-fed lion walk in plain sight of a herd of antelopes.--Hubbard (1954) Wild dogs hunt in packs, killing wantonly far more than they need for food, and by methods of the utmost cruelty.--Bere (1956) In a later annotation, Bere noted, "This is now known to be nonsense." The rapacious appetite of these foul creatures is staggering.--Hunter (1960) Though some of these authors had a grudging respect for the dogs (Stev-enson-Hamilton's 1947 book is a good example), there were two broad reasons for their persecution. The major problem was with the dogs' method of killing prey. Because they are small relative to their prey, and do not have a specialized killing bite, wild dogs kill their prey by pulling it to a halt and disemboweling it. Large prey can take a half-hour to die (though most die in minutes), and empathy for the prey led to antipathy toward the predator. A second strike against the dogs was the perception that they disrupted prey populations more than other predators. Because wild dogs are cursorial hunters that rely on an open chase to catch their prey, it is certainly true that a wild dog hunt can set a large number of prey in motion, especially in open habitat. However, it is also true that calm returns quickly to an area in which the dogs have hunted. Wildebeest herds often resume grazing in plain sight of wild dogs feeding on a herdmate. Prey show little fear of wild dogs at rest, just as with other predators. Anyone who watches a pack of wild dogs for a day will undoubtedly see prey herds moving past or feeding nearby, aware of the dogs but unbothered. Zebra and wildebeest sometimes approach resting dogs and harass them. Some early naturalists must have known that relations between wild dogs and prey were much like those of other carnivores. Active persecution decreased as field studies described the wild dogs' ecology and behavior. By the mid 1980s wild dogs were legally protected in the six nations that hold significant numbers (Botswana, Kenya, South Af-rica, Tanzania, Zambia, and Zimbabwe). Road accidents kill wild dogs in areas that are transected or bordered by high-speed roads (Fanshawe et al. 1991; Drews 1995). The rain-rutted dirt tracks in Selous do not allow high-speed driving, and we recorded no road kills. By contrast, Hwange National Park borders a high-speed highway between two large cities, Bulawayo and Victoria Falls, and road kills were the most common known cause of death (Ginsberg et al. 1995). In Mikumi National Park (Tanzania), traffic on the Tanzania-Zambia highway is estimated to kill between 3% and 12% of the wild dog population annually (Drews 1995; Creel & Creel 1998). 
Wire snares set for game species can unintentionally catch carnivores, and this is a surprisingly common cause of death in some places (Hofer et al. 1993). In Selous, snaring and poisoning by illegal game hunters caused 11% of 45 known-cause deaths. Snaring and shooting accounted for 18% of 57 deaths in Kruger (van Heerden et al. 1995), and 29% of 31 deaths in Hwange (Ginsberg et al. 1995). Though its force varies among populations, human impacts on wild dogs are substantial even in large protected areas. LOW DENSITY WITHIN PROTECTED AREAS AND INTERSPECIfiC COMPETITION WITH LARGER CARNIVORES If conflict with humans was the only problem that wild dogs faced, they would not be endangered. Many African nations have set aside large areas for wildlife, and these parks hold a great many lions, leopards, and hyenas. All three of these species pose a greater threat to livestock (and people) than wild dogs do, but they remain abundant and widespread. Although their ecological needs are similar, a fundamental difference between wild dogs and these larger carnivores is that wild dogs remain at low population density under all conditions. It seems likely that competition between wild dogs and larger carnivores explains this pattern (Creel & Creel 1996; Mills & Gorman 1997; Gorman et al. 1998). Frame (1985) described wild dog-hyena interactions in Serengeti: "Hyenas typically assembled behind wild dog packs as they hunted, and we recorded periods of weeks at a time in which hyenas stole almost all kills made by the dogs before the latter finished eating" (p. 3). Mills & Gorman (1997) showed that lions account for 43% of natural wild dog deaths in Kruger. Interference competition with lions and spotted hyenas also has a strong impact on cheetahs (Laurenson 1995; Du-rant 1998; cf. Crooks et al. 1998), and considerable data suggest that in-terspecific competition has strong effects on many carnivore populations (Palomares & Caro 1999; Creel et al. in press). We discuss interspecific competition in Chapter 11. Analyses of carnivores' distributions (within ecosystems and among ecosystems) suggest that interspecific competition limits wild dogs in number and distribution (Creel & Creel 1996; Mills & Gorman 1997). Regardless of the cause, wild dog densities are spectacularly low (Table 1.1). The highest population density on record is from the northern Selous, with an average of 1 adult/26.0 km2 over six years. A more typical density is 1 adult per 60-100 km2. Even at maximal density, an area of 1,000 km2 holds a population of only 40 adults, which is unlikely to be viable in the long run. As a result, small parks will play a small role in wild dog conservation, unless they are actively managed. In the end, conservation of wild dogs comes down to understanding the causes and consequences of their invariably low population densities. The literature on wild dogs often states that they are "particularly sensitive to disease" (Fanshawe et al. 1991, p. 140), or that infectious diseases have played "a main role in the numerical and distributional decline of African wild dogs" (Kat et al. 1995, p. 229). This idea is based almost exclusively on data from the Serengeti ecosystem. There, wild dogs declined to local extinction while experiencing recurrent outbreaks of rabies and possibly canine distemper (Schaller 1972; Malcolm 1979; Gascoyne et al. 1993; Alexander & Appel 1994). 
The data from Serengeti clearly shows that viral diseases can cause substantial mortality in wild dogs, and can contribute to a local extinction. However, the Serengeti population was probably vulnerable to extinction for other reasons. First, the population was small enough (less than 30 dogs) to be vulnerable to a knockout blow, regardless of the cause (Ginsberg et al. 1995). Second, the Serengeti dogs faced intense competition from larger carnivores (Frame & Frame 1981). Finally, Serengeti held a diverse suite of carnivores, many at high densities, that were known to carry rabies virus and/or canine distemper virus (Maas 1993; Alexander & Appel 1994; Alexander et al. 1994, 1995; Roelke-Parker et al. 1996). Under these conditions, it is expected that spillover transmission from high-density species will endanger species living at lower density (Grenfell & Dobson 1995). For these reasons, it might not be justified to generalize the conclusion that wild dogs are especially vulnerable to diseases. Little is known about the regulatory role of diseases in other wild dog populations, but current data suggest that disease is not a major factor for all populations. Several dogs have died of infection with the bacterium Bacillus anthracis, in the Luangwa valley, Kruger N. P. and Selous (Turnbull et al. 1991; Creel et al. 1995; van Heerden et al. 1995). In Kruger and Selous there have not been detectable disease-related population declines over periods of 22 and 6 years, respectively (Reich 1981; van Heerden et al. 1995; Creel et al. 1997c). Combining demography, serology, post-mortems and veterinary examinations, van Heer-den et al. (1995) concluded that "disease could not be incriminated as an important cause of death" (p. 18) in Kruger. In summary, current data are compatible with a wide range of views on the role of infectious disease in wild dog population dynamics. We discuss infectious diseases in Chapter 12. 1.5 Issues Addressed by the Research and Organization of the Book This book moves between results that are relevant to conservation and results that are relevant to behavioral ecology. Most chapters are more closely aligned to one of these fields than the other, but some data are relevant to both fields. For example, we use data on hunting success to address the evolution of cooperative hunting, but also test whether hunting success is a limiting factor for some populations. For this reason, some results appear in different forms in more than one chapter, with discussions aimed at different goals. Chapter 2 gives a description of the Selous Game Reserve, the study site and population, and our general methods. We give narrower descriptions of some specific methods in other chapters for the sake of coherence. In Chapter 3 we discuss habitat selection, determinants of home range size, and overlap of home ranges. These analyses are relevant to conservation, because (all else equal) large home ranges lead to low population density. Wild dogs can have home ranges larger than 1,000 km2, among the largest reported for carnivores (Frame et al. 1979; Mills & Gorman 1997). Understanding why wild dog packs require large ranges is central to understanding why they are endangered. Chapters 4-6 deal with hunting. In Chapter 4 we present basic data from wild dog hunts in Selous and analyze the energetic costs and benefits of hunting in different pack sizes. 
A prominent question in behavioral ecology has been whether cooperative hunting favors life in groups or is simply an unselected consequence of life in groups. Among carnivores, this question has been addressed by studies of lions (Schaller 1972; Packer et al. 1990; Stander 1992), spotted hyenas (Kruuk 1972; Mills 1990; Holekamp et al. 1997), cheetahs (Caro 1994), wolves (Schmidt & Mech 1997), and wild dogs (Estes & Goddard 1967; Fanshawe & Fitzgibbon 1993; Fuller et al. 1995; Creel & Creel 1995b; Creel 2001). The question has also been studied in other taxa, notably chimpanzees (Boesch 1994), Harris's hawk (Bednarz 1988), and killer whales (Baird & Dill 1996). Conclusions have varied, depending on the species studied and on the currency used to measure foraging success (Packer & Caro 1997; Creel 1997). Some authors argue that cooperative hunting has not been an important force in the evolution of sociality in carnivores (Packer et al. 1990; Caro 1994), but for wild dogs, it is clear that foraging success depends on group size. We feel that this important issue remains unresolved for carnivores in general (Creel 2001). In Chapter 5 we focus on prey selection. We use our hunting data to measure the profitability of each prey species, then test whether the proportion of each prey species in wild dogs' diet is correlated to its profitability. The diet might also be determined by the availability of prey types, rather than their profitability alone. Because we have data on the abundance of each prey species, encounter rates between dogs and prey, and measures of profitability, we consider availability and profitability together. Chapter 6 digresses to take the perspective of the prey. In particular, we examine how herd size affects the vulnerability of impala and wildebeest to wild dogs. Grouping could reduce vulnerability to predation in several ways (Caro & Fitzgibbon 1992; Fitzgibbon & Lazarus 1995; Lima 1995a, 1995b). Most studies of group size and vulnerability to predation have involved stalking predators, focusing on the benefit of detecting predators before they can come close enough to make a kill (fish: Krause & Godin 1995; birds: Lima & Zollner 1996; mammals: Fitzgibbon 1990). However, stalkers and coursers hunt in very different ways, and our analyses suggest that many mechanisms that reduce vulnerability to stalking predators do not reduce vulnerability to coursing predators. Chapters 7-10 describe social organization and behavior. In Chapter 7 we present basic data on demography and population dynamics, including the following topics: (1) life tables, (2) the effects of social rank on survival and reproduction, (3) the effectiveness of helpers, (4) sex-ratio evolution, (5) population dynamics, and (6) effective population size (Wright 1969; Nun-ney & Elam 1994). Chapter 8 describes patterns of immigration and emigration. Early studies showed that female wild dogs were the primary dispersers in Serengeti (Frame & Frame 1976). Female-biased dispersal is rare among carnivores (Waser 1996) and among mammals in general (Chepko-Sade & Halpin 1987). Even among wild dogs, female-biased dispersal may not be the general rule, as McNutt (1996) found that all dogs of both sexes dispersed in Moremi. In Selous, females are substantially more likely to disperse than males. Chapter 8 describes patterns of dispersal, and discusses the effect of dispersal on genetic relationships within and among packs. 
We compare predicted patterns of relatedness to data from mitochondrial DNA and micro-satellites (Girman et al. 1997). Dominant wild dogs do not disperse unless evicted by immigrants, but subordinates of both sexes commonly leave their pack. This suggests that escape from reproductive suppression is a driving force behind dispersal (Waser 1996). In Chapter 9, we discuss the behavioral and endocrine mechanisms that prevent reproduction in social subordinates, addressing six questions: (1) What is the effect of social subordination on mating rates? (2) To what degree is reproductive suppression of subordinates due to aggression from dominants? (3) Is reproductive suppression of subordinates strictly a behavioral process, or is it physiologically mediated by depressed sex-steroid levels? (4) Is suppression of subordinates mediated by stress? (5) How do mechanisms of suppression differ between males and females? (6) How do behavioral and endocrine patterns relate to wild dogs' social organization? We then take a comparative perspective, asking how the physiological and behavioral correlates of reproductive suppression differ among social carnivores, and among cooperative breeders in general. In Chapter 10 we examine reproductive suppression from an evolutionary standpoint, asking why social subordinates tolerate reproductive suppression and help to raise the young of others. A gargantuan literature addresses this question from a theoretical perspective (Hamilton 1964; Brown 1987) or with empirical data from birds (Brown 1987; Stacey & Koenig 1990), insects (Bourke & Franks 1995), and, to a lesser extent, mammals (Solomon & French 1997). Rather than attempting to review this subject (a book in itself), we take a narrow focus and apply a quantitative model for the evolution of reproductive suppression (Vehrencamp 1983). We compare patterns predicted by the model to data on mating rates and reproductive physiology, and to direct data on maternity and paternity. Chapters 11-13 turn to conservation, addressing the issues discussed earlier in this chapter. In Chapter 11 we examine interspecific competition between wild dogs and larger carnivores. Across ecosystems, the density of wild dogs is negatively correlated with the densities of lions and spotted hyenas, and considerable data suggest that interference competition and predation cause this correlation. In Chapter 12, we discuss the effects of infectious diseases. In Chapter 13, we provide an overview of six factors that may limit wild dogs in number or distribution: intraspecific competition, inter-specific competition, prey limitation, disease, genetic problems, and human activities. We then use simulations to model the probability of local extinction in Selous. Return to Book Description File created: 8/7/2007 Questions and comments to: [email protected] Princeton University Press
http://press.princeton.edu/chapters/s7316.html
In order to teach finding the area of a circle, I used and modified a couple of activities out of the “Hands on Math” book. Look under the “Resources” tab for a picture and review of this book. It is an excellent resource. Before I teach finding the area of a circle, I first teach my lesson on finding the circumference of a circle. Doing this, they are already familiar with pi and know that it is “three plus a little bit more.” The first activity we do is called “Circle Cover-Up.” For this activity, the students are groups of two but they each do the activity themselves. The reason for pairing them is so that they can watch the other person to make sure they are doing the activity correctly. In the “Hands on Math” book, there are black-line masters of the pages I use. You print the pages out in two different bright colors so that they stand out. Below is a picture of the two pages we use. One is a centimeter grid and the other is two different sized circles with a radius drawn and tick marks to show the measurement of the radius. I tell the students to cut out squares that have the same side length as the length of the radius. For example, if the radius is five units, then you would cut out a square that is five units long and five units wide. I usually go ahead and tell them to cut out four of those five by five squares out. They are actually suppose to cut each of the larger squares into the smaller squares and place each individual square on the circle. I found it much easier and less time consuming to have them cut the larger squares into strips or as big of pieces as they can. They are to glue these on the circle and make note of how many larger squares it takes to fit on the circle. This is kind of difficult and the students won’t all get the same answer. Most usually only fit three on the circle. This is fine because I look through the class and find the best example to show the students that they should have been able to get three larger squares plus a little bit of the fourth square. The students are to do this with the second circle on the page. That circle has a radius of four, so they are to cut out squares that are four by four. Again the students should use three squares plus a little bit of the fourth one. I ask them if they remember that from learning circumference. By now the students have already figured it out that pi has something to do with it. I draw the following diagram on the board using the measurements from the first circle. I explain to them that the meaning of the area of a circle is how many unit squares will actually fit onto that surface. I then draw an example of the squares we just did on that same circle. I know, I know, I am no artist, but the kids get the point of my drawings. I ask them how many small squares are in the one larger square on that circle. They say twenty-five and I make sure they understand they easily find this by taking the side times the side. I question them and make sure they understand that the side length is also the same as the radius. I then draw that same square on the other three sections of the circle. I ask them if we actually was able to fit all four of those large squares on the circle. No, then how many? Three plus a little bit more. What do we call that three plus a little bit more? Pi. Very good. So if there are twenty-five in one square then there would be twenty-five in each of the other squares too right? How many big squares were we able to fit on? Pi. 
How did we calculate how many small squares were in the big square? Side length times the side length. How did we determine the side length? The radius. So you could also say the radius times the radius. So if there are twenty-five small squares in this large square, how many are there in another larg square? Twenty-five. How many in the other large square? Twenty-five. So how many altogether in those three large squares? Three times twent-five which is seventy-five. Did we use all four squares? No. Three plus a little bit more. Sooooooo…… I could find out how many little squares there are on the circle if I took the radius times the radius times three plus a little bit more (pi). We also call radius time the radius, radius squared (sorry, I can’t make the little squared number so I’m just going to have to write it out.). So we could make our formula…… A = r^2 * pi. Wow, that was quite a dialogue I just had with myself. I hope it didn’t confuse you. Maybe reading a few times will help you. It’s much easier when I’m teaching it to actually human beings who respond back to me. I hope you got the main point of it and are able to use this information in your classes. We work several examples using different measurements for the radius. For the first several, I draw a circle and the squares in them until the students catch on and are able to understand it without the pictures. The activity we play to review finding the area of a circle is the same game I used when finding the circumference of a circle only making them find the area instead of the circumference. This game consists of the following modified paper with two spinners on it, a large paper clip, and a pencil. They spin the paper clip on the top spinner to get what dimension they have. They spin the paper clip on the bottom spinner to get the measurement and find the area using that information. I have found that doing this activity and the activity with circumference, the students actually understand the meaning and reasoning behind what those two measurements really are. When you combine area and circumference on the same worksheet is when most students become confused because they forget which formula goes with which measurement. I am able to remind them that with circumference, we had the string and wrapped it around the circle and then saw how many times the string could go all the way across the circle which is the diameter. With area, we cut up the squares to put on the circle and we needed the radius to find out how many squares were actually on the larger square. Hopefully this gives you some insight on a good way of teaching area. This activity is the best I’ve found so far.
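If you'd like a quick way to double-check the "three plus a little bit more" idea without scissors and glue, here is a little Python sketch (my own illustration, not part of the original activity) that counts the unit squares whose centers land inside a circle and compares that count to radius times radius:

```python
import math

def covered_squares(radius):
    """Count the 1 x 1 grid squares whose centers fall inside a circle of the given radius."""
    count = 0
    for i in range(-radius, radius):
        for j in range(-radius, radius):
            x, y = i + 0.5, j + 0.5          # center of one little square
            if x * x + y * y <= radius * radius:
                count += 1
    return count

for r in (5, 4, 50):
    squares = covered_squares(r)
    print(r, squares, squares / r ** 2)      # the ratio is "three plus a little bit more"

print(math.pi * 5 ** 2)                      # the formula A = pi * r^2 gives about 78.5 for r = 5
```

For the radius-5 and radius-4 circles the ratio comes out a little over three, just like the paper activity, and it creeps closer to pi as the radius grows.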
http://www.fortheloveofteachingmath.com/2011/09/15/area-of-a-circle/
Although all circles are similar, note that circles are congruent if and only if they have the same radius. We extend this concept to arcs and chords in congruent circles being congruent if and only if they have the same measure. Several additional properties are summarized in the following theorem about chords and the circle's center. It can be proved using the properties of the altitude, median, and perpendicular bisector of an isosceles triangle. We repeat here the relationship between the measure of an inscribed angle and the corresponding central angle. An inscribed angle is half the measure of the central angle intercepting the same arc. Since a semicircle is 180°, the following is also true. An angle inscribed in a semicircle is right. If two inscribed angles intercept the same arc, then they have the same measure. If you know the measure of a central angle (and the radius), you can calculate the arc length of its corresponding arc as a proportion: the central angle over 360° equals the unknown arc length over the circumference. Example: Suppose you have a 60° arc in a circle with radius 6 m. Find its arc length A. Solution: 60° is 1/6 of a circle, so the arc length is 1/6 the circumference: C = 2πr = 12π m, so A = 2π m ≈ 6.28 m. Our textbook motivates this further by referring to the picture angle or field of vision of a camera lens. A normal lens has a picture angle of about 62°, whereas a wide-angle lens may be as large as 118°, and a telephoto lens might be as small as 18°. It continues by using an inscribed angle to relate the picture angle and, in the next section, the corresponding central angle and trigonometry to find the minimal distance from the building. Example: A camera has a 56° picture angle. Where can a person stand so that the front of a 60' building just fills the picture? Solution: Let the ends of the front of the building be points A and B. These form a chord of a circle O. All points P which form a 56° inscribed angle (and are in front of the building) are places the person could stand. The corresponding central angle would be 112°. If the person stands directly in front of the building, an isosceles triangle is formed, which we can bisect to obtain a 28°-62°-90° right triangle. Then 30/d = tan 28°, or d = 30/tan 28° ≈ 56.4'. One can construct the center of the circle by locating the point of intersection of the bisectors of two chords of the circle (perpendicular bisector method). Using right angles and the corresponding semicircles is another way (right angle method). Although draftsmen (and draftswomen) often use a T-square or square (ell), for drawing purposes the corner of a book or index card will suffice. Be able to construct or draw the center of a circle when only given an arc. A secant is the line containing a chord. It thus intersects a circle at the chord's two endpoints. The measure of an angle formed by two intersecting chords is one-half the sum of the measures of the arcs intercepted by it and its vertical angle. The measure of an angle formed by two secants intersecting outside a circle is half the difference of the arcs intercepted by the angle. A tangent (to a circle) is a line in the plane of the circle which intersects the circle at one and only one point. The point of tangency is the point where the line and circle intersect. The word tangent is also used to describe other situations where circles or spheres touch (internally tangent, externally tangent, etc.). A common tangent is a single line which is tangent to more than one object.
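The two worked examples above (the 60° arc and the 56° picture angle) are easy to verify numerically. The following short Python check is our own illustration, not part of the text:

```python
import math

# Arc length: the arc is angle/360 of the full circumference 2*pi*r.
angle, radius = 60.0, 6.0
arc_length = (angle / 360.0) * 2 * math.pi * radius
print(arc_length)            # 2*pi, about 6.28 meters

# Camera example: half the building (30 ft) is opposite the 28 degree half-angle,
# and the distance d to the building is the adjacent side, so tan(28) = 30/d.
d = 30.0 / math.tan(math.radians(28.0))
print(d)                     # about 56.4 feet
```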
A line is perpendicular to the radius at the radius endpoint on a circle if and only if it is a tangent line. Example: A common application of this last theorem is to find the distance to the horizon from the top of a building, tower, or plane. Using the value 1260', which could apply to any of these, find the desired result. Solution: Assume the earth is a perfect sphere with r = 3960 miles. 1260' is about ¼ mile, or 0.25 miles. Using the Pythagorean theorem with the right angle at the horizon, we find the missing side is √(3960.25² − 3960²) ≈ 44.5 miles! The measure of an angle formed by a tangent and a chord is half the measure of its intercepted arc. The measure of the angle between two tangents (or between a tangent and a secant) is half the difference of the intercepted arcs. Let one secant intersect a circle at A and B and another secant intersect the circle at C and D. If the point of intersection is P, then AP·BP = CP·DP. Euclid's proof works whether P is inside or outside the circle! Steiner called the product used above the power of the point P for the circle. The power of point P for a circle is the square of the length of a segment tangent to the circle from P. Given two different figures with the same perimeter (circumference), the circle has the greatest area. It follows that: Of different figures with the same area, the circle has the smallest perimeter. This fact sets an upper bound on a figure's area given its perimeter. Specifically, if a figure has perimeter P, then its area A must be less than or equal to P²/(4π). Analogous 3D versions of these theorems also hold. Given two different solids with the same surface area, the sphere has the greatest volume. Of different solids with the same volume, the sphere has the smallest surface area. Sometimes a large surface area is important. The recent history of the development of the catalytic converter is a case in point. Several technological factors had to be overcome, such as getting lead out of gasoline, finding something which could be economically manufactured with a large surface area and would withstand high temperature, etc. Thus inside the catalytic converter you find a honeycomb of ceramic material coated with a catalyst. Usually there are two reaction areas, a reduction area using platinum or rhodium as catalyst and an oxidation area using platinum or palladium. All these metals are very expensive. Sponges (living, dead, or synthetic) are another example where a large surface area is desired.
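The horizon example and the isoperimetric bound can be checked the same way (again, an illustrative sketch rather than part of the text):

```python
import math

# Horizon distance from 1260 ft up (rounded to a quarter mile, as in the example)
# on a spherical earth of radius 3960 mi: the tangent to the horizon meets the
# radius at a right angle, so the Pythagorean theorem gives the tangent length.
r, h = 3960.0, 0.25
print(math.sqrt((r + h) ** 2 - r ** 2))      # about 44.5 miles

# Isoperimetric bound: any figure with perimeter P has area at most P**2/(4*pi).
# A square of perimeter 40 has area 100, comfortably below the bound of about 127.3.
P = 40.0
print((P / 4) ** 2, P ** 2 / (4 * math.pi))
```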
http://www.andrews.edu/~calkins/math/webtexts/geom14.htm
The incenter of a triangle is the point of intersection of the triangle's three angle bisectors. Incenter, concurrency of the three angle bisectors The three angle bisectors have to meet in a single point because... If a circle is inscribed in an angle, the angle bisector passes through the center of the circle. The two sides of the angle each form a "point of tangency" where they intersect the circle (not shown on the diagram). At each point of tangency, there is a radius that meets the side of the angle at a right angle. You can see there are two congruent triangles formed by the vertex of the angle, a point of tangency, and the intersection of the two radii, which, of course, has to be at the center of the circle. There are several interesting relationships in a triangle between the inscribed circle, the angle bisectors, and the three "exscribed" circles. As a review of the barycentric coordinates of point P in triangle ABC, I'll remind you they are the three weights you need to give points A, B, and C so that P is the centroid (weighted average) of the three vertices. So, for example, A=(1,0,0), B=(0,1,0), and C=(0,0,1). Another way to view barycentric coordinates is as "proportional altitudes". Let me explain. A point, P, can be identified by its distance from each of the three sides as a proportion of the altitude to that side. So, if hA is the altitude of point A from its opposite side, BC, and hB and hC are the other altitudes, then barycentric coordinates of a point Q = (x,y,z) indicate that the distances of Q from the three sides are x·hA, y·hB, z·hC, respectively. Now, the incenter is equidistant from each of the sides, distant from each side by the length of the radius of the incircle. So, viewing the barycentric coordinates as proportional altitudes, and letting the incenter I = (x, y, z), we see that in order to put point I equidistant from the three sides, we need x=1/hA, y=1/hB, and z=1/hC, so the barycentric coordinates of I are (1/hA, 1/hB, 1/hC). The scale of each of the points is arbitrary, because these barycentric coordinates are not normalized. So, for example, the barycentric coordinates (1,2,3) and (2,4,6) represent the same point. Now, observe that the altitudes hA, hB, hC are inversely proportional to their bases, because the product of base·altitude is constant, the area of the triangle. So the barycentric coordinates of the incenter can be more simply represented, (a, b, c). You can calculate the vector coordinates of the incenter from its barycentric coordinates by multiplying each vertex (as a vector) by the corresponding barycentric coordinate, adding the results, and then dividing by the sum of the barycentric coordinates. If the vertices of the triangle are given on a two-dimensional plane as A (a,b), B (c,d), and C (e,f) then the incenter (h,k) is given by h = (a sqrt((c-e)2+(d-f)2) + c sqrt((e-a)2+(f-b)2) + e sqrt((a-c)2+(b-d)2) ) / (sqrt((c-e)2+(d-f)2) + sqrt((e-a)2+(f-b)2) + sqrt((a-c)2+(b-d)2) ) k = (b sqrt((c-e)2+(d-f)2) + d sqrt((e-a)2+(f-b)2) + f sqrt((a-c)2+(b-d)2) ) / (sqrt((c-e)2+(d-f)2) + sqrt((e-a)2+(f-b)2) + sqrt((a-c)2+(b-d)2) ) Do you see how we derived the two equations, above, from the barycentric coordinates of the incenter? If not, consider this: The calculation, above, is easier than it looks to carry out, because of the common subexpressions for the lengths of the sides.
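Here is a short Python sketch of that calculation (my own illustration, with made-up example coordinates), computing the three side lengths once and reusing them, just as described:

```python
import math

def incenter(A, B, C):
    """Incenter of a triangle from its vertex coordinates, via barycentric weights (a, b, c)."""
    (ax, ay), (bx, by), (cx, cy) = A, B, C
    a = math.hypot(bx - cx, by - cy)   # side opposite A
    b = math.hypot(cx - ax, cy - ay)   # side opposite B
    c = math.hypot(ax - bx, ay - by)   # side opposite C
    p = a + b + c
    return ((a * ax + b * bx + c * cx) / p, (a * ay + b * by + c * cy) / p)

# Example: a 3-4-5 right triangle with the right angle at the origin.
A, B, C = (0.0, 0.0), (4.0, 0.0), (0.0, 3.0)
print(incenter(A, B, C))     # (1.0, 1.0)

# The distance from (1, 1) to each leg (the x- and y-axes) is 1,
# which is the radius of the inscribed circle of this triangle.
```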
If we let the three side lengths be A = sqrt((c-e)2+(d-f)2), B = sqrt((e-a)2+(f-b)2), and C = sqrt((a-c)2+(b-d)2), then the values of h and k can be more simply represented as h = (aA + cB + eC ) / (A+B+C), and k = (bA + dB + fC ) / (A+B+C) Consider triangle ABC, pictured on the left side of this page. Now, if we inscribe a circle in any of its angles, say, angle ACB, then the center, O, of the circle bisects the angle. This is true because of symmetry: The quadrilateral formed by C, O, and the two points where the circle is tangent to the angle is symmetrical about line CO, which means angle ACO is equal to angle BCO. So the center of any circle inscribed in an angle defines the angle bisector of that angle. The reverse is also true. That is, if O is any point in the interior of an angle such that ray CO bisects angle ACB, then the circle with its center at O that is tangent to AC is also tangent to BC. take this idea a step further. If we draw two angle bisectors, one bisecting angle C, and the other bisecting angle B, then they will intersect at some point inside the triangle, which we will label point "I". Since point I is on the bisector of angle C, the circle centered at I and tangent to AC is also tangent to BC. Also, since point I is on the bisector of angle B, that same circle, tangent to AC, will also be tangent to AB. So, you see, this circle is tangent to all three sides of the From this we see that the intersection of any two angle bisectors is the center if the inscribed circle. It follows that all three internal angle bisectors intersect at one point, which is the center of the inscribed circle, or "incircle". The perpendicular distance from point I to any of the sides is the radius, r, of the incircle. The area, K, of triangle ABC is the sum of the areas of AIB, BIC, and CIA. If we label the lengths of the sides of triangle ABC in traditional form, with side a opposite vertex A, b opposite B, and c opposite C, then the areas of these three small triangles are cr/2, ar/2, and br/2 because the height of each small triangle is the radius of the incircle. The sum of these areas is K = (a+b+c)(r)/2, or K = sr. Also, r = K/s where s = (a+b+c)/2 is the semiperimeter of triangle ABC. By extending two of the sides of the triangle, AC, and AB, we can bisect the exterior angles of the triangle as well. Here we have drawn two exterior bisectors which intersect at point Ea. The same logic that we used for the incircle can be used again to show that a circle can be drawn at Ea that is tangent to all three sides, suitably extended, of the triangle. A circle centered at Ea and tangent to AC is also tangent to CB, because Ea is on the bisector of the angle formed by those two lines. Similarly, the same circle, which I remind you is tangent to CB is also tangent to AB because it lies on the external bisector of angle B. Finally, since this circle is tangent to both AC and AB, the internal bisector of angle A also passes through point Ea. This circle is called an "excribed" circle, or excircle. Just as the three "in-triangles" AIB, BIC, and CIA add up to triangle ABC, three "ex-triangles" AEaB, BEaC, and CEaA add up, in a way, to the same triangle ABC. That is, if you subtract the area of BEaC from the sum of the areas AEaB and CEaA, the result is the area of ABC. K = (-a+b+c)(Ra)/2, or K = (s-a)(Ra). Also, Ra = K/(s-a) where s is the semiperimeter of triangle ABC, and Ra is the radius of the excircle with center Ea. Why stop there? 
Both external bisectors of angle C are on the same line, as are those of angles B and A. In this diagram, these bisectors were extended to show two other excribed circles. As you can see, a triangle has three internal angle bisectors and three external angle bisectors. These six lines intersect in exactly seven points: the three vertices of the triangle (pairwise intersections), the centers of the four in- and excircles (triple intersections). The internal angle bisector of a given vertex is perpendicular to the external angle bisector. This makes each of the internal angle bisectors an altitude of triangle EaEbEc formed by the three excenters. The point I then, which is the incenter of triangle ABC, is also the orthocenter of triangle EaEbEc. The radii of the four circles shown are related by the equation 1/r = 1/Ra+1/Rb+1/Rc because Ra = K/(s-a), etc. so 1/Ra+1/Rb+1/Rc = (3s-a-b-c)/K = s/K = 1/r The radii of the four circles shown are related to the area of triangle ABC by the formula K=sqrt(r Ra Rb Rc), because from K2 = s(s-a)(s-b)(s-c). So, sqrt(r Ra Rb Rc) = sqrt(K/s K/(s-a) K/(s-b) K/(s-c)) = sqrt(K4 / (s(s-a)(s-b)(s-c)) ) = sqrt(K4 / K2) = K. If we let R stand for the radius of the circle that circumscibes triangle ABC (the larger gold circle in the diagram, below), we get one more interesting theorem, Ra+Rb+Rc-r=4R. This is because 4R=abc/K (proof) along with the following rather tedious algebra: Ra+Rb+Rc-r = K/(s-a) + K/(s-b) + K/(s-c) - K/s now, multiplying each fraction by 1 = s(s-a)(s-b)(s-c)/K2, Ra+Rb+Rc-r = s(s-b)(s-c)/K + s(s-a)(s-c)/K + s(s-a)(s-b)/K - (s-a)(s-b)(s-c)/K = (2s3 - as2 - bs2 - cs2 + abc)/K = (2s3 - (a+b+c)s2 + abc)/K The three excircles, Ea, Eb, and Ec are externally tangent to the Nine Point Circle, the smaller gold circle shown here. The Nine Point Circle is so-called because it passes through the three "feet" of the altitudes (or their extensions), the midpoints of the three sides, and the midpoints of the segments connecting each of the vertices to the orthocenter (where the three altitudes meet). In addition to being externally tangent to the three excircles, the Nine Point Circle is internally tangent to the incircle at a point called the Feuerbach point. This result is known as Feuerbach's theorem. Also, since A, B, and C are the feet of the altitudes of triangle EaEbEc, the circumcircle of ABC is also the nine-point circle of EaEbEc. For a given triangle, the radius of the nine-point circle is half that of the circumcircle, so the radius of the larger gold circle in this diagram is twice that of the smaller gold circle. The Angle Bisector Theorem of Triangles states that the point where a bisector intersects the opposite side divides that side in the same ratio as that of the other two sides. Referring to the drawing, below, the internal bisector of angle C divides side AB into two segments, x and y. The ratio of the lengths of two segments equals the ratio of the lengths of the two sides that form angle C; in other words, x:y = b:a. Referring to the diagram, below, the proof of this theorem starts by constructing a line parallel to CC1 that passes through B, meeting line AC at point D. Triangle BCD is isosceles, because angle ACC1 equals angle ADB, and angle C1CB equals angle CBD. Since line CC1 bisects angle ACB, angle ACC1 equals angle C1CB, so angle ADB equals angle CBD. Triangle AC1C is similar to triangle ABD, so AC:AD = x:(x+y), and so AC:CD = x:y. Since BCD is isosceles, BC=CD, so AC:BC = x:y, proving the theorem. 
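A quick numerical check of the Angle Bisector Theorem (an illustrative sketch with arbitrary coordinates, not from the original page): place C1 on AB in the ratio b:a and confirm that CC1 splits angle ACB into two equal halves.

```python
import math

def angle_between(u, v):
    dot = u[0] * v[0] + u[1] * v[1]
    return math.acos(dot / (math.hypot(*u) * math.hypot(*v)))

# A scalene triangle, with standard labels: a = |BC|, b = |CA|, c = |AB|.
A, B, C = (0.0, 0.0), (7.0, 0.0), (2.0, 4.0)
a = math.hypot(B[0] - C[0], B[1] - C[1])
b = math.hypot(C[0] - A[0], C[1] - A[1])

# The theorem says the bisector from C meets AB at C1 with AC1:C1B = b:a,
# i.e. C1 = A + (b/(a+b)) * (B - A).  Verify that CC1 bisects angle ACB.
t = b / (a + b)
C1 = (A[0] + t * (B[0] - A[0]), A[1] + t * (B[1] - A[1]))
to_A  = (A[0] - C[0], A[1] - C[1])
to_B  = (B[0] - C[0], B[1] - C[1])
to_C1 = (C1[0] - C[0], C1[1] - C[1])
print(angle_between(to_A, to_C1), angle_between(to_C1, to_B))   # equal halves of angle C
```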
Surprisingly, the external bisector, shown here as CC2, has the same property. That is, the ratios AC2:BC2 and b:a are equal. (. . . . . . proof? ) Note that if triangle ABC is isosceles with AC = BC, then point C1 is halfway between A and B (and so CC1 is an altitude), and point C2 doesn't exist, because if it did, it would have to be infinitely far away from the triangle. If CC1 were any cevian (not necessarily a bisector), whose length is t, then Stewart's Theorem tells us xa2 + yb2 = (x+y)(t2+xy), which has a number of other formulations, such as replacing (x+y) with c, or expanding the right hand side this way, which is the one we will find most useful: xa2 + yb2 = (x+y)t2 + xy2 + x2y The following proof of Stewart's Theorem uses the law of cosines on triangles AC1C and BC1C. We will let θ represent angle AC1C. Note that angle BC1C is the supplement of θ, and so its cosine is -cos θ. Then the law of cosines on these two triangles gives us b2 = x2 + t2 - 2xt cos θ a2 = y2 + t2 + 2yt cos θ Solving these two equations for cos θ, cos θ = (x2 + t2 - b2)/(2xt) = (a2 - y2 - t2)/(2yt), so (2xt)(a2 - y2 - t2) = (2yt)(x2 + t2 - b2) x(a2 - y2 - t2) = y(x2 + t2 - b2) Rearranging this equation gives us the desired result. Since x:y = b:a, ax=by. Solving Stewart's Theorem for t2, t2 = (xa2 + yb2 - xy2 - x2y)/(x+y) t2 = (xa2 + yb2 - (x+y)xy)/(x+y) t2 = (xa2 + yb2)/(x+y) - xy So far, we haven't taken advantage of the fact that CC1 is not just any cevian, but a bisector. Now, since x:y = b:a, ax=by, and so xa2=aby, and yb2=abx, so t2 = (abx + aby)/(x+y) - xy t2 = ab - xy Pat Ballew points out that each bisector, such as CC1 is divided into two pieces by the incenter, I. The lengths of these two pieces are always in a ratio related to the three sides. Using the figure, above, as our example, the ratio CI:IC1 = (a+b):c. To see this, look at triangle CC1B. The internal bisector of B cuts CC1 into two pieces at I in the same ratio as that of the lengths other two sides of that triangle, i.e. CI:IC1 = a:y. Similarly, using the bisector of A and triangle CC1A, CI:IC1 = b:x. Since CI:IC1 = a:y, and CI:IC1 = b:x, it follows that CI:IC1 = (a+b):(x+y) = (a+b):c www.pballew.net/Tribis.html describes the relationship between angle bisectors, the incircle, and the excircles, a very comprehensive summary, by Pat Ballew. The Nine-point circle, from mathworld, is the circle that passes through the "feet" of the three altitudes of a triangle. It also passes through the midpoints of the three sides, and the three midpoints of the segments connecting the vertices to the orthocenter; hence "nine points". Stewart's Theorem, from mathworld. Wikipedia: Nine-point circle Cut the knot: Feuerbach's Theorem Other triangle centers: Circumcenter, Incenter, Orthocenter, Centroid. The Orthocenter and Circumcenter of a triangle are isogonal conjugates, and the Incenter is its own isogonal conjugate. Summary of geometrical theorems summarizes the proofs of concurrency of the lines that determine these centers, as well as many other proofs in geometry. Barycentric Coordinates, which provide a way of calculating these triangle centers (see each of the triangle center pages for the barycentric coordinates of that center). Orthocenter -- the intersection of the altitudes (or their extensions) of a triangle Inversion Circle -- a page that describes orthogonal circles and inversion circles. The webmaster and author of this Math Help site is Graeme McRae.
http://2000clicks.com/MathHelp/GeometryTriangleCenterInscribedCircle2.aspx
Scientists have many shorthand ways of representing numbers. These representations make it easier for the scientist to perform a calculation or represent a number. A logarithm (log) of a number x is defined by the following equations. If You will see in the next section, logarithms do not need to be based on powers of 10. Logarithms are useful, in part, because of some of the relationships when using them. For example, A useful application of base ten logarithms is the concept of a decibel. A decibel is a logarithmic representation of a ratio of two quantities. For the ratio of two power levels (P1 and P2) a decibel (dB) is defined as Sometimes it is necessary to calculate decibels from voltage readings. The relationship between power (P) and voltage (V) is where R is the resistance of the circuit, which is usually constant. Substituting this equation into the definition of a dB we have The number 2.71828183 occurs so often in calculations that it is given the symbol e. When e is raised to the power x, it is often written exp(x). Logarithms based on powers of e are called natural logarithms. If Many of the dynamic MRI processes are exponential in nature. For example, signals decay exponentially as a function of time (t). It is therefore essential to understand the nature of exponential curves. Three common exponential functions are where τ is a constant. The basic trigonometric functions sine and cosine describe sinusoidal functions which are 90o out of phase. The trigonometric identities are used in geometric calculations. Csc(θ) = 1 / Sin(θ) = Hypotenuse / Opposite Sec(θ) = 1 / Cos(θ) = Hypotenuse / Adjacent Cot(θ) = 1 / Tan(θ) = Adjacent / Opposite Three additional identities are useful in understanding how the detector on a magnetic resonance imager operates. Sin(θ1) Cos(θ2) = 1/2 Sin(θ1 + θ2) + 1/2 Sin(θ1 - θ2) Sin(θ1) Sin(θ2) = 1/2 Cos(θ1 - θ2) - 1/2 Cos(θ1 + θ2) The function sin(x) / x occurs often and is called sinc(x). A differential can be thought of as the slope of a function at any point. For the function the differential of y with respect to x is An integral is the area under a function between the limits of the integral. An integral can also be considered a summation; in fact most integration is performed by computers by adding up values of the function between the integral limits. A vector is a quantity having both a magnitude and a direction. The magnetization from nuclear spins is represented as a vector emanating from the origin of the coordinate system. Here it is along the +Z axis. In this picture the vector is in the XY plane between the +X and +Y axes. The vector has X and Y components and a magnitude equal to A matrix is a set of numbers arranged in a rectangular array. This matrix has 3 rows and 4 columns and is said to be a 3 by 4 matrix. To multiply matrices the number of columns in the first must equal the number of rows in the second. Click sequentially on the next start buttons to see the individual steps associated with the multiplication. A rotation matrix, Ri(θ), is a three by three element matrix that rotates the location of a vector V about axis i to a new location V'. Rotation matrices are useful in magnetic resonance for determining the location of a magnetization vector after the application of a rotation pulse or after an evolution period. 
Using the conventional magnetic resonance coordinate system, which will be introduced in Chapter 3, the three rotation matrices are as follows.RX(θ) A coordinate transformation is used to convert the coordinates of a vector in one coordinate system (XY) to that in another coordinate system (X"Y"). A coordinate transformation can be achieved with one or more rotation matrices. For example, if a new coordinate system is rotated by ten degrees clockwise about +Z and then 20 degrees clockwise about +X, the position of the vector, V, in the new coordinate system, V', can be calculated by The convolution of two functions is the overlap of the two functions as one function is passed over the second. The convolution symbol is . The convolution of h(t) and g(t) is defined mathematically as The above equation is depicted for rectangular shaped h(t) and g(t) functions in this animation. Imaginary numbers are those which result from calculations involving the square root of -1. Imaginary numbers are symbolized by i. A complex number is one which has a real (RE) and an imaginary (IM) part. The real and imaginary parts of a complex number are orthogonal. Two useful relations between complex numbers and exponentials are The quantity e+ix is said to be the complex conjugate of e-ix. In other words, the complex conjugate of a complex number is the number with the sign of the imaginary component changed. The Fourier transform (FT) is a mathematical technique for converting time domain data to frequency domain data, and vice versa. The Fourier transform will be explained in detail in Chapter 5. Copyright © 1996-2010 J.P. Hornak. All Rights Reserved.
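As a rough illustration of how rotation matrices act on a magnetization vector, here is a short numpy sketch. It uses the standard right-handed, counterclockwise sign convention; the signs appropriate to the magnetic resonance coordinate system introduced in Chapter 3 may differ, so treat this only as a numerical example rather than the text's definition.

```python
import numpy as np

def Rz(theta_deg):
    """Rotation by theta about the Z axis (counterclockwise for positive theta)."""
    t = np.radians(theta_deg)
    return np.array([[np.cos(t), -np.sin(t), 0.0],
                     [np.sin(t),  np.cos(t), 0.0],
                     [0.0,        0.0,       1.0]])

def Rx(theta_deg):
    """Rotation by theta about the X axis."""
    t = np.radians(theta_deg)
    return np.array([[1.0, 0.0,        0.0],
                     [0.0, np.cos(t), -np.sin(t)],
                     [0.0, np.sin(t),  np.cos(t)]])

# Magnetization vector initially along +Z, tipped by a 90 degree rotation about +X,
# then rotated a further 30 degrees about +Z.
V = np.array([0.0, 0.0, 1.0])
V_prime = Rz(30) @ (Rx(90) @ V)
print(V_prime)   # lies in the XY plane and keeps unit length
```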
http://www.cis.rit.edu/htbooks/mri/chap-2/chap-2.htm
Posted by Anonymous on Wednesday, February 27, 2008 at 11:08am. Not all Cartesian equations are straight lines. It is any equation that uses the variables x and y. For example, b) and c) are circles. For (a), they want the equation of a line through C that is perpendicular to the side c (from A to B) The line from A to B has slope m = (1+2)/(9-0) = 1/3 Therefore the altitude has slope -3. The equation for the altitude from C is y = -3x + 8 b) BC diameter has a center at (4,6). and that is the center of the circuscribed circle . The length of the diameter is sqrt[(10^2 + 10^2)] = sqrt 100 = 10 sqrt 10. The radius of the circle around it is 5 sqrt10 From that information, the equation of the circle is (x-4)^2 + (y-6)^2 = [5 sqrt10]^2 = 50 c) I think you mean "circle with center at C" and line through A and B as tangent. Get the equation of that line and determine its distance from C. You are going to have to take it from here. It's time for you to practice what you've learned. Someone will be glad to critique your work Did you draw a diagram? Most people are "visual" and it will make the question easier to see. a) the altitude must meet AB and must be perpendicular to AB so slope of AB = 3/9 = 1/3 which makes the slope of the altitude -3 we know the altitude has slope -3 and must pass through (-1,11), so.... 11 = -3(-1) + b, (I am using y = mx+b) altitude equation: y = -3x + 8 b) are you familiar with the general equation of the circle: (x-h)^2 + (y-k)^2 = r^2, where the centre is (h,k) and the radius is r ?? if so, then the centre must be the midpoint of BC which is (4,6) so equation is (x-4)^2 +(y-6)^2 = r^2 but (9,1) lies on our circle, so then (x-4)^2 +(y-6)^2 = 50 c)If AB is to be a tangent of the circle with centre at C, then the altitude we found in a) must be the radius of our circle. (Can you see how important a diagram is??) I hope you have seen the formula to find the distance between a point (p,q) and the line Ax + By + C = 0 it says Dist = │Ap + Bq + C│/(A^2+B^2) First we need the equation of AB which in general form is x - 3y - 6 = 0 so radius of our circle is │1(-1) + (-3)(11) - 6│/√(1+9) equation of circle: (x+1)^2 + (y-11)^2 = 40^2/√10^2 (x+1)^2 + (y-11)^2 = 1600/10 = 160 d) I will let you do that one, here is the method 1. expand each of the two circle equations, each will contain an x^2 and a y^2 term 2. subtract one equation from the other, the square terms will drop away, leaving an equation in x and y. 3. solve for either x or y, whichever seems easier 4. substitute that back into the first circle equation 5. You should get two solutions, sub that back into the x and y equation you got in step 2. Let me know if it worked for you This is what I should have written: b) BC diameter has a center at (4,6). and that is the center of the circumscribed circle . The length of the diameter is sqrt[(10^2 + 10^2)] = sqrt 200 = 10 sqrt 2. The radius of the circle around the diameter is 5 sqrt 2 From that information, the equation of the circle is (x-4)^2 + (y-6)^2 = [5 sqrt2]^2=50 drwls, I was wondering what a "circuscribed circle" was, lol not as serious though as several of my students who somehow wanted to circumcise a circle. BTW, I wish somebody could come up with a way to avoid two or more tutors working on the same problem at the same time. 
The English/social studies/history et al tutors on this board suggested at the start of the school year that they might avoid two people doing a great deal of work on one question as follows: For a question involving proofing a paper or something similar in which a lot of work was required, that the first tutor to start on the project post a note that s/he is working on the response and will post the response later. That way, other tutors know the problem is being taken care of and will move on to another subject/question. You might email Writeacher or Ms Sue for more details. I'd rather not IM a lot of people to tell them what I'm working on, though doing so can avoid multiple answers and wasted effort. I answer most questions late at night when no one else is around, anyway. I'm quite often wrong with my sloppy math, as you know; so another point of view helps. When the answers are short, not much effort is wasted. We in the English/social studies fields sometimes will post a brief message on the board, saying that we're working on a long answer and that we'll post the completed answer in a little while. I like that idea.
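For anyone checking this thread later: the vertices implied by the tutors' arithmetic appear to be A(0,-2), B(9,1), and C(-1,11) (the original question is not quoted, so this is an inference). A short Python script confirms the corrected answers:

```python
import math

A, B, C = (0, -2), (9, 1), (-1, 11)    # vertices inferred from the tutors' arithmetic

# (a) Altitude from C: slope of AB is 1/3, so the altitude has slope -3.
slope_AB = (B[1] - A[1]) / (B[0] - A[0])
print(slope_AB, -3 * C[0] + 8 == C[1])          # 1/3, and C lies on y = -3x + 8

# (b) Circle with BC as diameter: center is the midpoint of BC, r^2 = 50.
center = ((B[0] + C[0]) / 2, (B[1] + C[1]) / 2)
r2 = ((B[0] - C[0]) ** 2 + (B[1] - C[1]) ** 2) / 4
print(center, r2)                               # (4.0, 6.0), 50.0

# (c) Circle centered at C tangent to line AB (x - 3y - 6 = 0):
dist = abs(C[0] - 3 * C[1] - 6) / math.sqrt(1 + 9)
print(dist ** 2)                                # 160.0, so (x+1)^2 + (y-11)^2 = 160
```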
http://www.jiskha.com/display.cgi?id=1204128537
The method for finding the area of a circle is Where π is a constant roughly equal to 3.14159265358978 and r is the radius of the circle; a line drawn from any point on the circle to its center. Three ways of calculating the area inside of a triangle are mentioned here. First method If one of the sides of the triangle is chosen as a base, then a height for the triangle and that particular base can be defined. The height is a line segment perpendicular to the base or the line formed by extending the base and the endpoints of the height are the corner point not on the base and a point on the base or line extending the base. Let B = the length of the side chosen as the base. Let h = the distance between the endpoints of the height segment which is perpendicular to the base. Then the area of the triangle is given by: This method of calculating the area is good if the value of a base and its corresponding height in the triangle is easily determined. This is particularly true if the triangle is a right triangle, and the lengths of the two sides sharing the 90o angle can be determined. Second method - , also known as Heron's Formula If the lengths of all three sides of a triangle are known, Hero's formula may be used to calculate the area of the triangle. First, the semiperimeter, s, must be calculated by dividing the sum of the lengths of all three sides by 2. For a triangle having side lengths a, b, and c : Then the triangle's area is given by: If the triangle is needle shaped, that is, one of the sides is very much shorter than the other two then it can be difficult to compute the area because the precision needed is greater than that available in the calculator or computer that is used. In otherwords Heron's formula is numerically unstable. Another formula that is much more stable is: where , , and have been sorted so that . Third method In a triangle with sides length a, b, and c and angles A, B, and C opposite them, This formula is true because in the formula . It is useful because you don't need to find the height from an angle in a separate step, and is also used to prove the law of sines (divide all terms in the above equation by a*b*c and you'll get it directly!) Area of Rectangles The area calculation of a rectangle is simple and easy to understand. One of the sides is chosen as the base, with a length b. An adjacent side is then the height, with a length h, because in a rectangle the adjacent sides are perpendicular to the side chosen as the base. The rectangle's area is given by: Sometimes, the baselength may be referred to as the length of the rectangle, l, and the height as the width of the rectangle, w. Then the area formula becomes: Regardless of the labels used for the sides, it is apparent that the two formulas are equivalent. Of course, the area of a square with sides having length s would be: Area of Parallelograms The area of a parallelogram can be determined using the equation for the area of a rectangle. The formula is: A is the area of a parallelogram. b is the base. h is the height. The height is a perpendicular line segment that connects one of the vertices to its opposite side (the base). Remember in a rombus all sides are equal in length. and represent the diagonals. Area of Trapezoids The area of a trapezoid is derived from taking the arithmetic mean of its two parallel sides to form a rectangle of equal area. Where and are the lengths of the two parallel bases. 
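The formulas referred to above were images in the original page and did not survive extraction. As a sketch of the three methods, including a version of the numerically stable formula (often attributed to Kahan) that the text alludes to, the following Python functions reproduce them; the function names are our own:

```python
import math

def area_base_height(base, height):
    return base * height / 2

def area_heron(a, b, c):
    s = (a + b + c) / 2                      # semiperimeter
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

def area_heron_stable(a, b, c):
    # Sort so that a >= b >= c; the grouping of the parentheses matters numerically.
    a, b, c = sorted((a, b, c), reverse=True)
    return 0.25 * math.sqrt((a + (b + c)) * (c - (a - b)) * (c + (a - b)) * (a + (b - c)))

def area_two_sides_angle(a, b, C_deg):
    # Half the product of two sides and the sine of the included angle.
    return a * b * math.sin(math.radians(C_deg)) / 2

print(area_base_height(4, 3))                 # 6.0
print(area_heron(3, 4, 5))                    # 6.0
print(area_heron_stable(3, 4, 5))             # 6.0
print(area_two_sides_angle(3, 4, 90))         # 6.0 (right angle between the legs)
```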
The area of a kite is based on splitting the kite into four pieces by halving it along each diagonal and using these pieces to form a rectangle of equal area:

A = (ab)/2

where a and b are the diagonals of the kite. Alternatively, the kite may be divided into two halves, each of which is a triangle, by the longer of its diagonals, a. The area of each triangle is thus

(1/2)·a·(b/2) = (ab)/4

where b is the other (shorter) diagonal of the kite. And the total area of the kite (which is composed of two identical such triangles) is

2·(ab)/4 = (ab)/2

which is the same as the first formula.

Areas of other Quadrilaterals

The areas of other quadrilaterals are slightly more complex to calculate, but can still be found if the quadrilateral is well-defined. For example, a quadrilateral can be divided into two triangles, or some combination of triangles and rectangles. The areas of the constituent polygons can be found and added up with arithmetic.

- Geometry Main Page
- Geometry/Chapter 1 Definitions and Reasoning (Introduction)
- Geometry/Chapter 2 Proofs
- Geometry/Chapter 3 Logical Arguments
- Geometry/Chapter 4 Congruence and Similarity
- Geometry/Chapter 5 Triangle: Congruence and Similarity
- Geometry/Chapter 6 Triangle: Inequality Theorem
- Geometry/Chapter 7 Parallel Lines, Quadrilaterals, and Circles
- Geometry/Chapter 8 Perimeters, Areas, Volumes
- Geometry/Chapter 9 Prisms, Pyramids, Spheres
- Geometry/Chapter 10 Polygons
- Geometry/Chapter 11
- Geometry/Chapter 12 Angles: Interior and Exterior
- Geometry/Chapter 13 Angles: Complementary, Supplementary, Vertical
- Geometry/Chapter 14 Pythagorean Theorem: Proof
- Geometry/Chapter 15 Pythagorean Theorem: Distance and Triangles
- Geometry/Chapter 16 Constructions
- Geometry/Chapter 17 Coordinate Geometry
- Geometry/Chapter 18 Trigonometry
- Geometry/Chapter 19 Trigonometry: Solving Triangles
- Geometry/Chapter 20 Special Right Triangles
- Geometry/Chapter 21 Chords, Secants, Tangents, Inscribed Angles, Circumscribed Angles
- Geometry/Chapter 22 Rigid Motion
- Geometry/Appendix A Formulas
- Geometry/Appendix B Answers to problems
- Appendix C. Geometry/Postulates & Definitions
- Appendix D. Geometry/The SMSG Postulates for Euclidean Geometry
http://en.wikibooks.org/wiki/Geometry/Area
13
131
In physics, net force is the overall force acting on an object. In order to perform this calculation the body is isolated and interactions with the environment or constraints are introduced as forces and torques forming a free-body diagram.

The net force does not have the same effect on the movement of the object as the original system of forces, unless the point of application of the net force and an associated torque are determined so that they form the resultant force and torque. It is always possible to determine the torque associated with a point of application of a net force so that it maintains the movement of the object under the original system of forces. With its associated torque, the net force becomes the resultant force and has the same effect on the rotational motion of the object as all actual forces taken together. It is possible for a system of forces to define a torque-free resultant force. In this case, the net force, when applied at the proper line of action, has the same effect on the body as all of the forces at their points of application. It is not always possible to find a torque-free resultant force.

Total force

The sum of forces acting on a particle is called the total force or the net force. The net force is a single force that replaces the effect of the original forces on the particle's motion. It gives the particle the same acceleration as all those actual forces together, as described by Newton's second law of motion. Force is a vector quantity, which means that it has a magnitude and a direction, and it is usually denoted using boldface, such as F, or by drawing an arrow over the symbol. Graphically, a force is represented as a line segment from its point of application A to a point B, which defines its direction and magnitude. The length of the segment AB represents the magnitude of the force.

Vector calculus was developed in the late 1800s and early 1900s; the parallelogram rule for the addition of forces, however, is said to date from ancient times, and it was explicitly noted by Galileo and Newton. The diagram shows the addition of the forces F1 and F2. The sum of the two forces is drawn as the diagonal of a parallelogram defined by the two forces.

Forces applied to an extended body can have different points of application. Forces are bound vectors and can be added only if they are applied at the same point. The net force obtained from all the forces acting on a body will not preserve its motion unless they are applied at the same point and the appropriate torque associated with the new point of application is determined. The net force on a body applied at a single point with the appropriate torque is known as the resultant force and torque.

Parallelogram rule for the addition of forces

A force is known as a bound vector, which means it has a direction and magnitude and a point of application. A convenient way to define a force is by a line segment from a point A to a point B. If we denote the coordinates of these points as A = (Ax, Ay, Az) and B = (Bx, By, Bz), then the force vector applied at A is given by

F = B − A

The length of the vector B − A defines the magnitude of F, and is given by

|F| = √((Bx − Ax)² + (By − Ay)² + (Bz − Az)²)

The sum of two forces F1 and F2 applied at A can be computed from the sum of the segments that define them. Let F1 = B − A and F2 = D − A; then the sum of these two vectors is

F1 + F2 = (B − A) + (D − A) = B + D − 2A

which can be written as

F1 + F2 = 2(E − A)

where E is the midpoint of the segment BD that joins the points B and D.
Thus, the sum of the forces F1 and F2 is twice the segment joining A to the midpoint E of the segment joining the endpoints B and D of the two forces. The doubling of this length is easily achieved by defining segments BC and DC parallel to AD and AB, respectively, to complete the parallelogram ABCD. The diagonal AC of this parallelogram is the sum of the two force vectors. This is known as the parallelogram rule for the addition of forces.

Translation and rotation due to a force

Point forces

When a force acts on a particle, it is applied to a single point (the particle volume is negligible): this is a point force and the particle is its application point. But an external force on an extended body (object) can be applied to a number of its constituent particles, i.e. it can be "spread" over some volume or surface of the body. However, in order to determine its rotational effect on the body, it is necessary to specify its point of application (actually, the line of application, as explained below). The problem is usually resolved in the following ways:

- Often the volume or surface on which the force acts is relatively small compared to the size of the body, so that it can be approximated by a point. It is usually not difficult to determine whether the error caused by such approximation is acceptable.
- If it is not acceptable (obviously, e.g., in the case of gravitational force), such a "volume/surface" force should be described as a system of forces (components), each acting on a single particle, and then the calculation should be done for each of them separately. Such a calculation is typically simplified by the use of differential elements of the body volume/surface, and the integral calculus. In a number of cases, though, it can be shown that such a system of forces may be replaced by a single point force without the actual calculation (as in the case of uniform gravitational force).

In any case, the analysis of the rigid body motion begins with the point force model. And when a force acting on a body is shown graphically, the oriented line segment representing the force is usually drawn so as to "begin" (or "end") at the application point.

Rigid bodies

In the example shown on the diagram, a single force F acts at the application point H on a free rigid body. The body has the mass m and its center of mass is the point C. In the constant mass approximation, the force causes changes in the body motion described by the following expressions:

- a = F/m is the center of mass acceleration; and
- α = τ/I is the angular acceleration of the body (I being the moment of inertia, discussed below),

where

- τ = r × F is the torque vector, and
- τ = Fk is the amount of torque, k being the lever arm.

The vector r is the position vector of the force application point, and in this example it is drawn from the center of mass as the reference point (see diagram). The straight line segment k is the lever arm of the force F with respect to the center of mass. As the illustration suggests, the torque does not change (the same lever arm) if the application point is moved along the line of the application of the force (dotted black line). More formally, this follows from the properties of the vector product, and shows that the rotational effect of the force depends only on the position of its line of application, and not on the particular choice of the point of application along that line. The torque vector is perpendicular to the plane defined by the force F and the vector r, and in this example it is directed towards the observer; the angular acceleration vector has the same direction.
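Before moving on to the right-hand rule and the worked numbers below, the expressions a = F/m, τ = r × F and α = τ/I can be tried out numerically. The following Python sketch is only an illustration: the force components, application points, mass and moment of inertia are made-up values, not ones taken from the article's diagram.

```python
import numpy as np

# Bound forces: each has an application point and a force vector (illustrative values).
forces = [
    (np.array([0.6, 0.0, 0.0]), np.array([0.0, 2.0, 0.0])),
    (np.array([0.0, 0.5, 0.0]), np.array([1.0, 0.0, 0.0])),
]
reference = np.array([0.0, 0.0, 0.0])   # take torques about this point (e.g. the center of mass)

net_force = sum(F for _, F in forces)
net_torque = sum(np.cross(r - reference, F) for r, F in forces)

m, I = 2.0, 0.4                          # assumed mass and moment of inertia about the torque axis
print("net force :", net_force)
print("net torque:", net_torque)         # the sign of its z-component gives the rotation sense
print("a     =", np.linalg.norm(net_force) / m)
print("alpha =", np.linalg.norm(net_torque) / I)
```

Summing the individual r × F contributions about a single reference point is exactly the torque that must accompany the net force, which is the idea behind the resultant force and torque discussed in the rest of this section.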
The right hand rule relates this direction to the clockwise or counter-clockwise rotation in the plane of the drawing. The moment of inertia I is calculated with respect to the axis through the center of mass that is parallel with the torque. If the body shown in the illustration is a homogeneous disc, this moment of inertia is I = mr²/2. If the disc has the mass 0.5 kg and the radius 0.8 m, the moment of inertia is 0.16 kg·m². If the amount of force is 2 N and the lever arm 0.6 m, the amount of torque is 1.2 N·m. At the instant shown, the force gives the disc the angular acceleration α = τ/I = 7.5 rad/s², and gives its center of mass the linear acceleration a = F/m = 4 m/s².

Resultant force

The resultant force and torque replace the effects of a system of forces acting on the movement of a rigid body. An interesting special case is a torque-free resultant, which can be found as follows:

- First, vector addition is used to find the net force;
- Then, the point of application with zero torque is determined from the equation R × F = Σ ri × Fi, where F is the net force, R locates its application point, and the individual forces are Fi with application points ri. It may be that there is no point of application that yields a torque-free resultant.

The diagram illustrates simple graphical methods for finding the line of application of the resultant force of simple planar systems.

- The lines of application of the actual forces F1 and F2 on the leftmost illustration intersect. After vector addition is performed at the location of one of the forces, the net force obtained is translated so that its line of application passes through the common intersection point. With respect to that point all torques are zero, so the torque of the resultant force is equal to the sum of the torques of the actual forces.
- The illustration in the middle of the diagram shows two parallel actual forces. After vector addition at the location of one of them, the net force is translated to the appropriate line of application, where it becomes the resultant force. The procedure is based on decomposition of all forces into components for which the lines of application (pale dotted lines) intersect at one point (the so-called pole, arbitrarily set at the right side of the illustration). Then the arguments from the previous case are applied to the forces and their components to demonstrate the torque relationships.
- The rightmost illustration shows a couple: two equal but opposite forces for which the amount of the net force is zero, but which produce a net torque τ = Fd, where d is the distance between their lines of application. This is a "pure" torque, since there is no resultant force.

Generally, a system of forces acting on a rigid body can always be replaced by one force plus one "pure" torque. The force is the net force, but in order to calculate the additional torque, the net force must be assigned a line of action. The line of action can be selected arbitrarily, but the additional "pure" torque will depend on this choice. In a special case it is possible to find such a line of action that this additional torque is zero. The resultant force and torque can be determined for any configuration of forces. However, an interesting special case is a torque-free resultant, which is useful both conceptually and practically, because the body moves without rotating as if it were a particle.
http://en.wikipedia.org/wiki/Net_force
13
111
Wetting is the ability of a liquid to maintain contact with a solid surface, resulting from intermolecular interactions when the two are brought together. The degree of wetting (wettability) is determined by a force balance between adhesive and cohesive forces. Wetting is important in the bonding or adherence of two materials. Wetting and the surface forces that control wetting are also responsible for other related effects, including so-called capillary effects. Regardless of the amount of wetting, the shape of a liquid drop on a rigid surface is roughly a truncated sphere. Various degrees of wetting are summarized in this article.

| Contact angle | Degree of wetting | Solid–liquid interactions | Liquid–liquid interactions |
| θ = 0 | perfect wetting | strong | weak |
| 0 < θ < 90° | high wettability | strong | strong |
| 90° ≤ θ < 180° | low wettability | weak | strong |
| θ = 180° | perfectly non-wetting | weak | strong |

The contact angle (θ), as seen in Figure 1, is the angle at which the liquid–vapor interface meets the solid–liquid interface. The contact angle is determined by the resultant between adhesive and cohesive forces. As the tendency of a drop to spread out over a flat, solid surface increases, the contact angle decreases. Thus, the contact angle provides an inverse measure of wettability. A contact angle less than 90° (low contact angle) usually indicates that wetting of the surface is very favorable, and the fluid will spread over a large area of the surface. Contact angles greater than 90° (high contact angle) generally mean that wetting of the surface is unfavorable, so the fluid will minimize contact with the surface and form a compact liquid droplet. For water, a wettable surface may also be termed hydrophilic and a non-wettable surface hydrophobic. Superhydrophobic surfaces have contact angles greater than 150°, showing almost no contact between the liquid drop and the surface. This is sometimes referred to as the "lotus effect". The table describes varying contact angles and their corresponding solid/liquid and liquid/liquid interactions. For non-water liquids, the term lyophilic is used for low contact angle conditions and lyophobic is used when higher contact angles result. Similarly, the terms omniphobic and omniphilic apply to both polar and apolar liquids.

High-energy vs. low-energy surfaces

There are two main types of solid surfaces with which liquids can interact. Traditionally, solid surfaces have been divided into high-energy and low-energy types. The relative energy of a solid has to do with the bulk nature of the solid itself. Solids such as metals, glasses, and ceramics are known as "hard solids" because the chemical bonds that hold them together (e.g., covalent, ionic, or metallic) are very strong. Thus, it takes a large input of energy to break these solids, so they are termed "high energy." Most molecular liquids achieve complete wetting with high-energy surfaces. The other type of solid is the weak molecular crystals (e.g., fluorocarbons, hydrocarbons, etc.) where the molecules are held together essentially by physical forces (e.g., van der Waals forces and hydrogen bonds). Since these solids are held together by weak forces, it would take a very low input of energy to break them, and thus they are termed "low energy." Depending on the type of liquid chosen, low-energy surfaces can permit either complete or partial wetting.

Wetting of low-energy surfaces

- Zisman observed that cos θ increases linearly as the surface tension (γLV) of the liquid decreases.
Thus, he was able to establish a rectilinear relation between cos θ and the surface tension (γLV) for various organic liquids. A surface is more wettable when γLV is low and when θ is low. He termed the intercept of these lines at cos θ = 1 the critical surface tension (γc) of that surface. This critical surface tension is an important parameter because it is a characteristic of only the solid. Knowing the critical surface tension of a solid, it is possible to predict the wettability of the surface.

- The wettability of a surface is determined by the outermost chemical groups of the solid.
- Differences in wettability between surfaces that are similar in structure are due to differences in packing of the atoms. For instance, if a surface has branched chains, it will have poorer packing than a surface with straight chains.

Ideal solid surfaces

An ideal solid surface is one that is flat, rigid, perfectly smooth, chemically homogeneous, and has zero contact angle hysteresis. Zero hysteresis implies that the advancing and receding contact angles are equal. In other words, there is only one thermodynamically stable contact angle. When a drop of liquid is placed on such a surface, the characteristic contact angle is formed as depicted in Fig. 1. Furthermore, on an ideal surface the drop will return to its original shape if it is disturbed. The following derivations apply only to ideal solid surfaces. In other words, they are only valid for the state in which the interfaces are not moving and the phase boundary line exists in equilibrium.

Minimization of energy, three phases

Figure 3 shows the line of contact where three phases meet. In equilibrium, the net force per unit length acting along the boundary line between the three phases must be zero. The component of the net force along each of the interfaces must vanish, which gives one balance relation per interface, for example

γαβ + γαθ cos α + γβθ cos β = 0

(and its two cyclic permutations), where α, β, and θ are the angles shown and γij is the surface energy between the two indicated phases. These relations can also be expressed by an analog to a triangle known as Neumann's triangle, shown in Figure 4. Neumann's triangle is consistent with the geometrical restriction that α + β + θ = 2π, and applying the law of sines and law of cosines to it produces relations that describe how the interfacial angles depend on the ratios of surface energies. Because these three surface energies form the sides of a triangle, they are constrained by the triangle inequalities, γij < γjk + γik, meaning that no one of the surface tensions can exceed the sum of the other two. If three fluids with surface energies that do not follow these inequalities are brought into contact, no equilibrium configuration consistent with Figure 3 will exist.

Simplification to planar geometry: Young's relation

If the solid phase is rigid and planar, the force balance reduces to Young's relation,

γSG = γSL + γLG cos θ

which relates the surface tensions between the three phases: solid, liquid and gas. It subsequently predicts the contact angle of a liquid droplet on a solid surface from knowledge of the three surface energies involved. This equation also applies if the "gas" phase is another liquid, immiscible with the droplet of the first "liquid" phase.

Real smooth surfaces and the Young contact angle

The Young equation assumes a perfectly flat and rigid surface. In many cases surfaces are far from this ideal situation, and two such cases are considered here: the case of rough surfaces (see Non-ideal rough solid surfaces) and the case of smooth surfaces that are still real (finitely rigid).
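Before turning to those non-ideal cases, Young's relation can be used as a small calculator for ideal surfaces. The Python sketch below is only a sketch, and the surface-energy values in it are illustrative assumptions rather than measured data; it returns the equilibrium contact angle when one exists and reports complete wetting (or de-wetting) when |cos θ| would exceed 1.

```python
import math

def young_contact_angle(gamma_sg, gamma_sl, gamma_lg):
    """Solve Young's relation gamma_SG = gamma_SL + gamma_LG*cos(theta) for theta.

    Returns the contact angle in degrees, or None if no equilibrium angle
    exists (complete wetting when cos(theta) > 1, complete de-wetting when < -1).
    """
    cos_theta = (gamma_sg - gamma_sl) / gamma_lg
    if not -1.0 <= cos_theta <= 1.0:
        return None
    return math.degrees(math.acos(cos_theta))

# Illustrative surface energies in mN/m:
print(young_contact_angle(40.0, 22.0, 72.8))   # partial wetting, roughly 76 degrees
print(young_contact_angle(80.0, 5.0, 72.8))    # None -> complete wetting
```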
Even on a perfectly smooth surface, a drop will assume a wide spectrum of contact angles, ranging from the so-called advancing contact angle, θA, to the so-called receding contact angle, θR. The equilibrium contact angle (θ0) can be calculated from θA and θR, as was shown by Tadmor.

The Young–Dupré equation and spreading coefficient

The Young–Dupré equation (Thomas Young 1805; Athanase Dupré 1869) dictates that neither γSG nor γSL can be larger than the sum of the other two surface energies. The consequence of this restriction is the prediction of complete wetting when γSG > γSL + γLG and zero wetting when γSL > γSG + γLG. The lack of a solution to the Young–Dupré equation is an indicator that there is no equilibrium configuration with a contact angle between 0 and 180° for those situations. A useful parameter for gauging wetting is the spreading parameter S,

S = γSG − (γSL + γLG)

When S > 0, the liquid wets the surface completely (complete wetting). When S < 0, there is partial wetting. Combining the spreading parameter definition with the Young relation yields the Young–Dupré equation,

S = γLG(cos θ − 1)

which only has physical solutions for θ when S < 0.

Non-ideal rough solid surfaces

Unlike ideal surfaces, real surfaces do not have perfect smoothness, rigidity, or chemical homogeneity. Such deviations from ideality result in a phenomenon called contact-angle hysteresis, defined as the difference between the advancing (θa) and receding (θr) contact angles, θa − θr. In simpler terms, contact angle hysteresis is essentially the displacement of a contact line, such as the one in Figure 3, by either expansion or retraction of the droplet. Figure 6 depicts the advancing and receding contact angles. The advancing contact angle is the maximum stable angle, whereas the receding contact angle is the minimum stable angle. Contact-angle hysteresis occurs because there are many different thermodynamically stable contact angles on a non-ideal solid. These varying thermodynamically stable contact angles are known as metastable states. Such motion of a phase boundary, involving advancing and receding contact angles, is known as dynamic wetting. When a contact line advances, covering more of the surface with liquid, the contact angle increases and generally is related to the velocity of the contact line. If the velocity of a contact line is increased without bound, the contact angle increases, and as it approaches 180° the gas phase becomes entrained in a thin layer between the liquid and solid. This is a kinetic non-equilibrium effect which results from the contact line moving at such a high speed that complete wetting cannot occur.

A well-known departure from ideality is when the surface of interest has a rough texture. The rough texture of a surface can fall into one of two categories: homogeneous or heterogeneous. A homogeneous wetting regime is one where the liquid fills in the roughness grooves of a surface. A heterogeneous wetting regime, on the other hand, is one where the surface is a composite of two types of patches. An important example of such a composite surface is one composed of patches of both air and solid. Such surfaces have varied effects on the contact angles of wetting liquids. Cassie–Baxter and Wenzel are the two main models that attempt to describe the wetting of textured surfaces. However, these equations only apply when the drop size is sufficiently large compared with the surface roughness scale.

Wenzel's model

Wenzel's model describes the homogeneous wetting regime by the equation

cos θ* = r cos θ

where θ* is the apparent contact angle, which corresponds to the stable equilibrium state (i.e. the
minimum free-energy state for the system). The roughness ratio, r, is a measure of how surface roughness affects a homogeneous surface; it is defined as the ratio of the true area of the solid surface to the apparent area. θ is the Young contact angle as defined for an ideal surface. Although Wenzel's equation demonstrates that the contact angle of a rough surface is different from the intrinsic contact angle, it does not describe contact angle hysteresis.

Cassie–Baxter model

When dealing with a heterogeneous surface, the Wenzel model is not sufficient. A more complex model is needed to measure how the apparent contact angle changes when various materials are involved. A heterogeneous surface, like that seen in Figure 8, is described by the Cassie–Baxter equation (Cassie's law):

cos θ* = rf·f·cos θ + f − 1

Here rf is the roughness ratio of the wet surface area and f is the fraction of the solid surface area wet by the liquid. It is important to realize that when f = 1 and rf = r, the Cassie–Baxter equation becomes the Wenzel equation. On the other hand, when there are many different fractions of surface roughness, each fraction of the total surface area is denoted by fi, and a summation of all fi equals 1, i.e. the total surface. The Cassie–Baxter relation can also be recast in the following form:

γ cos θ* = Σi fi(γi,sv − γi,sl)

Here γ is the Cassie–Baxter surface tension between liquid and vapor, γi,sv is the solid–vapor surface tension of every component, and γi,sl is the solid–liquid surface tension of every component. A case that is worth mentioning is when the liquid drop is placed on the substrate and creates small air pockets underneath it. This case for a two-component system is denoted by:

γ cos θ* = f1(γ1,sv − γ1,sl) − (1 − f1)γ

Here the key difference to notice is that there is no surface tension between the solid and the vapor for the second surface tension component. This is because of the assumption that the exposed surface under the droplet is air, which is the only other substrate in the system. Subsequently, the corresponding term in the equation is expressed as (1 − f). Therefore the Cassie equation can be easily derived from the Cassie–Baxter equation. Experimental results regarding the surface properties of Wenzel versus Cassie–Baxter systems showed the effect of pinning for a Young angle of 180° to 90°, a region classified under the Cassie–Baxter model. This liquid/air composite system is largely hydrophobic. After that point, a sharp transition to the Wenzel regime was found, where the drop wets the surface but no further than the edges of the drop.

Precursor film

With the advent of high-resolution imaging, researchers have started to obtain experimental data which has led them to question the assumptions of the Cassie–Baxter equation when calculating the apparent contact angle. These groups believe that the apparent contact angle is largely dependent on the triple line. The triple line, which is in contact with the heterogeneous surface, cannot rest on the heterogeneous surface like the rest of the drop. In theory it should follow the surface imperfections. This bending of the triple line is unfavorable and is not seen in real-world situations. A theory that preserves the Cassie–Baxter equation while at the same time explaining the minimized energy state of the triple line hinges on the idea of a precursor film. This film of submicrometer thickness advances ahead of the motion of the droplet and is found around the triple line. Furthermore, this precursor film allows the triple line to bend and take different conformations that were originally considered unfavorable.
This precursor fluid has been observed using environmental scanning electron microscopy (ESEM) on surfaces with pores formed in the bulk. With the introduction of the precursor film concept, the triple line can follow energetically feasible conformations, thereby correctly explaining the Cassie–Baxter model.

The intrinsic hydrophobicity of a surface can be enhanced by texturing it with different length scales of roughness. The red rose takes advantage of this by using a hierarchy of micro- and nanostructures on each petal to provide sufficient roughness for superhydrophobicity. More specifically, each rose petal has a collection of micropapillae on the surface and each papilla, in turn, has many nanofolds. The term "petal effect" describes the fact that a water droplet on the surface of a rose petal is spherical in shape, but cannot roll off even if the petal is turned upside down. The water drops maintain their spherical shape due to the superhydrophobicity of the petal (contact angle of about 152.4°), but do not roll off because the petal surface has a high adhesive force with water.

When comparing the "petal effect" to the "lotus effect", it is important to note some striking differences. The surface structure of the lotus petal and the rose petal, as seen in Figure 9, can be used to explain the two different effects. The lotus petal has a randomly rough surface and low contact angle hysteresis, which means that the water droplet is not able to wet the microstructure spaces between the spikes. This allows air to remain inside the texture, causing a heterogeneous surface composed of both air and solid. As a result, the adhesive force between the water and the solid surface is extremely low, allowing the water to roll off easily (the "self-cleaning" phenomenon). The rose petal's micro- and nanostructures, on the other hand, are larger in scale than those of the lotus leaf, which allows the liquid film to impregnate the texture. However, as seen in Figure 9, the liquid can enter the larger-scale grooves, but it cannot enter the smaller grooves. This is known as the Cassie impregnating wetting regime. Since the liquid can wet the larger-scale grooves, the adhesive force between the water and solid is very high. This explains why the water droplet will not fall off even if the petal is tilted at an angle or turned upside down. However, this effect will fail if the droplet has a volume larger than 10 µL, because the balance between weight and surface tension is surpassed.

Cassie–Baxter to Wenzel transition

In the Cassie–Baxter model, the drop sits on top of the textured surface with trapped air underneath. During the wetting transition from the Cassie state to the Wenzel state, the air pockets are no longer thermodynamically stable and liquid begins to nucleate from the middle of the drop, creating a "mushroom state," as seen in Figure 10. The penetration condition is given by the following equation:

cos θC = (Φ − 1)/(r − Φ)

where
- θC is the critical contact angle,
- Φ is the fraction of the solid/liquid interface where the drop is in contact with the surface, and
- r is the solid roughness (for a flat surface, r = 1).

The penetration front propagates to minimize the surface energy until it reaches the edges of the drop, thus arriving at the Wenzel state. Since the solid can be considered an absorptive material due to its surface roughness, this phenomenon of spreading and imbibition is called hemi-wicking. The contact angles at which spreading/imbibition occurs are in the range 0 ≤ θ < π/2. The Wenzel model is valid for θC < θ < π/2.
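Before looking at what happens for contact angles below θC, the Wenzel, Cassie–Baxter and critical-angle formulas above can be put side by side numerically. The Python sketch below is illustrative only: the intrinsic angle, roughness ratios and wetted fraction are assumed values chosen to show the trend, not data from the article.

```python
import math

def wenzel(theta_deg, r):
    """Wenzel's model: cos(theta*) = r * cos(theta)."""
    c = max(-1.0, min(1.0, r * math.cos(math.radians(theta_deg))))
    return math.degrees(math.acos(c))

def cassie_baxter(theta_deg, rf, f):
    """Cassie-Baxter for a solid/air composite: cos(theta*) = rf*f*cos(theta) + f - 1."""
    c = max(-1.0, min(1.0, rf * f * math.cos(math.radians(theta_deg)) + f - 1.0))
    return math.degrees(math.acos(c))

def critical_angle(phi, r):
    """Critical Young angle for the Cassie-to-Wenzel transition:
    cos(theta_C) = (phi - 1) / (r - phi)."""
    return math.degrees(math.acos((phi - 1.0) / (r - phi)))

theta = 110.0             # assumed intrinsic (Young) contact angle
r, rf, f = 1.8, 1.1, 0.3  # assumed roughness ratios and wetted solid fraction
print(wenzel(theta, r))             # roughness amplifies hydrophobicity here (about 128 degrees)
print(cassie_baxter(theta, rf, f))  # trapped air pushes the apparent angle higher (about 144 degrees)
print(critical_angle(f, r))         # about 118 degrees; a Young angle above this favors the Cassie state
```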
If the contact angle is less than θC, the penetration front spreads beyond the drop and a liquid film forms over the surface. Figure 11 depicts the transition from the Wenzel state to the surface film state. The film smooths the surface roughness and the Wenzel model no longer applies. In this state, the equilibrium condition and Young's relation yield

cos θ* = Φ cos θ + (1 − Φ)

By fine-tuning the surface roughness, it is possible to achieve a transition between superhydrophobic and superhydrophilic regimes. Generally, the rougher the surface, the more hydrophobic it is.

Spreading dynamics

If a drop is placed on a smooth, horizontal surface, it is generally not in the equilibrium state. Hence, it spreads until an equilibrium contact radius is reached (partial wetting). Taking capillary, gravitational and viscous contributions into account, the drop radius can be expressed as a function of time; for the complete wetting situation, a separate expression gives the drop radius at any time during the spreading process.

Effect of surfactants on wetting

Many technological processes require control of liquid spreading over solid surfaces. When a drop is placed on a surface, it can completely wet, partially wet, or not wet the surface. By reducing the surface tension with surfactants, a non-wetting material can be made to become partially or completely wetting. The excess free energy (σ) of a drop on a solid surface is written in terms of:

- γ, the liquid–vapor interfacial tension;
- γSL, the solid–liquid interfacial tension;
- γSV, the solid–vapor interfacial tension;
- S, the area of the liquid–vapor interface;
- P, the excess pressure inside the liquid; and
- R, the radius of the droplet base.

Based on this equation, the excess free energy is minimized when γ decreases, γSL decreases, or γSV increases. Surfactants are adsorbed onto the liquid–vapor, solid–liquid, and solid–vapor interfaces, which modifies the wetting behavior of hydrophobic materials so as to reduce the free energy. When surfactants are adsorbed onto a hydrophobic surface, the polar head groups face into the solution with the tail pointing outward. On more hydrophobic surfaces, surfactants may form a bilayer on the solid, causing it to become more hydrophilic. The dynamic drop radius can be characterized as the drop begins to spread, and the contact angle relaxes from its initial to its final value on the surfactant transfer time scale, where:

- θ0 is the initial contact angle,
- θ∞ is the final contact angle, and
- τ is the surfactant transfer time scale.

As the surfactants are adsorbed, the solid–vapor surface tension increases and the edges of the drop become hydrophilic. As a result, the drop spreads.

References

- Sharfrin, E.; Zisman, William A. (1960). "Constitutive relations in the wetting of low energy surfaces and the theory of the retraction method of preparing monolayers". The Journal of Physical Chemistry 64 (5): 519–524. doi:10.1021/j100834a002.
- Eustathopoulos, N.; Nicholas, M.G.; Drevet, B. (1999). Wettability at High Temperatures. Oxford, UK: Pergamon. ISBN 0-08-042146-6.
- Schrader, M.E.; Loeb, G.I. (1992). Modern Approaches to Wettability: Theory and Applications. New York: Plenum Press. ISBN 0-306-43985-9.
- de Gennes, P.G. (1985). "Wetting: statics and dynamics". Reviews of Modern Physics 57 (3): 827–863. Bibcode:1985RvMP...57..827D. doi:10.1103/RevModPhys.57.827.
- Johnson, Rulon E. (1993), in Wettability, ed. Berg, John C. New York, NY: Marcel Dekker, Inc. ISBN 0-8247-9046-4.
- Rowlinson, J.S.; Widom, B. (1982). Molecular Theory of Capillarity. Oxford, UK: Clarendon Press. ISBN 0-19-855642-X.
- Young, T. (1805).
"An Essay on the Cohesion of Fluids". Phil. Trans. R. Soc. Lond. 95: 65–87. doi:10.1098/rstl.1805.0005. - T. S. Chow (1998). "Wetting of rough surfaces". Journal of Physics: Condensed Matter 10 (27): L445. Bibcode:1998JPCM...10L.445C. doi:10.1088/0953-8984/10/27/001. - Tadmor, Rafael (2004). "Line energy and the relation between advancing, receding and Young contact angles". Langmuir 20 (18): 7659–64. doi:10.1021/la049410h. PMID 15323516. - Robert J. Good (1992). "Contact angle, wetting, and adhesion: a critical review". J. Adhesion Sci. Technol. 6 (12): 1269–1302. doi:10.1163/156856192X00629. - De Gennes, P. G. (1994). Soft Interfaces. Cambridge, UK: Cambridge University Press. ISBN 0-521-56417-4. - Abraham Marmur (2003). "Wetting of Hydrophobic Rough Surfaces: To be heterogeneous or not to be". Langmuir 19 (20): 8343–8348. doi:10.1021/la0344682. - Marmur, Abraham (1992) in Modern Approach to Wettability: Theory and Applications Schrader, Malcom E. and Loeb, Geroge New York: Plenum Press - Whyman, G.; Bormashenko, Edward; Stein, Tamir (2008). "The rigirious derivation of Young, Cassie–Baxter and Wenzel equations and the analysis of the contact angle hysteresis phenomenon". Chemical Physics Letters 450 (4–6): 355–359. Bibcode:2008CPL...450..355W. doi:10.1016/j.cplett.2007.11.033. - Bormashenko, E. (2008). "Why does the Cassie–Baxter equation apply?". Colloids and Surface A: Physicochemical and Engineering Aspects 324: 47–50. doi:10.1016/j.colsurfa.2008.13.025. - Lin, F.; Zhang, Y; Xi, J; Zhu, Y; Wang, N; Xia, F; Jiang, L (2008). "Petal Effect: Two major examples of the Cassie–Baxter model are the "Petal Effect" and Lotus Effect". A superhydrophobic state with high adhesive force". Langmuir 24 (8): 4114–4119. doi:10.1021/la703821h. PMID 18312016. - Okumura, K.; Okumura, K (2008). "Wetting transitions on textured hydrophilic surfaces". European Physical Journal 25 (4): 415–424. Bibcode:2008EPJE...25..415I. doi:10.1140/epje/i2007-10308-y. PMID 18431542. - Quere, D.; Thiele, Uwe; Quéré, David (2008). "Wetting of Textured Surfaces". Colloids and Surfaces 206 (1–3): 41–46. doi:10.1016/S0927-7757(02)00061-4. - Härth M., Schubert D.W., Simple Approach for Spreading Dynamics of Polymeric Fluids, Macromol. Chem. Phys., 213, 654 - 665, 2012, DOI: 10.1002/macp.201100631 - K.S. Lee; Ivanova, N; Starov, VM; Hilal, N; Dutschk, V (2008). "Kinetics of wetting and spreading by aqueous surfactant solutions". Advances in colloid and interface science 144 (1–2): 54–65. doi:10.1016/j.cis.2008.08.005. PMID 18834966.
http://en.wikipedia.org/wiki/Wetting
13
51
Polar molecules possess a weak electromagnetic field which is shared with their neighbours (known as the van der Waals or cohesive force). At the surface there are fewer neighbours (the van der Waals force is also present in non-polar molecules). Molecules on the surface of a liquid will experience a net downward force simply due to the absence of neighbours above. As the surface is pulled downward, the surface area is minimised and the density in the layer increases until limited by Coulomb repulsive forces. The energy density in the surface layer is higher than in the body of the liquid (where viscous forces reign). Surface tension produces effects such as a pin, of density much higher than water, floating if placed on the surface carefully.

Another force to consider in determining how the surface of a liquid will shape itself is the adhesive force between the liquid and a solid with which it is in contact. Water adheres strongly to glass. The water molecules in contact with the glass will tend to be attracted upwards. The surface tension effect will simultaneously rearrange the shape of the surface of the water until it is again in the lowest energy configuration. This process (obviously over a very short time-scale) will continue until the angle that the surface of the water makes with the wall of the vessel reaches a certain critical angle. This turned-up-at-the-edges shape is known as the meniscus. Water does not adhere well to wax, explaining why rain falling on the waxed surface of a car will curl up into droplets while that falling on the windscreen will spread all over the surface.

Reducing the surface tension of water will increase the degree to which it will spread over a surface (i.e. its 'wetness'). Raising the temperature of water or adding cleaning agents (surfactants) such as detergent will reduce surface tension. A washing machine is a surface tension reducer! A third method is to physically create turbulence on the surface so that the membrane is broken. Water is sprayed on the spot where a diver will enter the water so that a less painful impact results.

OK, since I know by now you will be dying to know how to calculate the surface tension of a liquid, I present to you-

How to determine surface tension by the capillary tube method

What you will need:
- 1 capillary tube
- Some liquid (avoid mercury)
- 1 PC microscope (available for under 100 dollars)
- 1 PC with printer (most probably you have these)
- 1 ruler
- 1 micrometer screw gauge
- 1 beaker (a transparent cup would do)
- You will need to know the density of the liquid.

If you don't have a micrometer screw gauge you could try your best with the ruler. First start up your image capture software and clamp the capillary tube into the beaker full of water. Notice the liquid rise up the tube due to a combination of surface tension and adhesive forces with the glass. Now, the surface tension (which I will henceforth call Gamma) is the force per unit length along the circumference of the liquid in the tube, so the upward force is

F = Gamma * 2*PI*R

where R is the radius of the tube. This force is balanced out by the force due to gravity, namely the mass of the liquid that has risen multiplied by the acceleration due to gravity g (9.8 m s-2). This mass is the product of the volume of liquid in the tube and the density of the liquid (water = 1000 kg m-3). So let's write that out in equation form:

Gamma * 2*PI*R = density * volume * g

Furthermore, the tension force acts parallel to the surface of the liquid where it meets the glass. It is its vertical component which balances gravity.
We take care of this by multiplying Gamma by Cos(theta), where theta is the angle made between the liquid and the glass. The volume of liquid is best calculated in two parts. First, there is the liquid beneath the lowest part of the meniscus, which is at a height h above the level of the main body of water. This is simply the volume of a cylinder of height h. Next there is the water above the lowest part of the meniscus. In a narrow tube such as this, the surface of the water assumes a hemispherical shape of radius R. The volume of this hemisphere is 2/3*PI*R^3, while the total cylindrical volume of this section (of height R) is PI*R^3. This means the water in this section has volume 1/3*PI*R^3, which is equivalent to the volume of a cylinder of height R/3. Thus the total volume of the water in the capillary tube is PI*R^2*(h + R/3) (it might be helpful to draw a diagram of this). Finally, plug this expression for the volume into the equation above and one has

Gamma * 2*PI*R * Cos(theta) = density * g * PI*R^2 * (h + R/3)

or, solving for the surface tension,

Gamma = density * g * R * (h + R/3) / (2 * Cos(theta))

So that's the theory, and the apparatus has been set up. Better get on with the experiment in that case. We need to find R, h and theta in order to obtain a value for Gamma.

- Measure the outer diameter of the capillary tube using the micrometer screw gauge. The accuracy of this device is a fraction of a millimeter.
- Take an image of the capillary tube using the microscope. The image should include the level of the main body of liquid and the meniscus in the tube.
- Print out the image.
- Measure the outer diameter of the capillary tube on the printout using your trusty 12" ruler.
- Calculate the ratio of the outer diameter as it is represented on the printed page to the one measured more accurately in step 1. This is your scale factor between reality and image.
- From the image, measure the height of the liquid in the tube. The position of the base of the liquid may be difficult to ascertain due to the pulling up of the water at the edge of the beaker. Scale down using the ratio determined in the previous step to find h.
- Measure the inner diameter on the image. Scale down and divide by two to find the radius R of the liquid in the tube.
- Get out your long lost protractor and measure theta on the printed image. Remember this is the angle the surface of the liquid makes with the tube at the glass/liquid interface.
- Plug the figures for R, h and theta and the known value of the density into the formula for the sought-after surface tension.

Now you are equipped to test the claims of laundry detergent manufacturers. First find the surface tension of tap water. Next add your favourite non-bio to test whether Gamma is reduced. Try replicating the conditions in a washing machine by raising the temperature of the water. Remember that any contaminants in the liquid or on the surface of the capillary tube will severely skew the results.

tdent clarified the physics for me.
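As a final step, the formula derived above lends itself to a few lines of Python. This is just a sketch of the arithmetic, and the measurement values in it are made up (a plausible tube radius, rise height and contact angle for clean glass and tap water), not results of the experiment described here.

```python
import math

def surface_tension(R, h, theta_deg, density=1000.0, g=9.8):
    """Gamma = density * g * R * (h + R/3) / (2 * cos(theta)), in N/m."""
    return density * g * R * (h + R / 3.0) / (2.0 * math.cos(math.radians(theta_deg)))

# Made-up measurements: R = 0.4 mm, h = 3.6 cm, theta = 5 degrees.
print(surface_tension(R=0.0004, h=0.036, theta_deg=5.0))   # about 0.071 N/m
```

A value near 0.07 N/m is the right neighbourhood for clean water at room temperature, so a detergent run that comes out noticeably lower is doing its job.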
http://everything2.com/title/surface+tension?showwidget=showCs1392234
13
134
Understanding Special Relativity

By Rafi Moor

The purpose of this article is to introduce the theory of special relativity in an easily understandable way. It reviews the basic issues in special relativity in a somewhat informal way, without the use of higher mathematics, so that anyone with a basic knowledge of physics can easily understand it.

When the wavelike nature of light (and other electromagnetic radiation) was discovered in the 19th century, scientists assumed that there must be some kind of substance in which the waves move. They believed that space was filled with such a substance, and called it ether. In 1887 Albert Michelson and Edward Morley carried out an experiment in which they tried to show the motion of Earth relative to the ether by measuring changes in light speed in different directions. To their astonishment they found no change in the light speed regardless of the relative motion between Earth and the source of light or the ether. As a result of this experiment, known as the Michelson–Morley experiment, the theory of ether was abandoned by most physicists. In the absence of ether there was also no absolute reference to determine what is at rest and what is moving in space. This was where Einstein started his work on relativity.

Einstein based his theory on two simple postulates:

1. The laws of physics are the same in all inertial frames of reference.
2. The speed of light in vacuum has the same value c in all inertial frames of reference.

The first postulate claims a symmetry or equivalence between all inertial (non-accelerating) frames. Any inertial frame can be assumed to be at rest, with all other inertial frames moving at constant speed along straight lines. Another important point is that without an absolute reference there is also no absolute definition of a point or a place in space. What is a fixed point in space for one frame is a moving point for another frame. People in two frames that are moving relative to each other will not agree about the place where an event happened in the past.

The second postulate is the one that forces us to change the laws of physics as they were known before relativity. Imagine two spaceships passing near each other, one headed toward the sun and the other away from the sun. They both watch a single photon coming from the sun. Without relativity, after one second each would see the photon one light-second away from its own ship. Since the ships have moved away from each other during this second, they would see the photon in two different places at the same time.

To see how we can settle this, let us look at an example presented by Einstein himself. Suppose there is a train moving at a constant speed along a straight track. A woman is standing exactly in the middle of the train. A man is standing on the ground outside the train. Now, the following scenario is described as seen, measured, or actually happening from the man's point of view: Just when the woman is in front of him, two lightning bolts strike the two ends of the train. They leave marks on the train as well as on the track. The light from the lightning bolts starts to spread at a constant speed in all directions, as shown in figure 1A. A fraction of a second later the light coming from the front of the train reaches the woman, as shown in 1B. A little later the light coming from both lightning bolts reaches the man simultaneously, as shown in 1C. A little later yet, the light from the rear reaches the woman.
The following phrases are true in the man's frame:

a. The woman is at equal distances from the two points where the lightning bolts strike.
b. The two lightning bolts strike simultaneously.
c. The light from both lightning bolts travels toward the woman at the same speed.
d. The light from the front of the train reaches the woman before the light from the rear.

Now, let's try to figure out how things happen in the woman's frame. There is one difference we already know: in the man's frame the places where the lightning-bolt events occur are the points on the track where they left their marks; in the woman's frame the points are at the edges of the train. Thus, in her own frame, the woman is at all times at the same distance from the points where the lightning bolts hit. If we assume that all the phrases above are also true in the woman's frame, there will be a logical contradiction: light cannot go through equal distances at the same speed and yet not in the same period of time. So, at least one of the four phrases (any of them) must be wrong in the woman's frame.

If we were talking not about light but about two pieces of metal that are sprayed off by the lightning bolts, then phrase c. would not be true in the woman's frame. If the speed of the two pieces is the same and equals v in the man's frame, then in the woman's frame the speed of the piece coming from the front will be v + V and the speed of the piece from the back will be v – V, where V is the speed of the train. But from postulate 2 we know that this is not the case when we talk about light. The speed of light remains constant regardless of the reference frame. So, one of the other three phrases must be wrong in the woman's frame.

Suppose it is d. That is, while the man measures the light from the front getting to the woman before the light from the back, the woman sees the light from both sides simultaneously. This could lead to very strange consequences. Suppose we put two photoelectric cells at the point P on the train where the two flashes of light meet in the man's frame. One of the cells is directed toward the front of the train and the other toward the back. Now we connect the cells to a bomb in such a way that if the two cells are illuminated simultaneously the bomb explodes. In the man's frame the bomb will explode. In the woman's frame it will not, since in her frame the flashes meet by her and not at point P. This would be very hard to settle. Remember that it is very easy to move from one inertial frame to another: all you need is some acceleration. Imagine the man sitting in a bar the next day when the woman enters. "How come you are here alive?" asks the man, surprised. "I saw your train exploding to pieces yesterday. There were no survivors." "What are you talking about?" says the woman. "The train got safely to its final station, where I got off." We can see that a solution that results in a different material state in two inertial frames is not a good one.

We can show a similar problem if we try to solve the contradiction by claiming that a. is not true in the woman's frame, and that while the man measures equal distances between her and the edges of the train, in her own frame she is closer to the front of the train. Suppose there is a long stick lying on the train's floor from the woman's feet up to the front of the train. The woman takes the stick and tries to use it to push the emergency stop button on the rear end of the train. In her frame it is too short and the train goes on. But in the man's frame the stick is long enough and the train stops.

So, we are left with phrase b. We must assume that in the woman's frame the lightning in the front happens before the one in the back. The only problem with this solution is that it sounds very weird. We are used to thinking about time as absolute and independent of a reference frame.
What happens now happens now, no matter from where you are looking. But let's not forget how weird it sounds when we are first told that Earth is round, and that far below us people stand with their feet towards us and call the direction pointing to us "down". In the small part of Earth's surface that we cover in everyday life, the curvature is so small that the surface can be considered flat. In the same way, for the low speeds we experience in everyday life, time can be considered absolute. But when we talk about higher speeds we see that this is not so. We have now reached one of the laws of relativity – the relativity of simultaneity. Two events that are simultaneous in one frame have a time difference in another frame. Two clocks that are synchronized in one frame show different times in another frame. We can show from the above example that this difference grows with the relative speed between the two frames and also with the distance between the two events in the direction of the relative motion.

Now we have to see what else is relative to the reference frame. Imagine two people on a dock trying to measure the length of a ship that is sailing along the dock. One is walking near the bow and the other by the stern. When the one in the front whistles, they mark both ends of the ship on the dock with a piece of chalk and then they measure the distance between the two marks. This works fine. But what if a single person tries to do the same? He marks one end of the ship, runs to the other end and marks it too. Since the ship moves while he runs, his measurement will be either more or less than the ship's length, depending on where he started. In order to get the length of something we need to know where its two ends are simultaneously. But what if simultaneity is relative? Evidently we will measure a different length from different frames.

Let's go back to the train case. The man sees two lightning bolts strike simultaneously at the two ends of the train. They leave marks on the track. For him the distance between the marks equals the length of the train. The woman, in her frame, sees the track moving to the left of the drawing. The lightning on the right happens in her frame first and marks the position of the front end of the train on the track. The lightning on the left marks the rear end of the train a little later. During this time the track, with the first mark on it, has moved a little to the left. So in the woman's frame the distance between the marks is less than the train's length. What will we measure if we put the train and the track in the same inertial frame, as when the train stops? The length of an object in its own frame is called the proper length. Suppose the proper length of the train is equal to the proper distance between the marks on the track. This breaks the symmetry required by postulate 1: in the man's frame the length of the train is its proper length, but in the woman's frame the distance between the marks is shorter than its proper length. What if the proper length of the train is less than the proper distance between the marks? Then the man sees the train longer than its proper length and the woman sees the distance on the track shorter than the proper distance. Again, no symmetry. The only possibility that preserves symmetry is that the train's proper length is longer than the proper distance between the marks. Both the man and the woman see the length in the other frame shorter than its proper length. This is another law of relativity – length contraction.
When we measure distances in an inertial frame that is moving relative to us, a distance in the direction of the relative motion will be less than its proper distance.

Let's build a light clock. We place two perfectly reflecting mirrors face to face at a distance of exactly one light-microsecond. We send a light pulse so that it goes back and forth between the two mirrors. Every tick of the clock is exactly one microsecond. Now, what do we see if we look at our light clock from a frame that is moving relative to the clock's frame in a direction perpendicular to the motion of the light, as shown in figure 2? Relative to the second frame, the distance that the light covers while moving from one mirror to the other is more than one light-microsecond. Since light speed is constant, it will take the light more than one microsecond to go from mirror to mirror. So, while for a man in the clock's frame one microsecond passes with every tick of the clock, for the man in the other frame it is a longer time. In other words, the man in the other frame sees time pass "slower" for the man in the clock's frame. Symmetry requires also that the man in the clock's frame sees time pass slower for the man in the other frame. This is a little hard to perceive. Because we are so used to thinking of time as absolute, it is much easier for us to accept that a length measured in each of two frames looks shorter to the other frame than it is to accept that a time measured in each frame looks longer in the other frame. We must understand that time is a different thing for every frame. It is not the same thing that merely looks different from every frame. We can make the following analogy: two people start walking from the same point facing two different directions, with an angle α between them, as shown in figure 3. They both walk forward at the same speed. After a while they look aside to see where the other person is. They both see that, relative to the direction they walk, the other is aligned with a point on their own path that they have already passed, so they both decide that the other is behind them. This is because forward and backward are different things for each of them. In just the same way, earlier and later are different things for two people that are in motion relative to each other. When they look at time in the other frame, they compare it to their own time. Like the two people who each see the other behind them, the people in the moving frames both see the other's time pass slower.

We have seen now the three basic principles of special relativity: length contraction, time dilation and the relativity of simultaneity. Now we need the mathematics of all this. For this purpose we will introduce the concept of time-space. For anything happening in the world we can ask where it happened and when it happened. The answer to the first question positions the event in space and the answer to the second positions it in time. To position something in space we usually use an orthogonal coordinate system with three perpendicular axes and a common origin. To position something in time we need a reference point in time and a number that represents the time difference between the reference point and the event in question. The idea of time-space is to combine these two things into one. We use a four-dimensional coordinate system with three space axes and one time axis. The position of an event in it gives the answers to the where and when questions.
We have now seen the three basic principles of special relativity: length contraction, time dilation and the relativity of simultaneity. Now we need the mathematics of all this. For this purpose we introduce the concept of time-space. For anything that happens in the world we can ask where it happened and when it happened. The answer to the first question positions the event in space, and the answer to the second positions it in time. To position something in space we usually use an orthogonal coordinate system with three perpendicular axes and a common origin. To position something in time we need a reference point in time and a number that represents the time difference between the reference point and the event in question. The idea of time-space is to combine these two things into one: we use a four-dimensional coordinate system with three space axes and one time axis, and the position of an event in it answers both the where and the when questions.

Often, when dealing with time-space, the units of length and time are chosen so that the speed of light c equals 1, as with light-years and years. This gives a comfortable symmetry to the geometry of time-space. The problem with doing so is that it makes the equations we use unit dependent. We will use a different trick here. Instead of showing the time t on the fourth axis we will show the value of ct (time multiplied by the speed of light). This represents time just as well, because technically we only multiply t by a constant, which is like choosing a different unit for time. By doing so we get the wanted symmetry without making the equations unit dependent. And there is more to it: the units of ct are length units, so we can use the same units for all four time-space dimensions. Four-dimensional time-space is very hard to visualize. To make things more visual we will use only two or even one space dimensions, forming a three- or two-dimensional time-space. Two-dimensional time-space is enough for things that happen along a line, as in the train example.

The whole point of relativity is how things look (or are) from different reference frames, and mathematically that means a transformation of events from one time-space coordinate system to another. We know how to transform points from one space coordinate system to another, but the mathematics, or geometry, of time-space is different from the Euclidean geometry we use for space. Let's draw the two-dimensional time-space coordinate axes x and ct as shown in figure 4. Suppose these are the time-space coordinates of the ground in the train example, where the origin O is the point where the man stands at the moment the woman is in front of him. The red line marked ct' shows the position of the woman as seen from the ground. This line can be described by the equation x = vt, or x = βct, where we define β as the ratio between the relative speed of the two frames and the speed of light:

β = v/c (1)

In the train's coordinate system the woman is always at the point x' = 0, so this line is actually the ct' axis of the train's coordinate system. We know that in space, if one axis of a coordinate system is rotated by an angle α relative to another coordinate system, the other axis is rotated by the same angle in the same direction. Is this also true in time-space? We know that in the ground frame the two lightning bolts occurred at t = 0. We mark them as points L1 and L2 on the coordinate system. In the train's frame the right lightning bolt occurred before t' = 0 and the left one occurred after t' = 0. So the line t' = 0, which is the x'-axis of the train, passes under point L1, through the origin O and above point L2. That is, the x'-axis is rotated in the opposite direction to the rotation of the ct'-axis.

Let's try to find the angle of rotation of the x'-axis. In a plane, a light pulse expands so that it forms a circle that grows with time; in three-dimensional time-space it forms a cone, called the light cone. In two-dimensional time-space what is left of the light cone is a pair of lines that represent the advance of light along the space axis in the positive and negative directions. For light moving in the direction x, ct = x, and for light moving in the direction -x, ct = -x. So lines that describe the advance of a light pulse in time-space have a slope of ±45˚. Since the speed of light is the same in every reference frame, this is also true in the x', ct' coordinate system. A light pulse that starts at the origin and moves in the x direction passes through the point x = 1, ct = 1, and also through x' = 1, ct' = 1. A light pulse that starts at x' = 1, ct' = 0 and moves in the -x direction has a slope of -45˚ and passes through the point x' = 0, ct' = 1. So the quadrangle (x'=0,ct'=0), (x'=1,ct'=0), (x'=1,ct'=1), (x'=0,ct'=1) is a rhombus, and the x'-axis and the ct'-axis form the same angle with the 45˚ line (Figure 5).
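A few lines of code can illustrate this symmetry of the primed axes about the light line (a sketch of the geometry just described; the choice β = 0.5 and the variable names are only examples):

import math

beta = 0.5                     # example relative speed, v/c

# In the (x, ct) plane the ct'-axis points along (beta, 1)
# and the x'-axis points along (1, beta).
ct_prime_axis = (beta, 1.0)
x_prime_axis = (1.0, beta)
light_line = (1.0, 1.0)        # the 45-degree light line ct = x

def angle_between(u, w):
    """Ordinary Euclidean angle between two directions on the drawing, in degrees."""
    dot = u[0] * w[0] + u[1] * w[1]
    return math.degrees(math.acos(dot / (math.hypot(*u) * math.hypot(*w))))

# Both primed axes make the same angle with the light line,
# which is why the unit cell of the primed grid appears as a rhombus.
print(angle_between(ct_prime_axis, light_line))   # ~18.4 degrees
print(angle_between(x_prime_axis, light_line))    # ~18.4 degrees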
How does the grid of the x', ct' coordinate system look in the x, ct coordinate system? All the time lines (x' = constant) are parallel to the ct'-axis, because every point on the train moves at the same speed, and all the space lines (ct' = constant) are parallel to the x'-axis. We know that one unit of time in the train's frame looks longer in the Earth frame, so the spacing of the x', ct' grid lines in the ct' direction is greater than one. By symmetry the spacing of the grid lines in the x' direction is also greater than one (Figure 6). By exactly how much?

At this point we will leave geometry for a while and look at some equations. The equations for transforming events between two time-space coordinate systems were developed by Hendrik Lorentz and published about a year before Einstein published his first work on relativity. Here is one way to arrive at these equations, which are known as the Lorentz transformation.

First let's find the time-dilation factor, which we will denote by the letter γ. Going back to the light clock from the time-dilation section, we have a right triangle ABC. The length of BC is l, the distance between the two mirrors, and l = ct', where t' is the time it takes light to go from one mirror to the other in the clock's frame. t is the time it takes light to cover the longer distance AC between the mirrors, as measured from the "static" frame. By our definition t = γt'. The length of AC equals ct = γct' = γl, and the length of AB equals the distance the clock's frame moves during the time t, which is vt = βct = γβl. If we choose l = 1 we get BC = 1, AC = γ and AB = γβ. Using Pythagoras' theorem, γ² = 1 + γ²β², and solving for γ:

γ = 1/√(1 - β²) (2)

Now, if we assume that the transformation is linear, it must be of the form:

ct = act' + bx' (3.1)
x = dx' + ect' (3.2)

where a, b, d and e are constants for a given pair of frames. We know that for a light pulse moving in the direction x we have x = ct and x' = ct', so for events on this line we get:

act' + bx' = dx' + ect'
ax' + bx' = dx' + ex'
a + b = d + e (4)

For a light pulse moving in the direction -x we have x = -ct and x' = -ct', so

act' + bx' = -dx' - ect'
act' - bct' = dct' - ect'
a - b = d - e (5)

Adding (5) to (4) we get 2a = 2d, so

a = d (6)

Subtracting (5) from (4) we get 2b = 2e, so

b = e (7)

The line x' = 0 passes through the common origin of the two frames, and thus, for this line, when t' = 0 also t = 0. We have defined γ as the time-dilation factor, so for this line t = γt'. Substituting x' = 0 and t = γt' in (3.1) we get:

ct = γct' = act'
a = γ (8)

For the line x' = 0 we also know that x = vt = βct. Substituting x = βct and x' = 0 in (3.2) we get:

βct = ect'
βγct' = ect'
e = βγ (9)

From (6), (7), (8) and (9), a = d = γ and b = e = βγ. Substituting this in (3.1) and (3.2) we get the Lorentz transformation for two-dimensional time-space:

ct = γ(ct' + βx') (10.1)
x = γ(x' + βct') (10.2)
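Equations (10.1) and (10.2) are easy to put into code; here is a minimal sketch (the function name, the 0.6c speed and the sample events are my own choices):

import math

def lorentz_to_unprimed(ct_p, x_p, beta):
    """Transform an event (ct', x') from the primed (train) frame to the
    unprimed (ground) frame, using equations (10.1) and (10.2)."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    ct = gamma * (ct_p + beta * x_p)
    x = gamma * (x_p + beta * ct_p)
    return ct, x

# A light pulse through the origin satisfies x' = ct' in the train frame...
beta = 0.6
for ct_p in (0.0, 1.0, 2.0):
    ct, x = lorentz_to_unprimed(ct_p, ct_p, beta)
    print(ct, x)    # ...and the transformed event satisfies x = ct as well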
While some quantities in geometry are coordinate-system dependent, others are invariant and remain the same when measured from any coordinate system. In space, for instance, the distance between two points is invariant. Thus, for all coordinate systems with a common origin, the distance of a point from the origin is invariant. By Pythagoras' theorem, the square of this distance in two-dimensional space equals x² + y², so the value of this expression is invariant in space. From the Lorentz transformation we can see that for time-space

(ct)² - x² = γ²(ct' + βx')² - γ²(x' + βct')² = γ²(1 - β²)((ct')² - (x')²)

Since γ²(1 - β²) = 1, we get

(ct)² - x² = (ct')² - (x')²

So for two-dimensional time-space the value of (ct)² - x² is invariant.

Mathematicians define a function, called the metric, that characterizes a geometry. This function satisfies the following conditions:
1. It takes two vectors as arguments and returns a number as a result; it is written g(v1, v2).
2. It is symmetric: g(v1, v2) = g(v2, v1).
3. It is linear: g(av1, v2) = a·g(v1, v2) and g(v1 + v2, v3) = g(v1, v3) + g(v2, v3).
4. Its value is invariant.

For two vectors v1 = (x1, y1) and v2 = (x2, y2), the metric of Euclidean (space) geometry is the familiar dot product of vectors:

g(v1, v2) = x1x2 + y1y2

It is easy to see that conditions 1-3 are satisfied by this function. It is a little more complicated to show that it is invariant. But if we apply the function to the same vector twice we get g(v, v) = x² + y², and we have shown that this expression is invariant. If we write a vector as the sum of two other vectors, v = v1 + v2, then, using the symmetry and linearity rules, we can show that

g(v1 + v2, v1 + v2) = g(v1, v1) + 2g(v1, v2) + g(v2, v2)

The left side of the equation is the metric applied to a vector with itself, so it is invariant; so are the first and last terms on the right side. That makes g(v1, v2) invariant as well. The metric of time-space is known as the Minkowski metric; for v1 = (ct1, x1) and v2 = (ct2, x2) it is

g(v1, v2) = ct1·ct2 - x1·x2

It can be shown, in a way similar to the one we used for space, that this function satisfies the required conditions. Note that when we apply this function to a vector with itself we get g(v, v) = (ct)² - x², which we know is invariant in time-space.
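A quick numerical check of this invariance, reusing the transformation sketch from above (the sample event and the list of speeds are arbitrary):

import math

def interval(ct, x):
    """The Minkowski 'square distance' of an event from the origin, (ct)**2 - x**2."""
    return ct ** 2 - x ** 2

def lorentz_to_unprimed(ct_p, x_p, beta):
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return gamma * (ct_p + beta * x_p), gamma * (x_p + beta * ct_p)

ct_p, x_p = 3.0, 1.0          # an event in the train frame: interval = 8
for beta in (0.0, 0.3, 0.6, 0.9):
    ct, x = lorentz_to_unprimed(ct_p, x_p, beta)
    print(round(interval(ct, x), 9))   # 8.0 every time, up to floating-point rounding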
Al and Bob are twins who live on planet Earth. On their 20th birthday Bob buys himself a spaceship and starts a voyage to the planet Traal, which is 8.66 light-years away from Earth. His spaceship's cruising speed is 0.866c, and it accelerates so fast that its acceleration time and distance are negligible. When he gets to Traal he makes a U-turn and heads back to Earth. When he arrives, he finds that he is only 30 years old while his twin Al is already 40.

This is clear from Al's point of view: Bob travels 10 years to Traal and 10 years back, the time-dilation factor for both legs of the journey is γ = 2, and thus Bob's time runs at half the rate and only 10 years pass for him during the journey. But how do things look from Bob's point of view? What about symmetry and the first postulate? Doesn't Bob see Al's time pass slower as well? Well, there is no symmetry between Al and Bob here. While Al stays in the same inertial frame all the time, Bob changes frames: first when he departs from Earth, again when he makes the U-turn at Traal, and once again when he stops at Earth. And yet, all the way to Traal there is symmetry between the two, and likewise all the way back. So what happens at the points of acceleration that makes such a big difference?

Let's see how things look from Bob's point of view. When he finishes accelerating after launch, his time is nearly the same as Al's time. But there is already one big difference between the two: for Bob, due to length contraction, the distance to Traal is now only 4.33 light-years. That makes his trip much shorter, and he gets there in 5 years. Since in his frame Al's time passes at half the rate, Earth time looks to Bob like only 2.5 years past launch when he reaches Traal. What happens during the U-turn? The distance to Earth grows while he slows down, until it reaches its proper value of 8.66 light-years when Bob is at rest relative to Earth. Then it starts shrinking again, until it is back to 4.33 light-years when Bob reaches his full speed (relative to Earth) again.

More interesting is what happens to Earth time. Suppose there is a clock on Traal that is synchronized with Earth's clock (both planets are in the same inertial frame). We know that the event of Bob reaching Traal happens 10 years after launch in the Earth frame, so the clock on Traal shows this time when Bob gets there. As we saw, in Bob's frame Earth's clock shows only 2.5 years after launch at that moment. That is, Bob sees Earth's clock showing 7.5 years less than Traal's clock. This is due to the difference in simultaneity. When Bob completes his U-turn he has the same speed and the same distance to Earth, but whereas before the U-turn Earth was moving away from him, it is now moving towards him. Thus the difference between the clock readings is the same, but in the other direction: Traal's clock still shows 10 years, but Earth's clock now shows 17.5 years. During the way back, which takes 5 years in his frame, Bob sees 2.5 more years pass on Earth, so when he gets there Earth's clock shows 20 years and his own shows 10. In Bob's frame only 2.5 years pass on Earth during his trip to Traal and 2.5 more years on his way back; the remaining 15 years pass during the short time (according to his clock) in which he makes the U-turn.

There are several arguments that show why an object, or even information, moving faster than light contradicts the laws of special relativity. I shall describe one here. We distinguish between three types of lines in time-space:
· A line with a slope of 1 or -1 (45°) is called a light-like line and can describe the motion of a light pulse.
· A line with a slope greater than 1 or smaller than -1 is called a time-like line and can be the ct-axis of another frame.
· A line with a slope greater than -1 and less than 1 is called a space-like line and can be the x-axis of another frame.

A line in time-space remains the same type in any reference frame. Moving faster than light means moving along a space-like line in time-space. But two events on a space-like line can happen in opposite order in two different frames. In figure 8 we can see that for the black coordinate system A happens before B, but for the red coordinate system B happens before A. If we could send information about B along the space-like line connecting it to A in the red frame, this information would go back in time in the black frame and reach A before B has happened. This leads to a paradox in which an action affects its own cause, like the well-known example of the time traveler who kills his own grandfather. Suppose Alpha and Beta are two enemy planets. Event B represents the launch of a missile from Beta aimed at destroying Alpha. A spaceship passing by notices the launch and transmits a warning message to Alpha. The message travels faster than light relative to the spaceship's frame (the red coordinates) and reaches Alpha before the launch has occurred in Alpha's own (black) frame. This leaves Alpha enough time to send another faster-than-light message to a warship near Beta, which in turn destroys Beta before the missile is ever launched...
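The frame-dependent ordering of space-like separated events can be seen numerically with a small sketch (the two events and the 0.6c boost are made-up numbers; the transform used here is the inverse of (10.1)-(10.2), taking ground coordinates into a frame moving at +βc):

import math

def to_moving_frame(ct, x, beta):
    """Ground-frame event (ct, x) as seen from a frame moving at speed beta*c
    in the +x direction: ct' = gamma*(ct - beta*x), x' = gamma*(x - beta*ct)."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return gamma * (ct - beta * x), gamma * (x - beta * ct)

# Two events with space-like separation in the ground frame:
A = (0.0, 0.0)    # (ct, x)
B = (1.0, 3.0)    # later than A in this frame, but far away

ct_A, _ = to_moving_frame(*A, beta=0.6)
ct_B, _ = to_moving_frame(*B, beta=0.6)
print(ct_A, ct_B)   # ct'_B comes out about -1.0, so in the moving frame B precedes A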
If an object B is moving with velocity v relative to A, and C is moving relative to B in the same direction with velocity u, what is the relative velocity w between A and C? Obviously, if we simply add v and u we can exceed the speed of light: both v and u can be more than 0.5c. We can understand why w is less than v + u if we remember that for A the distances of B look shorter and the time of B runs slower, so A sees C covering a shorter distance in a longer time than B sees. There is also a correction due to the growth of the simultaneity difference with distance.

Let's use the Lorentz transformation to find the relation between u, v and w. Take two points in time-space that C passes through, and denote them (ct1, x1) and (ct2, x2) in A's coordinates and (ct'1, x'1) and (ct'2, x'2) in B's coordinates. We can see that

w = (x2 - x1)/(t2 - t1) (11.1)
u = (x'2 - x'1)/(t'2 - t'1) (11.2)

By the Lorentz transformation,

x2 - x1 = γ((x'2 - x'1) + β(ct'2 - ct'1))
ct2 - ct1 = γ((ct'2 - ct'1) + β(x'2 - x'1))

so

w = c(x2 - x1)/(ct2 - ct1) = c((x'2 - x'1) + β(ct'2 - ct'1)) / ((ct'2 - ct'1) + β(x'2 - x'1))

or, dividing numerator and denominator by t'2 - t'1 and substituting (11.2),

w = (u + v)/(1 + uv/c²) (12)

Equation (12) is sometimes called the relativistic velocity-addition equation, but since this non-linear function is definitely not an addition, the name "velocity composition equation" suits it much better. Note that if both v and u are very small compared to c, then w ≈ v + u.

This non-linear behavior of velocities breaks most of the laws of non-relativistic dynamics. Suppose we launch a missile into space. It burns all its fuel and reaches a velocity v. A spaceship moving along with it refills its tank, and the missile fires its engine again until all the fuel is gone. Relative to the spaceship it again gains the velocity v, but relative to us it now moves at less than 2v. We can see that neither Newton's second law F = ma nor energy conservation as we know it is valid anymore: we had the same force and the same fuel energy in both parts of the missile's journey, but, relative to us, the velocity after the second part is less than twice the velocity after the first part. Thus the missile had less acceleration, and gained less kinetic energy, in the second part.

Let's now look at the following situation. Two balls of the same mass m are moving in opposite directions at the same speed v relative to an inertial frame S (Figure 10a). They collide, stick to each other and come to rest. Relative to S, the non-relativistic total momentum of the balls is mv - mv = 0 before the collision and 2m·0 = 0 after the collision, so relative to S the momentum is conserved. But what do we see if we look at the same collision from a frame S' that is moving with speed v relative to S? Before the collision one of the balls is at rest and the other moves at a speed w that is less than 2v, so the momentum before the collision is less than 2mv. After the collision the two balls, stuck together, move at speed v, since they are at rest relative to S, so the momentum after the collision is 2mv. The momentum is not conserved relative to S'.

We have seen that three of the most important laws of dynamics - the conservation of energy, the conservation of momentum and Newton's second law - fail under relativity. New laws must be found, and they have to fulfill the following conditions:
· They must be mathematically correct and comply with the Lorentz transformation.
· They must reduce to the known non-relativistic laws when v << c.
· They must prove correct when tested by experiment.
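As a concrete illustration, here is a short sketch of the composition formula (12) applied to the two-ball collision just described (c is set to 1 and v = 0.6c is an arbitrary example):

def compose(u, v, c=1.0):
    """Relativistic velocity composition, equation (12)."""
    return (u + v) / (1.0 + u * v / c ** 2)

v = 0.6                       # each ball's speed in frame S, in units of c
w = compose(v, v)             # speed of the moving ball as seen from S'
print(w)                      # about 0.882, noticeably less than 2*v = 1.2

# Non-relativistic momentum bookkeeping in S', taking m = 1 for each ball:
p_before = 1.0 * w + 1.0 * 0.0    # one ball moves at w, the other is at rest
p_after = 2.0 * v                 # the stuck-together pair moves at v in S'
print(p_before, p_after)          # 0.882... versus 1.2 -- not conserved with p = m*v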
Most problematic is the conservation of energy. It appears that separate conservation of mass and energy is not possible in relativity. Einstein presented the following scenario: in a stationary closed box, a photon leaves the internal face of the left wall and is absorbed in the right wall. It was known before Einstein that light, though it has no mass, carries momentum and can move objects. So, when the photon leaves the left wall of the box it pushes the box slightly to the left. The motion of the box stops a moment later, when the photon hits the right wall, but the box is now shifted slightly to the left. Physics tells us that the center of mass of a closed system cannot move without an external force, so the photon must have carried a tiny amount of mass from the left wall to the right. Einstein concluded that mass and energy are equivalent and interchangeable. His famous equation gives the internal energy stored in a body at rest:

E0 = mc²

The total energy of a moving body is the sum of the internal energy and the kinetic energy and is given by the formula

E = γmc²

where γ is calculated using the velocity of the body. This total energy is conserved in a closed system relative to any reference frame. The kinetic energy is thus given by:

K = E - E0 = (γ - 1)mc² (15)

This formula is a little problematic: when v << c we have γ ≈ 1, and computing it directly at low speeds simply gives K ≈ 0 rather than the familiar non-relativistic value ½mv². This happens because at low speeds the kinetic energy is very small compared with the internal energy, which makes the formula impractical when we deal with the kinetic energy of slow bodies. A recent work called "Millennium Relativity" suggests a different expression for the kinetic energy:

K = mc²v² / (√(c² - v²)·(c + √(c² - v²)))

This equation is just another form of Einstein's equation (15) and can be derived from it directly, but it does reduce to the non-relativistic formula ½mv² when v << c and can be used over the whole range of velocities.

The relativistic momentum is defined by the expression

p = γmv

This quantity is also conserved for a closed system relative to any reference frame. Note that in inelastic collisions the mass is not necessarily conserved: some or all of the kinetic energy lost in the collision may turn into mass. So, when we calculate the momentum before and after an inelastic collision we must use the appropriate mass in the momentum expression.

Force and acceleration

Instead of Newton's second law, special relativity defines the relationship

F = dp/dt = d(γmv)/dt

We can see that for low velocities it becomes F = m(dv/dt) = ma. We must note that γ also changes with the velocity, and that a force may be measured differently from different reference frames; thus force is not a very useful concept in relativity.

We can see that the expression γm appears in all the dynamic formulas. This expression is sometimes called the relativistic mass. We define the rest mass M0 of an object as the mass measured in its own reference frame, and say that the mass of an object increases with velocity so that M = γM0; M0 is what we have referred to so far as the mass m. With this definition the expression for momentum keeps its non-relativistic form p = Mv, and the total energy becomes E = Mc² in all reference frames. There are two arguments against this approach. First, it is preferable to regard mass as a property of an object that is independent of the reference frame. Second, one of the reasons for defining a relativistic mass was the attempt to keep Newton's second law in its original form, but this requires defining a different mass for an object in the direction of its motion and perpendicular to it, because it is harder to accelerate an object along its direction of motion than perpendicular to it.

With the 4-vector approach, all the physical quantities are defined as four-dimensional vectors. Examples are the 4-velocity, 4-force, 4-acceleration and 4-momentum (energy is one of the components of the 4-momentum). All these vectors are defined so that they transform under the Lorentz transformation in the same way that events in time-space do. If a vector is defined by its components as V = (x0, x1, x2, x3), then the value of x0² - x1² - x2² - x3² is invariant.
Its square root is sometimes called the "length" of the 4-vector. The laws of physics are formulated in terms of these 4-vectors, and matrix algebra is used to work with them.
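As a sketch of these definitions (in units where c = 1, so that E = γm and p = γmv as in the formulas above; the helper names and sample numbers are mine), the energy-momentum 4-vector of a particle has an invariant "length" equal to its rest mass:

import math

def four_momentum(m, vx, vy=0.0, vz=0.0):
    """Energy-momentum 4-vector (E, px, py, pz) of a particle with rest mass m,
    in units where c = 1."""
    v2 = vx ** 2 + vy ** 2 + vz ** 2
    gamma = 1.0 / math.sqrt(1.0 - v2)
    return (gamma * m, gamma * m * vx, gamma * m * vy, gamma * m * vz)

def minkowski_length(P):
    """The invariant 'length' sqrt(x0**2 - x1**2 - x2**2 - x3**2) of a 4-vector."""
    x0, x1, x2, x3 = P
    return math.sqrt(x0 ** 2 - x1 ** 2 - x2 ** 2 - x3 ** 2)

# The same particle at two different speeds: E and p change, the length does not.
print(minkowski_length(four_momentum(2.0, 0.6)))   # 2.0
print(minkowski_length(four_momentum(2.0, 0.9)))   # 2.0, up to rounding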
http://www.rafimoor.com/english/SRE.htm
A geometry based on the same fundamental premises as Euclidean geometry, except for the axiom of parallelism (see Fifth postulate). In Euclidean geometry, according to this axiom, in a plane through a point not lying on a given straight line there passes precisely one line that does not intersect the given line; this line is called a parallel to it. It is sufficient to require that there is at most one such line, since the existence of a non-intersecting line can be proved by successively drawing two perpendiculars. In Lobachevskii geometry the axiom of parallelism requires that through a point (Fig. a) there passes more than one line not intersecting the given line. The non-intersecting lines fill the part of the pencil of lines through the point that lies inside a pair of vertically opposite angles situated symmetrically with respect to the perpendicular dropped from the point to the line. The lines that form the sides of these vertically opposite angles separate the intersecting lines from the non-intersecting lines, and they are themselves non-intersecting. These limiting lines are called the parallels at the point to the given line in its two directions: one is parallel to the line in one direction, the other in the opposite direction. The remaining non-intersecting lines are called ultraparallels to the line (for details see below).

The angle Π(ρ) that a parallel through the point makes with the perpendicular dropped from the point to the line is called the angle of parallelism of the interval ρ, the distance from the point to the line. For ρ = 0 the angle equals π/2; as ρ increases the angle decreases, so that for every given value of the angle between 0 and π/2 there is a definite value of ρ. This dependence is called the Lobachevskii function:

Π(ρ) = 2 arctan(e^(-ρ/k)),

where k is a constant that determines the fixed scale of measurement; it is called the radius of curvature of the Lobachevskii space. Euclidean geometry can be obtained as a limiting case of Lobachevskii geometry when the two parallels passing through the point merge into one, that is, when the set of all lines passing through the point and not intersecting the given line reduces to a unique line. Then the angle Π(ρ) equals π/2 for any ρ. This condition is equivalent to the requirement that k → ∞. In small regions of space, that is, when the linear dimensions of figures are infinitesimal with respect to k, all relations of Lobachevskii geometry are approximated by the relations of the Euclidean geometry obtained in the limit.

Two distinct lines of the plane form a pair of one of three types.

Intersecting lines. The distance from points of one line to the other line increases without limit as the distance from the intersection of the lines increases. If the lines are not perpendicular, then each is projected orthogonally onto the other as an open interval of finite size.

Parallel lines. Coplanar non-intersecting lines that have no common perpendicular (Fig. a). Parallelism is transitive: if one line is parallel to a second in a given direction, and the second is parallel to a third in the same direction, then the first is parallel to the third in the corresponding direction. In the direction of parallelism, parallels approach each other without limit (in the sense of the distance from a moving point of one line to the other line). The orthogonal projection of one line onto the other is an open half-line.

Ultraparallels. They have one common perpendicular, whose length gives the shortest distance between them. On both sides of the perpendicular the lines diverge without limit. Each line projects onto the other as an open interval of finite length.
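As a small numerical illustration of the angle of parallelism (this assumes the standard form of the Lobachevskii function quoted above, with the curvature radius taken as k = 1):

import math

def angle_of_parallelism(rho, k=1.0):
    """Pi(rho) = 2*atan(exp(-rho/k)): the angle a parallel through a point makes
    with the perpendicular dropped from that point, for distance rho."""
    return 2.0 * math.atan(math.exp(-rho / k))

for rho in (0.0, 0.5, 1.0, 5.0):
    print(rho, math.degrees(angle_of_parallelism(rho)))
# rho = 0 gives 90 degrees, the Euclidean value; as rho grows the angle falls toward 0.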
To the three types of lines there correspond in the plane three types of pencils of lines, each of which covers the whole plane: a pencil of the first kind - the set of all lines passing through one point (the centre of the pencil); a pencil of the second kind - the set of all lines perpendicular to one line (the base of the pencil); and a pencil of the third kind - the set of all lines parallel to one line in a given direction, including this line. The orthogonal trajectories of the lines of these pencils form analogues of the circles of the Euclidean plane: a circle in the proper sense; an equidistant, or curve of equal distances (if one does not count the base itself), which is concave on the side of the base; and a limiting curve, or horocycle, which can be regarded as a circle with its centre at infinity. Horocycles are congruent. They are not compact and are concave on the side of parallelism. Two horocycles generated by the same pencil are concentric (they cut off equal intervals on the lines of the pencil). The ratio of the lengths of concentric arcs included between two lines of the pencil decreases on the side of parallelism like an exponential function of the distance x between the arcs:

s'/s = e^(-x/k).

Each of the analogues of a circle can slide along itself, generating three types of one-parameter motions of the plane: rotation about a proper centre; translation (one trajectory is the base, and the others are equidistants); and parallel displacement (all the trajectories are horocycles). Rotation of the analogues of circles about a line of the generating pencil leads to analogues of the sphere: a proper sphere, a surface of equal distances and a horosphere, or limiting surface. On a sphere the geometry of great circles is ordinary spherical geometry; on a surface of equal distances it is the geometry of equidistants, which is Lobachevskii planimetry but with a larger value of the radius of curvature k; and on a horosphere it is the Euclidean geometry of horocycles. The connection between the lengths of arcs and chords of horocycles, together with the Euclidean trigonometric relations on the horosphere, makes it possible to derive the trigonometric relations in the plane, that is, the trigonometric formulas for rectilinear triangles. For example, the formula for the area S of a triangle with angles α, β, γ is

S = k²(π - α - β - γ),

and for the perimeter L of a circle of radius r the formula is

L = 2πk sinh(r/k).

The trigonometric formulas of Lobachevskii geometry can be obtained from the formulas of spherical geometry by replacing the radius R by the imaginary number ik.

The proof of the consistency of Lobachevskii geometry is carried out by constructing an interpretation (a model). The first such interpretation was the Beltrami interpretation, which establishes that in Euclidean space the intrinsic geometry of a surface of constant negative Gaussian curvature coincides locally with Lobachevskii geometry (the role of straight lines is played by the geodesics of the surface). A surface of this type is called a pseudo-sphere. Another interpretation of Beltrami consists of a geodesic mapping of a surface of constant negative curvature into the interior of a disc. However, the Beltrami interpretations model only part of the Lobachevskii plane. The first interpretation of the whole Lobachevskii plane was the Klein interpretation, in which Cayley's projective metric is used. In this interpretation (Fig. b) the straight lines of Lobachevskii space are realized by chords of the absolute (without their end-points), and perpendicular lines are realized by conjugate chords. Distances and angles are expressed by means of the cross ratios (cf.
Cross ratio) of quadruples of points (the end-points , of an interval and the end-points , of the chord on which the interval lies) and the corresponding quadruples of lines (the sides of the angle and the (imaginary) tangents to the absolute that pass through the vertex). The parallels through a point to the line are realized by the lines and that intersect at points on the absolute. The points of the absolute model the "points at infinity" , which are not points of the Lobachevskii plane. In 1882, H. Poincaré, in constructing a theory of automorphic functions, arrived at two other models, in a disc and on a half-plane (cf. Poincaré model). In the first model (Fig. c) the Lobachevskii plane is realized by the interior of a disc, and lines by the inner parts of arcs of circles that intersect the main disc orthogonally. The metric is introduced by means of cross ratios, and the values of angles on the model are the same as those on the Lobachevskii plane (a conformal model). The introduction of coordinates makes it possible to obtain different analytical models of the Lobachevskii plane. Poincaré, in 1887, proposed a model of Lobachevskii geometry as the geometry of plane diametral sections of one of the sheets of a two-sheet hyperboloid, which can also be treated as the geometry of a sphere of purely imaginary radius in a pseudo-Euclidean space. These models can be generalized to the case of an -dimensional space. Like elliptic geometry, Lobachevskii geometry is the geometry of a Riemannian space of constant curvature. The origin of the creation of Lobachevskii geometry was the problem of parallels, that is, attempts to prove Euclid's fifth postulate concerning parallels. N.I. Lobachevskii (1826, published in 1829–1830) showed that the assumption of a postulate different from Euclid's postulate makes it possible to construct Lobachevskii geometry, which is more general than Euclidean geometry. Independently of Lobachevskii, J. Bolyai arrived at the same discovery in 1832. Not having obtained the open support of C.F. Gauss, Bolyai did not continue his research. Gauss worked out the beginnings of the new geometry much earlier, but he did not publish this research, and never spoke openly about these ideas. However, in private correspondence he regarded the work of Bolyai and Lobachevskii highly, but he did not say so openly in print. Applications of Lobachevskii geometry. In the first work on his geometry, Lobachevskii, relying on the yearly parallax of stars first measured by the astronomers of the time, showed that if his geometry is realized in physical space, then within the limits of the Solar System the deviations from Euclidean geometry would be several orders smaller than possible measuring errors. Thus, the first application of Lobachevskii geometry was a justification of the practical accuracy of Euclidean geometry. Lobachevskii applied his geometry to mathematical analysis. Going over from one coordinate system to another in his space, he found the values of about 200 distinct definite integrals. Other mathematical applications were found by Poincaré (1882), who successfully applied Lobachevskii geometry to the development of the theory of automorphic functions. The significance of Lobachevskii geometry for cosmology was first explained by A.A. Friedman. In 1922 he found a solution of the Einstein equations which implied that the universe expands in the course of time. This conclusion was subsequently confirmed by observations of E. 
Hubble (1929), who discovered the recession of distant nebulae. The metric found by Friedman gives, for a fixed time, a Lobachevskii space. The velocity space of the special theory of relativity is a Lobachevskii space. Lobachevskii geometry has been used successfully in the study of collisions of elementary particles and in the development of other questions of nuclear research. Visual perception of nearby regions of space by man produces the effect of inverse perspective, which is explained by the fact that the geometry of these regions of perceptual space is close to Lobachevskii geometry with a radius of curvature of about 15 metres.

The creation of Lobachevskii geometry was an important step in the development of studies on the possible properties of space. It also had special significance for the foundations of mathematics, since the principles of the modern axiomatic method were worked out to a significant extent thanks to the appearance of Lobachevskii geometry.

N.I. Lobachevskii, "Zwei geometrische Abhandlungen", Teubner (1898) (Translated from Russian)
J. Bolyai, "Appendix. Scientiam spatii absolute veram exhibens", Tentamen, 1, M. Vásárhelyini Die (1833)
A.D. Aleksandrov, "Abstract spaces", Mathematics, its content, methods and meaning, 3, Amer. Math. Soc. (1962) (Translated from Russian)
I.P. Egorov, "Introduction to non-Euclidean geometries", Penza (1972) (In Russian)
N.V. Efimov, "Höhere Geometrie", Deutsch. Verlag Wissenschaft. (1960) (Translated from Russian)
V.F. Kagan, "Foundations of geometry", 1, Moscow-Leningrad (1949) (In Russian)
B.L. Laptev, "Nikolai Ivanovich Lobachevskii", Kazan' (1976) (In Russian)
A.P. Norden, "Elementare Einführung in die Lobatschewskische Geometrie", Deutsch. Verlag Wissenschaft. (1958) (Translated from Russian)
Yu.Yu. Nut, "Lobachevskii geometry in an analytic setting", Moscow (1961) (In Russian)
B.V. Raushenbakh, "Spatial constructions in old Russian paintings", Moscow (1975) (In Russian)
B.A. Rozenfel'd, "Non-Euclidean spaces", Moscow (1969) (In Russian)
P.A. Shirokov, "A sketch of the fundamentals of Lobachevskian geometry", Noordhoff (1964) (Translated from Russian)

Lobachevskii geometry is also called hyperbolic geometry. Just as in spherical geometry it is natural to use a sphere of radius 1, in Lobachevskii geometry one usually assumes k = 1, thereby simplifying the formulas somewhat (in the formulas quoted above one simply sets k = 1). Beltrami could have anticipated F. Klein by combining his two interpretations. For the introduction in Poincaré's first model (Fig. c) of a metric avoiding cross ratios see [a2].

[a1] H.S.M. Coxeter, "Non-Euclidean geometry", Univ. Toronto Press (1965) pp. 224–240
[a2] H.S.M. Coxeter, "Parallel lines", Canad. Math. Bull., 21 (1978) pp. 385–397
[a3] H.S.M. Coxeter, "The non-Euclidean symmetry of Escher's picture "Circle Limit III"", Leonardo, 12 (1979) pp. 19–25; 32
[a4] H.S.M. Coxeter, "Angles and arcs in the hyperbolic plane", Math. Chronicle (New Zealand), 9 (1980) pp. 7–33
[a5] M. Berger, "Geometry", 1–2, Springer (1987) (Translated from French)
[a6] M. Greenberg, "Euclidean and non-Euclidean geometries", Freeman (1974)
[a7] D.M.Y. Sommerville, "Bibliography of non-Euclidean geometry", Chelsea, reprint (1970)
[a8] H.S.M. Coxeter, "Introduction to geometry", Wiley (1961) pp. 11; 258
[a9] R. Bonola, "Non-Euclidean geometry", Dover, reprint (1955) (Translated from Italian)

Lobachevskii geometry. B.L.
Laptev (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Lobachevskii_geometry&oldid=12005
http://www.encyclopediaofmath.org/index.php/Lobachevskii_geometry
An earth-orbiting station, equipped to study the sun, the stars, and the earth, is a concept found in the earliest speculation about space travel. During the formative years of the United States space program, space stations were among many projects considered. But after the national decision in 1961 to send men to the moon, space stations were relegated to the background. Project Apollo was a firm commitment for the 1960s, but beyond that the prospects for space exploration were not clear. As the first half of the decade ended, new social and political forces raised serious questions about the nation's priorities and brought the space program under pressure. At the same time, those responsible for America's space capability saw the need to look beyond Apollo for projects that would preserve the country's leadership in space. The time was not propitious for such a search, for the national mood that had sustained the space program was changing. In the summer of 1965, the office that became the Skylab program office was established in NASA Headquarters, and the project that evolved into Skylab was formally chartered as a conceptual design study. During the years 1965-1969 the form of the spacecraft and the content of the program were worked out. As long as the Apollo goal remained to be achieved, Skylab was a stepchild of manned spaceflight, achieving status only with the first lunar landing. When it became clear that America's space program could not continue at the level of urgency and priority that Apollo had enjoyed, Skylab became the means of sustaining manned spaceflight while the next generation of hardware and missions developed. The first five chapters of this book trace the origins of the Skylab concept from its emergence in the period 1962-1965 through its evolution into final form in 1969. Directions for Manned Spaceflight. Space Stations after 1962. Sizing Up a Space Station. Air Force Seeks Role in Space. President Calls for NASA's Plans. Mueller Opens Apollo Applications Program Office. The summer of 1965 was an eventful one for the thousands of people involved in the American space program. In its seventh year, the National Aeronautics and Space Administration (NASA) was hard at work on the Gemini program, its second series of earth-orbiting manned missions. Mercury had concluded on 16 May 1963. For 22 months after that, while the two-man Gemini spacecraft was brought to flight readiness, no American went into space. Two unmanned test flights preceded the first manned Gemini mission, launched on 23 March 1965.1 Mercury had been used to learn the fundamentals of manned spaceflight. Even before the first Mercury astronaut orbited the earth, President John F. Kennedy had set NASA its major task: to send a man to the moon and bring him back safely by 1970. Much had to be learned before that could be done-not to mention the rockets, ground support facilities, and launch complexes that had to be built and tested-and Gemini was part of the training program. Rendezvous-bringing two spacecraft together in orbit-was a part of that program; another was a determination of man's ability to survive and function in the weightlessness of spaceflight. That summer the American public was getting acquainted, by way of network television, with the site where most of the Gemini action was taking place-the Manned Spacecraft Center (MSC). 
Located on the flat Texas coastal plain 30 kilometers southeast of downtown Houston- close enough to be claimed by that city and given to it by the media-MSC was NASA's newest field center, and Gemini was the first program managed there. Mercury had been planned and conducted by the Space Task Group, located at Langley Research Center, Hampton, Virginia. Creation of the new Manned Spacecraft Center, to be staffed initially by members of the Space Task Group, was announced in 1961; by the middle of 1962 its personnel had been moved to temporary quarters in Houston; and in 1964 it occupied its new home. The 4.1-square-kilometer center provided facilities for spacecraft design and testing, crew training, and flight operations or mission control. By 1965 nearly 5000 civil servants and about twice that many aerospace-contractor employees were working at the Texas site.2 Heading this second largest of NASA's manned spaceflight centers was the man who had formed its predecessor group in 1958, Robert R. Gilruth. Gilruth had joined the staff at Langley in 1937 when it was a center for aeronautics research of NASA's precursor, the National Advisory Committee for Aeronautics (NACA). He soon demonstrated his ability in Langley's Flight Research Division, working with test pilots in quantifying the characteristics that make a satisfactory airplane. Progressing to transonic and supersonic flight research, Gilruth came naturally to the problems of guided missiles. In 1945 he was put in charge of the Pilotless Aircraft Research Division at Wallops Island, Virginia, where one problem to be solved was that of bringing a missile back through the atmosphere intact. When the decision was made in 1958 to give the new national space agency the job of putting a man into earth orbit, Gilruth and several of his Wallops Island colleagues moved to the Space Task Group, a new organization charged with designing the spacecraft to do that job.3 The Space Task Group had, in fact, already claimed that task for itself, and it went at the problem in typical NACA fashion. NACA had been a design, research, and testing organization, accustomed to working with aircraft builders but doing no fabrication work itself. The same mode characterized MSC. The Mercury and Gemini spacecraft owed their basic design to Gilruth's engineers, who supervised construction by the McDonnell Aircraft Company of St. Louis and helped test the finished hardware.4 In the summer of 1965 the Manned Spacecraft Center was up to its ears in work. By the middle of June two manned Gemini missions had been flown and a third was in preparation. Thirty-three astronauts, including the first six selected as scientist-astronauts,i were in various stages of training and preparation for flight. Reflecting the general bullishness of the manned space program, NASA announced plans in September to recruit still more flight crews.5 Houston's design engineers, meanwhile, were hard at work on the spacecraft for the Apollo program. The important choice of mission mode-rendezvous in lunar orbit-had been made in 1962; it dictated two vehicles, whose construction MSC was supervising. North American Aviation, Inc., of Downey, California, was building the command ship consisting of a command module and a supporting service module- collectively called the command and service module-which carried the crew to lunar orbit and back to earth. 
A continent away in Bethpage Long Island, Grumman Aircraft Engineering Corporation was working on the lunar module, a spidery-looking spacecraft that would set two men down on the moon's surface and return them to the command module, waiting in lunar orbit, for the trip home to earth. Houston engineers had established the basic design of both spacecraft and were working closely with the contractors in building and testing them. All of the important subsystems-guidance and navigation, propulsion and attitude control, life-support and environmental control-were MSC responsibilities; and beginning with Gemini 4, control of all missions passed to Houston once the booster had cleared the launch pad.6 Since the drama of spaceflight was inherent in the risks taken by the men in the spacecraft, public attention was most often directed at the Houston operation. This superficial and news-conscious view, though true enough during flight and recovery, paid scant attention to the launch vehicles and to the complex operations at the launch site, without which the comparatively small spacecraft could never have gone anywhere, let alone to the moon. The Saturn launch vehicles were the responsibility of NASA's largest field center, the George C. Marshall Space Flight Center, 10 kilometers southwest of Huntsville in northern Alabama. Marshall had been built around the most famous cadre in rocketry-Wernher von Braun and his associates from Peenemunde, Germany's center for rocket research during World War II. Driven since his schoolboy days by the dream of spaceflight, von Braun in 1965 was well on the way to seeing that dream realized, for the NASA center of which he was director was supervising the development of the Saturn V, the monster three-stage rocket that would power the moon mission.7 Marshall Space Flight Center was shaped by experiences quite unlike those that molded the Manned Spacecraft Center. The rocket research and development that von Braun and his colleagues began in Germany in the 1930s had been supported by the German army, and their postwar work continued under the supervision of the U.S. army. In 1950 the group moved to Redstone Arsenal outside Huntsville, where it functioned much as an army arsenal does, not only designing launch vehicles but building them as well. From von Braun all the way down, Huntsville's rocket builders were dirty-hands engineers, and they had produced many Redstone and Jupiter missiles. In 1962 von Braun remarked in an article written for a management magazine, "we can still carry an idea for a space vehicle . . . from the concept through the entire development cycle of design, development, fabrication, and testing." That was the way he felt his organization should operate, and so it did; of 10 first stages built for the Saturn I, 8 were turned out at Marshall.8 The sheer size of the Apollo task required a division of responsibility, and the MSC and Marshall shares were sometimes characterized as "above and below the instrument unit." ii To be sure, the booster and its payload were not completely independent, and the two centers cooperated whenever necessary. But on the whole, as Robert Gilruth said of their roles, "They built a damned good rocket and we built a damned good spacecraft." 
Von Braun, however, whose thinking had never been restricted to launch vehicles alone, aspired to a larger role for Marshall: manned operations, construction of stations in earth orbit, and all phases of a complete space program-which would eventually encroach on Houston's responsibilities.9 But as long as Marshall was occupied with Saturn, that aspiration was far from realization. Saturn development was proceeding well in 1965. The last test flights of the Saturn I were run off that year and preparations were under way for a series of Saturn IB shots. iii In August each of the three stages of the Saturn V was successfully static-fired at full thrust and duration. Not only that, but the third stage was fired, shut down, and restarted, successfully simulating its role of injecting the Apollo spacecraft into its lunar trajectory. Flight testing remained to be done, but Saturn V had taken a long stride.10 Confident though they were of ultimate success, Marshall's 7300 employees could have felt apprehensive about their future that summer. After Saturn V there was nothing on the drawing boards. Apollo still had a long way to go, but most of the remaining work would take place in Houston. Von Braun could hardly be optimistic when he summarized Marshall's prospects in a mid-August memo. Noting the trend of spaceflight programs, especially booster development, and reminding his coworkers that 200 positions were to be transferred from Huntsville to Houston, von Braun remarked that it was time "to turn our attention to the future role of Marshall in the nation's space program." As a headquarters official would later characterize it, Marshall in 1965 was "a tremendous solution looking for a problem." Sooner than the other centers, Marshall was seriously wondering, "What do we do after Apollo ?" 11 Some 960 kilometers southeast of Huntsville, halfway down the Atlantic coast of Florida, the third of the manned spaceflight centers had no time for worry about the future. The John F. Kennedy Space Center, usually referred to as "the Cape" from its location adjacent to Cape Canaveral' was in rapid expansion. What had started as the Launch Operations Directorate of Marshall Space Flight Center was, by 1965, a busy center with a total work force (including contractor employees) of 20 000 people. In April construction teams topped off the huge Vehicle Assembly Building, where the 110-meter Saturn V could be assembled indoors. Two months later road tests began for the mammoth crawler-transporter that would move the rocket, complete and upright, to one of two launch pads. Twelve kilometers eastward on the Cape, NASA launch teams were winding up Saturn I flights and working Gemini missions with the Air Force.12 Under the directorship of Kurt Debus, who had come from Germany with von Braun in 1945, KSC's responsibilities included much more than launching rockets. At KSC all of the booster stages and spacecraft first came together, and though they were thoroughly checked and tested by their manufacturers, engineers at the Cape had to make sure they worked when put together. One of KSC's largest tasks was the complete checkout of every system in the completed vehicle, verifying that NASA's elaborate system of "interface control" actually worked. If two vehicle components, manufactured by different contractors in different states, did not function together as intended, it was KSC's job to find out why and see that they were fixed. 
Checkout responsibility brought KSC into close contact not only with the two other NASA centers but with all of the major contractors.13 Responsibility for orchestrating the operations of the field centers and their contractors lay with the Office of Manned Space Flight (OMSF) at NASA Headquarters in Washington. One of three program offices, OMSF reported to NASA's third-ranking official, Associate Administrator Robert C. Seamans, Jr. Ever since the Apollo commitment in 1961, OMSF had overshadowed the other program offices (the Office of Space Science and Applications and the Office of Advanced Research and Technology) not only in its share of public attention but in its share of the agency's budget. Directing OMSF in 1965 was George E. Mueller (pronounced "Miller"), an electrical engineer with a doctorate in physics and 23 years' experience in academic and industrial research. Before taking the reins as associate administrator for manned spaceflight in 1963, Mueller had been vice president of Space Technology Laboratories, Inc., in Los Angeles, where he was deeply involved in the Air Force's Minuteman missile program. He had spent his first year in Washington reorganizing OMSF and gradually acclimatizing the field centers to his way of doing business. Considering centralized control to be the prime requisite for achieving the Apollo goal, Mueller established an administrative organization that gave Headquarters the principal responsibility for policy-making while delegating as much authority as possible to the centers.14 Mueller had to pick his path carefully, for the centers had what might be called a "States'-rights attitude" toward direction from Headquarters and had enjoyed considerable autonomy. Early in his tenure, convinced that Apollo was not going to make it by the end of the decade, Mueller went against center judgment to institute "all-up" testing for the Saturn V. This called for complete vehicles to be test-flown with all stages functioning the first time-a radical departure from the stage-by-stage testing NASA and NACA had previously done, but a procedure that had worked for Minuteman. It would save time and money-if it worked- but would put a substantial burden on reliability and quality control. Getting the centers to accept all-up testing was no small feat; when it succeeded, Mueller's stock went up. Besides putting Apollo back on schedule, this practice increased the possibility that some of the vehicles ordered for Apollo might become surplus and thus available for other uses.15 In an important sense the decision to shoot for the moon shortcircuited conventional schemes of space exploration. From the earliest days of serious speculation on exploration of the universe, the Europeans who had done most of it assumed that the first step would be a permanent station orbiting the earth. Pioneers such as Konstantin Eduardovich Tsiolkowskiy and Hermann Oberth conceived such a station to be useful, not only for its vantage point over the earth below, but as a staging area for expeditions outward. Wernher von Braun, raised in the European school, championed the earth-orbiting space station in the early 1950s in a widely circulated national magazine article.16 There were sound technical reasons for setting up an orbiting waystation en route to distant space destinations. Rocket technology was a limiting factor; building a station in orbit by launching its components on many small rockets seemed easier than developing the huge ones required to leave the earth in one jump. 
Too, a permanent station would provide a place to study many of the unknowns in manned flight, man's adaptability to weightlessness being an important one. There was, as well, a wealth of scientific investigation that could be done in orbit. The space station was, to many, the best way to get into space exploration; all else followed from that.17 The sense of urgency pervading the United States in the year following Sputnik was reflected in the common metaphor, "the space race." It was a race Congress wanted very much to win, even if the location of the finish line was uncertain. In late 1958 the House Select Committee on Space began interviewing leading scientists, engineers, corporate executives, and government officials, seeking to establish goals beyond Mercury. The committee's report, The Next Ten Years in Space, concluded that a space station was the next logical step. Wernher von Braun and his staff at the Army Ballistic Missile Agency presented a similar view in briefings for NASA. Both a space station and a manned lunar landing were included in a list of goals given to Congress by NASA Deputy Administrator Hugh Dryden in February 1959.18 Later that year NASA created a Research Steering Committee on Manned Space Flight to study possibilities for post-Mercury programs. That committee is usually identified as the progenitor of Apollo; but at its first meeting members placed a space station ahead of the lunar landing in a list of logical steps for a long-term space program. Subsequent meetings debated the research value of a station versus a moon landing, advocated as a true "end objective" requiring no justification in terms of some larger goal to which it contributed. Both the space station and the lunar mission had strong advocates, and Administrator T. Keith Glennan declined to commit NASA either way. Early in 1960, however, he did agree that after Mercury the moon should be the end objective of manned spaceflight.19 Still, there remained strong justification for the manned orbital station and plenty of doubt that rocket development could make the lunar voyage possible at any early date. Robert Gilruth told a symposium on manned space stations in the spring of 1960 that NASA's flight missions were a compromise between what space officials would like to do and what they could do. Looking at all the factors involved, Gilruth said, "It appears that the multi-man earth satellites are achievable . . ., while such programs as manned lunar landing and return should not be directly pursued at this time. " Heinz H. Koelle, chief of the Future Projects Office at Marshall Space Flight Center, offered the opinion that a small laboratory was the next logical step in earth-orbital operations, with a larger (up to 18 metric tons) and more complex one coming along when rocket payloads could be increased.20 This was the Marshall viewpoint, frequently expressed up until 1962. During 1960, however, manned flight to the moon gained ascendancy. In the fiscal 1961 budget hearings, very little was said about space stations; the budget proposal, unlike the previous year's, sought no funds for preliminary studies. The agency's long-range plan of January 1961 dropped the goal of a permanent station by 1969; rather, the Space Task Group was considering a much smaller laboratory-one that could fit into the adapter section that supported the proposed Apollo spacecraft on its launch vehicle.21 Then, in May 1961, President John F. 
Kennedy all but sealed the space station's fate with his proclamation of the moon landing as America's goal in space. It was the kind of challenge American technology could most readily accept: concise, definite, and measurable. Success or failure would be self-evident. It meant, however, that all of the efforts of NASA and much of aerospace industry would have to be narrowly focused. Given a commitment for a 20-year program of methodical space development, von Braun's 1952 concept might have been accepted as the best way to go. With only 8 1/2 years it was out of the question. The United States was going to pull off its biggest act first, and there would be little time to think about what might follow. The decision to go for the moon did not in itself rule out a space station; it made a large or complex one improbable, simply because there would be neither time nor money for it. At Marshall, von Braun's group argued during the next year for reaching the moon by earth-orbit rendezvous-the mission mode whereby a moon-bound vehicle would be fueled from "tankers" put into orbit near the earth. Compared to the other two modes being considered-direct flight and lunar-orbit rendezvousiv-this seemed both safer and more practical, and Marshall was solidly committed to it. In studies done in 1962 and 1963, Marshall proposed a permanent station capable of checking out and launching lunar vehicles. In June 1962, however, NASA chose lunar-orbit rendezvous for Apollo, closing off prospects for extensive earth-orbital operations as a prerequisite for the lunar landing.22 From mid-1962, therefore, space stations were proper subjects for advanced studies-exercises to identify the needs of the space program and pinpoint areas where research and development were required. Much of this future-studies work went to aerospace contractors, since NASA was heavily engaged with Apollo. The door of the space age had just opened, and it was an era when, as one future projects official put it, "the sky was not the limit" to imaginative thinking. Congress was generous, too; between 1962 and 1965 it appropriated $70 million for future studies. A dozen firms received over 140 contracts to study earth-orbital, lunar, and planetary missions and the spacecraft to carry them out. There were good reasons for this intensive planning. As a NASA official told a congressional committee, millions of dollars in development costs could be saved by determining what not to try.23 Langley Research Center took the lead in space-station studies in the early 1960s. After developing a concept for a modest station in the summer of 1959-one that foreshadowed most of Skylab's purposes and even considered the use of a spent rocket stage-Langley's planners went on to consider much bigger stations. Artificial gravity, to be produced by rotating the station, was one of their principal interests from the start. Having established an optimum rate and radius of rotation (4 revolutions per minute and 25 meters), they studied a number of configurations, settling finally on a hexagonal wheel with spokes radiating from a central control module. Enclosing nearly 1400 cubic meters of work space and accommodating 24 to 36 crewmen, the station would weigh 77 metric tons at launch.24 Getting something of this size into orbit was another problem. Designers anticipated severe problems if the station were launched piecemeal and assembled in orbit-a scheme von Braun had advocated 10 years earlier-and began to consider inflatable structures. 
Although tests were run on an 8-meter prototype, the concept was finally rejected, partly on the grounds that such a structure would be too vulnerable to meteoroids. As an alternative Langley suggested a collapsible structure that could be erected, more or less umbrella-fashion, in orbit and awarded North American Aviation a contract to study it.25 Langley's first efforts were summarized in a symposium in July 1962. Papers dealt with virtually all of the problems of a large rotating station, including life support, environmental control, and waste management. Langley engineers felt they had made considerable progress toward defining these problems; they were somewhat concerned, however, that their proposals might be too large for NASA's immediate needs.26 Similar studies were under way in Houston, where early in 1962 MSC began planning a large rotating station to be launched on the Saturn V. As with Langley's proposed stations, Houston's objectives were to assess the problems of living in space and to conduct scientific and technological research. Resupply modules and relief crews would be sent to the station with the smaller Saturn IB and an Apollo spacecraft modified to carry six men, twice its normal complement. MSC's study proposed to put the station in orbit within four years.27 By the fall of 1962 the immediate demands of Apollo had eased somewhat, allowing Headquarters to give more attention to future programs. In late September Headquarters officials urged the centers to go ahead with their technical studies even though no one could foresee when a station might fly. Furthermore, it had begun to look as though rising costs in Apollo would reduce the money available for future programs. Responses from both MSC and Langley recognized the need for simplicity and fiscal restraint; but the centers differed as to the station's mission. Langley emphasized a laboratory for advanced technology. Accordingly, NASA's offices of space science and advanced technology should play important roles in planning. MSC considered the station's major purpose to be a base for manned flights to Mars.28 The following month Joseph Shea, deputy director for systems in the Office of Manned Space Flight, sought help in formulating future objectives for manned spaceflight. In a letter to the field centers and Headquarters program offices, Shea listed several options being considered by OMSF, including an orbiting laboratory. Such a station was thought to be feasible, he said, but it required adequate justification to gain approval. He asked for recommendations concerning purposes, configurations, and specific scientific and engineering requirements for the space station, with two points defining the context: the importance of a space station program to science, technology, or national goals; and the unique characteristics of such a station and why such a program could not be accomplished by using Mercury, Gemini, Apollo, or unmanned spacecraft.29 Public statements and internal correspondence during the next six months stressed the agency's intention to design a space station that would serve national needs.30 By mid-1963, NASA had a definite rationale for an earth-orbiting laboratory. The primary mission on early flights would be to determine whether man could live and work effectively in space for long periods. The weightlessness of space was a peculiar condition that could not be simulated on earth-at least not for more than 30 seconds in an airplane. 
No one could predict either the long-term effects of weightlessness or the results of a sudden return to normal gravity. These biomedical concerns, though interesting in themselves, were part of a larger goal: to use space stations as bases for interplanetary flight. A first-generation laboratory would provide facilities to develop and qualify the various systems, structures, and operational techniques needed for an orbital launch facility or a larger space station. Finally, a manned laboratory had obvious uses in the conduct of scientific research in astronomy, physics, and biology. Although mission objectives and space-station configuration were related, the experiments did not necessarily dictate a specific design. NASA could test man's reaction to weightlessness in a series of gradually extended flights beginning with Gemini hardware, a low-cost approach particularly attractive to Washington. An alternate plan would measure astronauts' reaction to varying levels of artificial gravity within a large rotating station. Joseph Shea pondered the choices at a conference in August 1963: Is a minimal Apollo-type MOL [Manned Orbiting Laboratory] sufficient for the performance of a significant biomedical experiment? Or perhaps the benefits of a truly multi-purpose MOL are so overwhelming . . . that one should not spend unnecessary time and effort . . . building small stations, but, rather, proceed immediately with the development of a large laboratory in space.31 Whatever choice NASA made, it could select from a wide range of space-station concepts generated since 1958 by the research centers and aerospace contractors. The possibilities fit into three categories: small, medium, and large. The minimum vehicle, emphasizing the use of developed hardware, offered the shortest development time and lowest cost. Most often mentioned in this category was Apollo, the spacecraft NASA was developing for the lunar landings. There were three basic parts to Apollo: command, service, and lunar modules. The conical command module carried the crew from launch to lunar orbit and back to reentry and recovery, supported by systems and supplies in the cylindrical service module to which it was attached until just before reentry. Designed to support three men, the CM was roomy by Gemini standards, even though its interior was no larger than a small elevator. Stowage space was at a premium, and not much of its instrumentation could be removed for operations in earth orbit. One part of the service module was left empty to accommodate experiments, but it was unpressurized and could only be reached by extravehicular activity. The lunar module was an even more specialized and less spacious craft. It was in two parts: a pressurized ascent stage containing the life-support and control systems, and a descent stage, considerably larger but unpressurized. The descent stage could be fitted with a fair amount of experiments; but like the service module, it was accessible only by extravehicular activity.32 The shortage of accessible space was an obvious difficulty in using Apollo hardware for a space station. Proposals had been made to add a pressurized module that would fit into the adapter area, between the launch vehicle and the spacecraft, but this tended to offset the advantages of using existing hardware. 
Still, in July 1963, with the idea of an Apollo laboratory gaining favor, Headquarters asked Houston to supervise a North American Aviation study of an Extended Apollo mission.33 North American, MSC's prime Apollo contractor, had briefly considered the Space Task Group's proposal for an Apollo laboratory two years earlier. Now company officials revived the idea of the module in the adapter area, which had grown considerably during the evolution of the Saturn design. Though the study's primary objective was to identify the modifications required to support a 120-day flight, North American also examined the possibility of a one-year mission sustained by periodic resupply of expendables. Three possible configurations were studied: an Apollo command module with enlarged subsystems; Apollo with an attached module supported by the command module; and Apollo plus a new, self-supporting laboratory module. A crew of two was postulated for the first concept; the others allowed a third astronaut.34 Changing the spacecraft's mission would entail extensive modifications but no basic structural changes. Solar cells would replace the standard hydrogen-oxygen fuel cells, which imposed too great a weight penalty. In view of the adverse effects of breathing pure oxygen for extended periods, North American recommended a nitrogen-oxygen atmosphere, and instead of the bulky lithium hydroxide canister to absorb carbon dioxide, the study proposed to use more compact and regenerable molecular sieves.v Drawing from earlier studies, the study group prepared a list of essential medical experiments and established their approximate weights and volumes, as well as the power, time, and workspace required to conduct them. It turned out that the command module was too small to support more than a bare minimum of these experiments, and even with the additional module and a third crewman there would not be enough time to perform all of the desired tests.35 North American's study concluded that all three concepts were technically sound and could perform the required mission. The command module alone was the least costly, but reliance on a two-man crew created operational liabilities. Adding a laboratory module, though obviously advantageous, increased costs by 15-30% and posed a weight problem. Adding the dependent module brought the payload very near the Saturn IB's weight-lifting limit, while the independent module exceeded it. Since NASA expected to increase the Saturn's thrust by 1967, this was no reason to reject the concept; however, it represented a problem that would persist until 1969: payloads that exceeded the available thrust. North American recommended that any follow-up study be limited to the Apollo plus a dependent module, since this had the greatest applicability to all three mission proposals. The findings were welcomed at Headquarters, where the funding picture for post-Apollo programs remained unclear. 
The company was asked to continue its investigation in 1964, concentrating on the technical problems of extending the life of Apollo subsystems.36 Several schemes called for a larger manned orbiting laboratory that would support four to six men for a year with ample room for experiments. Like the minimum vehicle, the medium-sized laboratory was usually a zero-gravity station that could be adapted to provide artificial gravity. Langley's Manned Orbiting Research Laboratory, a study begun in late 1962, was probably the best-known example of this type: a four-man canister 4 meters in diameter and 7 meters long containing its own life-support systems. Although the laboratory itself would have to be developed, launch vehicles and ferry craft were proven hardware. A Saturn IB or the Air Force's Titan III could launch the laboratory, and Gemini spacecraft would carry the crews. Another advantage was simplicity: the module would be launched in its final configuration, with no requirement for assembly or deployment in orbit. Use of the Gemini spacecraft meant there would be no new operational problems to solve. Even so, the initial cost was unfavorable and Headquarters considered the complicated program of crew rotation a disadvantage.37 Large station concepts, like MSC's Project Olympus, generally required a Saturn V booster and separately launched crew-ferry and logistics spacecraft. Crew size would vary from 12 to 24, and the station would have a five-year life span. Proposed large laboratories ranged from 46 to 61 meters in diameter, and typically contained 1400 cubic meters of space. Most provided for continuous rotation to create artificial gravity, with non-rotating central hubs for docking and zero-gravity work. Such concepts represented a space station in the traditional sense of the term, but entailed quite an increase in cost and development time.38 Despite the interest in Apollo as an interim laboratory, Houston was more enthusiastic about a large space station. In June 1963, MSC contracted for two studies, one by Douglas Aircraft Company for a zero-gravity station and one with Lockheed for a rotating station. Study specifications called for a Saturn V booster, a hangar to enclose a 12-man ferry craft, and a 24-man crew. Douglas produced a cylindrical design 31 meters long with pressurized compartments for living quarters and recreation, a command center, a laboratory that included a one-man centrifuge to simulate gravity for short periods, and a hangar large enough to service four Apollos. The concept, submitted in February 1964, was judged to be within projected future capabilities, but the work was discontinued because there was no justification for a station of that size.39 Lockheed's concept stood a better chance of eventual adoption, since it provided artificial gravity-favored by MSC engineers, not simply for physiological reasons but for its greater efficiency. As one of them said, "For long periods of time [such as a trip to Mars], it might just be easier and more comfortable for man to live in an environment where he knew where the floor was, and where his pencil was going to be, and that sort of thing." Lockheed's station was a Y-shaped module with a central hub providing a zero-gravity station and a hangar for ferry and logistics spacecraft. Out along the radial arms, 48 men could live in varying levels of artificial gravity.40 While studies of medium and large stations continued, NASA began plans in 1964 to fly Extended Apollo as its first space laboratory. 
George Mueller's all-up testing decision in November 1963 increased the likelihood of surplus hardware by reducing the number of launches required in the moon program. Officials refused to predict how many flights might be eliminated, but 1964 plans assumed 10 or more excess Saturns. Dollar signs, however, had become more important than surplus hardware. Following two years of generous support, Congress reduced NASA's budget for fiscal 1964 from $5.7 to $5.1 billion. The usually optimistic von Braun told Heinz Koelle in August 1963, "I'm convinced that in view of NASA's overall funding situation, this space station thing will not get into high gear in the next few years. Minimum C-IB approach [Saturn IB and Extended Apollo] is the only thing we can afford at this time." The same uncertainty shaped NASA's planning the following year. In April 1964, Koelle told von Braun that Administrator James Webb had instructed NASA planners to provide management with "various alternative objectives and missions and their associated costs and consequences rather than detailed definition of a single specific long term program." Von Braun's wry response summed up NASA's dilemma: "Yes, that's the new line at Hq., so they can switch the tack as the Congressional winds change."41 At the FY 1965 budget hearings in February 1964, testimony concerning advanced manned missions spoke of gradual evolution from Apollo-Saturn hardware to more advanced spacecraft. NASA had not made up its mind about a post-Apollo space station. Two months later, however, Michael Yarymovych, director for earth-orbital-mission studies, spelled out the agency's plans to the First Space Congress meeting at Cocoa Beach, Florida. Extended Apollo, he said, would be an essential element of an expanding earth-orbital program, first as a laboratory and later as a logistics system. Some time in the future, NASA would select a more sophisticated space station from among the medium and large concepts under consideration. Mueller gave credence to his remarks the following month by placing Yarymovych on special assignment to increase Apollo system capabilities.42 Meanwhile, a project had appeared that was to become Skylab's chief competitor for the next five years: an Air Force orbiting laboratory. For a decade after Sputnik, the U.S. Air Force and NASA vied for roles in space. The initial advantage lay with the civilian agency, for the Space Act of 1958 declared that "activities in space should be devoted to peaceful purposes." In line with this policy, the civilian Mercury project was chosen over the Air Force's "Man in Space Soonest" as America's first manned space program.43 But the Space Act also gave DoD responsibility for military operations and development of weapon systems; consequently the Air Force sponsored studies over the next three years to define space bombers, manned spy-satellites, interceptors, and a command and control center. In congressional briefings after the 1960 elections, USAF spokesmen stressed the theme that "military space, defined as space out to 10 Earth diameters, is the battleground of the future."44 For all its efforts, however, the Air Force could not convince its civilian superiors that space was the next battleground. When Congress added $86 million to the Air Force budget for its manned space glider, Dyna-Soar, Secretary of Defense Robert S. McNamara refused to spend the money. 
DoD's director of defense research and development testified to a congressional committee, "there is no definable need at this time, or military requirement at this time" for a manned military space program. It was wise to advance American space technology, since military uses might appear; but "NASA can develop much of it or even most of it." Budget requests in 1962 reflected the Air Force's loss of position. NASA's $3.7 billion authorization was three times what the Air Force got for space activities; three years earlier the two had been almost equal.45 Throughout the Cold War, Russian advances proved the most effective stimuli for American actions; so again in August 1962 a Soviet space spectacular strengthened the Air Force argument for a space role. Russia placed two spacecraft into similar orbits for the first time. Vostok 3 and 4 closed to within 6 1/2 kilometers, and some American reports spoke of a rendezvous and docking. Air Force supporters saw military implications in the Soviet feat, prompting McNamara to reexamine Air Force plans. Critics questioned the effectiveness of NASA-USAF communication on technical and managerial problems. In response, James Webb created a new NASA post, deputy associate administrator for defense affairs, and named Adm. Walter F. Boone (USN, ret.) to it in November 1962. In the meantime, congressional demands for a crash program had subsided, partly because successful NASA launchesvi bolstered confidence in America's civilian programs.46 The Cuban missile crisis occupied the Pentagon's attention through much of the fall, but when space roles were again considered, McNamara showed a surprising change of attitude. Early in 1962 Air Force officials had begun talking about a "Blue Gemini" program, a plan to use NASA's Gemini hardware in early training missions for rendezvous and support of a military space station. Some NASA officials welcomed the idea as a way to enlarge the Gemini program and secure DoD funds. But when Webb and Seamans sought to expand the Air Force's participation in December 1962, McNamara proposed that his department assume responsibility for all America's manned spaceflight programs. NASA officials successfully rebuffed this bid for control, but did agree, at McNamara's insistence, that neither agency would start a new manned program in near-earth orbit without the other's approval.47 The issue remained alive for months. At one point the Air Force attempted to gain control over NASA's long-range planning. An agreement was finally reached in September protecting NASA's right to conduct advanced space-station studies but also providing for better liaison through the Aeronautics and Astronautics Coordinating Board (the principal means for formal liaison between the two agencies). The preamble to the agreement expressed the view that, as far as practicable, the two agencies should combine their requirements in a common space-station.48 McNamara's efforts for a joint space-station were prompted in part by Air Force unhappiness with Gemini. Talk of a "Blue Gemini" faded in 1963 and Dyna-Soar lost much of its appeal. 
If NASA held to its schedules, Gemini would fly two years before the space glider could make its first solo flight. On 10 December Secretary McNamara terminated the Dyna-Soar project, transferring a part of its funds to a new project, a Manned Orbiting Laboratory (MOL).49 With MOL the Air Force hoped to establish a military role for man in space; but since the program met no specific defense needs, it had to be accomplished at minimum cost. Accordingly, the Air Force planned to use proven hardware: the Titan IIIC launch vehicle, originally developed for the Dyna-Soar, and a modified Gemini spacecraft. Only the system's third major component, the laboratory, and its test equipment would be new. The Titan could lift 5700 kilograms in addition to the spacecraft; about two-thirds of this would go to the laboratory, the rest to test equipment. Initial plans provided 30 cubic meters of space in the laboratory, roughly the volume of a medium-sized house trailer. Laboratory and spacecraft were to be launched together; when the payload reached orbit, two crewmen would move from the Gemini into the laboratory for a month's occupancy. Air Force officials projected a cost of $1.5 billion for four flights, the first in 1968.50 The MOL decision raised immediate questions about the NASA-DoD pact on cooperative development of an orbital station. Although some outsiders considered the Pentagon's decision a repudiation of the Webb-McNamara agreement, both NASA and DoD described MOL as a single military project rather than a broad space program. They agreed not to construe it as the National Space Station, a separate program then under joint study; and when NASA and DoD established a National Space Station Planning Subpanel in March 1964 (as an adjunct of the Aeronautics and Astronautics Coordinating Board), its task was to recommend a station that would follow MOL. Air Force press releases implied that McNamara's approval gave primary responsibility for space stations to the military, while NASA officials insisted that the military program complemented its own post-Apollo plans. Nevertheless, concern that the two programs might appear too similar prompted engineers at Langley and MSC to rework their designs to look less like MOL.51 Actually, McNamara's announcement did not constitute program approval, and for the next 20 months MOL struggled for recognition and adequate funding. Planning went ahead in 1964 and some contracts were let, but the deliberate approach to MOL reflected political realities. In September Congressman Olin Teague (Dem., Tex.), chairman of the House Subcommittee on Manned Space Flight and of the Subcommittee on NASA Oversight, recommended that DoD adapt Apollo to its needs. Shortly after the 1964 election, Senate space committee chairman Clinton Anderson (Dem., N.M.) told the president that he opposed MOL; he believed the government could save more than a billion dollars in the next five years by canceling the Air Force project and applying its funds to an Extended Apollo station. Despite rumors of MOL's impending cancellation, the FY 1966 budget proposal included a tentative commitment of $150 million.52 The Bureau of the Budget, reluctant to approve two programs that seemed likely to overlap, allocated funds to MOL in December with the understanding that McNamara would hold the money pending further studies and another review in May. DoD would continue to define military experiments, while NASA identified Apollo configurations that might satisfy military requirements. 
A joint study would consider MOL's utility for non-military missions. A NASA-DoD news release on 25 January 1965 said that overlapping programs must be avoided. For the next few years both agencies would use hardware and facilities "already available or now under active development" for their manned spaceflight programs-at least "to the maximum degree possible."53 In February a NASA committee undertook a three-month study to determine Apollo's potential as an earth-orbiting laboratory and define key scientific experiments for a post-Apollo earth-orbital flight program. Although the group had worked closely with an Air Force team, the committee's recommendations apparently had little effect on MOL, the basic concept for which was unaltered by the review. More important, the study helped NASA clarify its own post-Apollo plans.54 Since late 1964, advocates of a military space program had increased their support for MOL, the House Military Operations Subcommittee recommending in June that DoD begin full-scale development without further delay. Two weeks later a member of the House Committee on Science and Astronautics urged a crash program to launch the first MOL within 18 months. Russian and American advances with the Voskhod and Gemini flights-multi-manned missions and space walks-made a military role more plausible. On 25 August 1965, MOL finally received President Johnson's blessing.55 Asked if the Air Force had clearly established a role for man in space, a Pentagon spokesman indicated that the chances seemed good enough to warrant evaluating man's ability "much more thoroughly than we're able to do on the ground." NASA could not provide the answers because the Gemini spacecraft was too cramped. One newsman wanted to know why the Air Force had abandoned Apollo; the reply was that Apollo's lunar capabilities were in many ways much more than MOL needed. If hindsight suggests that parochial interests were a factor, the Air Force nevertheless had good reasons to shun Apollo. The lunar landing remained America's chief commitment in space. Until the goal was accomplished, an Air Force program using Apollo hardware would surely take second place.56 In early 1964 NASA undertook yet another detailed examination of its plans, this time at the request of the White House. Lyndon Johnson had played an important role in the U.S. space program since his days as the Senate majority leader. Noting that post-Apollo programs were likely to prove costly and complex, the president requested a statement of future space objectives and the research and development programs that supported them.57 Webb handed the assignment to an ad hoc Future Programs Task Group. After five months of work, the group made no startling proposals. Their report recognized that Gemini and Apollo were making heavy demands on financial and human resources and urged NASA to concentrate on those programs while deferring "large new mission commitments for further study and analysis." By capitalizing on the "size, versatility, and efficiency" of the Saturn and Apollo, the U.S. should be able to maintain space preeminence well into the 1970s. Early definition of an intermediate set of missions using proven hardware was recommended. Then, a relatively small commitment of funds within the next year would enable NASA to fly worthwhile Extended Apollo missions by 1968. 
Finally, long-range planning should be continued for space stations and manned flights to Mars in the 1970s.58 The report apparently satisfied Webb, who used it extensively in subsequent congressional hearings. It should also have pleased Robert Seamans, since he was anxious to extend the Apollo capability beyond the lunar landing. Others in and outside of NASA found fault with the document. The Senate space committee described the report as "somewhat obsolete," containing "less information than expected in terms of future planning." Committee members faulted its omission of essential details and recommended a 50% cut in Extended Apollo funding, arguing that enough studies had already been conducted. Elsewhere on Capitol Hill, NASA supporters called for specific recommendations. Within the space agency, some officials had hoped for a more ambitious declaration, perhaps a recommendation for a Mars landing as the next manned project. At Huntsville, a future projects official concluded that the plan offered no real challenge to NASA (and particularly to Marshall) once Apollo was accomplished.59 In thinking of future missions, NASA officials were aware of how little experience had been gained in manned flight. The longest Mercury mission had lasted less than 35 hours. Webb and Seamans insisted before congressional committees that the results of the longer Gemini flights might affect future planning, and a decision on any major new program should, in any event, be delayed until after the lunar landing. The matter of funding weighed even more heavily against starting a new program. NASA budgets had reached a plateau at $5.2 billion in fiscal 1964, an amount just sufficient for Gemini and Apollo. Barring an increase in available money, new manned programs would have to wait for the downturn in Apollo spending after 1966. There was little support in the Johnson administration or Congress to increase NASA's budget; indeed, Great Society programs and the Vietnam war were pushing in the opposite direction. The Air Force's space program was another problem, since some members of Congress and the Budget Bureau favored MOL as the country's first space laboratory.60 Equally compelling reasons favored an early start of Extended Apollo. A follow-on program, even one using Saturn and Apollo hardware, would require three to four years' lead time. Unless a new program started in 1965 or early 1966, the hiatus between the lunar landing program and its successor would adversely affect the 400 000-member Apollo team. Already, skilled design engineers were nearing the end of their tasks. The problem was particularly worrisome to Marshall, for Saturn IB-Apollo flights would end early in 1968. In the fall of 1964, a Future Projects Group appointed by von Braun began biweekly meetings to consider Marshall's future. In Washington, George Mueller pondered ways of keeping the Apollo team intact. By 1968 or 1969, when the U.S. landed on the moon, the nation's aerospace establishment would be able to produce and fly 8 Apollos and 12 Saturns per year; but Mueller faced a cruel paradox: the buildup of the Apollo industrial base left him no money to employ it effectively after the lunar landing.61 Until mid-1965 Extended Apollo was classified as advanced study planning; that summer Mueller moved it into the second phase of project development, project definition. A Saturn-Apollo Applications Program Office was established alongside the Gemini and Apollo offices at NASA Headquarters. Maj. Gen. 
David Jones, an Air Force officer on temporary duty with NASA, headed the new office; John H. Disher became deputy director, a post he would fill for the next eight years.62 Little fanfare attended the opening on 6 August 1965. Apollo and Gemini held the spotlight, but establishment of the program office was a significant milestone nonetheless. Behind lay six years of space-station studies and three years of post-Apollo planning. Ahead loomed several large problems: winning fiscal support from the Johnson administration and Congress, defining new relationships between NASA centers, and coordinating Apollo Applications with Apollo. Mueller had advanced the new program's cause in spite of these uncertainties, confident in the worth of Extended Apollo studies and motivated by the needs of his Apollo team. In the trying years ahead, the Apollo Applications Program (AAP) would need all the confidence and motivation it could muster.

i All three of the Skylab scientist-astronauts were in this first group, selected on 27 June 1965.

ii The instrument unit was the electronic nerve center of inflight rocket control and was located between the booster's uppermost stage and the spacecraft.

iii The Saturn IB or "uprated Saturn 1" was a two-stage rocket like its predecessor but with an improved and enlarged second stage.

iv In direct flight the vehicle travels from the earth to the moon by the shortest route, brakes, and lands; it returns the same way. This requires taking off with all the stages and fuel needed for the round trip, dictating a very large booster. In lunar-orbit rendezvous two spacecraft are sent to the moon: a landing vehicle and an earth-return vehicle. While the former lands, the latter stays in orbit awaiting the lander's return; when they have rejoined, the lander is discarded and the crew comes home in the return ship. Von Braun and his group adopted earth-orbit rendezvous as doctrine.

v Molecular sieves contain a highly absorbent mineral, usually a zeolite (a potassium aluminosilicate), whose structure is a 3-dimensional lattice with regularly spaced channels of molecular dimensions; the channels comprise up to half the volume of the material. Molecules (such as carbon dioxide) small enough to enter these channels are absorbed, and can later be driven off by heating, regenerating the zeolite for further use.

vi Mariner 2 was launched toward Venus on 27 August 1962; in October came two Explorer launches and the Mercury flight of Walter M. Schirra; on 16 November NASA conducted its third successful Saturn I test flight.
http://history.nasa.gov/SP-4208/ch1.htm
A Few Words About Radiometric Dating: The most widely-used method for determining the age of fossils is to date them by the "known age" of the rock strata in which they are found. At the same time, the most widely-used method for determining the age of the rock strata is to date them by the "known age" of the fossils they contain. In this "circular dating" method, all ages are based on uniformitarian assumptions about the date and order in which fossilized plants and animals are believed to have evolved. Most people are surprised to learn that there is, in fact, no way to directly determine the age of any fossil or rock. The so-called "absolute" methods of dating (radiometric methods) actually only measure the present ratios of radioactive isotopes and their decay products in suitable specimens - not their age. These measured ratios are then extrapolated to an "age" determination; a short numerical sketch of that extrapolation follows this passage.

The problem with all radiometric "clocks" is that their accuracy critically depends on several starting assumptions which are largely unknowable. To date a specimen by radiometric means, one must first know the starting amount of the parent isotope at the beginning of the specimen's existence. Second, one must be certain that there were no daughter isotopes in the beginning. Third, one must be certain that neither parent nor daughter isotopes have ever been added or removed from the specimen. And fourth, one must be certain that the decay rate of parent isotope to daughter isotope has always been the same. That one or more of these assumptions are often invalid is obvious from the published radiometric "dates" (to say nothing of "rejected" dates) found in the literature.

One of the most obvious problems is that several samples from the same location often give widely-divergent ages. Apollo moon samples, for example, were dated by both uranium-thorium-lead and potassium-argon methods, giving results which varied from 2 million to 28 billion years. Lava flows from volcanoes on the north rim of the Grand Canyon (which erupted after its formation) show potassium-argon dates a billion years "older" than the most ancient basement rocks at the bottom of the canyon. Lava from underwater volcanoes near Hawaii (that are known to have erupted in 1801 AD) have been "dated" by the potassium-argon method with results varying from 160 million to nearly 3 billion years. It's really no wonder that all of the laboratories that "date" rocks insist on knowing in advance the "evolutionary age" of the strata from which the samples were taken -- this way, they know which dates to accept as "reasonable" and which to ignore.

More precisely, all such dating is based on the assumption that nothing "really exceptional" happened in the meantime. What I mean by "really exceptional" is this: an event theoretically possible, but whose mechanism is not yet understood in terms of the established paradigms. To give an example: a crossing of two different universes. This is theoretically possible, taking into account modern physical theories, but it is too speculative to discuss its "probability" and possible consequences. Could such an event change radioactive decay data? Could it change the values of some fundamental physical constants? Yes, it could. Is it possible that similar events have happened in the past? Yes, it is possible. How possible is it? We do not know. We do not know, in fact, what would be an exact meaning of 'crossing of two different universes.' 
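The sketch below is an illustration added here, not taken from any of the sources quoted in this document: it shows the standard closed-system age equation that radiometric extrapolation relies on, with a hypothetical rubidium-strontium measurement as the example. The computed "age" is only as good as the assumptions the passage lists (zero initial daughter isotope, a closed system, and a constant decay rate); change any of them and the number changes.

import math

def radiometric_age(daughter_per_parent, half_life_years):
    # Closed-system age equation: t = (1 / lambda) * ln(1 + D/P),
    # where lambda = ln(2) / half-life. This presumes zero initial
    # daughter isotope, no gain or loss of material, and a constant
    # decay rate -- exactly the assumptions discussed above.
    decay_const = math.log(2) / half_life_years
    return math.log(1.0 + daughter_per_parent) / decay_const

# Hypothetical example: a measured Sr-87/Rb-87 ratio of 0.01
# (the half-life of Rb-87 is roughly 49 billion years).
print(f"{radiometric_age(0.01, 4.88e10):.2e} years")  # about 7.0e8 years

The point of the sketch is simply that the laboratory measures a ratio; the "age" is arithmetic performed on that ratio under stated assumptions.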
In addition to considering the idea of cataclysms that could have destroyed ancient civilizations more than once, there is another matter to consider in special relationship to radioactive decay: that ancient civilizations may have destroyed themselves with nuclear war.

Radiocarbon dates for Pleistocene remains in northeastern North America, according to scientists Richard Firestone of Lawrence Berkeley National Laboratory, and William Topping, are younger -- as much as 10,000 years younger -- than for those in the western part of the country. Dating by other methods like thermoluminescence (TL), geoarchaeology, and sedimentation suggests that many radiocarbon dates are grossly in error. For example, materials from the Gainey Paleoindian site in Michigan, radiocarbon dated at 2880 yr BC, are given an age by TL dating of 12,400 BC. It seems that there are so many anomalies reported in the upper US and in Canada of this type, that they cannot be explained by ancient aberrations in the atmosphere or other radiocarbon reservoirs, nor by contamination of data samples (a common source of error in radiocarbon dating). Assuming correct methods of radiocarbon dating are used, organic remains associated with an artifact will give a radiocarbon age younger than they actually are only if they contain an artificially high radiocarbon level.

The research indicates that the entire Great Lakes region (and beyond) was subjected to particle bombardment and a catastrophic nuclear irradiation that produced secondary thermal neutrons from cosmic ray interactions. The neutrons produced unusually large quantities of Pu239 and substantially altered the natural uranium abundance ratios in artifacts and in other exposed materials including cherts, sediments, and the entire landscape. These neutrons necessarily transmuted residual nitrogen in the dated charcoals to radiocarbon, thus explaining anomalous dates. […] The C14 level in the fossil record would reset to a higher value. The excess global radiocarbon would then decay with a half-life of 5730 years, which should be seen in the radiocarbon analysis of varied systems. […] Increases in C14 are apparent in the marine data at 4,000, 32,000-34,000, and 12,500 BC. These increases are coincident with geomagnetic excursions. The enormous energy released by the catastrophe at 12,500 BC could have heated the atmosphere to over 1000 C over Michigan, and the neutron flux at more northern locations would have melted considerable glacial ice. Radiation effects on plants and animals exposed to the cosmic rays would have been lethal, comparable to being irradiated in a 5 megawatt reactor for more than 100 seconds. The overall pattern of the catastrophe matches the pattern of mass extinction before Holocene times. The Western Hemisphere was more affected than the Eastern, North America more than South America, and eastern North America more than western North America. Extinction in the Great Lakes area was more rapid and pronounced than elsewhere. Larger animals were more affected than smaller ones, a pattern that conforms to the expectation that radiation exposure affects large bodies more than smaller ones. [Firestone, Richard B., Topping, William, Terrestrial Evidence of a Nuclear Catastrophe in Paleoindian Times, dissertation research, 1990]

The evidence that Firestone and Topping discovered is puzzling for a lot of reasons. 
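As a purely illustrative aside (this sketch is mine, not Firestone and Topping's), the arithmetic behind the claim that excess radiocarbon makes a sample look too young can be shown with the 5730-year half-life quoted in the excerpt; the calendar age and the size of the excess below are hypothetical numbers chosen only to show the direction and scale of the effect.

import math

HALF_LIFE = 5730.0                     # years, the half-life cited in the excerpt
MEAN_LIFE = HALF_LIFE / math.log(2)    # roughly 8267 years

def apparent_age(fraction_of_expected_c14):
    # First-order decay read backwards: age = -mean_life * ln(N / N0).
    # A sample holding MORE C-14 than the model expects reads too young.
    return -MEAN_LIFE * math.log(fraction_of_expected_c14)

true_age = 14000.0                                # hypothetical calendar age, years
normal_fraction = 0.5 ** (true_age / HALF_LIFE)   # C-14 remaining with no disturbance
boosted_fraction = 4.0 * normal_fraction          # hypothetical 4x excess added by irradiation

print(round(apparent_age(normal_fraction)))   # ~14000 years: the undisturbed date
print(round(apparent_age(boosted_fraction)))  # ~2540 years: reads far too young

Each doubling of a sample's carbon-14 content shifts its apparent radiocarbon age younger by one half-life (about 5730 years), which is why an artificially elevated C-14 level would show up as an anomalously young date.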
But, the fact is, there are reports of similar evidence from such widely spread regions as India, Ireland, Scotland, France, and Turkey; ancient cities whose brick and stone walls have literally been vitrified, that is, fused together like glass.

The passages which relate to "Earth Changes" are scattered throughout the more than 1,000 pages of text that have been delivered to date... Here is a selection of some of the most pertinent to the CAUSES and RESULTS ... and, it is mostly through questions regarding the past and cycles of history that we were given insight as to our future. One of the earliest questions we asked was regarding the fate of the dinosaurs. Q: (L) What event took place at that time that caused the death of the dinosaurs? A: Comet impact. Q: (L) Did a comet actually strike the earth? Q: (L) Was it a large comet? Q: (L) How large? A: The largest was 18 miles in diameter, but none was large enough for the event. 14 hit at that occasion. Q: (L) Is there any regular periodicity or cycle to this comet business? Q: (L) What is the period? A: 3600 years roughly. Q: (L) Where are these cometary showers from? A: Clusters in their own orbit. Q: (L) Does this cluster of comets orbit around the sun? Q: (L) Is the orbit perpendicular to the plane of the ecliptic? A: Yes and no. Q: (L) Is this cluster of comets the remains of a planet? Q: (L) What body were the Sumerians talking about when they described the Planet of the crossing or Nibiru? Q: (L) This body of comets you have talked about? Q: (L) Does this cluster of comets appear to be a single body as Sitchen Q: (L) Now, this cluster of comets, when was the last time it came into the solar system? A: 3582 yrs ago. Q: (L) So, when is this cluster expected to hit the plane of the ecliptic? A: 12 to 18 years. [This session was in 1994] Q: (L) How long has this comet cluster been part of our solar system? A: 890 million years. Q: (L) What was the origin of this comet cluster? Was it originally a A: No. Your government already knows they're coming close again. Q: (L) You said that the orbit is around the Sun. Where does it enter the plane of the ecliptic? Q: (L) Does it enter between Mars and Jupiter, for example? Q: (L) Is the orbit perpendicular to the plane of the ecliptic? Or is it at an angle? A: Not correct idea structure. Picture a spirograph. Q: (L) Do the comets orbit around themselves? Do they have a sort of axis? Q: (L) How many are in this cluster? Q: (L) Do they lose or pick any up from time to time? Q: (L) How big is the biggest one at this time? A: 900 miles diameter. Spirograph. Q: (L) According to my calculations, the comet cluster came by 8,788 B.C., is that correct? A: Close enough. Q: (L) Was there any historical cataclysm recorded in history that we could relate to that passing? Q: (L) And this was part of the great flood that occurred 12,388 B.C., is that correct? Q: (L) Now, you have said that Venus was drawn into the solar system by the gravitational pull of the cluster of comets? Q: (L) Where did Venus get all its gases and clouds and so forth? What was its origin? Where did it get all this stuff? A: Collected during fiery, friction filled journey and space matter in Q: (L) Where was Venus originally from? A: Ancient wanderer from near Arcturus. Q: (L) Now, I have compared certain passages of Revelations to the work of Immanuel Velikovsky, were those sections accurate in what they describe? 
Q: (L) Is there going to be massive disruption on the planet and maybe a lot of people transitioning out of the body simultaneously because of the interaction of this cluster of comets and the earth? Q: Now, when you talk about the comet cluster, as you have on many occasions, the technical definition of a comet is that it is a chunk of ice. Is this the case with the comets in this cluster? A: And other substances, primarily iridium cores. (T) Can we ask one more quick question? NASA has announced that the space telescope, Hubble, has detected clusters of comets, is this, in effect, the beginning of the governments of the world preparing the people for what is to come? A: It certainly is a possibility, but, again, you are accessing a very touchy area. Too much knowledge for you to gather in this particular area would not be beneficial to you. Q: There is a rumor going around that a large object is coming our way, that is a gigantic, intelligently controlled spacecraft, loaded with Lizzies. Could you comment on this please? A: Comet cluster. Sitchen believes it is a "planet." Q: In terms of this comet cluster, how many bodies are in this cluster as a discrete unit? Q: What is the ETA? Q: Why is it open? Why can't we look at it and determine the factors, the direction, trajectory, velocity... A: If you could, you would be interrupted in your learning cycle. Q: What exactly does THAT mean? (F) Because you would never do anything else because all you would think about is the day the comet is coming! Q: Is the government planning to stage an invasion by aliens to cause the populace of the world to go into such a fear state that they will accept total control and domination? A: Open. But if so, will "flop". A: Many reasons: 1. Visual effects will be inadequate and will have "glitches". 2. Real invasion may take place first. 3. Other events may intercede. Q: Such as what? A: Earth changes. Q: Am I correct in assuming that some of these hot-shot, big-wig guys in the government who have plans for taking over the whole world and making everything all happy and hunky-dory with them in charge, are just simply not in synch with the fact that there are some definite earth changes on the agenda? Are they missing something here? A: Close. They are aware but in denial. Q: Are these earth changes going to occur prior to the arrival of the comet cluster? A: No. But "time" frame is, as of yet, undetermined. A: Now would be a good "time" for you folks to begin to reexamine some of the extremely popular "Earth Changes" prophecies. Why, you ask? Because, remember, you are third density beings, so real prophecies are being presented to you in terms you will understand, I.E. physical realm, I.E. Earth changes. This "may" be symbolism. Would most students of the subject understand if prophecies were told directly in fourth density terms? Q: (L) Is this comparable to my idea about dream symbolism? For example, the dream I had about the curling cloud which I saw in a distance and knew it was death dealing and I interpreted it to be a tornado, but it was, in fact, a dream of the Challenger disaster. I understood it to be a tornado, but in fact, what I saw was what I got, a death dealing force in the sky, a vortex, in the distance. I guess my dream was a fourth density representation but I tried to interpret it in terms I was familiar with. Is this what you mean? A: Close. But it is easy for most to get bogged down by interpreting prophecies in literal terms. 
Q: (L) In terms of these Earth Changes, Edgar Cayce is one of the most famous prognosticators of recent note, a large number of the prophecies he made seemingly were erroneous in terms of their fulfillment. For example, he prophesied that Atlantis would rise in 1969, but it did not, though certain structures were discovered off the coast of Bimini which are thought by many to be remnants of Atlantis. These did, apparently, emerge from the sand at that time. A: Example of one form of symbolism. Q: (L) And the symbolism was that one is reading the event from 3rd density into sixth density terms and then transmitting it back into 3rd, and while the ideation was correct, the exact specifics, in 3rd density terms, were slightly askew. Is that what we are dealing with here? A: 99.9 per cent would not understand that concept. Most are always looking for literal translations of dates. A: Analogy is novice who attends art gallery, looks at abstract painting and says "I don't get it." Q: (L) Well, let's not denigrate literal translations or at least attempts to get things into literal terms. I like realistic art work. I am a realist in my art preferences. I want trees to look like trees and people to have only two arms and legs. Therefore, I also like some literalness in my prophecies. A: Some is okay, but, beware or else "California falls into the ocean" will always be interpreted as California falling into the ocean. Q: (General uproar) (F) Wait a minute, what was the question? (L) I just said I liked literalness in my prophecies. (F) Oh, I know what they are saying. People believe that California is just going to go splat and that Phoenix is going to be on the seacoast, never mind that it's at 1800 feet elevation, it's just going to drop down to sea level, or the sea level is going to rise, but it's not going to affect Virginia Beach even though that's at sea level. I mean... somehow Phoenix is just going to drop down and none of the buildings are going to be damaged, even though it's going to fall 1800 feet... (T) Slowly. It's going to settle. (F) Slowly? It would have to be so slowly it's unbelievable how slowly it would have to be. (T) It's been settling for the last five million years, we've got a ways to go in the next year and a half! (F) Right! That's my point. (T) In other words, when people like Scallion and Sun Bear and others say California is going to fall into the ocean, they are not saying that the whole state, right along the border is going to fall into the ocean, they are using the term California to indicate that the ocean ledge along the fault line has a probability of breaking off and sinking on the water side, because it is a major fracture. We understand that that is not literal. Are you telling us that there is more involved here as far as the way we are hearing what these predictions say? Q: (T) So, when we talk about California falling into the ocean, we are not talking about the whole state literally falling into the ocean? A: In any case, even if it does, how long will it take to do this? Q: (L) It could take three minutes or three hundred years. (T) Yes. That is "open" as you would say. A: Yes. But most of your prophets think it is not open. Q: (J) Yeah, because they think they have the only line on it. (T) Okay. So they are thinking in the terms that one minute California will be there and a minute and a half later it will be all gone. Is this what you are saying? A: Or similar. 
Q: (T) So, when we are talking: "California will fall into the ocean," which is just the analogy we are using, we are talking about, as far as earth changes, is the possibility that several seismic events along the fault line, which no one really knows the extent of... A: Or it all may be symbolic of something else. Q: (L) Such as? Symbolic of what? A: Up to you to examine and learn. Q: (L) Now, wait a minute here! That's like sending us out to translate a book in Latin without even giving us a Latin dictionary. A: No it is not. We asked you to consider a reexamination. Q: (L) You have told us through this source, that there is a cluster of comets connected in some interactive way with our solar system, and that this cluster of comets comes into the plane of the ecliptic every 3600 years. Is this correct? A: Yes. But, this time it is riding realm border wave to 4th level, where all realities are different. Q: (L) Okay, so the cluster of comets is riding the realm border wave. Does this mean that when it comes into the solar system, that its effect on the solar system, or the planets within the solar system, (Jan or us), may or may not be mitigated by the fact of this transition? Is this a A: Will be mitigated. Q: (L) Does any of this mean that the earth changes that have been predicted, may not, in fact, occur in physical reality as we understand it? A: You betcha. Q: (L) Does this mean that all of this running around and hopping and jumping to go here and go there and do this and do that is... A: That is strictly 3rd level thinking. Q: (L) Now, if that is 3rd level thinking, and if a lot of these things are symbolic, I am assuming they are symbolic of movement or changes in energy? Q: (L) And, if these changes in energy occur, does this mean that the population of the planet are, perhaps, in groups or special masses of groups, are they defined as the energies that are changing in these descriptions of events and happenings of great cataclysm? Is it like a cataclysm of the soul on an individual and/or collective basis? Q: (L) When the energy changes to 4th density, and you have already told us that people who are moving to 4th density when the transition occurs, that they will move into 4th density, go through some kind of rejuvenation process, grow new teeth, or whatever, what happens to those people who are not moving to 4th density, and who are totally unaware of it? Are they taken along on the wave by, in other words, piggy-backed by the ones who are aware and already changing in frequency, or are they going to be somewhere else doing something else? (T) In other words, we are looking at the fact that what's coming this time is a wave that's going to allow the human race to move to 4th density? A: And the planet and your entire sector of space/time. Q: (T) Okay, when the people are talking about the earth changes, when they talk in literal terms about the survivors, and those who are not going to survive, and the destruction and so forth and so on, in 3rd, 4th, 5th level reality we are not talking about the destruction of the planet on 3rd level physical terms, or the loss of 90 per cent of the population on the 3rd level because they died, but because they are going to move to 4th level? A: Whoa! You are getting "warm." Q: (T) Okay. So, when they talk about 90 per cent of the population not surviving, it is not that they are going to die, but that they are going to transform. We are going to go up a level. This is what the whole light thing is all about? 
A: Or another possibility is that the physical cataclysms will occur only for those "left behind" on the remaining 3rd level density earth. Q: (L) Let me get this straight. When we move through this conduit, are A: You will be on the 4th level earth as opposed to 3rd level earth. Q: (L) What I am trying to get here, once again, old practical Laura, is trying to get a handle on practical terms here. Does this mean that a 4th density earth and a 3rd density earth will coexist side by side... A: Not side by side, totally different realms. Q: (L) Do these realms interpenetrate one another but in different dimensions... Q: (L) So, in other words, a being from say, 6th density, could look at this planet we call the earth and see it spinning through space and see several dimensions of earth, and yet the point of space/time occupation is the same, in other words, simultaneous. (J) They can look down but we can't look up. Q: (L) So, in other words, while all of this cataclysmic activity is happening on the 3rd dimensional earth, we will be just on our 4th dimensional earth and this sort of thing won't be there, and we won't see the 3rd dimensional people and they won't see us because we will be in different densities which are not "en rapport", so to speak? A: You understand concept, now you must decide if it is factual. Q: (T) Let's see, last week we concentrated on MM, and you were all busy trying to keep us out of trouble. A: We would very much like to "concentrate" on things of worldly/universal import. What do you suppose happens when the mantle stops, or slows, and the crust does not? Q: (L) Frank had a dream about this the other night, too. (T) About the mantle slowing? Okay, if the mantle slows and the crust doesn't... (L) It's like walking around the room, carrying a bowl of soup, and then stopping... (T) It sloshes over because the crust keeps moving... water in all of the oceans is going to slosh... A: No sloshing. Q: (L) Okay, what happens when the... is it that there will be lots of A: Maybe, but what is the bigger picture? Q: (L) The bigger picture is that the earth changes its orbital position, velocity... (T) No. The bigger picture is that life on earth gets pretty well wiped out. Q: (L) It exchanges energy potentials with other bodies? Q: (J) Gravity changes... Q: (L) Gravity changes, ok... gravity lessens... A: What have we hinted about gravity? Q: (L) Oh, gravity is the binder... (T) and is the one truth of the universe. Q: (T) The element. Gravity is the one true element. This is what you're Q: (L) So, if gravity is lessened, and it is the binder, then, everything... ohhh, I see what you're getting at! (J) Yes, gravity is the binder. Without gravity, it just all falls apart... A: Not "Falls apart," my dear, it all "opens up!" Q: (L) And when it opens up what happens? Q: (L) So, in other words, this cosmic event is the catalyst for the change in those human beings who are ready and prepared to experience it in a A: Well, sort of, but... Remember... There is no "supernatural" or "paranormal," only natural and normal. Your 'Noah Syndrome' implied, originally, a discrimination between "wicked" and good. Being ready does not recognize such distinctions! Q: (L) What does being ready imply? A: Being on the verge of transformation to next density level, be it STO or STS. So, you see, the transformation maintains the balance! A: Now, for the remainder of this session, we wish to address the so-called earth changes for your benefit, as you are stuck here. 
Those present need to be equipped to stop buying into popular deceptions once and for all! Reread Bramley. Q: (L) Funny I took him off the shelf today... Ok, address the subject. A: All such changes are caused by three things and three things only! 1) Human endeavors. 2) Cosmic objects falling upon or too near earth. 3) Planetary orbital aberrations. Q: (L) All right, carry on. A: Don't believe any of the nonsense you hear from other sources. It is designed to facilitate mass programming and deception. Q: (L) Oh, okay. A: Just as your bible says; "You will know not the day, nor the hour." This means there is no warning. None. No clue. No prophecy. And these events are of the "past" as well. Q: (V) What events of the past, as well? A: Cosmic and "man made" cataclysms. Q: (L) Well, since you put 'man-made' on the list, am I to infer that perhaps some of the activities of the consortium, the secret government, are going to precipitate some of these events? Q: (L) Yes. Okay, is there any more that you want to say on this? Go ahead, you have the floor. Please. A: Ask away. Q: (L) Well, you've said that there's a comet cluster that's coming this way. Is that still correct? Q: (L) Is this body that has been called Hale-Bopp, is this that comet Q: (L) Is this comet cluster that's coming, and you've indicated that it could arrive anywhere between 12 and 18 years, something like that, is that correct? Q: (L) Now, is this something that can be seen from a great way off? Q: (L) Is this something that's going to impact our particular immediate location, and appear suddenly, as this comet that has flown overhead just did? Nobody saw it until a very short time ago, and all of a sudden everybody A: The cluster is a symptom, not the focus. Q: (V) What is the focus? A: Wave, remember, is "realm border" crossing... what does this imply? Consult your knowledge base for Latin roots and proceed. Q: (L) So, the Latin root of realm is regimen, which means a domain or rulership or a system for the improvement of health. Does this mean that, and as I assume we are now moving into the STO realm, now, out of the Q: (L) And also, can I infer from this, that the comet cluster exists in the other realm? Q: (L)Well, previously, you had said that the comet cluster would come before the realm border. Which indicated that the comet... Q: (L) Well, how can something so... you said it appears to be one single large body, and that our government knows that it's on it's way, and that apparently somebody has spotted it. Which direction is it coming from? Q: (L) Well, the comet cluster. That comet cluster, is, I am assuming, a real body, in third density experience, right? A part of a real cluster of bodies in third density experience. Is that correct? A: Cluster can approach from all directions. Q: (L) So, can I infer from what has been said, that we are going to move into this comet cluster, as into a realm? A: Border changes rules. Q: (L) But if we run into the comet cluster before we cross the border, then, I mean, I would understand if we were going into the realm border A: Part in part out. Q: (L) OK, is this so-called HAARP project instrumental in any of these realm border changes, these realm changes? A: All is interconnected, as usual. Q: (L) Sheldon Nidle has written a book called "Becoming a Galactic Human." 
He has said that the Earth is going to go into a photon belt sometime this summer, that there is going to be 3 days of darkness, and the poo is going to hit the fan, so to speak, the aliens are going to land in the late summer or the fall, and they are all coming here to help us. Could you comment on these predictions? Q: (L) Is a fleet of aliens going to land on Earth and be announced by the media in 1996? Q: (P) In 1997? Q: (L) Could you comment on the source of this book: "Three Days of Darkness," by Divine Mercy? Q: (L) Well, is there going to be 3 days of darkness in 1998 like it says? A: Why does this continue to be such a popular notion? And, why is everyone so obsessed with, are you ready for this, trivia...? Does it matter if there are three days of darkness?!? Do you think that is the "be all and end all?" What about the reasons for such a thing?... at all levels, the ramifications? It's like describing an atomic war in prophecy by saying: "Oh my, oh my, there is going to be three hours of a lot of big bangs, Q: (L) Well, you didn't say it wasn't going to happen in the fall of 1998. A: First of all, as we have warned you repeatedly, it is literally impossible to attach artificially conceived calendar dates to any sort of prophecy or prediction for the many reasons that we have detailed for you numerous times. [Note: the 'fluid' nature of the future. Probability, etc.] And we have not said that this was going to happen. Q: (L) I know that you are saying that this 3 days of darkness is trivial considering the stupendous things that are involved in realm crossings. But, a lot of these people are interpreting this as just 3 days of darkness... then wake up in paradise. I would like to have some sort of response to A: Trust us to lead you when and how it is appropriate. You should already know that to attempt to apply 3rd density study and interpretations to 4th density events and realities is useless in the extreme... This is why UFO researchers keep getting 3 new questions for every 1 answer they seek with their "research." If you will trust us, we will always give you not only the most correct answers to each and every inquiry, but also the most profound answers. If the individual does not understand, then that means they are either prejudiced, or not properly tuned in. Q: (A) I am trying to write down some things about a cosmology, and I have some questions mainly about the coming events. First there was the story of the sun's companion brown star which is apparently approaching the solar system, and I would like to know, if possible, details of its orbit; that is, how far it is, what is its speed, and when it will be first seen. Can we know it? Orbit: how close will it come? A: Flat elliptical. Q: (A) But how close will it come? A: Distance depends upon other factors, such as intersecting orbit of locator of witness. Q: (L) What is the closest it could come to earth... (A) Solar system... (L) Yes, but which part of the solar system? We have nine planets... which one? (A) I understand that this brown star will enter the Oort cloud... (L) I think they said it just brushes against it and the gravity disturbs A: Passes through Oort cloud on orbital journey. Already has done this on its way "in." Q: (A) You mean it has already entered the Oort cloud? A: Has passed through. Q: (A) So, it will not approach...
A: Oort cloud is located on outer perimeter orbital plane at an averaged distance of approximately 510,000,000,000 miles. Q: (L) Well, 510 billion miles gives us some time! (A) Yes, but what I want to know... this Oort cloud is around the solar system, so this brown star, once it has passed through... (L) It must already be in the solar system? (A) No, it could have passed through and may not come closer. Is it coming closer or not? Is it coming closer all the time? A: Solar system, in concert with "mother star," is revolving around companion star, a "brown" star. Q: (A) So, that means that the mass of the companion star is much... Q: (A) Less? A: They are moving in tandem with one another along a flat, elliptical orbital plane. Outer reaches of solar system are breached by passage of brown companion, thus explaining anomalies recently discovered regarding outer planets and their moons. Q: (A) But I understand that the distance between the sun and this brown star is changing with time. Elliptical orbit means there is perihelion and aphelion. I want to know what will be, or what was, or what is the closest distance between this brown star and the sun? What is perihelion? Can we know this, even approximately? Is it about one light year, or less or more? A: Less, much less. Distance of closest passage roughly corresponds to the distance of the orbit of Pluto from Sun. Q: (A) Okay. Now, this closest pass, is this something that is going to Q: (A) And it is going to happen within the next 6 to 18 years? A: 0 to 14. Q: (A) Okay, that's it. I have some idea about this. Now, I understand that, either by chance or by accident, two things are going to happen at essentially the same time. That is the passing of this brown star, and this comet cluster. These are two different things? A: Yes. Different, but related. Q: (L) Is there a comet cluster that was knocked into some kind of orbit of its own, that continues to orbit... Q: (L) And in addition to that comet cluster, there are also additional comets that are going to get whacked into the solar system by the passing of this brown star? Q: (A) I understand that the main disaster is going to come from this A: Disasters involve cycles in the human experiential cycle which corresponds to the passage of comet cluster. Q: (A) I understand that this comet cluster is cyclic and comes every 3600 years. I want to know something about the shape of this comet cluster. I can hardly imagine... A: Shape is variable. Effect depends on closeness of passage. Q: (L) So, it could be spread out... (A) We were asking at some point where it will be coming from. The answer was that we were supposed to look at a spirograph. Q: (A) Now, spirograph suggests that these comets will not come from one direction, but from many directions at once. Is this correct? A: Very good!!! Q: (A) Okay, they will come from many directions... A: But, initial visibility presents as single, solid body. Q: (A) Do we know what is the distance to this body at present? A: Suggest you keep your eyes open! Q: (A) I am keeping my eyes open. A: Did you catch the significance of the answer regarding time table of cluster and brown star? Human cycle mirrors cycle of catastrophe. Earth benefits in form of periodic cleansing. Time to start paying attention to the signs. They are escalating. They can even be "felt" by you and others, if you pay attention. Q: (L) We have certainly been paying attention to the signs! A: How so? Q: (L) Well, the weather is completely bizarre.
The rain, the fires, the Q: (L) I notice that the tides are awfully high all the time with no ostensible A: And low, too. Q: (L) Yes. I have noticed that particularly. (F) I have too. Not too long ago I noticed that the tides were so incredibly low for this time of year. (L) And also the signs in people - these kids killing their parents, all these people going berserk - you know... Q: (L) What do you mean spike? A: On a graph... Q: (L) Just spikes, not the biggie... A: Spikes are big. Q: (L) Well, from what you are saying about this - I mean how are we supposed to do all these things you say we are supposed to do? I mean, we won't A: Who says? Q: (L) That is kind of what it is sounding like. Unless our lives and experiences escalate in concert with all these other events... (A) I have a last question which I have prepared. So, we have these two physical disasters or events, the coming brown star and the comet cluster, but we have been told that this time it is going to be different because this time it is accompanied by a plane convergence. A: Yes. Magnetic field alteration. Q: (A) This plane convergence, or this magnetic field alteration, is supposed to be related to realms crossing or passing. A realm border. A: Realm. What is root of "realm?" Q: (L) Reality. A: Yes. How does the magnetic field "plug in?" Other planchette, please. Carbon disturbance, as someone "melted" crystal on top. [We replaced planchette.] We want to stay on this general subject matter through this session, for Q: (L) Okay, in terms of these signs, these things going on on the planet, these fires and so forth - you never said anything about all these fires in Florida. You said Arizona was going to burn, but you never said Florida was going to burn... A: We did not say it would not. Q: (L) I know. But, it is really oppressive. I have read a couple of signs in the last day or so that we are going to have a change in the weather, a break, is my little method of predicting... A: Reverse extreme?!? Q: (L) Oh! Floods again! Well, I guess floods are better than fires... but, maybe not! A: Italy and Greece are burning too. Q: (L) Yes, we noticed that in the paper today. Is there a relationship between Italy and Greece and where we are on the planet? Some kind of A: Just same current malady. Q: (L) Okay, back to the comet cluster and realm border... A: Not yet. Q: (L) Well, which direction should we take right now? A: Step by step. Q: (L) Okay, you just said we are going to have a reversal in our weather. Are there any other conditions that we should be aware of at the present A: Point is to watch, look, listen. Q: (L) When we are watching, looking and listening, is there some particular thing we are supposed to be watching for that is to give us a clue about Q: (L) Is there something we are supposed to do at some point when we perceive a particular clue or event at some point? A: What would you suggest? Q: (L) I don't know that I would suggest anything except to keep a low profile and keep on working until we figure out the answer. It is like a race against time. We have to figure out the answer because, obviously, you are not going to tell us... A: No. No race needed. Q: (L) Well, I sometimes feel completely inadequate for all of this. A: Stop thinking 3rd density! Q: (L) Well, I don't want to just live in LaLa land and say, 'oh yes, I'm watching. I see the signs! I'm looking! I'm listening! And then count them off on my fingers and say: but I'm not gonna think about it because that's 3rd density!'
See what I am saying here? A: No, because you are still thinking 3rd density. Better to have a "front row seat," and enjoy! Q: (L) But I feel like I am not supposed to be enjoying myself so much! I feel guilty! A: Why not? Q: (L) Well! I'm supposed to be DOING something! A: You are. Q: (A) When you watch, look and listen, you are getting some signals, and these signals cause a certain pattern of thinking which was not yet able to emerge, but now, after you receive certain signals, you start to think in a different way. So, you cannot now think in a different way, but when you learn this and this has happened, then you start to think in a different pattern. So, you cannot now do things, but you always have to be ready to change your thinking at any moment when you understand more, when you see more, when you notice more, when you put things together which are not yet together. Then, there may be a big change of perspective, a total change. And thus we have to keep our minds and thinking patterns open and ready to change, and work and put the puzzle and mosaic together. And, this is all that counts. It is this work that we are now doing that counts, not some future big thing: oh! Now we go on a ship! No, it is only doing our best, and what is it? Our best? It will change. I believe so. That is the idea. So, everything depends on this. A: Yes. You see, my dear, you cannot anticipate that which is not anticipatable. Q: (L) Well, swell. Okay, you want to stay on this subject, so let us move another step. A: We are glad you noticed this birth of the spike. Q: (L) Is that a clue? Is this one of those obscure remarks? Yes, I noticed, the kids killing their parents, all the shooting going on, the weather... is this connected in some way to some other event? A: 27 days of record heat out of 30, oh my oh my! Suggest you awaken your internet pals, as they are too busy chasing "goblins" to notice. Q: (L) So, I should have something to say about this? A: In Florida now, where to next? How about a shattering subduction quake in Pacific Northwest of U.S.? We estimate 10.4 on the Richter scale. We have warned of Rainier. Imagine a 150 meter high tsunami in Puget Sound... Q: (L) Does this subduction quake have anything to do with that UFO that was rumored to have buried itself in the Pacific? A: All are interconnected. Q: (L) The information I got on that was that it was about 600 miles north and east of Hawaii. A couple of submersibles were sent down and disappeared or were destroyed or didn't come back... it is supposedly giving off a lot of energy. Any comment? Q: (L) Should I follow that direction? A: All directions lead to lessons. Q: (L) Now, you have mentioned this earthquake. I know that you don't usually give predictions, why have you done so now? A: We do not give time tables. Q: (L) Anything else other than a tsunami in Puget Sound and a big subduction quake... 10.4 on the Richter scale is almost inconceivable. A: Rainier... caldera. Q: (L) What about the caldera? A: Expect one. Q: (L) Other than floods, anything else for Florida upcoming? A: All areas experience accelerating "freak weather patterns." Q: (L) Okay, all of these freaky weather patterns and bizarre things going on on the planet, how does it relate to the comet cluster and the brown star? Is it related? A: Human experiential cycle intersects. Q: (L) Any specific physical manifestation of either this brown star or this comet cluster or this realm border, that is related to these events on the planet?
A: Approach of wave stimulates precursor activity which in turn causes effects which in turn stimulates further "heating up" of activity... Q: (L) I thought it was curious that you used the term 'birth of the spike.' Is there something or someone that was born at that particular time? A: No. Spike is as on a graph... Q: (L) Okay, is there any way we could graph this ourselves, and if so, what types of events would we include to create the background data? A: "El Nino, La Nina," etc... Q: (L) Is this El Nino thing connected to sunspot cycles? Q: (L) It has its own cycle. I don't think it has been tracked for long enough to get... A: Global warming, a part of the human experiential cycle. Q: (L) I read where Edgar Cayce said that a slight increase in global temperature would make hurricanes something like 5 times stronger... given a baseline temperature. Does this mean we are going to have stronger and more frequent hurricanes? Q: (L) Will they hit land more frequently, or just spin out in the ocean? A: Either, or. Q: (A) I want to continue questions from the previous session. First, about this companion star: where is it now; which part of the zodiac? A: Libra Constellation. Q: (A) Where is the periodic comet cluster? A: Not visible. It approaches in "scatter pattern," indicating that one or two of the members may have already made the circuit. Q: (A) We have been told that it will look like one solid object at first. A: Yes, but not necessarily only one grouping. Will show up first in the region of the Magellanic Clouds. Q: (A) So, we have the idea. Next question concerning this companion star; we were told that its mass is less than the sun, can we have a figure on how much less? A: 56 percent of the mass of the sun. Q: (A) Okay, if this is really so, then when it really starts to approach the solar system, and they rotate in tandem, it means that the sun will really start to feel its gravity, and because of this, the solar system will start to move with respect to other stars, so all the constellations will shift, is this correct? A: More like a slight "wobble" effect. Q: (L) Will that be perceptible to us here on the Earth? A: Only through measurements. Q: (T) There have been a lot of reports of late regarding major solar activity, solar flares, solar winds, etc. The surface of the sun has a whole lot going on. The last I heard, one of the two satellites observing the sun is gone. Is this an effect from the sun itself, or is this an effect of the approaching brown dwarf? A: The sun. Q: (T) As the brown dwarf approaches, will it intensify the solar flare A: The effect on the physical orientation of the sun from the periodic passage of its companion is to flatten the sphere slightly. This returns to its original spherical shape with the retreat. Q: (L) Is this flattening of the sphere of the sun going to have any noticeable effects in terms of enhanced, accelerated, or magnified radiation from Q: (T) Solar flares or anything like that? Q: (T) So there is not going to be any appreciable effect on the planet from this as far as the sun goes? A: The sun's gravity increases, thus inhibiting flares. Q: (T) Inhibiting flares is good. (L) Not necessarily. Solar minimums have been periods of ice ages. (T) One of the recent crop circles this year shows what the crop circle interpreters say is an image of the sun with a large solar flare coming off of it. It is supposed to be a warning to us that the surface of the sun has become unstable... A: All events intersect.
Q: (A) Okay, I would like to ask what kind of effects other than just gravity we should expect from the close passage of this star? Any particular electro-magnetic, gamma radiation, or what to look for? In which part of the spectrum? A: Radiation emits from those cosmic bodies which radiate. Q: (L) Are you saying that the brown star does not radiate? Q: (L) If it doesn't radiate, what does it do? A: Its radioactive field is severely limited as the "fire" went out long ago. It does not give off light. Q: (J) It's a brown star. (T) One or two steps away from being a collapsing black hole. (L) Well, that's friendly, especially after watching "Event Horizon" last night! A: No. Black holes only form from 1st magnitude stars. Q: (A) We have been told that there is going to be a change of the magnetic field of the earth. Does this mean that the magnetic pole will shift? Q: (A) About this shift of the poles, is it going to be a complete pole Q: (A) What is going to happen inside Earth that could cause this magnetic A: Is caused by disturbances in the mineral content of the substrata rock, brought on by the interaction of Earth with outside forces. Q: (L) What specific outside forces? A: Those already discussed. Q: (L) What is going to be the specific mechanism of this disturbance? Can you describe for us the steps by which this pole reversal will take A: Pole reversal is cyclical anyway, these events merely serve as trigger Q: (L) Let me ask it this way: is there a charge that builds up in the mineral substrata that requires discharge, or that becomes excited to the point that it discharges and then reverses? Is this what we are talking about in terms of the mechanism? A: Examine what is needed to magnetize metal. Ask Arkadiusz. Q: (A) What is needed to magnetize metal. One has to align the spins of the atoms which means one has to strike the metal, or one has to bring a magnetic field close. (T) Strike as in annealing... heating and striking metal or rock which causes the crystalline structure to decompose so that the metal becomes pliable. Then, each time it is hit, it reforms until it cools again. (L) Is this what we are talking about here? Q: (A) One can also have an external magnetic field to align. But, where is it going to come from? Q: (L) The wave? A: All are interconnected. Q: (T) Now, this is just a thought: where would an external magnetic field strong enough to do something like that come from? On the earth, if you supercharge the ionosphere, you create an extremely intense magnetic field. That's what the ionosphere basically is. That is what the HAARP group of programs is about. That is what Tesla was into - the ionosphere - cause there is a large charge there. (L) Is the HAARP project involved here? Q: (T) They are monkeying around with that stuff. (L) Do the people in charge of the HAARP project know about all of this and are they constructing this HAARP array to utilize this energy in some way? A: Those who know are at foundation. Q: (T) To protect us? A: HAARP is for mind control. It is hoped it can be successful in 4th Q: (L) Well, if these people are aware that this sort of thing is getting ready to happen... never mind. (A) We have been told that this magnetic disturbance is closely related to this realm border crossing, and you asked us the question 'what is the root of realm' and it is reality. Now, realm has an m at the end. Does this have something to do with magnetic? A: Realm border is when the reality shifts for all. 
Q: (A) Yes, but why is this reality shift related to magnetic field disturbance? What is the connection? A: Your physiology and etheric orientation are both tied into the magnetic state of your environment. Q: (L) Okay, you said before that the magnetic field is going to reverse... A: Magnetic poles reverse. Q: (L) Okay, what is the magnetic field going to do. It is going to change in some way. Is it going to increase, decrease... is this to a degree - something other than direction - amplification? (T) Will anything change in the strength of the field? A: Let us illustrate. Now: Earth. [A circle is drawn with radiating spikes all around fairly close to the surface.] Earth after: [A circle is drawn with double radiating spikes with those in polar regions considerably longer than the others.] Q: (T) So, it is the same, except it is larger? Q: (T) Are you indicating that the magnetic field will be stronger? A: Broader and larger. Q: (L) What is the cross in the middle? A: Geodirectional grid reference. You incorrectly added circle on side. Lines of magnetic field alignment should be shown as longer at poles. "Crosshairs" in illustration are for directional reference only. Q: (T) Does this mean it will be stronger also? A: Larger and broader. Q: (A) People work near strong magnets much stronger than the Earth's magnetic field, yet nothing happens to them that we can see. A: Not true. Body chemistry is altered. Is not long term or permanent Q: (L) So, apparently long term and permanent exposure can make a big Q: (A) Now, you say that this brown star and the comet cluster are coming into our solar system at the same time, and you have said that they are different events, but that they are related. Now, what is the relation, if not just the point in time, the deeper relation between these two phenomena? A: Picture biorythm graph. Q: (A) What is the period of pole shift? A: 100,000 years, roughly. Q: (A) Now, about the relation between the phenomenon of physical disasters that are going to happen and psychic changes related to the realm border. What is cause and what is effect? A: One precedes the other. Q: (L) Okay, so disasters happen and then the reality changes in psychic Q: (T) Is the approach of the realm border, is the change in the magnetic field... does the reversal of the poles and the broadening of the magnetic field, is that going to be before the realm border crossing? Q: (L) So, in practical terms, it may be that, what we observe will be a series of cataclysms, disasters, the 'cleansing' of the Earth... A: This has already begun. Q: (L) So, it is already happening. It will accelerate and intensify. And what we will observe is all of these things happening. And, as a result of the intersecting of these various energies, this realm border, this reality change, this change in the magnetics because of the interaction with the comet cluster, the sun's companion, the realm border, and so forth, it will then have an effect upon the people left on the planet who will then change in some way as a result of this, is that correct? A: Your Bible says that there will be many wonders on the Earth and in the Heavens in the last days. Q: (L) Okay, this period of time after this realm border, is this period a preliminary to the total end of the Earth and all life on it? Q: (L) After all of this change, those people who continue to be on the Earth will be in a new environment, and it will be almost like having to grow gills to live in water, and some people will have the ability and some will not. 
Is that it? It will be more gradual in terms of individual physical structures and psychic structures? Q: (L) It will be a sudden, total change? Like flipping a switch and everything is going to be different? A: The key is awareness. Q: (L) Are there going to be people... Q: (L) You didn't let me finish asking my question! A: But we knew it! Q: (L) In other words, there are going to be people who are simply not going to see what is happening? A: Lost lambs beying in the knight. Q: (T) They are not getting it. They are lost sheep. That really describes it. (L) Why did you give that funny twist on the spelling? A: Why not? Q: (T) Now, didn't they tell us that when the transition occurs, those who are moving to 4th density will move, and those who aren't, won't. And that it is not a physical move. That we really would not notice a difference when we shifted because it's all right here. And, those who are going to shift will shift right where they stand, and they won't really notice a change, and the perception is not the issue, it is the awareness of the shift, because we will still be physical... am I heading anywhere in the right direction? A: Variability of physicality. Q: (L) Does this have anything to do with changing of DNA via awareness? A: Both ways. Q: (T) So, we will notice that we can control much more than we could before in terms of our environment and physical structure, but we will still be doing a lot of things we have always done. Okay, good night. JUNE 22, 1999, from session date June 19, 1999: Q: ...there has been another matter that has come up in the past few days that I would like to cover first. Apparently there is a newly discovered comet that some people are suggesting fits the prophecies of Nostradamus where he says in quatrain 10.72: "mil neuf cens nonante neuf sept mois, Du ciel viendra vn grand Roy d'effrayeur: Resusciter le grand Roy d'Angolmois, Auant apres Mars regner par bon-heur," which translates into English: "year 1999, seventh month, From the sky will come a great King of Terror: [see note below] To bring back to life the great King of the Mongols, Before and after Mars to reign by good luck." The reason for the comparison is that it is thought that Nostradamus was referring to September and NOT July, and, in point of fact, this comet will have made its circuit around the sun by September, and some folks are theorizing that it could be 'vacuuming' and picking up a lot of matter which could slow it down, change its direction, and cause it to interact with the Earth in a detrimental way. Is this, in fact, going to happen? A: Nostradamus had a specific date tied to a vague prediction. Q: You are right. Yes, that's true. Are you suggesting that there is some other event besides a cometary one that he is referring to in this A: If he was, let it not be known. The question is: is "1999" a number, or is it more? Perhaps it is best for you to see events in this subject unfold, then analyze later. Maybe it is a beginning of a cycle... [Note, June 29, 1999: Lemesurier says he is agnostic as to whether Nostradamus was a true prophet or not, but that he nevertheless enjoys solving his linguistic puzzles as an intellectual exercise. To interpret the "King of Terror" quatrain Lemesurier has gone back to the original text of 1555 and discovered that it actually refers to "roy deffraieur" (an appeaser king). In subsequent corrupt editions, after the death of Nostradamus, the word acquired an apostrophe to become "roy d'effraieur" (king of terror).
If this is truly the case, then all the speculations about a "king of terror" are leading in the wrong direction.] Q: Well, there is another thing that the 'Millennium Group' has brought up, and that is the possibility that there is some object that has entered our solar system and the increased sun-spot activity and so forth is a reaction of the sun to this object or objects. Can you comment on this? A: The sun needs no such prompting to react thusly. It is a reactor, after all. It tends to react to less than others react to it. Q: (A) But what is the 'it?' (L) And, the sunspot prediction was around a hundred the other day, but the actual sun spot activity was 240, I believe. Now, supposedly, when this Comet Lee swings around the sun during this period of high sun-spot activity, the Millennium Group are saying that it is going to discharge the solar capacitor and that there is a possibility that great bolts of electricity will pass between planets and the comet or between planets and other planets, or between the sun and the comet, or something. Is this, in fact, likely to occur? A: Bolts charge between positive and negative ions. What is the sun's atomic structure vis a vis its "children?" Research this for definitive answers to your question. What about the positive/negative ratio between earth's ionosphere and possible passing "objects?" Q: (L) Well, that's what I was asking. These fellows claim that there will be discharges either between the sun and the comet, or the comet and the planets, or the planets and the planets, or the sun and the planets... all the bases are covered! A: No they have not. What about the "vacuum" of space? We are posing a question in order to stimulate intellectual debate and inquiry. Learning is fun, after all! Q: (A) I want to ask about the 'plasma theory of comets' which the Millennium Group are promoting. A: The plasma theory is correct, when certain factors are present. Could it have something to do with the composition of the object in question? Q: (A) This is exactly the question. The object in question is the comet, and the question is: what is its composition, is it a dirty snowball, or is it a charged object that collects particles on its way like the 'vacuum cleaner' model? A: Nickel? We are almost desparately trying to "jump start" your intellectual capacities. Remember, this is a group effort here. Not a series of questions from the meek and helpless to the Lord High Commander!! Q: (L) What do we know about nickel? (A) There are all kinds of things about nickel. It is a metal. The question is whether nickel has anything to do with this particular comet. A: If it does, it could be vital. Most comets are indeed "dirty snowballs," composed largely of water ice and particulate matter. But, some are more like fast moving asteroids caught up in an orbital plane. Your "Millennium Group" is maybe just a bit too one-side-or-the-otherish at this point. Thus, a spectral analysis of this object is in order before one assumes it to be a cosmic vacuum cleaner. Q: (A) I guess from this that, even if these guys can be in some cases correct, this comet, after analyzing, will prove to be just an ordinary dirty snowball. That is my guess. A: No guessing allowed! Have we not already indicated? Knowledge is power. If we give it to you like Halloween candy, it is diffused. What does molten nickel look like against the backdrop of space? Does it conduct electricity? Is it magnetic? What about the "tail" of such an object coming into contact with the ionosphere? 
Q: (L) Oh. I think I get it. The Nostradamus thing about a great comet's tail or something... let me look it up: "great trouble for humanity, a greater one is prepared The Great Mover renews the ages: Rain, blood, milk, famine, steel and plague, In the heavens fire seen, a long spark running" is supposed to refer to something that occurs at the turn of the Millennium... is this what we are getting at here? Something that will look like a 'long spark running' which then comes in contact with the ionosphere which may exchange potentials with the earth by virtue of this conducting, molten nickel tail? Yes????? Is that good? (A) The point is that this comet is in space. Space is rather cold, so the question is: what would make nickel molten? (L) Well, it will be close to the sun! That will heat it up! (A) This particular comet is not going to come close enough to the sun to melt it! (L) Well then, how can the nickel be molten??? A: What about flares? Q: (A) But it is not coming close enough to the sun to be caught in a solar flare! A: Is nickel magnetic? Does nickel have a companion? Q: (A) Well, when we say a 'companion,' it means another metal in the same family in the Mendeleev Table. I will have to check... A: And cobalt is invisible in the good old vacuum of space, but not Q: (L) Does that mean it will attract cobalt? A: No, cobalt will attract. Q: (L) The cobalt will attract flares... electromagnetic phenomena... A: Et al. Q: (L) I see. A: Now, you need to know the composition of this comet... And any other closely following same. We have alluded to the increased cometary activity before. Oort, and that which cyclically disturbs it. Q: (L) I just want to know one thing... are any of these comets gonna hit the earth? A: Someday, certainly. As have before. From this session, I believe that the Cassiopaeans are suggesting that this Comet Lee is NOT a threat to the Earth, but that there may be others that follow, and that we can expect quite a show in the not too distant future. Through the courtesy of Dr. Nicolas Biver (Hawaii) we have received information on recent spectral observations in the infrared to millimeter/centimeter wavelengths of Comet Lee and, so far, it looks like a dirty snowball. There are other contenders for the Nostradamus prophecy... McBeath, IMO Vice-President, e-mail: [email protected]. "David Asher presented a paper to the International Meteor Conference in Puimichel describing a theoretical resonant swarm of particles within the Taurid/Beta Taurid stream which could account for various meteor shower enhancements, increased fireball fluxes and even meteoritic impacts associated with the Taurid Complex of meteoroid streams, asteroids and comets. He used this theory to suggest times when future returns of the proposed "swarm" might lead to increased activity from the nighttime Taurid showers active in October-November, and the daytime Beta Taurids of June-July. He suggested that 1999 could see a return of the swarm during the Beta Taurid activity period, which is the purpose of this reminder warning now. Results from late October 1998 suggested an enhanced Taurid period had been detected by radio and visual observers in the closing days of the month, along with an increased flux of minor Taurid fireballs (magnitudes -3 to -8). Another of David's predictions for the swarm was that a recurrence might be expected in October-November 1998, which may well be what was recorded." D. Asher, "Meteoroid Swarms and the Taurid Complex", in: "Proceedings IMC, Puimichel 1993", ed. P.
Roggemans, IMO, 1993, pp.88-91. A. McBeath, "SPA Meteor Section Results: September-October 1998", WGN, (in press). G. W. Kronk, "Meteor Showers: A Descriptive Catalog", Enslow, 1988, pp.115-118. A. McBeath, "The Forward Scatter Meteor Year", in: "Proceedings IMC, Petnica 1997", eds. A. Knoefel & A. McBeath, IMO, 1998, pp.39-54. A member of the "Millennium Group" suggests that "supernova explosions create the chunks of matter that become the seed comet nuclei and that they come at us from the sites of the old explosion, the fast ones arriving sooner and the slower ones later, but they come from exactly the same location in intergalactic space." Interesting. Consider the following: "In the first year of the period Chih-ho, the fifth moon, the day chi-ch'ou, a guest star appeared approximately several inches southeast of Thien-kuan. After more than a year it gradually became invisible." (Statement of Toktaga and Ouyang Hsuan, fourteenth century Chinese authors of the official history of the Sung dynasty) To the Chinese, guest stars were well worth noting and, indeed, looking out for. They believed that humans lived on Earth in a kingdom roofed with stars, and that human destiny was subject to a 'COSMIC WIND.' The guest star was also seen and recorded by Japanese astronomers. They noted its appearance 'in the orbit of Orion,' as bright as the planet Jupiter, in early June 1054. There are NO European records of this guest star. One of the puzzles about the ancient observations of the supernova of 1054 is the question as to why a new star which was as bright as Venus, visible in daylight for 3 weeks, and visible in the night sky for 2 years, should go unrecorded in European annals. Supernovae were neither totally unprecedented nor unknown in historical sources available in Europe. One explanation that has been advanced for the lack of records of this supernova is that its appearance coincided with the split in Christianity between the Roman Church and the Greek Orthodox... July 16, 1054. This was what was known as the 'Great Schism.' The Church Fathers might have found it expedient to expunge from history such a dramatic portent..." including many other historical documents... A textbook, Uyun al-Anba, composed by Ibn Abi Usaybia in about AD 1242, contains a statement by Ibn Butlan (a Christian physician from Baghdad, who had lived in Cairo and in Constantinople between 1052 and 1055). Butlan said: 'One of the well-known epidemics of our time is that which occurred when the spectacular star appeared in Gemini in the year 446.' The year in question, given in Islamic dates, lasted from April 12, 1054 to April 1, 1055. The reference to Gemini rather than to Taurus can be understood as a reference to the precession of the equinox. The date and position, and the general circumstances of the observation suggest that Butlan was ascribing to the supernova of AD 1054, the cause of a noteworthy plague (which, by his account, spread from Constantinople where 14,000 people perished, to Egypt where it killed most of the population of Cairo, and to Iraq, the Yemen and Syria). Butlan's explanation is in general agreement with the belief that diseases were influences from the stars; the word 'influenza' still preserves this belief in European languages. Except for Butlan's aside, there are no other definite western records of the AD 1054 supernova in Europe. The records are silent about an event which must have been an astronomical spectacle far outshining such pale rivals as Halley's Comet, which appeared 12 years later and was commemorated in the Bayeux Tapestry."
(quoted from "Supernovae," by Paul and Lesley Murdin, Cambridge: Cambridge University Press, 1985) The remnant of this supernova of 1054 is called the Crab Nebula. Its distance from earth is about 6500 light years, which means that the explosion actually occurred roughly 7,500 years ago, around 5400 BC. Does anyone have any idea how long it would take these "cosmic vacuum cleaners" to arrive... how much longer after the visible light arrives would the matter get here, considering that it would be slowed down in space due to accumulation? It seems that the Cassiopaeans are onto something in the above excerpts, because I was reading about the stages of a supernova, and, according to a couple of guys, Stanford Woosley and David Arnett, who independently developed a theory that explains the light curve of a supernova, a supernova originates from a star with a dense core of iron. The core becomes unstable and explodes, releasing a substantial amount of radioactive nickel-56. The radioactive decay of nickel into cobalt-56 and then into stable iron-56 produces gamma rays that illuminate the surrounding gas. This illumination is the primary source of energy that makes the supernova shine. (A small numerical sketch of this decay chain is given below.) So, in the clues the Cassiopaeans were giving about comets, it seems that they were pointing somewhat in this direction, that we ought to be watching for something in the way of a cometary shower that we may never have experienced before, that derives from an ancient supernova. Or, on the other hand, one that has a VERY LONG [3,600 years, perhaps?] period of return as stated in many other sessions. Added August 30, 1999. Session date: August 28, 1999. Keep in mind as you read the following the "Prime Directive" of the Cassiopaeans, which is that Free Will cannot be abridged, which is what would result if certain things were told to us outright. Also keep in mind their famous tendency to use puns and double and triple entendres.
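As an aside on the Woosley/Arnett light-curve mechanism described above, here is a minimal, hypothetical Python sketch of the nickel-56 to cobalt-56 to iron-56 decay chain. It only illustrates the standard two-step (Bateman) decay solution, assuming textbook half-lives of roughly 6.1 days for nickel-56 and 77 days for cobalt-56; the function names and numbers are illustrative and do not come from the sessions quoted here.

import math

# Approximate half-lives for the chain Ni-56 -> Co-56 -> Fe-56, in days.
# These are standard reference values, used here only as assumptions for this sketch.
T_HALF_NI56 = 6.1
T_HALF_CO56 = 77.2

LAMBDA_NI = math.log(2) / T_HALF_NI56   # decay constant of Ni-56 (per day)
LAMBDA_CO = math.log(2) / T_HALF_CO56   # decay constant of Co-56 (per day)

def abundances(t_days, n0=1.0):
    """Bateman solution: Ni-56 and Co-56 nuclei remaining at time t,
    starting from n0 nuclei of pure Ni-56."""
    n_ni = n0 * math.exp(-LAMBDA_NI * t_days)
    n_co = n0 * LAMBDA_NI / (LAMBDA_CO - LAMBDA_NI) * (
        math.exp(-LAMBDA_NI * t_days) - math.exp(-LAMBDA_CO * t_days)
    )
    return n_ni, n_co

def relative_heating(t_days):
    """Total decay rate (activity), a rough proxy for the radioactive energy
    input that makes the supernova shine and then fade."""
    n_ni, n_co = abundances(t_days)
    return LAMBDA_NI * n_ni + LAMBDA_CO * n_co

if __name__ == "__main__":
    for day in (0, 10, 30, 60, 120, 240):
        print(f"day {day:3d}: relative heating = {relative_heating(day):.4f}")

Running it shows the heating rate falling quickly over the first few weeks (the short-lived nickel term) and then declining more slowly (the longer-lived cobalt term), which is the characteristic shape of a supernova's fading light curve.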
With maybe a quick stopover in 5th just to pick up a few things for the trip! Q: Are you saying that we are getting ready for the Big Kahuna? A: Only Don Ho knows for sure. A: Well, you did say "Kahuna," yes? Q: Okay, you have repeatedly, in the past year, alluded to something that we are supposed to be watching for, that we are supposed to 'enjoy the show,' and all that sort of thing. Now you have made this remark about 5th density, where 3rd density goes at death. Are you suggesting that a lot of folks are going to check out? A: Maybe we were trying to Lighten things up a bit! Q: That's all fine and good, but I just want one word here, a clue about what is coming down in the near future that you are making all these hints about... just a one word clue? Q: Now wait a minute! You can't DO that! What do you mean 'kaboom?' I changed my mind, I want two words! One more word! Q: Okay, a word that applies to us sitting right here.... Q: We KNOW it's Florida! What about a word that will give me a clue about our location in terms of this 'kaboom' and 'splat' business? Q: (A) I guess that means it will be hot here, but safe. A: Okay. Hot but safe, maybe. Q: Kaboom and splat? Does that relate to what you said about C____ last year when I asked if she had a plan to fulfill that she was not attending to at present and you said 'Fate will intercede.' When I asked HOW fate was going to intercede, you said 'Do you really want to know?' So, I said that I just wanted a one word clue, and you said: 'CRUNCH.' Is this what we are talking about here? Can we connect these dots? A: Oh no, she is in the mountains. So of course, she is safe!!! Q: (A) She is safe because she is in the mountains? A: Humor, people! Q: So that is a backward clue? A: Open. That was all they would say about it! Go figure! But, before getting hysterical, I would like to point out that "kaboom!" and "splat!" could apply equally to a cometary event as well as a major solar flare and CME. After all, CMEs are the result of explosions on the sun, and when they impact the Earth, it could be considered a "splat." If, by any chance, the Cassiopaeans are correct, and a dark "companion sun" is sneaking into our solar system, it would account for a number of recent anomalous events. And, this could all tie in with the Velikovskian idea of comets as "cosmic vacuum cleaners," as described by the Millennium Group. A larger comet with EM properties could stimulate unheard-of solar activity. Re-read above the funny clues about Comet Lee and if anybody has any insights, e-mail me. October 7, 1999: After all the antics of the rumor mongers on the internet, and after monitoring all the information sites, scientific, current-events oriented, and others, the only thing that can be construed to have been a "kaboom splat" type event during the period in question was the loss of the Mars probe, which did, indeed, go "kaboom! splat!" The ostensible reason for this was claimed to have been a mix-up in measurement conversions from metric to inches. I am not sure if this is a good explanation. Perhaps this probe was acted against in such a way that it went "kaboom!" and then "splat!" on the surface of Mars? Just a thought. Of course, "near" in terms of "time" to the Cassiopaeans can often mean a rather longer period of time than we generally consider as "soon." In my own experience, "soon" as described by them amounted to three years!
One point I would like to mention regarding Nostradamus is the following: Nostredame, or Nostradamus, one of the most acclaimed of post-Biblical prophets, has been claimed to have had a very good record of "hits." Unfortunately, it is often difficult to determine how good due to the obscurity of his prophetic style. It is usually after the fact that his quatrains are discovered to be "incredibly accurate." And, the same quatrains have been discovered to be "incredibly accurate" for DIFFERENT EVENTS by different interpreters. So, we have to be very careful about getting ourselves all in an uproar about such things. The Cassiopaeans have remarked about the impossibility of accurate prediction due to the "fluid nature" of the Universe and have said: Q: (P) I would like to know about the apparitions of the Virgin Mary at Conyers, GA, as well as this book "Mary's Message to the World" and all the other messages about the End Times that are coming out all over? A: The forces at work here are far too clever to be accurately anticipated so easily. You never know what twists and turns will follow, and they are aware of prophetic and philosophical patternings and usually shift course to fool and discourage those who believe in fixed futures. Taking the above into consideration, if, as is claimed, Nostradamus was a Jew, might he not have been referring to a Jewish reckoning of months when he spoke about the "Seventh Month?" In such a case, what month would that be? Certainly not the modern Western July or even September... just a thought. If anyone knows what month this might be, let me know. Let me emphasize again that we don't need to get hysterical about any of it. There is so much in the way of lies and disinformation raging through the internet, the media, and in our world in general, that it does none of us any good to get excited about what is going on around us. We all need to keep a cool head and watch what is going on around us. If the "negative forces" do, in fact, feed on fear and panic, they have been on a veritable feeding frenzy in the past few months!
The first humans migrate into the boundary waters region, including the Clovis people of the Paleoindian Tradition, also called the Big Game Hunters. They hunt woolly mammoths, mastodons, giant ground sloths, muskoxen, camels, horses, giant beavers, giant bison, saber-tooth tigers, and other megafauna or large Ice Age mammals.
Around 11,000 years ago, hunting pressure and a warming climate lead to the extinction of many large Ice Age mammals.
During the "Voyageurs era," fur traders canoe the lakes and portage routes of the boundary waters region transporting furs for French and British fur companies.
With the signing of the Treaty of Paris at the conclusion of the American Revolutionary War, the British surrender control over lands west of the Appalachian mountains, and the United States gains sovereignty over the southern Great Lakes region.
Felt hats made from beaver fur go out of style in Europe and are replaced by silk hats, ending a fashion that lasted 300 years. By this time, the beaver population in the boundary waters region is decimated. It doesn't fully recover for 150 years.
The international boundary through canoe country is established by the Webster-Ashburton Treaty, signed by the United States and Great Britain.
Seven chiefs of the Chippewa Indian Peace Commission travel to Washington, D.C., to sign the Treaty of LaPointe, ceding the entire Arrowhead region to the United States government and opening it to exploration and development by white settlers. In return, small reservations for the Ojibwe of Lake Superior are created at Grand Portage, Fond du Lac, and Nett Lake, and they are promised monetary payments for 20 years, annual food supplies, 80 acres of land to each head of family, fishnets, guns and ammunition, agricultural teachers, and a blacksmith for each reservation.
Red and white pine are logged in the boundary waters area during "the big-pine logging era," with the first significant logging occurring in the Trout Lake area north of Lake Vermilion. Red pine is taken principally for timbers used in mining, and white pine principally for lumber.
June 30, 500,000 acres of public domain in Lake and Cook Counties in northeastern Minnesota, much of which is now part of the Boundary Waters Canoe Area Wilderness, are set aside from logging, mining, and homesteading by Minnesota's Forestry Commissioner Christopher
At the request of the Minnesota Forestry Board, Congress grants 20,000 acres to the State for the Burntside Forest Reserve. As stated in the 1905 Minnesota Forestry Commissioners Report, "State Forest Reserves should be devoted not alone to the business of raising timber, but to the pleasure of all the people."
With financing from Edward Wellington Backus, the dam at Koochiching, now International Falls, is completed to provide waterpower for Backus's Minnesota and Ontario Paper Company. The dam is planned as the first in a series of dams that would affect parts of present-day Superior National Forest, Voyageurs National Park, Quetico Provincial Park, and the Boundary Waters Canoe Area Wilderness.
In the spring, the Quetico Provincial Forest Reserve is created by the government of Ontario, setting aside one million acres as a forest and game preserve. Weeks later the Superior National Forest is created as a reciprocal act when President Theodore Roosevelt signs Proclamation 848, setting aside one million acres on the U.S. side of the border.
Major fires are suppressed in the boundary waters area during "the fire-suppression period," resulting in unintended consequences.
Fire suppression interferes with the natural cycle of fires that create new stands of forests, curtails periodic elimination of the tree-killing spruce budworm, and causes a buildup of dead trees in forest understories. These unnaturally high fuel loads increase the likelihood of super hot fires that scorch the thin topsoil of the boundary waters area, killing organic matter and the seeds of trees such as jack pine, black spruce, and red pine, which normally reestablish themselves rapidly after fires. Quetico Provincial Park is established from the forest and game reserve created in 1909. Canada's provincial parks are closed to hunting. A recreation plan for the boundary waters area is developed by the U.S. Forest Service in response to increasing numbers of people seeking recreation. A plan for preserving the border lakes region as a canoeing area is proposed by Arthur Carhart, a landscape architect hired by the U.S. Forest Service. Though the plan is not implemented, it is the country's first proposal for managing and protecting a wilderness area. It calls for a fully protected core area and limited, controlled development in outer areas. April 23, at the first of many conferences to resolve differences regarding management of the Superior National Forest, Will Dilg, first president and founder of the Izaak Walton League of America, makes an impassioned plea opposing a U.S. Forest Service plan to bisect the core of the "roadless area" with a road linking Ely and the Gunflint Trail. The county governments and local chambers of commerce advocate development, adopting as their slogan "A Road to Every Lake." The Superior National Forest Recreation Association is organized with Paul Riis as its president to oppose construction of roads in the "roadless areas" of the Superior National Forest. Lumber baron and industrialist Edward Wellington Backus proposes building a series of seven dams along the boundary waters lakes to create four main water storage areas to provide hydroelectric power for his papermills. The dams would affect the 14,500-square-mile Rainy Lake watershed by significantly raising water levels above natural levels (Little Vermilion Lake by 80 feet, Loon Lake by 33 feet, Lac La Croix by 16 feet, and Saganaga and Crooked lakes by 15 feet). Conservationist and explorer "Ober" Oberholtzer – with support from attorneys Sewell Tyng, Frank Hubachek, Charles Kelly, Frederick Winston, and many other conservationists – wage a five-year battle to defeat the plan. September 17, the Little Indian Sioux, the Caribou, and the Superior "roadless areas" of the Superior National Forest are designated as a 640,000-acre roadless wilderness area under a policy issued by the U.S. Forest Service under U.S. Agricultural Secretary William Jardine to "retain as much as possible of the land which has recreational opportunities of this nature as a wilderness," curbing an ambitious road plan to push "a road to every lake." The policy allows construction of the Ely-Buyck road (now the Echo Trail), the Ely to Fernberg road, and the extension of the Gunflint Trail to Sea Gull Lake, but prohibits a connection from Fernberg northeast to Gunflint and spurs from the Ely-Buyck northwest to Lac La Croix and to Trout Lake, roads that would have further segmented the present Boundary Waters Canoe Area Wilderness. 
January 27, the Quetico-Superior Council holds its first meeting, with "Ober" Oberholtzer as its president, for the purpose of promoting an International Peace Memorial Forest on both sides of the border, encompassing the entire Rainy Lake watershed. July 10, 1930, the Shipstead-Newton-Nolan Act, the first statute in which Congress expressly orders land be protected as "wilderness," is signed into law by President Herbert Hoover at the urging of a group of conservationists led by "Ober" Oberholtzer. The Act withdraws all federal land in the boundary waters region from homesteading or sale, prevents the alteration of natural water levels by dams, prohibits logging within 400 feet of shorelines, and preserves the wilderness nature of shorelines. The regulations apply to a 4,000-square-mile area extending from Lake Superior on the east to Rainy Lake on the west. Passage of the Act represents a defeat for Edward Wellington Backus's plan to build a series of dams in the Rainy Lake watershed to create storage basins for industrial waterpower. The General Logging Company ceases its railroad logging operations around Brule and Gunflint lakes, bringing to an end the railroad logging era in the boundary waters area. April 19, despite vigorous opposition by Minnesota Power and Light, legislation applying the protections of the Shipstead-Nolan Act to state lands is passed by the Minnesota Legislature. The bill is titled "An Act To Protect Certain Public Lands and Waters Adjacent Thereto Owned by the State of Minnesota." The National Industrial Recovery Act (NIRA), one of the first acts signed by President Franklin Delano Roosevelt on taking office, creates work camps directed by the U.S. Forest Service in the boundary waters area to put people back to work. In the fall of 1933 two permanent camps are built at Lake Three and Alice Lake. During the brutal winter that follows, several workers become ill, and the foreman dies at the Lake Three camp, apparently the result of sewage seeping into the water supply. The program ends a short time later, and workers are transferred to the newly formed Civilian Conservation Corps (CCC). The Civilian Conservation Corps (CCC) enlists thousands of unemployed men to plant trees, rebuild and improve portages, build canoe rests, install landing docks, post direction signs, build four lookout towers, fight forest fires, and do other conservation projects in the boundary waters area. Fourteen major camps, each housing approximately 200 young men and dozens of highly skilled outdoorsmen, are constructed in and around the wilderness areas of the Superior National Forest. The docks, signs, and rests are later removed to comply with the 1964 Wilderness Act, but still evident today are the raised walkways, the rocks placed to reinforce trails, the canoe landings (now mostly submerged), and other signs of trail improvements. June 30, the President's five-member Quetico-Superior Committee is established by executive order by President Franklin Delano Roosevelt. Among its members are Charles Kelly, Robert Marshall, Sigurd Olson, and Sewell Tyng. The Committee's purpose is to consult with and advise the State of Minnesota and the several federal departments and agencies operating in the Superior National Forest area. 
October 29, Edward Wellington Backus, lumberman and industrialist, dies of a heart attack in his hotel room in New York City, ending a nine-year struggle with "Ober" Oberholtzer and other conservationists for control of the Quetico-Superior region. The Superior National Forest's three wilderness areas are renamed the Superior Roadless Primitive Areas under a plan formulated with the help of Robert Marshall, then in charge of recreation in the Washington office of the U.S. Forest Service. The designation protects the areas from development but allows timber cutting. Improved and less costly outboard motors, including small, easily portaged models that are usable on canoes, are now available. The Izaak Walton League of America establishes a fund to purchase private lands and resorts in the boundary waters area to be turned over to the government. From 1945 to 1965, the League purchases nearly 7,000 acres. Nearly 20 resorts serviced by pontoon-equipped planes are operating on Basswood, Crooked, Knife, La Croix, Saganaga, and Seagull lakes. Some offer amenities such as bars, slot machines, and motorboats, with Ely now serving as the largest inland seaplane base in North America. An act, Public Law 733, is passed by Congress, directing the Secretary of Agriculture to acquire resorts, cabins, and private lands within the boundary waters area and prohibiting any permanent residents after 1974. The Act provides for in-lieu-of-tax payments to Cook, Lake, and St. Louis Counties for federal wilderness land. It is extended and funded with an additional $2 million for acquisition of private property in 1956 and an additional $2 million in 1961. The amendments are denounced by the commissioners of Cook, Lake, and St. Louis counties and by the Ely Chamber of Commerce as "another ruthless inroad on the economy of affected counties." Railroad tracks are laid to Lake Isabella and construction begins on Forest Center, a logging town carved out of the southern edge of the roadless area, in preparation for logging by the Tomahawk Kraft Timber Company. A large turnaround and sawmill are built by the lake, and eventually more than 50 homes – as well as a church, restaurant, school, store, and recreation hall – are built, along with five smaller camps in the area. Logging by Tomahawk ends in 1964, when loggers reach a buffer zone created by the Shipstead-Nolan Act. By 1965 the town is gone, though the alteration in the southern boundary of the present Boundary Waters Canoe Area Wilderness remains. March 27, the Ely Rod and Gun Club reconfirms its support for an airspace reservation over the boundary waters at a meeting in which Forest Ranger Bill Trygg faces down angry opponents. Later that night a homemade bomb explodes outside the house of Bill Rom, an outfitter who supports the ban, but it causes little damage. April, Friends of the Wilderness is founded by William "Bill" Magie, Frank Robertson, and other conservationists, to represent organizations supporting a ban on airplanes over the boundary waters area. Executive Order 10092 is signed by President Truman creating an "airspace reservation" that bans private flights below the altitude of 4,000 feet above sea level, in part as a result of the work of activists Sigurd Olson, Charles Kelly, Frank Hubachek, William "Bill" Magie, and others. 
Truck portages into Basswood Lake, Lac La Croix, and Big Trout Lake are established, providing easy access to these lakes and their connecting waters by large, high-speed motorboats. Aluminum canoes and boats are now widely available, making travel easier and resulting in dramatic increases in the number of canoeists accessing remote lakes. Despite management efforts designed to control both the amount and type of recreational activities in the Boundary Waters, visitor use increases nearly threefold. The U.S. Forest Service prohibits the storage of boats on national lands within the BWCA, a common practice by Cook County and Lake County resorts. Agriculture Secretary Orville Freeman, a former Minnesota governor, appoints George Selke to head a special Boundary Waters Canoe Area Review Committee to recommend changes in BWCA management. September 3, the Wilderness Act, U.S. Public Law 88-577, is signed by President Lyndon Baines Johnson, establishing the U.S. wilderness preservation system and prohibiting the use of motorboats and snowmobiles within wilderness areas except for areas where use is well established within the Boundary Waters, defining wilderness as an area "where the earth and its community of life are untrammeled by man . . . an area of undeveloped . . . land retaining its primeval character and influence without permanent improvements." This date is considered by many to be the birth of the Boundary Waters Canoe Area. Many of the Selke Committee's recommendations for restrictions on visitor permits, motor use, and logging are implemented by Secretary of Agriculture Orville Freeman in a new management plan for the Boundary Waters. One recommendation is that permits be required for entrance. In addition, the plan divides the BWCA into an Interior Zone of 600,000 acres, which is closed to logging, and a Portal Zone of 400,000 acres, which is open to logging. The plan also calls for the immediate addition of 150,000 acres to the no-cut zone, with another 100,000 acres to be added by 1975 as existing logging contracts are completed. This would bring the total no-cut area to 612,000 acres by 1975. A mandatory permit system for visitors (with no fee) is instituted by the U.S. Forest Service following the Selke committee hearings, the Wilderness Act of 1964, and the Freeman Directive of 1965. June 21, the Superior National Forest Advisory Committee is formed to advise the U.S. Forest Supervisor on policies, programs, and management of the Superior National Forest and the Boundary Waters Canoe Area. The gray wolf in the lower 48 states is listed as "endangered" under the 1966 Endangered Species Preservation Act. A maximum group size limit of 15 persons for visitors is instituted by the U.S. Forest Service. Voyageurs National Park is established by Public Law 91-661, as amended by Public Law 97-405, enacted by Congress on January 8 and signed by President Richard Nixon, to "preserve, for the inspiration and enjoyment of present and future generations, the outstanding scenery, geological conditions, and waterway system which constituted a part of the historic route of the Voyageurs who contributed significantly to the opening of the Northwestern United States." The park is officially established under these laws by the Secretary of the Interior on April 8, 1975. A rule limiting visitors to "designated campsites" on heavy-use routes is instituted by the U.S. Forest Service. Cans and glass bottles are prohibited from the Boundary Waters. According to the U.S. 
Forest Service, the measure is expected to reduce refuse by 360,000 pounds, saving $90,000 per year on cleanup. A limited moose hunt is authorized for the first time since 1922. The Minnesota Public Interest Research Group, a student group at the University of Minnesota, files a lawsuit to prohibit logging of old growth forest in the BWCAW until an Environmental Impact Statement is completed by the U.S. Forest Service. The Endangered Species Act is passed by Congress, declaring timber wolves an endangered species and affording federal protection. Since 1965, when the last bounty was paid on a wolf in Minnesota, approximately 200 animals were killed annually. Quetico Provincial Park is given full wilderness protection. All logging is permanently banned, snowmobiles are banned, and a motorboat phaseout is begun. The rule limiting visitors to "designated campsites" that was instituted by the U.S. Forest Service on heavy-use routes in 1966 is extended to the entire Boundary Waters. The maximum group size limit for visitors is lowered from 15 to 10 persons by the U.S. Forest Service. Logging of old growth forests is banned in a ruling by Federal District Judge Miles Lord. The ruling is reversed on appeal in 1976. Eighth District Representative James Oberstar (D-MN) introduces a bill that if passed would have established a Boundary Waters Wilderness Area of 625,000 acres and a Boundary Waters National Recreation Area (NRA) of 527,000 acres, permitting logging and mechanized travel in the latter area and removing from wilderness designation a number of large scenic lakes such as La Croix, Basswood, Saganaga, and Seagull. The bill is strongly opposed by environmentalists. May 7, Friends of the Boundary Waters Wilderness is formed with Miron "Bud" Heinselman in opposition to Representative James Oberstar's 1975 bill, which would remove land from a designated wilderness for the purpose of creating a recreational area that would allow logging and mechanized travel. Its purpose is advocating greater protection of the Boundary Waters and "promoting the biological, intrinsic, aesthetic, economic, scientific, and spiritual values of wilderness." Other founding members include Fern Arpi, Chuck Dayton, Dan Engstrom, Dick Flint, Jan Green, Herb Johnson, Jack Mauritz, Steve Payne, Chuck Stoddard, Paul Toren, Herb Wright, and Dick Wyman. A sophisticated visitor distribution system, using entry-point quotas on visitor numbers as a mechanism to redistribute visitor use and impacts throughout the wilderness, is instituted by the U.S. Forest Service. Cans and glass bottles are prohibited from Quetico Provincial Park. "Ober" Oberholtzer (born February 6, 1884) dies at age 93. Explorer, photographer, student of Ojibwe legend and oral tradition, authority on the Minnesota-Ontario boundary lakes region, lifetime President of the Quetico-Superior Council, and one of eight founders of the Wilderness Society, Ober devoted his life to preserving wilderness and protecting the Boundary Waters Canoe Area Wilderness. July 8, an effigy identified as Sigurd Olson and Miron "Bud" Heinselman is hung outside the Ely High School, where approximately 1,000 people gather to participate in a Congressional hearing. Amid boos and catcalls, Olson speaks in favor of Congressman Don Fraser's bill that becomes the Boundary Waters Canoe Area Wilderness Act of 1978. "This is the most beautiful lake country on the continent," Olson declares. "We can afford to cherish and protect it. Some places should be preserved from development or exploitation for they satisfy a human need for solace, belonging, and perspective. 
In the end we turn to nature in a frenzied chaotic world, there to find silence – oneness – wholeness – spiritual release." The Eastern timber wolf is reclassified from "endangered" to "threatened" by the U.S. Fish and Wildlife Service, the agency that administers the Endangered Species Act of 1973. The law still prohibits the killing of wolves with the exception of problem animals causing agricultural damage. The Fish and Wildlife Service also adopts a recovery plan (revised in 1992) for the purpose of increasing the number and range of timber wolves to ensure the animal's survival in the eastern half of the U.S. The recovery plan sets a population goal for Minnesota of 1,251 to 1,400 wolves by the year 2000, a goal that is achieved in the early 1980s. In 1989 a wolf population survey estimates the statewide population at between 1,550 and 1,750 animals. The Boundary Waters Canoe Area Wilderness Act, U.S. Public Law 95-495, is signed by President Jimmy Carter. The act adds 50,000 acres to the Boundary Waters, which now encompasses 1,098,057 acres, and extends greater wilderness protection to the area. The name is changed from the Boundary Waters Canoe Area to the Boundary Waters Canoe Area Wilderness. The Act bans logging, mineral prospecting, and mining; all but bans snowmobile use; limits motorboat use to about two dozen lakes; limits the size of motors; and regulates the number of motorboats and motorized portages. It calls for limiting the number of motorized lakes to 16 in 1984, and 14 in 1999, totaling about 24% of the area's water acreage. All logging in the wilderness ceases under the Boundary Waters Canoe Area Wilderness Act, U.S. Public Law 95-495, ending some 85 years of logging in the Boundary Waters. A $5 reservation fee for entry into the Boundary Waters is implemented. January 13, Sigurd Olson dies at age 82 after suffering a heart attack while snowshoeing with his wife Elizabeth near his home in Ely. Canoe outfitter, guide, educator, conservationist, wilderness advocate, and elder statesman of the Minnesota environmental movement, one-time president of the National Parks Association and of the Wilderness Society, eloquent and outspoken advocate of wilderness values, Sig published 9 books and more than 100 articles. February 5, Calvin Rutstrum dies at age 86. A conservationist who worked with his friend Sigurd Olson in the successful campaign to restrict airplane travel above the Boundary Waters, Calvin published 15 books on wilderness and nature. March 4, William "Bill" Magie dies at age 79. Bill was a canoe guide in the waters around Ely from 1962 to 1978, a co-founder of Friends of the Wilderness, and a lifelong advocate of wilderness protection. March 8, the 1978 BWCA Act is upheld when the Supreme Court decides in an 8-1 decision (with Sandra Day O'Connor casting the dissenting vote) not to review lower court rulings in a three-year legal battle by the State of Minnesota and others challenging the constitutionality of the 1978 law. May 29-June 24, a small island on Lake Two is set on fire by a careless camper, resulting in a major forest fire. For the first time in 76 years, a significant fire in the Boundary Waters is allowed to burn without intervention by the U.S. Forest Service. Motorboats are banned from Brule Lake. In response to strong opposition to a motor ban on Brule, an exception was written into the 1978 Boundary Waters Canoe Area Wilderness Act providing that motors could be used on Brule until January 1994, or until businesses already in operation in 1977 were terminated. 
With the closing of the last business on Brule, the Sky Blue Water Resort, the motor ban goes into effect. In subsequent years, use by canoeists increases. An administrative appeal is filed by four groups – the Friends of the Boundary Waters Wilderness, the Sierra Club, the Wilderness Society, and Defenders of Wildlife – on 12 issues in the U.S. Forest Service's new Land and Resource Management Plan for the Superior National Forest, including an appeal for the Forest Service to close three truck portages in compliance with the 1978 BWCA Wilderness Act. March 20, following the 1986 Lake Two fire, a new prescribed natural-fire-management program is adopted by the U.S. Forest Service and implemented in the Boundary Waters. The policy allows lightning-ignited fires that pose no threat to people or property to burn themselves out naturally. This departure from the policy of suppressing all fires ends the "fire-suppression period" of management that began in 1911. April, the Izaak Walton League, with four other groups, goes to court to stop the National Guard from conducting training flights as low as 2,500 feet over the Boundary Waters by F-4 Phantom Jet Fighters, which create sonic booms. August, a 611-foot radio tower, proposed by Connecticut developer Timothy Martz to be constructed on a ridge near Esther Lake in Cook County, is blocked by a temporary injunction granted by Ramsey County District Judge Donald Gross. The judge accepts the argument of environmentalist Harry Drabik, who sued on behalf of the state, arguing that the tower would ruin the scenic quality of an unspoiled area. October 15, a compromise regarding rebuilding of the Sawbill Trail is developed by the Sawbill Trail Consensus Committee, facilitated by Forest Service Tofte District Ranger Larry Dawson. The original plan calls for widening the trail's clearance for construction of a 55-mile-an-hour two-lane paved highway. When the bulldozers start clearing the first segment from Tofte in 1990, there is an uproar of protest. Many people believe the rebuilding will alter the trail's primitive character. The compromise results in special variances being sought from state and federal highway administrations, so that after the first 3 miles from Tofte the road is left unpaved and calcium chloride is applied for dust abatement. Road clearance is widened from 45 to 56 feet (rather than the 64 feet originally proposed) for a 45-mile-an-hour, 9-ton road with 12-foot driving lanes, 2-foot shoulders, and 3-foot ditches, with trees cleared an additional 10 feet on the ditch slopes on both sides of the road. The road is rerouted near Plouff Creek, known as Dead Man's Curve because of its many accidents, reducing the overall length of the Trail from 24 to 23 miles. Clearance for the last six miles is not altered, so that the Sawbill Trail now has an increasingly rustic feel as it approaches Sawbill Lake. November 6, as a result of a 15-year effort on the part of the Friends of the Boundary Waters Wilderness, the truck portages that were to have been phased out as stipulated by the 1978 BWCA Wilderness Act are closed when the U.S. Court of Appeals for the Eighth Circuit reverses the decision of a lower court to allow them to continue operating. November 17, a BWCAW draft management plan is released to the public via a news conference. Many people object to some of the provisions, especially the proposal to reduce the group size limit from 10 to 6 persons. 
A new BWCAW management plan is implemented by the Superior National Forest, reducing visitor-group size limit from 10 to 9 persons, limiting the number of watercraft per group to 4, and operating the visitor distribution program at 67 percent rather than 85 percent campsite occupancy. After a series of cuts in the Boundary Waters wilderness management budget, a $10 per-person user fee, in addition to the $12 registration fee, is authorized under the User Fee Demonstration Project, a three-year pilot program passed by Congress. In 1997, the first year the fee takes effect, about $1 million is generated to help fund portage and canoe landing maintenance, campsite rehabilitation, and law enforcement. The funds make up for the shortfall in the $2.5 million called for by the U.S. Forest Service plan to properly manage the Boundary Waters. A federal mediation process is initiated by U.S. Senator Paul Wellstone to resolve issues relating to three motorized portages. The process, which lasts nearly nine months, is concluded on April 28, 1997, with recommendations for reducing airborne mercury pollution but without consensus on the core issues, losing an opportunity, in the words of Bill Hansen, for "the healing effect of a broad community consensus on wilderness policy." Legislation allowing three motorized portages to resume operation is introduced by Eighth District Representative James Oberstar (D-MN) and Senator Rod Grams (R-MN) but does not pass. In part as a result of the debates surrounding the issue, membership in the Friends of the Boundary Waters Wilderness peaks at 2,783. Truck portage operators are permitted to continue transporting motorboats across two portages, Trout and Prairie, by a rider on an unrelated transportation bill passed by Congress. February 2001, the Minnesota Department of Natural Resources develops a wolf management plan. The plan seeks to demonstrate that Minnesota is prepared to assume responsibility for the Eastern timber wolf when delisting occurs and that Minnesota will ensure the long-term survival of the wolf as required by the federal recovery plan. October 19, Mardy Murie dies at age 101. Mardy was known as the "grandmother" and the "matriarch" of the modern conservation movement for her work on garnering support for the 1964 Wilderness Act and the Arctic National Wildlife Refuge. In her 1980 testimony before Congress in support of expanding the Arctic National Wildlife Refuge (ANWR) she said, "I hope the United States of America is not so rich that she can afford to let these wildernesses pass by, or so poor she cannot afford to keep them." January 29, Deputy Secretary of the Interior Lynn Scarlett announces that the U.S. Fish and Wildlife Service is "de-listing" or removing the western Great Lakes population of gray wolves from the federal list of threatened and endangered species. The Service is also proposing removal of the northern Rocky Mountain population of gray wolves from the list. Both actions are taken in recognition of the success of gray wolf recovery efforts under the Endangered Species Act. Gray wolves were previously listed as endangered in the lower 48 states, except in Minnesota, where they were listed as threatened. Traveling in two motorboats, five local men (Barney J. Lakner, 37, Jay A. Olson, 19, Zachary R. Barton, 19, Travis J. Erzar, 20, and Casey J. Fenske, 19) and one 16-year-old juvenile, who come to be known as the "Ely Six," go on a rampage on Basswood Lake. During a night of drinking beer and discharging firearms, they terrorize and harass dozens of campers, including families with children. 
They use foul language, shoot a flare that explodes in the air, on two occasions release gasoline onto the lake and set it on fire, and occupy one campsite for 45 minutes, threatening to rape and kill the three campers there: a retired schoolteacher from suburban Chicago, his 26-year-old daughter, and his 11-year-old son, who hide deep in the brush during the ordeal. During the spree the men are reported to have shouted, "Fucking tourists . . . get the hell off our fucking property," and, using local slang for "environmentally obnoxious" people, "go home, fucking enox tree-huggers." After some of the campers report the disturbance by calling 911, the five men and the teenager are apprehended within the hour not far from Basswood Lake. From the two boats authorities recover a high-powered, semi-automatic assault-style rifle with three 30-round clips, a .45 caliber semi-automatic pistol, a .22 caliber rifle, a .22 caliber pistol, ammunition, spent shell casings, fireworks residue, beer, and items stolen from one campsite. Authorities file 79 charges against the six, including terroristic threats, aggravated harassment, criminal damage to property, reckless discharge of firearms, underage possession of firearms, and underage alcohol consumption. The group also faces felony counts and charges from federal and Canadian authorities because they crossed into Ontario's Quetico Provincial Park, where they continued their rampage. Newsweek and other publications link the night of terror to deep-seated resentment on the part of local people who oppose the 1978 restrictions limiting their access to the area. Some of these people were forced to sell their resorts and cabins when the area was set aside as protected wilderness. As reported by Larry Oakes in the Minneapolis Star Tribune, "Lakner, a bread-truck driver, husband, and father, paid a $275 fine in 2004 for driving a snowmobile in the BWCA and [in July] Olson and Fenske were fined $225 each for driving ATVs in the BWCA in May." Perplexingly, five of the six members of the group were not yet born when the area was set aside. Newsweek asks if the behavior was "just youthful indiscretion or a troubling community character flaw?" Local outfitter Nancy Piragis says, "They learned these attitudes." Mayor Chuck Novak says, "If what's in those complaints is proven true, I don't see any public support for this around here." The Timberjay, the community newspaper, says in an editorial, "While there has long been a tendency in our area to paint youthful rebels who run afoul of the Boundary Waters regulations as folk heroes, this is a different situation entirely . . . This wasn't . . . like motoring in a paddle-only lake, or a late-night border run on a snowmobile . . . This isn't folk hero material. Such actions should horrify everyone."
http://www.wilbers.com/ChronologyWildernessManagement.htm
A pendulum is a weight suspended from a pivot so that it can swing freely. When a pendulum is displaced sideways from its resting equilibrium position, it is subject to a restoring force due to gravity that will accelerate it back toward the equilibrium position. When released, the restoring force combined with the pendulum's mass causes it to oscillate about the equilibrium position, swinging back and forth. The time for one complete cycle, a left swing and a right swing, is called the period. A pendulum swings with a specific period which depends (mainly) on its length. From its discovery around 1602 by Galileo Galilei the regular motion of pendulums was used for timekeeping, and was the world's most accurate timekeeping technology until the 1930s. Pendulums are used to regulate pendulum clocks, and are used in scientific instruments such as accelerometers and seismometers. Historically they were used as gravimeters to measure the acceleration of gravity in geophysical surveys, and even as a standard of length. The word 'pendulum' is new Latin, from the Latin pendulus, meaning 'hanging'. The simple gravity pendulum is an idealized mathematical model of a pendulum. This is a weight (or bob) on the end of a massless cord suspended from a pivot, without friction. When given an initial push, it will swing back and forth at a constant amplitude. Real pendulums are subject to friction and air drag, so the amplitude of their swings declines. Period of oscillation The period of swing of a simple gravity pendulum depends on its length, the local strength of gravity, and to a small extent on the maximum angle that the pendulum swings away from vertical, θ0, called the amplitude. It is independent of the mass of the bob. If the amplitude is limited to small swings, the period T of a simple pendulum, the time taken for a complete cycle, is:

T ≈ 2π√(L/g)    (1)

where L is the length of the pendulum and g is the local acceleration of gravity. For small swings the period of swing is approximately the same for different size swings: that is, the period is independent of amplitude. This property, called isochronism, is the reason pendulums are so useful for timekeeping. Successive swings of the pendulum, even if changing in amplitude, take the same amount of time. For larger amplitudes, the period increases gradually with amplitude so it is longer than given by equation (1). For example, at an amplitude of θ0 = 23° it is 1% larger than given by (1). The period increases asymptotically (to infinity) as θ0 approaches 180°, because the value θ0 = 180° is an unstable equilibrium point for the pendulum. The true period of an ideal simple gravity pendulum can be written in several different forms (see Pendulum (mathematics)), one example being the infinite series:

T = 2π√(L/g) (1 + (1/16)θ0² + (11/3072)θ0⁴ + ...)

The difference between this true period and the period for small swings (1) above is called the circular error. In the case of a longcase clock whose pendulum is about one metre in length and whose amplitude is ±0.1 radians, the θ0² term adds a correction to equation (1) that is equivalent to 54 seconds per day and the θ0⁴ term a correction equivalent to a further 0.03 seconds per day. For real pendulums, corrections to the period may be needed to take into account the presence of air, the mass of the string, the size and shape of the bob and how it is attached to the string, flexibility and stretching of the string, motion of the support, and local gravitational gradients. 
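As a quick numerical illustration of equation (1) and the series correction above, the short sketch below computes the period of a one-metre pendulum for a small swing and for the ±0.1 radian swing of the longcase-clock example. The function name and the choice of Python are illustrative, not part of the original text.

```python
import math

def pendulum_period(length_m, amplitude_rad=0.0, g=9.80665):
    """Period of a simple gravity pendulum.

    Small-angle formula T0 = 2*pi*sqrt(L/g), multiplied by the first
    terms of the amplitude series quoted above:
    1 + theta0^2/16 + 11*theta0^4/3072 + ...
    """
    t0 = 2 * math.pi * math.sqrt(length_m / g)
    th = amplitude_rad
    return t0 * (1 + th**2 / 16 + 11 * th**4 / 3072)

t_small = pendulum_period(1.0)                    # small-swing period
t_wide = pendulum_period(1.0, amplitude_rad=0.1)  # +/- 0.1 rad swing
circular_error_per_day = (t_wide - t_small) / t_small * 86400
print(f"{t_small:.6f} s, {t_wide:.6f} s, {circular_error_per_day:.1f} s/day")
# The last figure comes out near 54 s/day, matching the longcase-clock example.
```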
The length L of the ideal simple pendulum above, used for calculating the period, is the distance from the pivot point to the center of mass of the bob. A pendulum consisting of any swinging rigid body that is free to rotate about a fixed horizontal axis is called a compound pendulum or physical pendulum. For these pendulums the appropriate equivalent length is the distance from the pivot point to a point in the pendulum called the center of oscillation. This is located under the center of mass, at a distance called the radius of oscillation, that depends on the mass distribution along the pendulum. However, for any pendulum in which most of the mass is concentrated in the bob, the center of oscillation is close to the center of mass. Using the parallel axis theorem, the radius of oscillation (equivalent length) L of a rigid pendulum can be shown to be

L = I / (mR)

where I is the moment of inertia of the pendulum about the pivot point, m is its mass, and R is the distance between the pivot point and the center of mass. Substituting this into (1) above, the period T of a rigid-body compound pendulum for small angles is given by

T ≈ 2π√(I / (mgR))

For example, for a pendulum made of a rigid uniform rod of length L pivoted at its end, I = (1/3)mL². The center of mass is located in the center of the rod, so R = L/2. Substituting these values into the above equation gives T = 2π√(2L/3g). This shows that a rigid rod pendulum has the same period as a simple pendulum of 2/3 its length. Christiaan Huygens proved in 1673 that the pivot point and the center of oscillation are interchangeable. This means if any pendulum is turned upside down and swung from a pivot located at its previous center of oscillation, it will have the same period as before, and the new center of oscillation will be at the old pivot point. In 1817 Henry Kater used this idea to produce a type of reversible pendulum, now known as a Kater pendulum, for improved measurements of the acceleration due to gravity. One of the earliest known uses of a pendulum was in the 1st-century seismometer device of Han Dynasty Chinese scientist Zhang Heng. Its function was to sway and activate one of a series of levers after being disturbed by the tremor of an earthquake far away. Released by a lever, a small ball would fall out of the urn-shaped device into one of eight metal toad's mouths below, at the eight points of the compass, signifying the direction in which the earthquake was located. Many sources claim that the 10th century Egyptian astronomer Ibn Yunus used a pendulum for time measurement, but this was an error that originated in 1684 with the British historian Edward Bernard. During the Renaissance, large pendulums were used as sources of power for manual reciprocating machines such as saws, bellows, and pumps. Leonardo da Vinci made many drawings of the motion of pendulums, though without realizing their value for timekeeping. 1602: Galileo's research Italian scientist Galileo Galilei was the first to study the properties of pendulums, beginning around 1602. His first extant report of his research is contained in a letter to Guido Ubaldo dal Monte, from Padua, dated November 29, 1602. His biographer and student, Vincenzo Viviani, claimed his interest had been sparked around 1582 by the swinging motion of a chandelier in the Pisa cathedral. Galileo discovered the crucial property that makes pendulums useful as timekeepers, called isochronism; the period of the pendulum is approximately independent of the amplitude or width of the swing. He also found that the period is independent of the mass of the bob, and proportional to the square root of the length of the pendulum. He first employed freeswinging pendulums in simple timing applications. 
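Returning briefly to the compound-pendulum relations above before the historical material resumes, the following sketch checks, under the same assumptions, that a uniform rod pivoted at one end swings like a simple pendulum of 2/3 its length; the helper names are invented for the example.

```python
import math

G = 9.80665  # standard gravity, m/s^2

def simple_period(length_m):
    """T = 2*pi*sqrt(L/g) for a simple gravity pendulum."""
    return 2 * math.pi * math.sqrt(length_m / G)

def compound_period(inertia_about_pivot, mass, pivot_to_cm):
    """T = 2*pi*sqrt(I/(m*g*R)) for a rigid (compound) pendulum."""
    return 2 * math.pi * math.sqrt(inertia_about_pivot / (mass * G * pivot_to_cm))

L, m = 1.0, 0.5                       # rod length and mass (the mass cancels out)
I = m * L**2 / 3                      # moment of inertia of a rod about its end
print(compound_period(I, m, L / 2))   # rod pivoted at one end
print(simple_period(2 * L / 3))       # simple pendulum of 2/3 the rod's length
# Both print about 1.64 s.
```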
A physician friend invented a device which measured a patient's pulse by the length of a pendulum; the pulsilogium. In 1641 Galileo conceived and dictated to his son Vincenzo a design for a pendulum clock; Vincenzo began construction, but had not completed it when he died in 1649. The pendulum was the first harmonic oscillator used by man. 1656: The pendulum clock In 1656 the Dutch scientist Christiaan Huygens built the first pendulum clock. This was a great improvement over existing mechanical clocks; their best accuracy was increased from around 15 minutes deviation a day to around 15 seconds a day. Pendulums spread over Europe as existing clocks were retrofitted with them. The English scientist Robert Hooke studied the conical pendulum around 1666, consisting of a pendulum that is free to swing in two dimensions, with the bob rotating in a circle or ellipse. He used the motions of this device as a model to analyze the orbital motions of the planets. Hooke suggested to Isaac Newton in 1679 that the components of orbital motion consisted of inertial motion along a tangent direction plus an attractive motion in the radial direction. This played a part in Newton's formulation of the law of universal gravitation. Robert Hooke was also responsible for suggesting as early as 1666 that the pendulum could be used to measure the force of gravity. During his expedition to Cayenne, French Guiana in 1671, Jean Richer found that a pendulum clock was 2 1⁄2 minutes per day slower at Cayenne than at Paris. From this he deduced that the force of gravity was lower at Cayenne. In 1687, Isaac Newton in Principia Mathematica showed that this was because the Earth was not a true sphere but slightly oblate (flattened at the poles) from the effect of centrifugal force due to its rotation, causing gravity to increase with latitude. Portable pendulums began to be taken on voyages to distant lands, as precision gravimeters to measure the acceleration of gravity at different points on Earth, eventually resulting in accurate models of the shape of the Earth. 1673: Huygens' Horologium Oscillatorium In 1673, Christiaan Huygens published his theory of the pendulum, Horologium Oscillatorium sive de motu pendulorum. He demonstrated that for an object to descend down a curve under gravity in the same time interval, regardless of the starting point, it must follow a cycloid curve rather than the circular arc of a pendulum. This confirmed the earlier observation by Marin Mersenne that the period of a pendulum does vary with its amplitude, and that Galileo's observation of isochronism was accurate only for small swings. Huygens also solved the issue of how to calculate the period of an arbitrarily shaped pendulum (called a compound pendulum), discovering the center of oscillation, and its interchangeability with the pivot point. The existing clock movement, the verge escapement, made pendulums swing in very wide arcs of about 100°. Huygens showed this was a source of inaccuracy, causing the period to vary with amplitude changes caused by small unavoidable variations in the clock's drive force. To make its period isochronous, Huygens mounted cycloidal-shaped metal 'cheeks' next to the pivot in his 1673 clock, that constrained the suspension cord and forced the pendulum to follow a cycloid arc. This solution didn't prove as practical as simply limiting the pendulum's swing to small angles of a few degrees. 
The realization that only small swings were isochronous motivated the development of the anchor escapement around 1670, which reduced the pendulum swing in clocks to 4°–6°. 1721: Temperature compensated pendulums During the 18th and 19th century, the pendulum clock's role as the most accurate timekeeper motivated much practical research into improving pendulums. It was found that a major source of error was that the pendulum rod expanded and contracted with changes in ambient temperature, changing the period of swing. This was solved with the invention of temperature compensated pendulums, the mercury pendulum in 1721 and the gridiron pendulum in 1726, reducing errors in precision pendulum clocks to a few seconds per week. The accuracy of gravity measurements made with pendulums was limited by the difficulty of finding the location of their center of oscillation. Huygens had discovered in 1673 that a pendulum has the same period when hung from its center of oscillation as when hung from its pivot, and the distance between the two points was equal to the length of a simple gravity pendulum of the same period. In 1818 British Captain Henry Kater invented the reversible Kater's pendulum which used this principle, making possible very accurate measurements of gravity. For the next century the reversible pendulum was the standard method of measuring absolute gravitational acceleration. 1851: Foucault pendulum In 1851, Jean Bernard Léon Foucault showed that the plane of oscillation of a pendulum, like a gyroscope, tends to stay constant regardless of the motion of the pivot, and that this could be used to demonstrate the rotation of the Earth. He suspended a pendulum free to swing in two dimensions (later named the Foucault pendulum) from the dome of the Panthéon in Paris. The length of the cord was 67 m (220 ft). Once the pendulum was set in motion, the plane of swing was observed to precess or rotate 360° clockwise in about 32 hours. This was the first demonstration of the Earth's rotation that didn't depend on celestial observations, and a "pendulum mania" broke out, as Foucault pendulums were displayed in many cities and attracted large crowds. 1930: Decline in use Around 1900 low-thermal-expansion materials began to be used for pendulum rods in the highest precision clocks and other instruments, first invar, a nickel steel alloy, and later fused quartz, which made temperature compensation trivial. Precision pendulums were housed in low pressure tanks, which kept the air pressure constant to prevent changes in the period due to changes in buoyancy of the pendulum due to changing atmospheric pressure. The accuracy of the best pendulum clocks topped out at around a second per year. The timekeeping accuracy of the pendulum was exceeded by the quartz crystal oscillator, invented in 1921, and quartz clocks, invented in 1927, replaced pendulum clocks as the world's best timekeepers. Pendulum clocks were used as time standards until World War 2, although the French Time Service continued using them in their official time standard ensemble until 1954. Pendulum gravimeters were superseded by "free fall" gravimeters in the 1950s, but pendulum instruments continued to be used into the 1970s. Use for time measurement For 300 years, from its discovery around 1602 until development of the quartz clock in the 1930s, the pendulum was the world's standard for accurate timekeeping. 
In addition to clock pendulums, freeswinging seconds pendulums were widely used as precision timers in scientific experiments in the 17th and 18th centuries. Pendulums require great mechanical stability: a length change of only 0.02%, 0.2 mm in a grandfather clock pendulum, will cause an error of a minute per week. Pendulums in clocks (see example at right) are usually made of a weight or bob (b) suspended by a rod of wood or metal (a). To reduce air resistance (which accounts for most of the energy loss in clocks) the bob is traditionally a smooth disk with a lens-shaped cross section, although in antique clocks it often had carvings or decorations specific to the type of clock. In quality clocks the bob is made as heavy as the suspension can support and the movement can drive, since this improves the regulation of the clock (see Accuracy below). A common weight for seconds pendulum bobs is 15 pounds. (6.8 kg). Instead of hanging from a pivot, clock pendulums are usually supported by a short straight spring (d) of flexible metal ribbon. This avoids the friction and 'play' caused by a pivot, and the slight bending force of the spring merely adds to the pendulum's restoring force. A few precision clocks have pivots of 'knife' blades resting on agate plates. The impulses to keep the pendulum swinging are provided by an arm hanging behind the pendulum called the crutch, (e), which ends in a fork, (f) whose prongs embrace the pendulum rod. The crutch is pushed back and forth by the clock's escapement, (g,h). Each time the pendulum swings through its centre position, it releases one tooth of the escape wheel (g). The force of the clock's mainspring or a driving weight hanging from a pulley, transmitted through the clock's gear train, causes the wheel to turn, and a tooth presses against one of the pallets (h), giving the pendulum a short push. The clock's wheels, geared to the escape wheel, move forward a fixed amount with each pendulum swing, advancing the clock's hands at a steady rate. The pendulum always has a means of adjusting the period, usually by an adjustment nut (c) under the bob which moves it up or down on the rod. Moving the bob up decreases the pendulum's length, causing the pendulum to swing faster and the clock to gain time. Some precision clocks have a small auxiliary adjustment weight on a threaded shaft on the bob, to allow finer adjustment. Some tower clocks and precision clocks use a tray attached near to the midpoint of the pendulum rod, to which small weights can be added or removed. This effectively shifts the centre of oscillation and allows the rate to be adjusted without stopping the clock. The pendulum must be suspended from a rigid support. During operation, any elasticity will allow tiny imperceptible swaying motions of the support, which disturbs the clock's period, resulting in error. Pendulum clocks should be attached firmly to a sturdy wall. The most common pendulum length in quality clocks, which is always used in grandfather clocks, is the seconds pendulum, about 1 metre (39 inches) long. In mantel clocks, half-second pendulums, 25 cm (10 in) long, or shorter, are used. Only a few large tower clocks use longer pendulums, the 1.5 second pendulum, 2.25 m (7 ft) long, or occasionally the two-second pendulum, 4 m (13 ft) as is the case of Big Ben. The largest source of error in early pendulums was slight changes in length due to thermal expansion and contraction of the pendulum rod with changes in ambient temperature. 
This was discovered when people noticed that pendulum clocks ran slower in summer, by as much as a minute per week (one of the first was Godefroy Wendelin, as reported by Huygens in 1658). Thermal expansion of pendulum rods was first studied by Jean Picard in 1669. A pendulum with a steel rod will expand by about 11.3 parts per million (ppm) with each degree Celsius increase, causing it to lose about 0.27 seconds per day for every degree Celsius increase in temperature, or 9 seconds per day for a 33 °C (60 °F) change. Wood rods expand less, losing only about 6 seconds per day for a 33 °C (60 °F) change, which is why quality clocks often had wooden pendulum rods. However, care had to be taken to reduce the possibility of errors due to changes in humidity. The first device to compensate for this error was the mercury pendulum, invented by George Graham in 1721. The liquid metal mercury expands in volume with temperature. In a mercury pendulum, the pendulum's weight (bob) is a container of mercury. With a temperature rise, the pendulum rod gets longer, but the mercury also expands and its surface level rises slightly in the container, moving its centre of mass closer to the pendulum pivot. By using the correct height of mercury in the container these two effects will cancel, leaving the pendulum's centre of mass, and its period, unchanged with temperature. Its main disadvantage was that when the temperature changed, the rod would come to the new temperature quickly but the mass of mercury might take a day or two to reach the new temperature, causing the rate to deviate during that time. To improve thermal accommodation several thin containers were often used, made of metal. Mercury pendulums were the standard used in precision regulator clocks into the 20th century. The most widely used compensated pendulum was the gridiron pendulum, invented in 1726 by John Harrison. This consists of alternating rods of two different metals, one with lower thermal expansion (CTE), steel, and one with higher thermal expansion, zinc or brass. The rods are connected by a frame, as shown in the drawing above, so that an increase in length of the zinc rods pushes the bob up, shortening the pendulum. With a temperature increase, the low expansion steel rods make the pendulum longer, while the high expansion zinc rods make it shorter. By making the rods of the correct lengths, the greater expansion of the zinc cancels out the expansion of the steel rods which have a greater combined length, and the pendulum stays the same length with temperature. Zinc-steel gridiron pendulums are made with 5 rods, but the thermal expansion of brass is closer to steel, so brass-steel gridirons usually require 9 rods. Gridiron pendulums adjust to temperature changes faster than mercury pendulums, but scientists found that friction of the rods sliding in their holes in the frame caused gridiron pendulums to adjust in a series of tiny jumps. In high precision clocks this caused the clock's rate to change suddenly with each jump. Later it was found that zinc is subject to creep. For these reasons mercury pendulums were used in the highest precision clocks, but gridirons were used in quality regulator clocks. They became so associated with quality that, to this day, many ordinary clock pendulums have decorative 'fake' gridirons that don't actually have any temperature compensation function. 
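A useful rule behind both the stability figure quoted earlier (a 0.02% length change producing roughly a minute of error per week) and the temperature discussion above is that, because T is proportional to √L, a small fractional change in length shifts the period by about half that fraction. The sketch below simply applies that rule; apart from the 0.02% figure, the numbers are illustrative.

```python
def rate_error(fractional_length_change, interval_s=86400):
    """Seconds gained or lost over `interval_s` for a small fractional change
    in pendulum length, using dT/T ~ (1/2) * dL/L.
    A longer pendulum swings more slowly, so the clock loses time."""
    return 0.5 * fractional_length_change * interval_s

# 0.02% (0.2 mm on a 1 m seconds pendulum), evaluated over one week:
print(f"{rate_error(0.0002, interval_s=7 * 86400):.0f} s/week")  # ~60 s/week
```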
Invar and fused quartz Around 1900 low thermal expansion materials were developed which, when used as pendulum rods, made elaborate temperature compensation unnecessary. These were only used in a few of the highest precision clocks before the pendulum became obsolete as a time standard. In 1896 Charles Edouard Guillaume invented the nickel steel alloy Invar. This has a CTE of around 0.5 µin/(in·°F), resulting in pendulum temperature errors over 71 °F of only 1.3 seconds per day, and this residual error could be compensated to zero with a few centimeters of aluminium under the pendulum bob (this can be seen in the Riefler clock image above). Invar pendulums were first used in 1898 in the Riefler regulator clock which achieved accuracy of 15 milliseconds per day. Suspension springs of Elinvar were used to eliminate temperature variation of the spring's restoring force on the pendulum. Later fused quartz was used which had even lower CTE. These materials are the choice for modern high accuracy pendulums. The effect of the surrounding air on a moving pendulum is complex and requires fluid mechanics to calculate precisely, but for most purposes its influence on the period can be accounted for by three effects: - By Archimedes' principle the effective weight of the bob is reduced by the buoyancy of the air it displaces, while the mass (inertia) remains the same, reducing the pendulum's acceleration during its swing and increasing the period. This depends on the air pressure and the density of the pendulum, but not its shape. - The pendulum carries an amount of air with it as it swings, and the mass of this air increases the inertia of the pendulum, again reducing the acceleration and increasing the period. This depends on both its density and shape. - Viscous air resistance slows the pendulum's velocity. This has a negligible effect on the period, but dissipates energy, reducing the amplitude. This reduces the pendulum's Q factor, requiring a stronger drive force from the clock's mechanism to keep it moving, which causes increased disturbance to the period. So increases in barometric pressure increase a pendulum's period slightly due to the first two effects, by about 0.11 seconds per day per kilopascal (0.37 seconds per day per inch of mercury or 0.015 seconds per day per torr). Researchers using pendulums to measure the acceleration of gravity had to correct the period for the air pressure at the altitude of measurement, computing the equivalent period of a pendulum swinging in vacuum. A pendulum clock was first operated in a constant-pressure tank by Friedrich Tiede in 1865 at the Berlin Observatory, and by 1900 the highest precision clocks were mounted in tanks that were kept at a constant pressure to eliminate changes in atmospheric pressure. Alternatively, in some a small aneroid barometer mechanism attached to the pendulum compensated for this effect. Pendulums are affected by changes in gravitational acceleration, which varies by as much as 0.5% at different locations on Earth, so pendulum clocks have to be recalibrated after a move. Even moving a pendulum clock to the top of a tall building can cause it to lose measurable time from the reduction in gravity. Accuracy of pendulums as timekeepers The timekeeping elements in all clocks, which include pendulums, balance wheels, the quartz crystals used in quartz watches, and even the vibrating atoms in atomic clocks, are in physics called harmonic oscillators. 
The reason harmonic oscillators are used in clocks is that they vibrate or oscillate at a specific resonant frequency or period and resist oscillating at other rates. However, the resonant frequency is not infinitely 'sharp'. Around the resonant frequency there is a narrow natural band of frequencies (or periods), called the resonance width or bandwidth, where the harmonic oscillator will oscillate. In a clock, the actual frequency of the pendulum may vary randomly within this bandwidth in response to disturbances, but at frequencies outside this band, the clock will not function at all. The measure of a harmonic oscillator's resistance to disturbances to its oscillation period is a dimensionless parameter called the Q factor, equal to the resonant frequency divided by the bandwidth. The higher the Q, the smaller the bandwidth, and the more constant the frequency or period of the oscillator for a given disturbance. The reciprocal of the Q is roughly proportional to the limiting accuracy achievable by a harmonic oscillator as a time standard. The Q is related to how long it takes for the oscillations of an oscillator to die out. The Q of a pendulum can be measured by counting the number of oscillations it takes for the amplitude of the pendulum's swing to decay to 1/e ≈ 36.8% of its initial swing, and multiplying by 2π. In a clock, the pendulum must receive pushes from the clock's movement to keep it swinging, to replace the energy the pendulum loses to friction. These pushes, applied by a mechanism called the escapement, are the main source of disturbance to the pendulum's motion. The Q is equal to 2π times the energy stored in the pendulum, divided by the energy lost to friction during each oscillation period, which is the same as the energy added by the escapement each period. It can be seen that the smaller the fraction of the pendulum's energy that is lost to friction, the less energy needs to be added, the less the disturbance from the escapement, the more 'independent' the pendulum is of the clock's mechanism, and the more constant its period is. The Q of a pendulum is given by:

Q = Mω / Γ

where M is the mass of the bob, ω = 2π/T is the pendulum's radian frequency of oscillation, and Γ is the frictional damping force on the pendulum per unit velocity. ω is fixed by the pendulum's period, and M is limited by the load capacity and rigidity of the suspension. So the Q of clock pendulums is increased by minimizing frictional losses (Γ). Precision pendulums are suspended on low friction pivots consisting of triangular shaped 'knife' edges resting on agate plates. Around 99% of the energy loss in a freeswinging pendulum is due to air friction, so mounting a pendulum in a vacuum tank can increase the Q, and thus the accuracy, by a factor of 100. The Q of pendulums ranges from several thousand in an ordinary clock to several hundred thousand for precision regulator pendulums swinging in vacuum. A quality home pendulum clock might have a Q of 10,000 and an accuracy of 10 seconds per month. The most accurate commercially produced pendulum clock was the Shortt-Synchronome free pendulum clock, invented in 1921. Its Invar master pendulum swinging in a vacuum tank had a Q of 110,000 and an error rate of around a second per year. Their Q of 10³–10⁵ is one reason why pendulums are more accurate timekeepers than the balance wheels in watches, with Q around 100–300, but less accurate than the quartz crystals in quartz clocks, with Q of 10⁵–10⁶. 
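The two ways of getting at Q mentioned above translate directly into a few lines of arithmetic. In this sketch the damping coefficient and the ring-down count are made-up illustrative values, not measurements taken from the text.

```python
import math

def q_from_damping(bob_mass_kg, period_s, damping_coeff):
    """Q = M * omega / Gamma, with omega = 2*pi/T and Gamma the frictional
    damping force per unit velocity (N per m/s)."""
    return bob_mass_kg * (2 * math.pi / period_s) / damping_coeff

def q_from_ringdown(swings_to_1_over_e):
    """Count full swings until the amplitude falls to 1/e of its starting
    value, then multiply by 2*pi."""
    return 2 * math.pi * swings_to_1_over_e

# A 6.8 kg seconds-pendulum bob (T = 2 s) with an assumed damping coefficient:
print(round(q_from_damping(6.8, 2.0, 0.002)))  # about 10,700
print(round(q_from_ringdown(1600)))            # about 10,050
```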
Pendulums (unlike, for example, quartz crystals) have a low enough Q that the disturbance caused by the impulses to keep them moving is generally the limiting factor on their timekeeping accuracy. Therefore the design of the escapement, the mechanism that provides these impulses, has a large effect on the accuracy of a clock pendulum. If the impulses given to the pendulum by the escapement each swing could be exactly identical, the response of the pendulum would be identical, and its period would be constant. However, this is not achievable; unavoidable random fluctuations in the force due to friction of the clock's pallets, lubrication variations, and changes in the torque provided by the clock's power source as it runs down mean that the force of the impulse applied by the escapement varies. If these variations in the escapement's force cause changes in the pendulum's width of swing (amplitude), this will cause corresponding slight changes in the period, since (as discussed at top) a pendulum with a finite swing is not quite isochronous. Therefore, the goal of traditional escapement design is to apply the force with the proper profile, and at the correct point in the pendulum's cycle, so force variations have no effect on the pendulum's amplitude. This is called an isochronous escapement.

The Airy condition

In 1826 British astronomer George Airy proved what clockmakers had known for centuries: that the disturbing effect of a drive force on the period of a pendulum is smallest if given as a short impulse as the pendulum passes through its bottom equilibrium position. Specifically, he proved that if a pendulum is driven by an impulse that is symmetrical about its bottom equilibrium position, the pendulum's amplitude will be unaffected by changes in the drive force. The most accurate escapements, such as the deadbeat, approximately satisfy this condition.

Gravity measurement

The presence of the acceleration of gravity g in the periodicity equation (1) for a pendulum means that the local gravitational acceleration of the Earth can be calculated from the period of a pendulum. A pendulum can therefore be used as a gravimeter to measure the local gravity, which varies by over 0.5% across the surface of the Earth.[Note 2] The pendulum in a clock is disturbed by the pushes it receives from the clock movement, so freeswinging pendulums were used, and were the standard instruments of gravimetry up to the 1930s.

The difference between clock pendulums and gravimeter pendulums is that to measure gravity, the pendulum's length as well as its period has to be measured. The period of freeswinging pendulums could be found to great precision by comparing their swing with a precision clock that had been adjusted to keep correct time by the passage of stars overhead. In the early measurements, a weight on a cord was suspended in front of the clock pendulum, and its length adjusted until the two pendulums swung in exact synchronism. Then the length of the cord was measured. From the length and the period, g could be calculated from (1).

The seconds pendulum

The seconds pendulum, a pendulum with a period of two seconds so each swing takes one second, was widely used to measure gravity, because most precision clocks had seconds pendulums. By the late 17th century, the length of the seconds pendulum became the standard measure of the strength of gravitational acceleration at a location. By 1700 its length had been measured with submillimeter accuracy at several cities in Europe.
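The calculation of g from a measured length and period is just a rearrangement of equation (1). Here is a minimal Python sketch of it; the function name is made up, and the example values are simply the roughly one-metre seconds pendulum discussed below.

import math

def g_from_pendulum(length_m, period_s):
    """Rearranged equation (1): T = 2*pi*sqrt(L/g)  ->  g = 4*pi^2*L / T^2.
    Valid for small swings, where the pendulum is nearly isochronous."""
    return 4 * math.pi ** 2 * length_m / period_s ** 2

# A seconds pendulum (T = 2 s) about 0.994 m long gives g close to 9.81 m/s^2:
print(g_from_pendulum(0.994, 2.0))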
For a seconds pendulum, g is proportional to its length:

g = π²L

(this follows from equation (1) with T = 2 s).

- 1620: British scientist Francis Bacon was one of the first to propose using a pendulum to measure gravity, suggesting taking one up a mountain to see if gravity varies with altitude.
- 1644: Even before the pendulum clock, French priest Marin Mersenne first determined the length of the seconds pendulum was 39.1 inches (993 mm), by comparing the swing of a pendulum to the time it took a weight to fall a measured distance.
- 1669: Jean Picard determined the length of the seconds pendulum at Paris, using a 1-inch (25 mm) copper ball suspended by an aloe fiber, obtaining 39.09 inches (993 mm).
- 1672: The first observation that gravity varied at different points on Earth was made in 1672 by Jean Richer, who took a pendulum clock to Cayenne, French Guiana and found that it lost 2 1⁄2 minutes per day; its seconds pendulum had to be shortened by 1 1⁄4 lignes (2.6 mm) relative to its Paris length to keep correct time. In 1687 Isaac Newton in Principia Mathematica showed this was because the Earth had a slightly oblate shape (flattened at the poles) caused by the centrifugal force of its rotation, so gravity increased with latitude. From this time on, pendulums began to be taken to distant lands to measure gravity, and tables were compiled of the length of the seconds pendulum at different locations on Earth. In 1743 Alexis Claude Clairaut created the first hydrostatic model of the Earth, Clairaut's formula, which allowed the ellipticity of the Earth to be calculated from gravity measurements. Progressively more accurate models of the shape of the Earth followed.
- 1687: Newton experimented with pendulums (described in Principia) and found that equal length pendulums with bobs made of different materials had the same period, proving that the gravitational force on different substances was exactly proportional to their mass (inertia).
- 1737: French mathematician Pierre Bouguer made a sophisticated series of pendulum observations in the Andes mountains, Peru. He used a copper pendulum bob in the shape of a double pointed cone suspended by a thread; the bob could be reversed to eliminate the effects of nonuniform density. He calculated the length to the center of oscillation of thread and bob combined, instead of using the center of the bob. He corrected for thermal expansion of the measuring rod and barometric pressure, giving his results for a pendulum swinging in vacuum. Bouguer swung the same pendulum at three different elevations, from sea level to the top of the high Peruvian altiplano. Gravity should fall with the inverse square of the distance from the center of the Earth. Bouguer found that it fell off slower, and correctly attributed the 'extra' gravity to the gravitational field of the huge Peruvian plateau. From the density of rock samples he calculated an estimate of the effect of the altiplano on the pendulum, and comparing this with the gravity of the Earth was able to make the first rough estimate of the density of the Earth.
- 1747: Daniel Bernoulli showed how to correct for the lengthening of the period due to a finite angle of swing θ0 by using the first order correction θ0²/16, giving the period of a pendulum with an infinitesimal swing.
- 1792: To define a pendulum standard of length for use with the new metric system, in 1792 Jean-Charles de Borda and Jean-Dominique Cassini made a precise measurement of the seconds pendulum at Paris. They used a 1 1⁄2-inch (14 mm) platinum ball suspended by a 12-foot (3.7 m) iron wire.
Their main innovation was a technique called the "method of coincidences" which allowed the period of pendulums to be compared with great precision. (Bouguer had also used this method). The time interval ΔT between the recurring instants when the two pendulums swung in synchronism was timed. From this the difference between the periods of the pendulums, T1 and T2, could be calculated:

1/T1 − 1/T2 = 1/ΔT

(taking T1 as the shorter of the two periods).

- 1821: Francesco Carlini made pendulum observations on top of Mount Cenis, Italy, from which, using methods similar to Bouguer's, he calculated the density of the Earth. He compared his measurements to an estimate of the gravity at his location assuming the mountain wasn't there, calculated from previous nearby pendulum measurements at sea level. His measurements showed 'excess' gravity, which he allocated to the effect of the mountain. Modeling the mountain as a segment of a sphere 11 miles (18 km) in diameter and 1 mile (1.6 km) high, from rock samples he calculated its gravitational field, and estimated the density of the Earth at 4.39 times that of water. Later recalculations by others gave values of 4.77 and 4.95, illustrating the uncertainties in these geographical methods.

Kater's pendulum

The precision of the early gravity measurements above was limited by the difficulty of measuring the length of the pendulum, L. L was the length of an idealized simple gravity pendulum (described at top), which has all its mass concentrated in a point at the end of the cord. In 1673 Huygens had shown that the period of a real pendulum (called a compound pendulum) was equal to the period of a simple pendulum with a length equal to the distance between the pivot point and a point called the center of oscillation, located under the center of gravity, that depends on the mass distribution along the pendulum. But there was no accurate way of determining the center of oscillation in a real pendulum.

To get around this problem, the early researchers above approximated an ideal simple pendulum as closely as possible by using a metal sphere suspended by a light wire or cord. If the wire was light enough, the center of oscillation was close to the center of gravity of the ball, at its geometric center. This "ball and wire" type of pendulum wasn't very accurate, because it didn't swing as a rigid body, and the elasticity of the wire caused its length to change slightly as the pendulum swung. However Huygens had also proved that in any pendulum, the pivot point and the center of oscillation were interchangeable. That is, if a pendulum were turned upside down and hung from its center of oscillation, it would have the same period as it did in the previous position, and the old pivot point would be the new center of oscillation.

British physicist and army captain Henry Kater in 1817 realized that Huygens' principle could be used to find the length of a simple pendulum with the same period as a real pendulum. If a pendulum was built with a second adjustable pivot point near the bottom so it could be hung upside down, and the second pivot was adjusted until the periods when hung from both pivots were the same, the second pivot would be at the center of oscillation, and the distance between the two pivots would be the length of a simple pendulum with the same period. Kater built a reversible pendulum (shown at right) consisting of a brass bar with two opposing pivots made of short triangular "knife" blades (a) near either end. It could be swung from either pivot, with the knife blades supported on agate plates.
Rather than make one pivot adjustable, he attached the pivots a meter apart and instead adjusted the periods with a moveable weight on the pendulum rod (b,c). In operation, the pendulum is hung in front of a precision clock, and the period timed, then turned upside down and the period timed again. The weight is adjusted with the adjustment screw until the periods are equal. Then putting this period and the distance between the pivots into equation (1) gives the gravitational acceleration g very accurately. Kater timed the swing of his pendulum using the "method of coincidences" and measured the distance between the two pivots with a microscope. After applying corrections for the finite amplitude of swing, the buoyancy of the bob, the barometric pressure and altitude, and temperature, he obtained a value of 39.13929 inches for the seconds pendulum at London, in vacuum, at sea level, at 62 °F. The largest variation from the mean of his 12 observations was 0.00028 in., representing a precision of gravity measurement of 7×10⁻⁶ (7 mGal or 70 µm/s²). Kater's measurement was used as Britain's official standard of length (see below) from 1824 to 1855. Reversible pendulums (known technically as "convertible" pendulums) employing Kater's principle were used for absolute gravity measurements into the 1930s.

Later pendulum gravimeters

The increased accuracy made possible by Kater's pendulum helped make gravimetry a standard part of geodesy. Since the exact location (latitude and longitude) of the 'station' where the gravity measurement was made was necessary, gravity measurements became part of surveying, and pendulums were taken on the great geodetic surveys of the 19th century, particularly the Great Trigonometric Survey of India.

- Invariable pendulums: Kater introduced the idea of relative gravity measurements, to supplement the absolute measurements made by a Kater's pendulum. Comparing the gravity at two different points was an easier process than measuring it absolutely by the Kater method. All that was necessary was to time the period of an ordinary (single pivot) pendulum at the first point, then transport the pendulum to the other point and time its period there. Since the pendulum's length was constant, from (1) the ratio of the gravitational accelerations was equal to the inverse of the ratio of the periods squared (g2 = g1(T1/T2)²), and no precision length measurements were necessary. So once the gravity had been measured absolutely at some central station, by the Kater or other accurate method, the gravity at other points could be found by swinging pendulums at the central station and then taking them to the nearby point. Kater made up a set of "invariable" pendulums, with only one knife edge pivot, which were taken to many countries after first being swung at a central station at Kew Observatory, UK.
- Airy's coal pit experiments: Starting in 1826, using methods similar to Bouguer's, British astronomer George Airy attempted to determine the density of the Earth by pendulum gravity measurements at the top and bottom of a coal mine. The gravitational force below the surface of the Earth decreases rather than increasing with depth, because by Gauss's law the mass of the spherical shell of crust above the subsurface point does not contribute to the gravity. The 1826 experiment was aborted by the flooding of the mine, but in 1854 he conducted an improved experiment at the Harton coal mine, using seconds pendulums swinging on agate plates, timed by precision chronometers synchronized by an electrical circuit.
He found the lower pendulum was slower by 2.24 seconds per day. This meant that the gravitational acceleration at the bottom of the mine, 1250 ft below the surface, was 1/14,000 less than it should have been from the inverse square law; that is, the attraction of the spherical shell was 1/14,000 of the attraction of the Earth. From samples of surface rock he estimated the mass of the spherical shell of crust, and from this estimated that the density of the Earth was 6.565 times that of water. Von Sterneck attempted to repeat the experiment in 1882 but found inconsistent results.

- Repsold-Bessel pendulum: It was time-consuming and error-prone to repeatedly swing the Kater's pendulum and adjust the weights until the periods were equal. Friedrich Bessel showed in 1835 that this was unnecessary. As long as the periods were close together, the gravity could be calculated from the two periods and the center of gravity of the pendulum. So the reversible pendulum didn't need to be adjustable; it could just be a bar with two pivots. Bessel also showed that if the pendulum was made symmetrical in form about its center, but was weighted internally at one end, the errors due to air drag would cancel out. Further, another error due to the finite diameter of the knife edges could be made to cancel out if they were interchanged between measurements. Bessel didn't construct such a pendulum, but in 1864 Adolf Repsold, under contract to the Swiss Geodetic Commission, made a pendulum along these lines. The Repsold pendulum was about 56 cm long and had a period of about 3⁄4 second. It was used extensively by European geodetic agencies, and with the Kater pendulum in the Survey of India. Similar pendulums of this type were designed by Charles Peirce and C. Defforges.
- Von Sterneck and Mendenhall gravimeters: In 1887 Austro-Hungarian scientist Robert von Sterneck developed a small gravimeter pendulum mounted in a temperature-controlled vacuum tank to eliminate the effects of temperature and air pressure. These used "half-second pendulums," having a period close to one second, and were about 25 cm long. They were nonreversible, so they were used for relative gravity measurements, but their small size made them portable. The period of the pendulum was picked off by reflecting the image of an electric spark created by a precision chronometer off a mirror mounted at the top of the pendulum rod. The Von Sterneck instrument, and a similar instrument developed by Thomas C. Mendenhall of the US Coast and Geodetic Survey in 1890, were used extensively for surveys into the 1920s.
- The Mendenhall pendulum was actually a more accurate timekeeper than the highest precision clocks of the time, and as the 'world's best clock' it was used by A. A. Michelson in his 1924 measurements of the speed of light on Mt. Wilson, California.
- Double pendulum gravimeters: Starting in 1875, the increasing accuracy of pendulum measurements revealed another source of error in existing instruments: the swing of the pendulum caused a slight swaying of the tripod stand used to support portable pendulums, introducing error. In 1875 Charles S. Peirce calculated that measurements of the length of the seconds pendulum made with the Repsold instrument required a correction of 0.2 mm due to this error. In 1880 C. Defforges used a Michelson interferometer to measure the sway of the stand dynamically, and interferometers were added to the standard Mendenhall apparatus to calculate sway corrections.
A method of preventing this error was first suggested in 1877 by Hervé Faye and advocated by Peirce, Cellérier and Furtwangler: mount two identical pendulums on the same support, swinging with the same amplitude, 180° out of phase. The opposite motion of the pendulums would cancel out any sideways forces on the support. The idea was opposed due to its complexity, but by the start of the 20th century the Von Sterneck device and other instruments were modified to swing multiple pendulums simultaneously.

- Gulf gravimeter: One of the last and most accurate pendulum gravimeters was the apparatus developed in 1929 by the Gulf Research and Development Co. It used two pendulums made of fused quartz, each 10.7 inches (272 mm) in length with a period of 0.89 second, swinging on pyrex knife edge pivots, 180° out of phase. They were mounted in a permanently sealed temperature and humidity controlled vacuum chamber. Stray electrostatic charges on the quartz pendulums had to be discharged by exposing them to a radioactive salt before use. The period was detected by reflecting a light beam from a mirror at the top of the pendulum, recorded by a chart recorder and compared to a precision crystal oscillator calibrated against the WWV radio time signal. This instrument was accurate to within (0.3–0.5)×10⁻⁷ (30–50 microgals or 3–5 nm/s²). It was used into the 1960s.

Relative pendulum gravimeters were superseded by the simpler LaCoste zero-length spring gravimeter, invented in 1934 by Lucien LaCoste. Absolute (reversible) pendulum gravimeters were replaced in the 1950s by free fall gravimeters, in which a weight is allowed to fall in a vacuum tank and its acceleration is measured by an optical interferometer.

Standard of length

Because the acceleration of gravity is constant at a given point on Earth, the period of a simple pendulum at a given location depends only on its length. Additionally, gravity varies only slightly at different locations. Almost from the pendulum's discovery until the early 19th century, this property led scientists to suggest using a pendulum of a given period as a standard of length.

Until the 19th century, countries based their systems of length measurement on prototypes, metal bar primary standards, such as the standard yard in Britain kept at the Houses of Parliament, and the standard toise in France, kept at Paris. These were vulnerable to damage or destruction over the years, and because of the difficulty of comparing prototypes, the same unit often had different lengths in distant towns, creating opportunities for fraud. Enlightenment scientists argued for a length standard that was based on some property of nature that could be determined by measurement, creating an indestructible, universal standard. The period of pendulums could be measured very precisely by timing them with clocks that were set by the stars. A pendulum standard amounted to defining the unit of length by the gravitational force of the Earth, for all intents and purposes constant, and the second, which was defined by the rotation rate of the Earth, also constant. The idea was that anyone, anywhere on Earth, could recreate the standard by constructing a pendulum that swung with the defined period and measuring its length. Virtually all proposals were based on the seconds pendulum, in which each swing (a half period) takes one second, which is about a meter (39 inches) long, because by the late 17th century it had become a standard for measuring gravity (see previous section).
By the 18th century its length had been measured with sub-millimeter accuracy at a number of cities in Europe and around the world. The initial attraction of the pendulum length standard was that it was believed (by early scientists such as Huygens and Wren) that gravity was constant over the Earth's surface, so a given pendulum had the same period at any point on Earth. So the length of the standard pendulum could be measured at any location, and would not be tied to any given nation or region; it would be a truly democratic, worldwide standard. Although Richer found in 1672 that gravity varies at different points on the globe, the idea of a pendulum length standard remained popular, because it was found that gravity only varies with latitude. Gravitational acceleration increases smoothly from the equator to the poles, due to the oblate shape of the Earth. So at any given latitude (east-west line), gravity was constant enough that the length of a seconds pendulum was the same within the measurement capability of the 18th century. So the unit of length could be defined at a given latitude and measured at any point at that latitude. For example, a pendulum standard defined at 45° north latitude, a popular choice, could be measured in parts of France, Italy, Croatia, Serbia, Romania, Russia, Kazakhstan, China, Mongolia, the United States and Canada. In addition, it could be recreated at any location at which the gravitational acceleration had been accurately measured.

By the mid 19th century, increasingly accurate pendulum measurements by Edward Sabine and Thomas Young revealed that gravity, and thus the length of any pendulum standard, varied measurably with local geologic features such as mountains and dense subsurface rocks. So a pendulum length standard had to be defined at a single point on Earth and could only be measured there. This took much of the appeal from the concept, and efforts to adopt pendulum standards were abandoned.

One of the first to suggest defining length with a pendulum was Flemish scientist Isaac Beeckman, who in 1631 recommended making the seconds pendulum "the invariable measure for all people at all times in all places". Marin Mersenne, who first measured the seconds pendulum in 1644, also suggested it. The first official proposal for a pendulum standard was made by the British Royal Society in 1660, advocated by Christiaan Huygens and Ole Rømer, basing it on Mersenne's work, and Huygens in Horologium Oscillatorium proposed a "horary foot" defined as 1/3 of the seconds pendulum. Christopher Wren was another early supporter. The idea of a pendulum standard of length must have been familiar to people as early as 1663, because Samuel Butler satirizes it in Hudibras:

- Upon the bench I will so handle 'em
- That the vibration of this pendulum
- Shall make all taylors' yards of one
- Unanimous opinion

In 1671 Jean Picard proposed a pendulum-defined 'universal foot' in his influential Mesure de la Terre. Gabriel Mouton around 1670 suggested defining the toise either by a seconds pendulum or a minute of terrestrial degree. A plan for a complete system of units based on the pendulum was advanced in 1675 by Italian polymath Tito Livio Burratini. In France in 1747, geographer Charles Marie de la Condamine proposed defining length by a seconds pendulum at the equator, since at this location a pendulum's swing wouldn't be distorted by the Earth's rotation. British politicians James Steuart (1780) and George Skene Keith were also supporters.
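To get a feel for how small the latitude dependence discussed above actually is, the following Python sketch estimates the seconds-pendulum length L = g/π² at a few latitudes. It uses the 1980 international gravity formula as an approximation for sea-level gravity; that formula, and the function names, are not part of this article and are used here only as a rough illustration.

import math

def gravity(lat_deg):
    """Approximate sea-level gravity (m/s^2) from the 1980 international
    gravity formula, used here only as a convenient approximation."""
    s = math.sin(math.radians(lat_deg)) ** 2
    s2 = math.sin(math.radians(2 * lat_deg)) ** 2
    return 9.780327 * (1 + 0.0053024 * s - 0.0000058 * s2)

def seconds_pendulum_length(lat_deg):
    """Length of a 2 s period pendulum, from g = pi^2 * L."""
    return gravity(lat_deg) / math.pi ** 2

for lat in (0, 45, 90):
    print(lat, round(seconds_pendulum_length(lat), 4))
# roughly 0.9910 m at the equator, 0.9936 m at 45 degrees, 0.9962 m at the pole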
By the end of the 18th century, when many nations were reforming their weight and measure systems, the seconds pendulum was the leading choice for a new definition of length, advocated by prominent scientists in several major nations. In 1790, then US Secretary of State Thomas Jefferson proposed to Congress a comprehensive decimalized US 'metric system' based on the seconds pendulum at 38° North latitude, the mean latitude of the United States. No action was taken on this proposal. In Britain the leading advocate of the pendulum was politician John Riggs Miller. When his efforts to promote a joint British–French–American metric system fell through in 1790, he proposed a British system based on the length of the seconds pendulum at London. This standard was adopted in 1824 (see below).

In the discussions leading up to the French adoption of the metric system in 1791, the leading candidate for the definition of the new unit of length, the metre, was the seconds pendulum at 45° North latitude. It was advocated by a group led by French politician Talleyrand and mathematician Antoine Nicolas Caritat de Condorcet. This was one of the three final options considered by the French Academy of Sciences committee. However, on March 19, 1791 the committee instead chose to base the metre on the length of the meridian through Paris. A pendulum definition was rejected because of its variability at different locations, and because it defined length by a unit of time. (However, since 1983 the metre has been officially defined in terms of the length of the second and the speed of light.) A possible additional reason is that the radical French Academy didn't want to base their new system on the second, a traditional and nondecimal unit from the ancien regime. Although not defined by the pendulum, the final length chosen for the metre, 10⁻⁷ of the pole-to-equator meridian arc, was very close to the length of the seconds pendulum (0.9937 m), within 0.63%. Although no reason for this particular choice was given at the time, it was probably to facilitate the use of the seconds pendulum as a secondary standard, as was proposed in the official document. So the modern world's standard unit of length is certainly closely linked historically with the seconds pendulum.

Britain and Denmark

Britain and Denmark appear to be the only nations that (for a short time) based their units of length on the pendulum. In 1821 the Danish inch was defined as 1/38 of the length of the mean solar seconds pendulum at 45° latitude at the meridian of Skagen, at sea level, in vacuum. The British parliament passed the Imperial Weights and Measures Act in 1824, a reform of the British standard system which declared that if the prototype standard yard was destroyed, it would be recovered by defining the inch so that the length of the solar seconds pendulum at London, at sea level, in a vacuum, at 62 °F was 39.1393 inches. This also became the US standard, since at the time the US used British measures. However, when the prototype yard was lost in the 1834 Houses of Parliament fire, it proved impossible to recreate it accurately from the pendulum definition, and in 1855 Britain repealed the pendulum standard and returned to prototype standards.

Seismometers

A pendulum in which the rod is not vertical but almost horizontal was used in early seismometers for measuring earth tremors. The bob of the pendulum does not move when its mounting does, and the difference in the movements is recorded on a drum chart.
Schuler tuning

As first explained by Maximilian Schuler in a 1923 paper, a pendulum whose period exactly equals the orbital period of a hypothetical satellite orbiting just above the surface of the earth (about 84 minutes) will tend to remain pointing at the center of the earth when its support is suddenly displaced. This principle, called Schuler tuning, is used in inertial guidance systems in ships and aircraft that operate on the surface of the Earth. No physical pendulum is used, but the control system that keeps the inertial platform containing the gyroscopes stable is modified so the device acts as though it is attached to such a pendulum, keeping the platform always facing down as the vehicle moves on the curved surface of the Earth.
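The 84-minute figure can be checked by treating the Earth's radius as the pendulum length in equation (1), so that T = 2π√(R/g). A quick Python check, using rounded constants chosen only for illustration:

import math

R_EARTH = 6.371e6     # mean Earth radius, m (rounded)
G = 9.81              # m/s^2 (rounded)

schuler_period = 2 * math.pi * math.sqrt(R_EARTH / G)
print(schuler_period / 60)    # ~84.4 minutes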
Coupled pendulums

In 1665 Huygens made a curious observation about pendulum clocks. Two clocks had been placed on his mantlepiece, and he noted that they had acquired an opposing motion. That is, their pendulums were beating in unison but in the opposite direction, 180° out of phase. Regardless of how the two clocks were started, he found that they would eventually return to this state, thus making the first recorded observation of a coupled oscillator. The cause of this behavior was that the two pendulums were affecting each other through slight motions of the supporting mantlepiece. Many physical systems can be mathematically described as coupled oscillators. Under certain conditions these systems can also demonstrate chaotic motion.

Pendulum motion appears in religious ceremonies as well. The swinging incense burner called a censer, also known as a thurible, is an example of a pendulum. Pendulums are also seen at many gatherings in eastern Mexico, where they mark the turning of the tides on the day when the tides are at their highest point. See also pendulums for divination and dowsing.

During the Middle Ages, pendulums were used as a method of torture by the Spanish Inquisition. Using the basic principle of the pendulum, the weight (bob) is replaced by an axe head. The victim is strapped to a table below, the device is activated, and the axe begins to swing back and forth through the air. With each pass, or return, the pendulum is lowered, gradually coming closer to the victim's torso, until it finally cleaves it. Because of the time required before the mortal action of the axe is complete, the pendulum is considered a method of torturing the victim before his or her demise.

See also

- Barton's Pendulums
- Blackburn pendulum
- Conical pendulum
- Doubochinski's pendulum
- Double pendulum
- Double inverted pendulum
- Foucault pendulum
- Furuta pendulum
- Gridiron pendulum
- Inertia wheel pendulum
- Inverted pendulum
- Harmonograph (a.k.a. "Lissajous pendulum")
- Kapitza's pendulum
- Kater's pendulum
- Pendulum (mathematics)
- Pendulum clock
- Pendulum rocket fallacy
- Seconds pendulum
- Simple harmonic motion
- Spherical pendulum
- Torsional pendulum

External links

- NAWCC National Association of Watch & Clock Collectors Museum
- Graphical derivation of the time period for a simple pendulum
- A more general explanation of pendula
- Web-based calculator of pendulum properties from numerical inputs
- An animated and interactive rigid pendulum model in MS Excel

Further reading

- G. L. Baker and J. A. Blackburn (2009). The Pendulum: A Case Study in Physics (Oxford University Press).
- M. Gitterman (2010). The Chaotic Pendulum (World Scientific).
- Michael R. Matthews, Arthur Stinner and Colin F. Gauld (2005). The Pendulum: Scientific, Historical, Philosophical and Educational Perspectives. Springer.
- Michael R. Matthews, Colin Gauld and Arthur Stinner (2005). "The Pendulum: Its Place in Science, Culture and Pedagogy". Science & Education, 13, 261-277.
- Matthys, Robert J. (2004). Accurate Pendulum Clocks. UK: Oxford Univ. Press. ISBN 0-19-852971-6.
- Nelson, Robert; M. G. Olsson (February 1986). "The pendulum - Rich physics from a simple system". American Journal of Physics 54 (2): 112–121. doi:10.1119/1.14703.
- L. P. Pook (2011). Understanding Pendulums: A Brief Introduction (Springer).
http://en.wikipedia.org/wiki/Pendulum
- to continue developing the student's sense of area, especially the relation between the area of a right triangle and the area of the corresponding rectangle
- to associate certain attributes of right triangles with appropriate terminology (e.g., similar, isosceles)
- to review some geometric concepts (e.g., congruent, similar) in the context of parallelograms
- to experiment with geometric pattern making

Materials:
- 5 x 5 geoboards (the kind that fit together to make a 10 x 10 geoboard) and rubber bands (various sizes and colors) - at least five geoboards for the teacher and one for each student
- 5 x 5 geoboard dot paper (two or three sheets for each student)
- 10 x 10 geoboard dot paper (for extended activities)
- overhead projector; transparent geoboard or dot paper (with marking pens)

- Make a rectangle on your geoboard. What is its area?
- Make another right triangle on your geoboard and determine its area.
- Find the right triangle with smallest area.
- Find the right triangle with next smallest area.
- Find the right triangle with largest area.
- Find the right triangle with next largest area.

Find all possible right triangles on a 5 x 5 geoboard. (There is a total of 17 right triangles; see the enumeration sketch at the end of this page.) Order each of the right triangles found in the main activity by area. (In order, the 17 right triangles have area 1/2, 1, 1, 3/2, 2, 2, 2, 5/2, 3, 3, 4, 4, 9/2, 5, 5, 6, and 8 square units.)

- Make an isosceles right triangle, namely, a right triangle having two equal sides.
- Make another isosceles right triangle, but with different area. Is this triangle similar to the first one?
- Find another pair of isosceles right triangles (with different area).
- Find four similar right triangles, all with different area.
- Make a right triangle that is not isosceles.
- Make two right triangles which are similar, but not isosceles.
- Make a right triangle with area 1 square unit. Can you find another right triangle with area 1 square unit?
- Find four right triangles with area less than 2 square units.
- There are three right triangles with area 2 square units. Can you find them?
- Find a right triangle with area 2 1/2 square units.
- Find a right triangle with area 3 square units. Can you find another right triangle with area 3 square units?
- How many right triangles can you find with area 4 square units?
- Find a right triangle with area 4 1/2 square units.
- Find two right triangles with area 5 square units.
- Continue this pattern of right triangles on a 10 x 10 geoboard and complete the table:
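The counts and areas in the extended activity can be checked by brute force. The Python sketch below is not part of the original lesson; it enumerates every triangle whose vertices are pegs of the 5 x 5 geoboard, keeps the right triangles, and treats two triangles as the same when they have the same three side lengths (that is, when they are congruent), which is one reasonable reading of "all possible right triangles."

from itertools import combinations

def distinct_right_triangles(n=5):
    """Return {sorted squared side lengths: area} for the non-congruent
    right triangles whose vertices lie on the pegs of an n x n geoboard."""
    pegs = [(x, y) for x in range(n) for y in range(n)]

    def d2(p, q):                 # squared distance between two pegs
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

    found = {}
    for a, b, c in combinations(pegs, 3):
        # Twice the area, from the shoelace formula; zero means the pegs are collinear.
        area2 = abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
        if area2 == 0:
            continue
        sides = sorted((d2(a, b), d2(b, c), d2(a, c)))
        if sides[0] + sides[1] == sides[2]:      # Pythagoras: a right angle is present
            found[tuple(sides)] = area2 / 2.0
    return found

triangles = distinct_right_triangles(5)
print(len(triangles))                 # 17, as stated in the activity
print(sorted(triangles.values()))     # 0.5, 1, 1, 1.5, 2, 2, 2, 2.5, ..., 8

Changing the argument from 5 to 10 extends the same check to the 10 x 10 geoboard used in the final activity.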
http://mathforum.org/trscavo/geoboards/geobd6.html
Digital Signal Processing/Discrete Operations

There are a number of different operations that can be performed on discrete data sets. Let's say that we have 2 data sets, A[n] and B[n]. We will define these two sets to have the following values:

A[n] = [W X& Y Z]
B[n] = [I J& K L]

(Here the ampersand marks the sample that sits at the zero-time position, n = 0.)

Arithmetic operations on discrete data sets are performed on an item-by-item basis. Here are some examples:

A[n] + B[n] = [(W+I) (X+J)& (Y+K) (Z+L)]

If the zero positions don't line up (like they conveniently do in our example), we need to manually line up the zero positions before we add.

A[n] - B[n] = [(W-I) (X-J)& (Y-K) (Z-L)]

Same as addition, only we subtract.

In this situation, we are using the "asterisk" (*) to denote multiplication. Realistically, we should use the "×" symbol for multiplication. In later examples, we will use the asterisk for other operations.

A[n] * B[n] = [(W*I) (X*J)& (Y*K) (Z*L)]
A[n] / B[n] = [(W/I) (X/J)& (Y/K) (Z/L)]

Even though we are using the square brackets for discrete data sets, they should NOT be confused with matrices. These are not matrices, and we do not perform matrix operations on these sets.

Time Shifting

If we time-shift a discrete data set, we essentially are moving the zero-time reference point of the set. The zero point represents "now", creates the starting point of our view of the data, and the point's location is typically established by a need of the processing we're involved in. Let's say we have the data set F[n] with the values:

F[n] = [1 2 3& 4 5]

Then we can shift this set backwards by 1 data point as such:

F[n-1] = [1 2& 3 4 5]

We can shift the data forward by 1 data point in a similar manner:

F[n+1] = [1 2 3 4& 5]

Discrete data values are time oriented. Values to the right of the zero point are called "future values", and values to the left are "past values". When the data set is populated on both sides of the zero reference, "future" and "past" are synthetic terms relative to the zero point "now", and don't refer to current physical time. In this context, the data values have already been collected and, as such, can only come from our past. It is important to understand that a physically-realizable digital system that is still receiving data cannot perform operations on future values. It makes no sense to require a future value in order to make a calculation, because such systems are often getting input from a sensor in real-time, and future values simply don't exist.

Time Inversion

Let's say that we have the same data set, F[n]:

F[n] = [1 2 3& 4 5]

We can invert the data set as such:

F[-n] = [5 4 3& 2 1]

We keep the zero point in the same place, and we flip all the data items around. In essence, we are taking a mirror image of the data set, and the zero point is the mirror.

Convolution

Convolution is a much easier operation in the discrete time domain than in the continuous time domain. Let's say we have two data sets, A[n] and B[n]:

A[n] = [1& 0 1 2]
B[n] = [2& 2 2 1]

We will denote convolution with the asterisk (*) in this section; it isn't "multiplication" here. Our convolution function is shown like this:

Y[n] = A[n] * B[n]

and it specifies that we will store our convolution values in the set named Y[n]. Convolution is performed by following a series of steps involving both sets of data points. First, we time invert one of the data sets.
It doesn't matter which one, so we can pick the easiest choice:

A[n] = [1& 0 1 2]
B[-n] = [1 2 2 2&]

Next, we line up the data vertically so that only one data item overlaps:

A[n]  ->          [1& 0 1 2]
B[-n] -> [1 2 2 2&]

Now, we are going to zero-pad both sets, making them equal length, putting zeros in open positions as needed:

A[n]  -> [0 0 0 1& 0 1 2]
B[-n] -> [1 2 2 2& 0 0 0]

Now, we will multiply the contents of each column, and add them together (only the columns under A's original four samples are written out; the padded columns contribute nothing):

Y[m] = (1)(2) + (0)(0) + (1)(0) + (2)(0) = 2

This gives us the first data point: Y[m] = 2. Next, we need to shift B[-n] one point to the right and do the same process: multiply the columns, and add:

A[n]    -> [0 0 0 1& 0 1 2]
B[-n+1] -> [0 1 2 2  2 0 0]

Y[m+1] = (1)(2) + (0)(2) + (1)(0) + (2)(0) = 2

Repeat the "shift, multiply, and add" steps until no more data points overlap:

A[n]    -> [0 0 0 1& 0 1 2]
B[-n+2] -> [0 0 1 2  2 2 0]

Y[m+2] = (1)(2) + (0)(2) + (1)(2) + (2)(0) = 4

A[n]    -> [0 0 0 1& 0 1 2]
B[-n+3] -> [0 0 0 1  2 2 2]

Y[m+3] = (1)(1) + (0)(2) + (1)(2) + (2)(2) = 7

A[n]    -> [0 0 0 1& 0 1 2]
B[-n+4] -> [0 0 0 0  1 2 2]

Y[m+4] = (1)(0) + (0)(1) + (1)(2) + (2)(2) = 6

A[n]    -> [0 0 0 1& 0 1 2]
B[-n+5] -> [0 0 0 0  0 1 2]

Y[m+5] = (1)(0) + (0)(0) + (1)(1) + (2)(2) = 5

A[n]    -> [0 0 0 1& 0 1 2]
B[-n+6] -> [0 0 0 0  0 0 1]

Y[m+6] = (1)(0) + (0)(0) + (1)(0) + (2)(1) = 2

Now we have our full set of data points, Y[m]:

Y[m] = [2 2 4 7 6 5 2]

We have our values, but where do we put the zero point? It turns out that the zero point of the result occurs where the zero points of the two operands overlapped in our shift calculations. The result set becomes:

Y[n] = [2& 2 4 7 6 5 2]

It is important to note that the length of the result set is the sum of the lengths of the unpadded operands, minus 1. Or, if you prefer, the length of the result set is the same as either of the zero-padded sets (since they are equal length).

Discrete Operations in Matlab/Octave

These discrete operations can all be performed in Matlab, using special operators.

- Division and Multiplication - Matlab is matrix-based, and therefore the normal multiplication and division operations will be the matrix operations, which are not what we want to do in DSP. To multiply items on a per-item basis, we need to prepend the operators with a period. For instance:

Y = X .* H %X times H
Y = X ./ H %X divided by H

If we forget the period, Matlab will attempt a matrix operation, and will alert you that the dimensions of the matrices are incompatible with the operator.

- Convolution - The convolution operation can be performed with the conv command in Matlab. For instance, if we wanted to compute the convolution of X[n] and H[n], we would use the following Matlab code:

Y = conv(X, H);
Y = conv(H, X);

Difference Calculus

When working with discrete sets, you might wonder exactly how we would perform calculus in the discrete time domain. In fact, we should wonder if calculus as we know it is possible at all! It turns out that in the discrete time domain, we can use several techniques to approximate the calculus operations of differentiation and integration. We call these techniques Difference Calculus.

What is differentiation exactly? In the continuous time domain, the derivative of a function is the slope of the function at any given point in time. To find the derivative, then, of a discrete time signal, we need to find the slope of the discrete data set. The data points in the discrete time domain can be treated as geometrical points, and we know that any two points define a unique line. We can then use the algebraic equation to solve for the slope, m, of the line:

m = (f(t2) − f(t1)) / (t2 − t1)

Now, in the discrete time domain, f(t) is sampled and replaced by F[n].
Difference Calculus

When working with discrete sets, you might wonder exactly how we would perform calculus in the discrete time domain. In fact, we should wonder if calculus as we know it is possible at all! It turns out that in the discrete time domain, we can use several techniques to approximate the calculus operations of differentiation and integration. We call these techniques Difference Calculus.

What is differentiation exactly? In the continuous time domain, the derivative of a function is the slope of the function at any given point in time. To find the derivative, then, of a discrete time signal, we need to find the slope of the discrete data set. The data points in the discrete time domain can be treated as geometrical points, and we know that any two points define a unique line. We can then use the algebraic equation to solve for the slope, m, of the line:

m = (y2 - y1) / (x2 - x1)

Now, in the discrete time domain, f(t) is sampled and replaced by F[n]. We also know that in discrete time, the difference in time between any two adjacent points is exactly 1 unit of time! With this in mind, we can plug these values into our above equation:

m = (F[n+1] - F[n]) / 1 = F[n+1] - F[n]

We can then time-shift the entire equation one point to the left, so that our equation doesn't require any future values:

m = F[n] - F[n-1]

The derivative can be found by subtracting time-shifted (delayed) versions of the function from itself.

Difference Calculus Equations

Difference calculus equations are arithmetic equations, with a few key components. Let's take an example:

Y[n] = 2X[n-1] + 5X[n-2]

We can see in this equation that X[n-1] has a coefficient of 2, and X[n-2] has a coefficient of 5. In difference calculus, the coefficients are known as "tap weights". We will see why they are called tap weights in later chapters about digital filters.
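As a quick illustration of the first difference and of tap weights, here is a small Python sketch (the data values are invented for the example):

def first_difference(x):
    """Approximate derivative: y[n] = x[n] - x[n-1] (x[-1] taken as 0)."""
    return [xn - (x[i - 1] if i > 0 else 0) for i, xn in enumerate(x)]

def example_difference_eq(x):
    """Y[n] = 2*X[n-1] + 5*X[n-2]: each delayed input has its own tap weight."""
    def X(i):
        return x[i] if 0 <= i < len(x) else 0
    return [2 * X(n - 1) + 5 * X(n - 2) for n in range(len(x))]

data = [1, 2, 3, 4, 5]
print(first_difference(data))       # [1, 1, 1, 1, 1] -- a constant slope of 1
print(example_difference_eq(data))  # [0, 2, 9, 16, 23]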
http://en.wikibooks.org/wiki/Digital_Signal_Processing/Discrete_Operations
Warning: the HTML version of this document is generated from Latex and may contain translation errors. In particular, some mathematical expressions are not translated correctly. Creating a new data type Object-oriented programming languages allow programmers to create new data types that behave much like built-in data types. We will explore this capability by building a Fraction class that works very much like the built-in numeric types: integers, longs and floats. Fractions, also known as rational numbers, are values that can be expressed as a ratio of whole numbers, such as 5/6. The top number is called the numerator and the bottom number is called the denominator. We start by defining a Fraction class with an initialization method that provides the numerator and denominator as integers: The denominator is optional. A Fraction with just one parameter represents a whole number. If the numerator is n, we build the Fraction n/1. The next step is to write a __str__ method that displays fractions in a way that makes sense. The form "numerator/denominator" is natural here: To test what we have so far, we put it in a file named Fraction.py and import it into the Python interpreter. Then we create a fraction object and print it. >>> from Fraction import Fraction As usual, the print command invokes the __str__ method implicitly. We would like to be able to apply the normal addition, subtraction, multiplication, and division operations to fractions. To do this, we can overload the mathematical operators for Fraction objects. We'll start with multiplication because it is the easiest to implement. To multiply fractions, we create a new fraction with a numerator that is the product of the original numerators and a denominator that is a product of the original denominators. __mul__ is the name Python uses for a method that overloads the * operator: We can test this method by computing the product of two fractions: >>> print Fraction(5,6) * Fraction(3,4) It works, but we can do better! We can extend the method to handle multiplication by an integer. We use the isinstance function to test if other is an integer and convert it to a fraction if it is. Multiplying fractions and integers now works, but only if the fraction is the left operand: >>> print Fraction(5,6) * 4 To evaluate a binary operator like multiplication, Python checks the left operand first to see if it provides a __mul__ that supports the type of the second operand. In this case, the built-in integer operator doesn't support fractions. Next, Python checks the right operand to see if it provides an __rmul__ method that supports the first type. In this case, we haven't provided __rmul__, so it fails. On the other hand, there is a simple way to provide __rmul__: This assignment says that the __rmul__ is the same as __mul__. Now if we evaluate 4 * Fraction(5,6), Python invokes __rmul__ on the Fraction object and passes 4 as a parameter: >>> print 4 * Fraction(5,6) Since __rmul__ is the same as __mul__, and __mul__ can handle an integer parameter, we're all set. Addition is more complicated than multiplication, but still not too bad. The sum of a/b and c/d is the fraction (a*d+c*b)/(b*d). Using the multiplication code as a model, we can write __add__ and __radd__: We can test these methods with Fractions and integers. >>> print Fraction(5,6) + Fraction(5,6) The first two examples invoke __add__; the last invokes __radd__. In the previous example, we computed the sum 5/6 + 5/6 and got 60/36. 
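The class and method listings referred to above are not reproduced in this copy of the text, so the following is a hedged reconstruction of the Fraction class as described so far (Python 2 syntax, to match the print statements above); the book's actual code may differ in detail:

class Fraction:
    def __init__(self, numerator, denominator=1):
        # The denominator is optional; Fraction(n) represents the whole number n/1.
        self.numerator = numerator
        self.denominator = denominator

    def __str__(self):
        # Display fractions in the natural "numerator/denominator" form.
        return "%d/%d" % (self.numerator, self.denominator)

    def __mul__(self, other):
        # Accept integers by converting them to Fractions first.
        if isinstance(other, int):
            other = Fraction(other)
        return Fraction(self.numerator * other.numerator,
                        self.denominator * other.denominator)

    __rmul__ = __mul__   # so that 4 * Fraction(5,6) works via the right-hand hook

    def __add__(self, other):
        # a/b + c/d = (a*d + c*b)/(b*d)
        if isinstance(other, int):
            other = Fraction(other)
        return Fraction(self.numerator * other.denominator +
                        other.numerator * self.denominator,
                        self.denominator * other.denominator)

    __radd__ = __add__

print Fraction(5, 6) * Fraction(3, 4)   # 15/24
print Fraction(5, 6) + Fraction(5, 6)   # 60/36 (not yet reduced)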
That is correct, but it's not the best way to represent the answer. To reduce the fraction to its simplest terms, we have to divide the numerator and denominator by their greatest common divisor (GCD), which is 12. The result is 5/3. In general, whenever we create a new Fraction object, we should reduce it by dividing the numerator and denominator by their GCD. If the fraction is already reduced, the GCD is 1. Euclid of Alexandria (approx. 325--265 BCE) presented an algorithm to find the GCD for two integers m and n: If n divides m evenly, then n is the GCD. Otherwise the GCD is the GCD of n and the remainder of m divided by n. This recursive definition can be expressed concisely as a function: def gcd (m, n): In the first line of the body, we use the modulus operator to check divisibility. On the last line, we use it to compute the remainder after division. Since all the operations we've written create new Fractions for the result, we can reduce all results by modifying the initialization method. Now whenever we create a Fraction, it is reduced to its simplest form: A nice feature of gcd is that if the fraction is negative, the minus sign is always moved to the numerator. Suppose we have two Fraction objects, a and b, and we evaluate a == b. The default implementation of == tests for shallow equality, so it only returns true if a and b are the same object. More likely, we want to return true if a and b have the same value We have to teach fractions how to compare themselves. As we saw in Section 15.4, we can overload all the comparison operators at once by supplying a __cmp__ method. By convention, the __cmp__ method returns a negative number if self is less than other, zero if they are the same, and a positive number if self is greater than other. The simplest way to compare fractions is to cross-multiply. If a/b > c/d, then ad > bc. With that in mind, here is the code for __cmp__: If self is greater than other, then diff will be positive. If other is greater, then diff will be negative. If they are the same, diff is zero. Taking it further Of course, we are not done. We still have to implement subtraction by overriding __sub__ and division by overriding __div__. One way to handle those operations is to implement negation by overriding __neg__ and inversion by overriding __invert__. Then we can subtract by negating the second operand and adding, and we can divide by inverting the second operand and multiplying. Next, we have to provide __rsub__ and __rdiv__. Unfortunately, we can't use the same trick we used for addition and multiplication, because subtraction and division are not commutative. We can't just set __rsub__ and __rdiv__ equal to __sub__ and __div__. In these operations, the order of the operands makes a difference. To handle unary negation, which is the use of the minus sign with a single operand, we override __neg__. We can compute powers by overriding __pow__, but the implementation is a little tricky. If the exponent isn't an integer, then it may not be possible to represent the result as a Fraction. For example, Fraction(2) ** Fraction(1,2) is the square root of 2, which is an irrational number (it can't be represented as a fraction). So it's not easy to write the most general version of __pow__. There is one other extension to the Fraction class that you might want to think about. So far, we have assumed that the numerator and denominator are integers. 
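Here, likewise, is a sketch of the gcd function, the reducing initializer, and the __cmp__ method as described (again a reconstruction in Python 2 style, not necessarily the book's exact listing):

def gcd(m, n):
    # If n divides m evenly, n is the GCD; otherwise recurse on n and m % n.
    if m % n == 0:
        return n
    else:
        return gcd(n, m % n)

class Fraction:
    def __init__(self, numerator, denominator=1):
        # Reduce to simplest terms on construction by dividing out the GCD.
        # (Python 2 integer division; in Python 3 this would be //.)
        g = gcd(numerator, denominator)
        self.numerator = numerator / g
        self.denominator = denominator / g

    # ... __str__, __mul__, __add__ and the other methods as in the earlier sketch ...

    def __cmp__(self, other):
        # Cross-multiply: a/b > c/d exactly when a*d > b*c.
        diff = (self.numerator * other.denominator -
                other.numerator * self.denominator)
        return diff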
As an exercise, finish the implementation of the Fraction class so that it handles subtraction, division and exponentiation.
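As a possible starting point for the subtraction part of the exercise, negation plus the existing __add__ is enough; a separate __rsub__ is still needed because subtraction is not commutative. These sketch methods would go inside the Fraction class and only use the standard Python operator hooks already discussed:

    def __neg__(self):
        # Unary minus: negate the numerator.
        return Fraction(-self.numerator, self.denominator)

    def __sub__(self, other):
        # a - b is a + (-b); reuse __add__ after converting integers.
        if isinstance(other, int):
            other = Fraction(other)
        return self + (-other)

    def __rsub__(self, other):
        # Called for "4 - Fraction(5, 6)": the order of the operands matters.
        return Fraction(other) + (-self)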
http://www.greenteapress.com/thinkpython/thinkCSpy/html/app02.html
Sampling

The sampling of a waveform must be done at a high enough speed in order to avoid false data. The sampling works like a frequency mixing with half the sampling frequency and overtones of that. Corresponding to mirror frequencies in frequency mixing are the alias frequencies that occur in sampling. The aliasing is a phenomenon produced by the sampling, and it is there regardless of whether the data is used for a digital filter or for FFT or just a digital recording. To avoid aliasing, no signal at frequencies above a certain frequency may reach the sampling input. Depending on the intended use of the digital data, the permissible maximum frequency may be higher than the Nyquist frequency, which equals half the sampling frequency. Look here for more details on sampling and anti-aliasing filtering.

The digital Data

Let us start at some time t = start time, and sample data at 8kHz. We label the data points 0,1,2,3,4..... This means that we will have a row of numbers stored in the computer memory, and these numbers are x( 0 ), x( 1 ), x( 2 ), x( 3 ), .... These numbers are just numbers, but they contain all information necessary to reproduce exactly the original continuous waveform x( t ). Starting at some particular time, with the corresponding sample point number k, we pick N consecutive points. These points, x( k ), x( k + 1 )..... x( k + N - 1 ) form an array A[k] of length N, where k is the number of the first sample point in A[k]. For the discussion here, comparing FFT to digital filters, the elements of A[k] are denoted A[k](m), where m is the point number within A[k]. Obviously m is in the interval 0 to N-1.

The Fourier Transform

Any array of N points can be expressed as a sum of N arrays, each one describing a sine wave. There is a one-to-one correspondence between the function of time, the array of N points, and the N amplitudes and phases that describe the N sine wave functions that, when added together, become the time function. The following basic computer program will calculate the fourier transform of an array X of length N:

FOR I=0 TO N/2-1

This is the fourier transform for real valued functions. There are better algorithms, known as FFT algorithms, but they do exactly the same thing. The FFT looks up the sine function values in a table, and it avoids doing the same multiplication over and over again as is done in the simple program above. The result however comes out identical, so for the purpose of discussion the above computer program may illustrate the fourier transform. The fourier transform for complex valued functions is more efficient to use, but the final result is identical. Look here for details: The Complex Fourier Transform

Frequency Response of the Fourier Transform

Each amplitude component in the fourier transform corresponds to a sine wave. Each sine wave has exactly 0,1,2... up to N/2 periods within the time spanned by the N samples. A sine wave at some intermediate frequency will be represented in the fourier transform as several different frequencies present at the same time. If we feed a sine wave signal into a 512 point fourier transform, and sweep the frequency, the spectra obtained look like shown in figure 1. The broadening is usually explained by saying that the fourier transform can only represent periodic functions. The discontinuity represented by the mismatch between the beginning and the end of the input data creates "keying clicks" that have a wide spectrum.
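Only the first line of that BASIC listing appears above. As an illustration of the same idea, here is a minimal Python sketch of a direct (slow, O(N^2)) real-valued fourier transform that returns the cosine and sine sums for each of the N/2 frequencies; the variable names are mine and the original program's details may differ:

import math

def slow_real_dft(x):
    """Direct fourier transform of a real-valued array x of length N.

    For each frequency i (0 .. N/2-1) accumulate the correlation of x with
    a cosine and a sine having exactly i periods across the N samples.
    An FFT produces identical numbers, just far more efficiently.
    """
    N = len(x)
    re, im = [], []
    for i in range(N // 2):
        c = sum(x[j] * math.cos(2 * math.pi * i * j / N) for j in range(N))
        s = sum(x[j] * math.sin(2 * math.pi * i * j / N) for j in range(N))
        re.append(c)
        im.append(s)
    return re, im   # amplitude = sqrt(re^2 + im^2), phase = atan2(im, re)

# A test tone with exactly 5 periods across 64 samples shows up in bin 5 only.
N = 64
tone = [math.sin(2 * math.pi * 5 * j / N) for j in range(N)]
re, im = slow_real_dft(tone)
print(round(im[5], 3))   # about N/2 = 32.0; all other bins are near zero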
If the input data is white noise with only a very weak signal, the broadening due to mismatch at the ends has little influence, but if there is some signal well above the noise, the broadening of it will cause extra noise that may hide weak signals at moderate frequency separations.

Windowing

The problem of broadening of signals due to mismatch at the ends is solved by windowing. The trick is simple: force the points to zero at both ends by multiplication by some function that gradually goes to zero at both ends. Obviously a string of zeroes at one end will give a perfect match with the string of zeroes at the other. Windowing removes the tails that stretch far away from the centre frequency, and thus it gives a very significant improvement of the dynamic range. When windowing is used, the resolution becomes lower. The frequency response becomes broader also for sine waves that fit exactly to the fourier frequencies. On the other hand the broadening is controlled and independent of whether the signal is right at a fourier frequency or somewhere between. An infinite number of window functions is possible. As a demonstration, look at figures 2 to 4. The stop band rejection is greatly improved by windowing. Each transform uses fewer points, so the bandwidth increases. The window function greatly improves the shape of the filters in the equivalent filter bank, and it also makes the equivalent filters overlap. The windowing is illustrated in the basic code below, which of course is very inefficient, but gives a result identical to a properly written FFT with a sine squared window.

FOR I=0 TO N/2-1

Sliding FFT

When the fourier transform is used without a window function, it is natural to use each point only once. With the notations presented above, this means that the consecutive input arrays for the windowless FFT will be:

A[0], A[N], A[2N], A[3N], ...

When a window function is used, some points have to be reused, because otherwise the points that happen to be the last or the first point in one of the A arrays will be multiplied by zero, and thus have no effect on the final result. From an information theoretical point of view, all points must be equally significant for the final result, and throwing away something like half the input data has to be a serious waste of S/N. The consecutive input arrays to the windowed FFT procedure will be:

A[0], A[N/K], A[2N/K], A[3N/K], ...

K is a number between 2 and N, and how large it has to be depends on the window function. If K=1, each point is used only one time, and the input data fits the windowless FFT. If K=N, each point is used N times. This would always be serious overkill, but it is interesting to look at what the output would be. As an illustration, the basic program for the windowed fourier transform given above is modified to produce an output for one frequency only, but with one output data point for each input data point, corresponding to K=N. This means that one frequency in the FFT is selected, and consequently the process will be some kind of digital filter. Let the frequency of interest correspond to the point IFRQ in the FFT output.

1 FOR J=0 TO N-2

What we see here is a computer program that makes a FIR filter (Finite-duration Impulse Response). In fact it is two FIR filters with a 90 degree phase shift between them, but we may neglect SUMQ and just route SUMI through a D/A converter to our headphones.
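The one-frequency windowed listing is likewise truncated above. Based on the description (a sin^2 window and the impulse response quoted in the next paragraph), a rough Python sketch of the computation that produces SUMI and SUMQ for the selected bin IFRQ might look like this; the details are guesses, not the author's code:

import math

def single_bin(x, ifrq):
    """Windowed correlation of N samples x with the fourier frequency IFRQ.

    Returns (SUMI, SUMQ): the in-phase and quadrature sums described in the
    text. The sin^2 window forces both ends of the block toward zero.
    """
    N = len(x)
    sumi = sumq = 0.0
    for j in range(N):
        window = math.sin(math.pi * j / N) ** 2      # sin^2 window
        phase = 2 * math.pi * ifrq * j / N
        sumi += x[j] * window * math.sin(phase)
        sumq += x[j] * window * math.cos(phase)
    return sumi, sumq

# Evaluated on a sliding block of input samples, SUMI alone is already the
# output of a narrow audio-band FIR filter centred on bin IFRQ.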
A sine wave with amplitude 1 at the input will have an amplitude of about N/2 (depends on the window), so if the noise is allowed to have a maximum amplitude of about 2 bits below the maximum level of the A/D converter, and if we want the output to saturate for a sine wave 6dB below, SUMI has to be divided by N/16 before it is sent to the loudspeaker. SIN(IFRQ*J*2*3.141593/N)*SIN(J*3.141593/N)^2 is the impulse response of this digital filter, (J goes from 0 to 2N-1) and we immediately see that the window chosen for the FFT becomes the magnitude response of this FIR filter. When K=N, the complete fourier transform is a filter bank with N/2 evenly distributed equal digital filters. An average power spectrum is obtained by computing the average of the output signal squared for each frequency. Such average power spectra is the best method for detecting the presence of weak signals in noise. For maximum sensitivity, the length of the window function (in time) should match the coherence length of the expected signal. Too long windows make no harm, but require more computing resources without improving sensitivity. Using the complex amplitude Consider the filter above producing SUMI and SUMQ. This complex pair gives a complex amplitude, and this complex number describes the output from the FIR filter. Now, since the filter is narrow, the output has to be a narrow band signal, and consequently SUMI and SUMQ do not change fast with time. The actual numbers change because the reference for the phase angle they describe changes, but if this is taken into account, they change very slowly. This can be used to update SUMI and SUMQ relatively infrequently. One way to use the complex amplitude is to select a frequency within the FFT, (or a few frequencies if more bandwidth is desired) The filtered output signal is then obtained from the centre part of the backwards fourier transform. If the phase is properly managed, successive inverse transforms will match each other and form a nice filtered output signal. If K is made a little too small, each sequence becomes a little too long, and the matching becomes less good. This would create wide band noise, while saving a lot of computing time. A simple band pass filter at the output (digital, before the D/A or analog after it) will restore a nice signal. Another way is to use SUMI and SUMQ to set the amplitude and phase of a local tone oscillator at a fixed frequency. The amplitude and phase will be updated every (N/K)'th sample. A quite conventional analog filter at the selected fixed frequency will conveniently remove high frequency clicks which makes very low values for K give satisfactory results. Demo program for sliding FFTThe "basic program" below is intended to show how the system I currently use for weak signal communication works. The system is implemented in a TMS320C25, which provides two complete sets of FFT's, one for each polarisation of my cross yagi system, and a 80186 that does the rest (and is nearly idle). How to combine the two FFT's coming from two orthogonal polarisations is described here: Adaptive polarisation The algorithm described below, partly as plain text and partly as basic code, will produce a bandwidth that depends on the window length. For a sin squared window, a suitable value for K is N/4. The window I use with N=512 and K=128 is slightly more rectangular and produces a bandwidth of about 17 Hz. 
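To make the sliding-window bookkeeping concrete, here is a compact Python/NumPy sketch that hops through a recording, applies a sin^2 window, takes an FFT, and accumulates an average power spectrum. The parameter choices are only illustrative; this is not the author's TMS320C25/80186 implementation:

import numpy as np

def average_power_spectrum(x, N=512, hop=128):
    """Sliding windowed FFT: average |FFT|^2 over overlapping blocks.

    hop < N means samples are reused, as required when a window is applied;
    a hop of N/4 with a sin^2-like window roughly follows the text's example.
    """
    window = np.sin(np.pi * np.arange(N) / N) ** 2
    avg = np.zeros(N // 2)
    blocks = 0
    for start in range(0, len(x) - N + 1, hop):
        spectrum = np.fft.rfft(x[start:start + N] * window)[:N // 2]
        avg += np.abs(spectrum) ** 2
        blocks += 1
    return avg / max(blocks, 1)

# Example: a weak 1 kHz tone buried in noise, sampled at 8 kHz for 10 seconds.
fs = 8000.0
t = np.arange(fs * 10) / fs
x = 0.1 * np.sin(2 * np.pi * 1000 * t) + np.random.randn(t.size)
spectrum = average_power_spectrum(x)
print(spectrum.argmax() * fs / 512)   # peak appears near 1000 Hz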
If you wish to listen to it, look here: Station XX, a demonstration of a weak EME signal This algorithm has two outputs, one for the screen, which is the average power spectrum, and one for the loudspeaker/headphones. Very weak signals are easily seen on the average power spectrum, and the program assumes that the operator has pointed the variable IFREQ to a point close to one of the possible peaks in the power spectrum. The "program" below is of course only intended to illustrate the method. The real code does not multiply with numbers known to be zero, and it has a buffer allowing the average power spectrum to be calculated from points both before and after the time actually being processed for output. Further, only those transforms containing above average energy close to IFREQ, are used to calculate the average power spectrum. Subroutine getfft: Wait for N/4 new points to arrive (complex or real). Produce windowed fft of 3N/4 old plus N/4 new points. Store the NMAX complex amplitudes in ARE and AIM. NMAX=N/2-1 for real fft and N-1 for complex fft. Loop I to form average power spectrum. Update power spectrum on screen. Possibly small part each time. Locate the maximum in the power spectrum and get peak shape. If the level is below MINPWR, keep the old peak position and shape. rem Multiply the complex amplitudes by the normalised cleaned average Produce a single complex amplitude from those few that remain. It is not meaningful to do any sophisticated back transformation because the time between each new fft is too long in this example (K=N/4). As long as the signal is narrow banded enough to be compatible with update time interval, just summing points with alternating sign works fine. The whole procedure becomes equivalent to picking the complex amplitude at the peak, and it is independent on if the peak is centred on a point in the fft or between points. If two orthogonal antennas are used in a stereo system, here is the point to combine the two complex amplitudes REH,IMH and REV,IMV into a new set of complex amplitudes in such a way that one contains all the signal (and some noise) and the other contains only noise. Convert complex amplitude to amplitude and phase. Expand the dynamic range of amplitude by table lookup. Approximately exponential function to compensate for logarithmic characteristics of human ears. Restore complex amplitude (with proper phase) Send REH and IMH to the hardware to generate a sine wave with this complex amplitude (see diagram below). Then go back to start of loop. One way to produce a sine wave from digital complex amplitudes. The narrow output filter removes clicks from phase and amplitude discontinuities. It also converts the square waves into sine waves. I guess a soundblaster can produce a sine wave from complex amplitudes directly as two "musical instruments" being a sine and a cosine function. Then, clicks may be removed just by interpolation in the complex phase. It is not necessary to do fft's more often. Multichannel filteringThe sliding FFT can easily provide the complex amplitudes for several different stations simultaneously. The 80186 I use, can keep long buffers with complex amplitudes for about 15 stations simultaneously, and it has a lot of idle time. Of course it is very tempting to try to convert the contents of these buffers to ASCII code and put them all out on the screen. My attempts in this direction have not been very successful despite quite a lot of effort. 
I cannot make the computer nearly as clever as my ears in receiving weak EME signals. The main problem is the QSB (fading). Maybe in the future...
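The final step described above, turning a slowly updated complex amplitude back into an audible tone without clicks, can be sketched as follows (a toy Python/NumPy illustration of the idea, not the author's hardware implementation; simple linear interpolation stands in for the narrow output filter that removes discontinuities):

import numpy as np

def tone_from_complex_amplitudes(amps, samples_per_update, tone_hz, fs):
    """Generate an audio tone whose amplitude and phase follow a stream of
    complex amplitudes, one per update interval, interpolating between
    updates so that the output has no clicks."""
    n_total = len(amps) * samples_per_update
    t = np.arange(n_total) / fs
    carrier = np.exp(2j * np.pi * tone_hz * t)
    # Interpolate the real and imaginary parts between update instants.
    update_times = np.arange(len(amps)) * samples_per_update / fs
    re = np.interp(t, update_times, np.real(amps))
    im = np.interp(t, update_times, np.imag(amps))
    return np.real((re + 1j * im) * carrier)

# Example: a fading signal whose amplitude rises and falls over two seconds.
fs = 8000.0
updates = np.exp(1j * 0.3 * np.arange(125)) * np.hanning(125)
audio = tone_from_complex_amplitudes(updates, samples_per_update=128,
                                     tone_hz=600, fs=fs)
print(audio.shape)   # (16000,) -- two seconds of audio at 8 kHz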
http://www.sm5bsz.com/slfft/slfft.htm
Political structure: Military alliance
Historical era: World War II
 - Anti-Comintern Pact: 25 November 1936
 - Pact of Steel: 22 May 1939
 - Tripartite Pact: 27 September 1940
 - Dissolved: 2 September 1945

The Axis powers (German: Achsenmächte, Italian: Potenze dell'Asse, Japanese: 枢軸国 Sūjikukoku), also known as the Axis alliance, Axis nations, Axis countries, or just the Axis, was the alignment of nations that fought in the Second World War against the Allied forces. The Axis promoted the alliance as a part of a revolutionary process aimed at breaking the hegemony of plutocratic-capitalist Western powers and defending civilization from communism. The Axis grew out of the Anti-Comintern Pact, an anti-communist treaty signed by Germany and Japan in 1936. Italy joined the Pact in 1937. The "Rome–Berlin Axis" became a military alliance in 1939 under the Pact of Steel, with the Tripartite Pact of 1940 leading to the integration of the military aims of Germany and its two treaty-bound allies. At their zenith during World War II, the Axis powers presided over empires that occupied large parts of Europe, Africa, Asia, and the islands of the Pacific Ocean. The war ended in 1945 with the defeat of the Axis powers and the dissolution of the alliance. Like the Allies, membership of the Axis was fluid, with nations entering and leaving the fighting over the course of the war.

Origins and creation

The term "axis" is believed to have been first coined by Hungary's fascist prime minister Gyula Gömbös, who advocated an alliance of Germany, Hungary, and Italy. He worked as an intermediary between Germany and Italy to lessen differences between the two countries in order to achieve such an alliance. Gömbös' sudden death in 1936 while negotiating with Germany in Munich and the arrival of Kálmán Darányi, a non-fascist successor to him, ended Hungary's initial involvement in pursuing a trilateral axis. The lessening of differences between Germany and Italy led to the formation of a bilateral axis.

Initial proposals of a German-Italian alliance

Italy under Duce Benito Mussolini had pursued a strategic alliance of Italy with Germany against France since the early 1920s. Mussolini, prior to becoming head of government in Italy, as leader of the Italian Fascist movement, had advocated alliance with recently-defeated Germany after the Paris Peace Conference of 1919 settled World War I. He believed that Italy could expand its influence in Europe by allying with Germany against France. In early 1923, as a goodwill gesture to Germany, Italy secretly delivered weapons to Germany for use in the German Army, which had faced major disarmament under the provisions of the Treaty of Versailles. In September 1923, Mussolini offered German Chancellor Gustav Stresemann a "common policy": he sought German military support against potential French military intervention over Italy's diplomatic dispute with Yugoslavia over Fiume, should an Italian seizure of Fiume result in war between Italy and Yugoslavia. The German ambassador to Italy in 1924 reported that Mussolini saw a nationalist Germany as an essential ally to Italy against France, and hoped to tap into the desire within the German army and the German political right for a war of revenge against France. Italy since the 1920s had identified the year 1935 as a crucial date for preparing for a war against France, as 1935 was the year when Germany's obligations to the Treaty of Versailles were scheduled to expire.
However Mussolini at this time stressed one important condition that Italy must pursue in an alliance with Germany, that Italy "must ... tow them, not be towed by them". Italian foreign minister Dino Grandi in the early 1930s stressed the importance of "decisive weight", involving Italy's relations between France and Germany, in which he recognized that Italy was not yet a major power, but perceived that Italy did have strong enough influence to alter the political situation in Europe by placing the weight of its support onto one side or another. However Grandi stressed that Italy must seek to avoid becoming a "slave of the rule of three" in order to pursue its interests, arguing that although substantial Italo-French tensions existed, that Italy would not unconditionally commit itself to an alliance with Germany, just as it would neither unconditionally commit itself to an alliance with France over conceivable Italo-German tensions. Grandi's attempts to maintain a diplomatic balance between France and Germany were challenged by pressure from the French in 1932 who had begun to prepare an alliance with Britain and the United States against the then-potential threat of a revanchist Germany. The French government warned Italy that it had to choose whether to be on the side of the pro-Versailles powers or the side of the anti-Versailles revanchists. Grandi responded by stating that Italy would be willing to offer France support against Germany if France gave Italy its mandate over Cameroon and allowed Italy a free hand in Ethiopia. France refused Italy's proposed exchange for support, as it believed Italy's demands were unacceptable and the threat from Germany was not yet immediate. On 23 October 1932, Mussolini declared support for a Four Power Directorate that included Britain, France, Germany, and Italy, that would bring about an orderly treaty revision outside of what he considered the outmoded League of Nations. The proposed Directorate was pragmatically designed to reduce French hegemony in continental Europe, to reduce tensions between the great powers in the short term to buy Italy time from being pressured into a specific war alliance while at the same time being able to benefit from diplomatic deals on treaty revisions. Danube alliance, dispute over Austria In 1932, Gyula Gömbös and the Party of National Unity rose to power in Hungary, and immediately sought alliance with Italy. Gömbös sought to end Hungary's post-Treaty of Trianon borders, but knew that Hungary alone was not capable of challenging the Little Entente powers by forming an alliance with Austria and Italy. Mussolini was elated by Gömbös' offer of alliance with Italy, and both Mussolini and Gömbös cooperated in seeking to win over Austrian Chancellor Engelbert Dollfuss to joining an economic tripartite agreement with Italy and Hungary. During the meeting between Gömbös and Mussolini in Rome on 10 November 1932, the question came up over the sovereignty of Austria in regards to the predicted inevitable rise of the Nazi Party to power in Germany. Mussolini was worried over Nazi ambitions towards Austria, and indicated that at least in the short-term he was committed to maintaining Austria as a sovereign state. Italy had concerns over a Germany with Austria within it, laying land claims to German-populated territories of the South Tyrol (also known as Alto-Adige) within Italy that bordered Austria on the Brenner Pass. 
Gömbös responded to Mussolini by saying that as the Austrians primarily identified as Germans, that the Anschluss of Austria to Germany was inevitable, and advised that it would be better for Italy to have a friendly Germany on the border of the Brenner Pass than a hostile Germany bent on entering the Adriatic. Mussolini replied by expressing hope that the Anschluss could be postponed as long as possible until the breakout of a European war that he estimated would begin in 1938. In 1933, Adolf Hitler and the Nazi Party came to power in Germany. The first diplomatic visitor to meet Hitler was Gömbös who in a previous letter to Hitler within a day of Hitler being appointed Chancellor, in which Gömbös told the Hungarian ambassador to Germany to remind Hitler that "that ten years ago, on the basis of our common principles and ideology, we were in contact via Dr. Scheubner-Richter". Gömbös told the Hungarian ambassador to inform Hitler of Hungary's intentions "for the two countries to cooperate in foreign and economic policy". Hitler as Nazi leader had long advocated an alliance between Germany and Italy since the 1920s. Shortly after being appointed Chancellor, Hitler sent a personal message to Mussolini, declaring "admiration and homage" to him as well as declaring his anticipation of the prospects of German-Italian friendship and even alliance. Hitler was aware that Italy held concerns over potential German land claims on South Tyrol, and assured Mussolini that Germany was not interested in South Tyrol. Hitler in Mein Kampf had declared that South Tyrol was a non-issue considering the advantages that would be gained from a German-Italian alliance. After Hitler's rise to power, the Four Power Directorate proposal by Italy had been looked at with interest by Britain, but Hitler was not committed to it, resulting in Mussolini urging Hitler to consider the diplomatic advantages Germany would gain by breaking out of isolation by entering the Directorate and avoiding an immediate armed conflict. The Four Power Directorate proposal stipulated that Germany would no longer be required to have limited arms and would be granted the right to re-armament under foreign supervision in stages. Hitler completely rejected the idea of controlled rearmament under foreign supervision. Mussolini did not trust Hitler's intentions regarding Anschluss nor Hitler's promise of no territorial claims on South Tyrol. Mussolini informed Hitler that he was satisfied with the presence of the anti-Marxist government of Dollfuss in Austria, and warned Hitler that he was adamantly opposed to Anschluss. Hitler responded in contempt to Mussolini that he intended "to throw Dollfuss into the sea". With this disagreement over Austria, relations between Hitler and Mussolini steadily became more distant. Hitler attempted to break the impasse with Italy over Austria by sending Hermann Göring to negotiate with Mussolini in 1933 to convince Mussolini to press the Austrian government to appoint members of Austria's Nazis to the government. Göring claimed that Nazi domination of Austria was inevitable and that Italy should accept this, as well as repeating to Mussolini of Hitler's promise to "regard the question of the South Tyrol frontier as finally liquidated by the peace treaties". In response to Göring's visit with Mussolini, Dollfuss immediately went to Italy to counter any German diplomatic headway. 
Dollfuss claimed that his government was actively challenging Marxists in Austria and claimed that once the Marxists were defeated in Austria, that support for Austria's Nazis would decline. In 1934, Hitler and Mussolini met in person for the first time in Venice. The meeting did not proceed amicably, Hitler demanded that Mussolini compromise on Austria by pressuring Dollfuss to appoint Austrian Nazis his cabinet, in which Mussolini flatly refused the demand. In response, Hitler promised that he would accept Austria's independence for the time being, saying that due to the internal tensions in Germany (referring to sections of the Nazi SA that Hitler would soon kill in the Night of the Long Knives) that Germany could not afford to provoke Italy. Several weeks after the Venice meeting between Hitler and Mussolini, on 25 June 1934, Austrian Nazis assassinated Dollfuss. Mussolini was outraged as he held Hitler directly responsible for the assassination that violated Hitler's promise made only weeks ago to respect Austrian independence. Mussolini violently responded to the assassination of Dollfuss by rapidly deploying several army divisions and air squadrons to the Brenner Pass, and warned that a German move against Austria would result in war between Germany and Italy. Hitler responded by both denying responsibility for the Austrian Nazis' assassination of Dollfuss and issuing orders to dissolve all ties between the German Nazi Party and its Austrian branch which Germany claimed was responsible for the political crisis. Italy effectively abandoned diplomatic relations Germany, and turned to France to challenge Germany's intransigence by signing a Franco-Italian accord to protect Austrian independence. French and Italian military staff discussed possible military cooperation involving a war with Germany should Hitler dare to attack Austria. As late as May 1935, Mussolini spoke of his desire to destroy Hitler. Relations between Germany and Italy recovered due to Hitler's support of Italy's invasion of Ethiopia in 1935, while other countries condemned the invasion and advocated sanctions against Italy. Development of German-Italian-Japanese alliance Interest between Germany and Japan in forming an alliance began when Japanese diplomat Oshima Hiroshi visited Joachim von Ribbentrop in Berlin in 1935. Oshima informed von Ribbentrop of Japan's interest in forming a German-Japanese alliance against the Soviet Union. Von Ribbentrop expanded on Oshima's proposal by advocating that the alliance be based in a political context of a pact to oppose the Comintern. The proposed pact was met with mixed reviews in Japan, with a faction of ultra-nationalists within the government supporting the pact while the Japanese Navy and the Japanese Foreign Ministry were staunchly opposed to the pact. There was great concern in the Japanese government that such a pact with Germany could alienate Japan's relations with Britain, endangering years of a beneficial Anglo-Japanese accord, that had allowed Japan to ascend in the international community in the first place. The response to the pact was met with similar division in Germany; while the proposed pact was popular amongst the upper echelons of the Nazi Party, it was opposed by many in the German Foreign Ministry, the German Army, and the German business community who held financial interests in China that Japan was hostile to. Italy upon learning of German-Japanese negotiations, also began to take an interest in forming an alliance with Japan. 
Italy had hoped that due to Japan's long-term close relations with Britain, that an Italo-Japanese alliance could pressure Britain into adopting a more accommodating stance towards Italy in the Mediterranean. In the summer of 1936, Ciano informed Japanese Ambassador to Italy, Sugimura Yotaro, "I have heard that a Japanese-German agreement concerning the Soviet Union has been reached, and I think it would be natural for a similar agreement to be made between Italy and Japan". Initially Japan's attitude towards Italy's proposal was generally dismissive, viewing a German-Japanese alliance against the Soviet Union as imperative while regarding an Italo-Japanese alliance as secondary, as Japan anticipated that an Italo-Japanese alliance would antagonize Britain that had condemned Italy's invasion of Ethiopia. This attitude by Japan towards Italy altered in 1937 after the League of Nations condemned Japan for aggression in China and faced international isolation, while Italy remained favourable to Japan. As a result of Italy's support for Japan against international condemnation, Japan took a more positive attitude towards Italy and offered proposals for a non-aggression or neutrality pact with Italy. The "Axis powers" formally took the name after the Tripartite Pact was signed by Germany, Italy, and Japan on 27 September 1940, in Berlin. The pact was subsequently joined by Hungary (20 November 1940), Romania (23 November 1940), Slovakia (24 November 1940), and Bulgaria (1 March 1941). Economic resources The total Axis population in 1938 was 258.9 million, while the total Allied population (excluding the Soviet Union and the United States, who later joined the Allies) was 689.7 million. Thus the Allied powers at that time outnumbered the Axis powers in terms of population by 2.7 to 1. The leading Axis states had the following domestic populations: Germany (including recently annexed Austria, with a population of 6.8 million) had 75.5 million, Japan (excluding its colonies) had a population of 71.9 million, and Italy (excluding its colonies) had 43.4 million. The United Kingdom (excluding its colonies) had a domestic population of 47.5 million and France (excluding its colonies) had 42 million. The wartime gross domestic product (GDP) of the Axis powers combined was $911 billion at its highest in 1941 in international dollars by 1990 prices. The total GDP of the Allied powers in 1941 was $1,798 billion – with the United States alone providing $1,094 billion, more GDP than all the Axis powers combined. The burden of the war upon the economies of the participating countries has been measured through the percentage of gross national product (GNP) devoted to military expenditures. Nearly one-quarter of Germany's GNP was committed to the war effort in 1939, and this rose three-quarters of GNP in 1944, prior to the collapse of the economy. In 1939, Japan committed 22 percent of its GNP to its war effort in China; this rose to three-quarters of Japan's GNP in 1944. Italy did not mobilize its economy; its GNP committed to the war effort remained at prewar levels. Italy and Japan lacked industrial capacity; their economies were small, dependent on international trade, external sources of fuel and other industrial resources. As a result, Italian and Japanese mobilization remained low, even by 1943. Among the three major Axis powers – Germany, Italy, and Japan – Japan had the lowest per capita income, while Germany and Italy had an income level comparable to the United Kingdom. 
Major powers War justifications Führer Adolf Hitler in 1941 described the outbreak of World War II as the fault of the intervention of Western powers against Germany during its war with Poland, describing it as the result of "the European and American warmongers". Hitler denied accusations by the Allies that he wanted a world war, and invoked anti-Semitic claims that the war was wanted and provoked by politicians either of Jewish origin or associated with Jewish interests. However Hitler clearly had designs for Germany to become the dominant and leading state in the world, such as his intention for Germany's capital of Berlin to become the Welthauptstadt ("World Capital") renamed as Germania. The German government also justified its actions by claiming that Germany inevitably needed to territorially expand because it was facing an overpopulation crisis that Hitler described: "We are overpopulated and cannot feed ourselves from our own resources". Thus expansion was justified as an inevitable necessity to provide lebensraum ("living space") for the German nation and end the country's overpopulation within existing confined territory, and provide resources necessary to its people's well-being. Since the 1920s, the Nazi Party publicly promoted the expansion of Germany into territories held by the Soviet Union. On the issue of Germany's war with Poland that provoked Allied intervention against Germany, Germany claimed that it had sought to resolve its dispute with Poland over its German minorities particularly within the densely German-populated "Polish Corridor" by an agreement in 1934 between Germany and Poland whereby Poland would end its assimilationist policies towards Germans in Poland; however Germany later complained that Poland was not upholding the agreement. In 1937, Germany condemned Poland for violating the minorities agreement, but proposed that it would accept a resolution whereby Germany would reciprocally accept the Polish demand for Germany abandon assimilation of Polish minorities if Poland upheld its agreement to abandon assimilation of Germans. Germany's proposal was met with resistance in Poland, particularly by the Polish Western Union (PZZ) and the National Democratic party, with Poland only agreeing to a watered down version of the Joint Declaration on Minorities, on 5 November 1937. On the same day, Hitler declared his intention to prepare for a war to destroy Poland. Germany used legal precedents to justify its intervention against Poland and annexation of the German-majority Free City of Danzig (led by a local Nazi government that sought incorporation into Germany) in 1939 was justified because of Poland repeatedly violating the sovereignty of Danzig. Germany noted one such violation as being in 1933 when Poland sent additional troops into the city in violation of the limit of Polish troops admissible to Danzig as agreed to by treaty. After Poland had agreed only to a watered-down agreement to guarantee that its German minorities would not be assimilated, Hitler decided that the time had come to prepare for war with Poland to forcibly implement lebensraum by destroying Poland to allow for German settlement in its territories. 
Although Germany had prepared for war with Poland in 1939, Hitler still sought to use diplomatic means along with threat of military action to pressure Poland to make concessions to Germany involving Germany annexing Danzig without Polish opposition, and believed that Germany could gain concessions from Poland without provoking a war with Britain or France. Hitler believed that Britain's guarantee of military support to Poland was a bluff, and with a German-Soviet agreement on both countries recognizing their mutual interests involving Poland. The Soviet Union had diplomatic grievances with Poland since the Soviet-Polish War of 1919–1921 in which the Soviets were pressured to cede Western Belarus and Western Ukraine to Poland after intense fighting in those years over the territories, and the Soviet Union sought to regain those territories. Hitler believed that a conflict with Poland would be an isolated conflict, as Britain would not engage in a war with both Germany and the Soviet Union. Poland rejected the German demand for negotiations on the issue of proposed German annexation of Danzig, and Germany in response prepared a general mobilization on the morning of 30 August 1939. Hitler thought that the British would accept Germany's demands and pressure Poland to agree to them. At midnight 30 August 1939, German foreign minister Joachim Ribbentrop was expecting the arrival of the British ambassador Nevile Henderson as well as a Polish plenipotentiary to negotiate terms with Germany. Only Henderson arrived, and Henderson informed Ribbentrop that no Polish plenipotentiary was arriving. Ribbentrop became extremely upset and demanded the immediate arrival of a Polish diplomat, informing Henderson that the situation was "damned serious!", and read out to Henderson Germany's demands that Poland accept Germany annexing Danzig as well as Poland granting Germany the right to connect East Prussia to mainland Germany via an extraterritorial highway and railway that passed through the Polish Corridor, and a plebiscite to determine whether the Polish Corridor (with a German majority population) should remain within Poland or be transferred to Germany. Germany justified its invasion of the Low Countries of Belgium, Luxembourg, and the Netherlands in May 1940 by claiming that it suspected that Britain and France were preparing to use the Low Countries to launch an invasion of the industrial Ruhr region of Germany. When war between Germany versus Britain and France appeared likely in May 1939, Hitler declared that the Netherlands and Belgium would need to be occupied, saying: "Dutch and Belgian air bases must be occupied ... Declarations of neutrality must be ignored". In a conference with Germany's military leaders on 23 November 1939, Hitler declared to the military leaders that "We have an Achilles heel, the Ruhr", and said that "If England and France push through Belgium and Holland into the Ruhr, we shall be in the greatest danger", and thus claimed that Belgium and the Netherlands had to be occupied by Germany to protect Germany from a British-French offensive against the Ruhr, irrespective of their claims to neutrality. The issue of Germany's invasion of the Soviet Union in 1941 involved the issue of lebensraum and anti-communism. 
Hitler in his early years as Nazi leader had claimed that he would be willing to accept friendly relations with Russia on the tactical condition that Russia agree to return to the borders established by the German-Russian peace agreement of the Treaty of Brest-Litovsk signed by Vladimir Lenin of the Russian Soviet Federated Socialist Republic in 1918 which gave large territories held by Russia to German control in exchange for peace. Hitler in 1921 had commended the Treaty of Brest Litovsk as opening the possibility for restoration of relations between Germany and Russia, saying: Through the peace with Russia the sustenance of Germany as well as the provision of work were to have been secured by the acquisition of land and soil, by access to raw materials, and by friendly relations between the two lands.—Adolf Hitler, 1921 Hitler from 1921 to 1922 evoked rhetoric of both the achievement of lebensraum involving the acceptance of a territorially reduced Russia as well as supporting Russian nationals in overthrowing the Bolshevik government and establishing a new Russian government. However Hitler's attitudes changed by the end of 1922, in which he then supported an alliance of Germany with Britain to destroy Russia. Later Hitler declared how far into Russia he intended to expand Germany to: Asia, what a disquieting reservoir of men! The safety of Europe will not be assured until we have driven Asia back behind the Urals. No organized Russian state must be allowed to exist west of that line.—Adolf Hitler. Policy for lebensraum planned mass expansion of Germany eastwards to the Ural Mountains. Hitler planned for the "surplus" Russian population living west of the Urals were to be deported to the east of the Urals. After Germany invaded the Soviet Union in 1941, the Nazi regime's stance towards an independent, territorially-reduced Russia was affected by pressure beginning in 1942 from the German Army on Hitler to endorse a Russian national liberation army led by Andrey Vlasov that officially sought to overthrow Joseph Stalin and the communist regime and establish a new Russian state. Initially the proposal to support an anti-communist Russian army was met with outright rejection by Hitler, however by 1944 as Germany faced mounting losses on the Eastern Front, Vlasov's forces were recognized by Germany as an ally, particularly by Reichsführer-SS Heinrich Himmler. After the Japanese attack on Pearl Harbor and the outbreak of war between Japan and the United States, Germany supported its ally Japan by declaring war on the US. During the war Germany denounced the Atlantic Charter and the Lend-Lease Act that the US adopted to support the Allied powers prior to entry into the alliance, as imperialism directed at dominating and exploit countries outside of the continental Americas. Hitler denounced American President Roosevelt's invoking of the term "freedom" to describe US actions in the war, and accused the American meaning of "freedom" to be the freedom for democracy to exploit the world and the freedom for plutocrats within such democracy to exploit the masses. At the end of World War I, German citizens felt that their country had been humiliated as a result of the Treaty of Versailles, in which Germany was forced to pay enormous reparations payments and forfeit German-populated territories and all its colonies. The pressure of the reparations on the German economy led to hyperinflation during the early 1920s. 
In 1923 the French occupied the Ruhr region when Germany defaulted on its reparations payments. Although Germany began to improve economically in the mid-1920s, the Great Depression created more economic hardship and a rise in political forces that advocated radical solutions to Germany's woes. The Nazis, under Adolf Hitler, promoted the nationalist stab-in-the-back legend stating that Germany had been betrayed by Jews and Communists. The party promised to rebuild Germany as a major power and create a Greater Germany that would include Alsace-Lorraine, Austria, Sudetenland, and other German-populated territories in Europe. The Nazis also aimed to occupy and colonize non-German territories in Poland, the Baltic states, and the Soviet Union, as part of the Nazi policy of seeking Lebensraum ("living space") in eastern Europe. Germany renounced the Versailles treaty and remilitarized the Rhineland in March 1936. Germany had already resumed conscription and announced the existence of a German air force in 1935. Germany annexed Austria in 1938, the Sudetenland from Czechoslovakia, and the Memel territory from Lithuania in 1939. Germany then invaded the rest of Czechoslovakia in 1939, creating the Protectorate of Bohemia and Moravia and the country of Slovakia. On 23 August 1939, Germany and the Soviet Union signed the Molotov-Ribbentrop Pact, which contained a secret protocol dividing eastern Europe into spheres of influence. Germany's invasion of its part of Poland under the Pact eight days later triggered the beginning of World War II. By the end of 1941, Germany occupied a large part of Europe and its military forces were fighting the Soviet Union, nearly capturing Moscow. However, crushing defeats at the Battle of Stalingrad and the Battle of Kursk devastated the German armed forces. This, combined with Western Allied landings in France and Italy, led to a three-front war that depleted Germany's armed forces and resulted in Germany's defeat in 1945. There was substantial internal opposition within the German military to the Nazi regime's aggressive strategy of rearmament and foreign policy in the 1930s. From 1936 to 1938, Germany's top four military leaders, Ludwig Beck, Werner von Blomberg, Werner von Fritsch, Walther von Reichenau, all were in opposition to the Nazi regime's rearmament strategy and its foreign policy. They criticized the hurried nature of rearmament, the lack of planning, Germany's insufficient resources to carry out a war, the dangerous implications of Hitler's foreign policy, and the increasing subordination of the army to the Nazi Party's rules. These four military leaders were outspoken and public in their opposition to these tendencies. The Nazi regime responded with contempt to the four military leaders' opposition, and Nazi members brewed a false crass scandal that alleged that the two top army leaders von Blomberg and von Fritsch were homosexual lovers, in order to pressure them to resign. Though started by lower ranking Nazi members, Hitler took advantage of the scandal by forcing von Blomberg and von Fritsch to resign and replaced them with opportunists who were subservient and loyal to him. Shortly afterwards Hitler announced on 4 February 1938 that he was taking personal command over Germany's military with the new High Command of the Armed Forces with the Führer as its head. 
The opposition to the Nazi regime's aggressive foreign policy in the military became so strong from 1936 to 1938, that considerations of overthrowing the Nazi regime were discussed within the upper echelons of the military and remaining non-Nazi members of the German government. Minister of Economics, Hjalmar Schacht met with Beck in 1936 in which Schacht declared to Beck that he was considering an overthrow of the Nazi regime and was inquiring what the stance was by the German military on support of an overthrow of the Nazi regime. Beck was lukewarm to the idea, and responded that if a coup against the Nazi regime began with support at the civilian level, the military would not oppose it. Schacht considered this promise by Beck to be inadequate because he knew that without the support of the army, any coup attempt would be crushed by the Gestapo and the SS. However by 1938, Beck became a firm opponent of the Nazi regime out of his opposition to Hitler's military plans of 1937–38 that that told the military to prepare for the possibility of a world war as a result of German annexation plans for Austria and Czechoslovakia. Colonies and dependencies In Europe Belgium initially was under a military occupation authority from 1940 to 1944, however Belgium and its Germanic population were planned to be incorporated into the planned Greater Germanic Reich, this was initiated by the creation of Reichskommissariat Belgien, an authority run directly by the German government that sought the incorporation of the territory into the planned Germanic Reich. However Belgium was soon occupied by Allied forces in 1944. The Protectorate of Bohemia and Moravia was a protectorate and dependency considered an autonomous region within the sovereign territory of Germany. The General Government was the name given to the territories of occupied-Poland that were not directly annexed into German provinces, but was like Bohemia and Moravia, a dependency and autonomous region within the sovereign territory of Germany. Reichskommissariat Niederlande was an occupation authority and territory established in the Netherlands in 1940 designated as a colony to be incorporated into the planned Greater Germanic Reich. Reichskommissariat Norwegen was established in Norway in 1940. Like the Reichskommissariats in Belgium and the Netherlands, its Germanic peoples were to be incorporated into the Greater Germanic Reich. In Norway, the Quisling regime, headed by Vidkun Quisling, was installed by the Germans as a client regime during the occupation, while king Haakon VII and the legal government were in exile. Quisling encouraged Norwegians to serve as volunteers in the Waffen-SS, collaborated in the deportation of Jews, and was responsible for the executions of members of the Norwegian resistance movement. About 45,000 Norwegian collaborators joined the pro-Nazi party Nasjonal Samling (National Union), and some police units helped arrest many of Norway's Jews. However, Norway was one of the first countries where resistance during World War II was widespread before the turning point of the war in 1943. After the war, Quisling and other collaborators were executed. Quisling's name has become an international eponym for traitor. Reichskommissariat Ostland was established in the Baltic region in 1941. 
Unlike the western Reichskommissariats that sought the incorporation of their majority Germanic peoples, Ostland were designed for settlement by Germans who would displace the majority non-Germanic peoples living there, as part of lebensraum. Reichskommissariat Ukraine was established in Ukraine in 1941. It like Ostland was slated for settlement by Germans. War justifications The Japanese government justified its actions by claiming that it was seeking to unite East Asia under Japanese leadership in a Greater East Asia Co-Prosperity Sphere, that would free East Asians from domination and rule by clients of Western imperialism and particularly American imperialism. Japan invoked themes of Pan-Asianism and said that the Asian people needed to be free from Anglo-American influence. The United States opposed the Japanese war in China, and recognized Chiang Kai-Shek's Nationalist Government as the legitimate government of China. As a result, the United States sought to bring the Japanese war effort to a complete halt by imposing a full embargo on all trade between the United States and Japan. Japan was dependent on the United States for 80 percent of its petroleum, so the embargo resulted in an economic and military crisis for Japan, as Japan could not continue its war effort against China without access to petroleum. In order to maintain its military campaign in China with the major loss of petroleum trade with the United States, Japan saw the best means to secure an alternative source of petroleum in the petroleum-rich and natural-resources-rich Southeast Asia. This threat of retaliation by Japan to the total trade embargo by the United States was known by the American government, including American Secretary of State Cordell Hull, who was negotiating with the Japanese to avoid a war, feared that the total embargo would pre-empt a Japanese attack on the Dutch East Indies. Japan identified the American Pacific fleet based in Pearl Harbor as the principal threat to its designs to invade and capture Southeast Asia. Thus Japan initiated the attack on Pearl Harbor on 7 December 1941 as a means to inhibit an American response to the invasion of Southeast Asia, and buy time to allow Japan to consolidate itself with these resources to engage in a total war against the United States, and force the United States to accept Japan's acquisitions. The Empire of Japan, a constitutional monarchy ruled by Hirohito, was the principal Axis power in Asia and the Pacific. The Japanese constitution prescribed that "the Emperor is the head of the Empire, combining in Himself the rights of sovereignty, and exercises them, according to the provisions of the present Constitution" (article 4) and that "The Emperor has the supreme command of the Army and the Navy" (article 11). Under the emperor were a political cabinet and the Imperial General Headquarters, with two chiefs of staff. At its height, Japan's Greater East Asia Co-Prosperity Sphere included Manchuria, Inner Mongolia, large parts of China, Malaysia, French Indochina, Dutch East Indies, The Philippines, Burma, some of India, and various Pacific Islands in the central Pacific. As a result of the internal discord and economic downturn of the 1920s, militaristic elements set Japan on a path of expansionism. As the Japanese home islands lacked natural resources needed for growth, Japan planned to establish hegemony in Asia and become self-sufficient by acquiring territories with abundant natural resources. 
Japan's expansionist policies alienated it from other countries in the League of Nations and by the mid-1930s brought it closer to Germany and Italy, who had both pursued similar expansionist policies. Cooperation between Japan and Germany began with the Anti-Comintern Pact, in which the two countries agreed to ally to challenge any attack by the Soviet Union. Japan entered into conflict against the Chinese in 1937. The Japanese invasion and occupation of parts of China resulted in numerous atrocities against civilians, such as the Nanking massacre and the Three Alls Policy. The Japanese also fought skirmishes with Soviet–Mongolian forces in Manchukuo in 1938 and 1939. Japan sought to avoid war with the Soviet Union by signing a non-aggression pact with it in 1941. Japan's military leaders were divided on Japan's diplomatic relationships with Germany and Italy and the attitude towards the United States. The Imperial Japanese Army was in favour of war with the United States, while the Imperial Japanese Navy was generally strongly opposed. When Prime Minister of Japan General Hideki Tojo refused American demands that Japan withdraw its military forces from China, a confrontation became more likely. War with the United States was being discussed within the Japanese government by 1940. Commander of the Combined Fleet Admiral Isoroku Yamamoto was outspoken in his opposition, especially after the signing of the Tripartite Pact, saying on 14 October 1940: "To fight the United States is like fighting the whole world. But it has been decided. So I will fight the best I can. Doubtless I shall die on board Nagato [his flagship]. Meanwhile Tokyo will be burnt to the ground three times. Konoe and others will be torn to pieces by the revengeful people, I [shouldn't] wonder." In October and November 1940, Yamamoto communicated with Navy Minister Oikawa, stating: "Unlike the pre-Tripartite days, great determination is required to make certain that we avoid the danger of going to war." With the European powers focused on the war in Europe, Japan sought to acquire their colonies. In 1940 Japan responded to the German invasion of France by occupying French Indochina. The Vichy France regime, a de facto ally of Germany, accepted the takeover. The Allied powers did not respond with war. However, the United States instituted an embargo against Japan in 1941 because of the continuing war in China. This cut off Japan's supply of scrap metal and oil needed for industry, trade, and the war effort. To isolate the American forces stationed in the Philippines and to reduce American naval power, the Imperial General Headquarters ordered an attack on the U.S. naval base at Pearl Harbor, Hawaii, on 7 December 1941. They also invaded Malaya and Hong Kong. Initially achieving a series of victories, by 1943 the Japanese forces were driven back towards the home islands. The Pacific War lasted until the atomic bombings of Hiroshima and Nagasaki in 1945. The Soviets formally declared war in August 1945 and engaged Japanese forces in Manchuria and northeast China.

Colonies and dependencies

In Asia

Korea was a Japanese protectorate and dependency formally established by the Japan–Korea Treaty of 1910. The South Pacific Mandate comprised territories granted to Japan in 1919 in the peace agreements of World War I, which designated to Japan the German South Pacific islands. Japan received these as a reward from the Allies of World War I, when Japan was allied against Germany.
Taiwan, then known as Formosa, was a Japanese dependency established in 1895.

War justifications

Duce Benito Mussolini described Italy's declaration of war against the Western Allies of Britain and France in June 1940 as follows: "We are going to war against the plutocratic and reactionary democracies of the West who have invariably hindered the progress and often threatened the very existence of the Italian people ...". Italy condemned the Western powers for enacting sanctions on Italy in 1935 for its actions in the Second Italo-Ethiopian War, which Italy claimed was a response to an act of Ethiopian aggression against tribesmen in Italian Eritrea in the Walwal incident of 1934. In 1938 Mussolini and foreign minister Ciano issued demands for concessions by France, particularly regarding the French colonial possessions of Djibouti, Tunisia and the French-run Suez Canal. Italy demanded a sphere of influence in the Suez Canal in Egypt, specifically demanding that the French-dominated Suez Canal Company accept an Italian representative on its board of directors. Italy opposed the French monopoly over the Suez Canal because, under the French-dominated Suez Canal Company, all Italian merchant traffic to its colony of Italian East Africa was forced to pay tolls upon entering the canal. Like Germany, Italy also justified its actions by claiming that it needed to expand territorially to provide spazio vitale ("vital space") for the Italian nation. Italy justified its intervention against Greece in October 1940 on the allegation that Greece was being used by Britain against Italy; Mussolini informed Hitler of this, saying: "Greece is one of the main points of English maritime strategy in the Mediterranean". Italy justified its intervention against Yugoslavia in 1941 by appealing both to Italian irredentist claims and to the fact that Albanian, Croatian, and Vardar Macedonian separatists did not wish to be part of Yugoslavia. Croatian separatism soared after the assassination of Croatian political leaders in the Yugoslav parliament in 1928, including the death of Stjepan Radić, and Italy endorsed the Croatian separatist Ante Pavelić and his fascist Ustaše movement, which was based and trained in Italy with the Fascist regime's support prior to the intervention against Yugoslavia. In the late 19th century, after Italian unification, a nationalist movement had grown around the concept of Italia irredenta, which advocated the incorporation into Italy of Italian-speaking areas under foreign rule. There was a desire to annex Dalmatian territories, which had formerly been ruled by the Venetians and which consequently had Italian-speaking elites. The intention of the Fascist regime was to create a "New Roman Empire" in which Italy would dominate the Mediterranean. In 1935–1936 Italy invaded and annexed Ethiopia and the Fascist government proclaimed the creation of the "Italian Empire". Protests by the League of Nations, especially the British, who had interests in that area, led to no serious action. Italy later faced diplomatic isolation from several countries. In 1937 Italy left the League of Nations and joined the Anti-Comintern Pact, which had been signed by Germany and Japan the preceding year. In March/April 1939 Italian troops invaded and annexed Albania. Germany and Italy signed the Pact of Steel on May 22. Italy entered World War II on 10 June 1940. In September 1940 Germany, Italy, and Japan signed the Tripartite Pact.
Italy was ill-prepared for war, despite the fact that it had been continuously involved in conflict since 1935, first with Ethiopia in 1935–1936 and then in the Spanish Civil War on the side of Francisco Franco's Nationalists. Military planning was deficient, as the Italian government had not decided which theatre would be the most important. Power over the military was overcentralized under Mussolini's direct control; he personally undertook to direct the ministry of war, the navy, and the air force. The navy did not have any aircraft carriers to provide air cover for amphibious assaults in the Mediterranean, as the Fascist regime believed that the air bases on the Italian Peninsula would be able to perform this task. Italy's army had outmoded artillery, and the armoured units used outdated formations not suited to modern warfare. The diversion of funds to the air force and navy to prepare for overseas operations meant less money was available for the army; the standard rifle was a design that dated back to 1891. The Fascist government failed to learn from mistakes made in Ethiopia and Spain; it ignored the implications of the Italian Fascist volunteer soldiers being routed at the Battle of Guadalajara in the Spanish Civil War. Military exercises by the army in the Po Valley in August 1939 disappointed onlookers, including King Victor Emmanuel III. Mussolini, who was angered by Italy's military unpreparedness, dismissed Alberto Pariani as Chief of Staff of the Italian military in 1939. Italy's only strategic natural resource was an abundance of aluminium. Petroleum, iron, copper, nickel, chrome, and rubber all had to be imported. The Fascist government's economic policy of autarky and a recourse to synthetic materials was not able to meet the demand. Prior to entering the war, the Fascist government sought to gain control over resources in the Balkans, particularly oil from Romania. The agreement between Germany and the Soviet Union to invade and partition Poland between them led Hungary, which bordered the Soviet Union after Poland's partition, and Romania to view Soviet invasion as an immediate threat, and both countries appealed to Italy for support beginning in September 1939. Italy, then still officially neutral, responded to the appeals by the Hungarian and Romanian governments for protection from the Soviet Union by proposing a Danube-Balkan neutrals bloc. The proposed bloc was designed to increase Italian influence in the Balkans. It met resistance from France, Germany, and the Soviet Union, which did not want to lose their influence in the Balkans; however Britain, which still hoped that Italy would not enter the war on Germany's side, supported the neutral bloc. The efforts to form the bloc failed by November 1939 after Turkey made an agreement that it would protect Allied Mediterranean territory, along with Greece and Romania. Mussolini refused to heed warnings from his minister of exchange and currency, Felice Guarneri, who said that Italy's actions in Ethiopia and Spain meant the nation was on the verge of bankruptcy. By 1939 military expenditures by Britain and France far exceeded what Italy could afford.
After entering the war in 1940, Italy was slated to be granted a series of territorial concessions from France that Hitler had agreed upon with Italian foreign minister Ciano, including the Italian annexation of claimed territories in southeastern France, a military occupation of southeastern France up to the river Rhone, and the receipt of the French colonies of Tunisia and Djibouti. However, on 22 June 1940 Mussolini suddenly informed Hitler that Italy was abandoning its claims "in the Rhone, Corsica, Tunisia, and Djibouti", instead requesting a demilitarized zone along the French border, and on 24 June Italy agreed to an armistice with the Vichy regime to that effect. Later, on 7 July 1940, the Italian government changed its decision, and Ciano attempted to make an agreement with Hitler to have Nice, Corsica, Tunisia, and Djibouti transferred to Italy; Hitler adamantly rejected any new settlement or separate French-Italian peace agreement for the time being, prior to the defeat of Britain in the war. However, Italy continued to press Germany for the incorporation of Nice, Corsica, and Tunisia into Italy, with Mussolini sending a letter to Hitler in October 1940 informing him that, as the 850,000 Italians living within France's current borders formed the largest minority community, ceding these territories to Italy would be beneficial to both Germany and Italy because it would reduce France's population from 35 million to 34 million and forestall any possibility of resumed French ambitions for expansion or hegemony in Europe. In December 1940, with the proposed Operation Attila, Germany considered the possibility of invading and occupying the non-occupied territories of Vichy France, including occupying Corsica and capturing the Vichy French fleet for German use. An invasion of Vichy France by Germany and Italy took place with Case Anton in November 1942. In the autumn of 1940, in response to an agreement by Romanian Conducător Ion Antonescu to accept German "training troops" to be sent to Romania, both Mussolini and Stalin in the Soviet Union were angered by Germany's expanding sphere of influence into Romania, especially because neither was informed in advance of the action in spite of German agreements with Italy and the Soviet Union at that time. Mussolini, in a conversation with Ciano, responded to Hitler's deployment of troops into Romania, saying: "Hitler always faces me with accomplished facts. Now I'll pay him back by his same currency. He'll learn from the papers that I have occupied Greece. So the balance will be re-established." However, Mussolini later decided to inform Hitler in advance of Italy's designs on Greece. Upon hearing of Italy's intervention against Greece, Hitler was deeply concerned, saying that the Greeks were not bad soldiers and that Italy might not win its war with Greece, and he did not want Germany to become embroiled in a Balkan conflict. By 1941, Italy's attempts to run a campaign autonomous from Germany's collapsed as a result of multiple defeats in Greece, North Africa, and Eastern Africa, and the country became dependent on and effectively subordinate to Germany. After the German-led invasion and occupation of Yugoslavia and Greece, both of which had been targets of Italy's war aims, Italy was forced to accept German dominance in the two occupied countries.
Furthermore, by 1941 German forces in North Africa under Erwin Rommel had effectively taken charge of the military effort to oust Allied forces from the Italian colony of Libya, and German forces were stationed in Sicily in that year. In response to Italian military failures and dependence on German military assistance, the German government viewed Italy with contempt as an unreliable ally and no longer took any serious consideration of Italian interests. Germany's contempt for Italy as an ally was demonstrated that year when Italy was pressured to send 350,000 "guest workers" to Germany, where they were used as forced labour. While Hitler was deeply disappointed with the Italian military's performance, he maintained overall favourable relations with Italy because of his personal friendship with and admiration of Mussolini. By mid-1941 Mussolini was left bewildered, recognizing both that Italy's war objectives had failed and that Italy was completely subordinate to and dependent on Germany. Mussolini henceforth believed that, in such a subordinate status, Italy was left with no choice other than to follow Germany in its war and hope for a German victory. However, Germany supported Italian propaganda calling for the creation of a "Latin Bloc" of Italy, Vichy France, Spain, and Portugal to ally with Germany against the threat of communism, and after the German invasion of the Soviet Union the prospect of a Latin Bloc seemed plausible. From 1940 to 1941, Francisco Franco of Spain had endorsed a Latin Bloc of Italy, Vichy France, Spain and Portugal in order to balance those countries' power against that of Germany; however, the discussions failed to yield an agreement. After the invasion and occupation of Yugoslavia, Italy annexed numerous Adriatic islands and a portion of Dalmatia that was formed into the Italian Governorship of Dalmatia, including territory from the provinces of Split, Zadar, and Kotor. Though Italy initially had larger territorial aims extending from the Velebit mountains to the Albanian Alps, Mussolini decided against annexing further territories due to a number of factors, including that Italy already held the economically valuable portion of that territory, that the northern Adriatic coast had no important railways or roads, and that a larger annexation would have brought hundreds of thousands of Slavs who were hostile to Italy within its national borders. Mussolini and foreign minister Ciano demanded that the Yugoslav region of Slovenia be directly annexed into Italy; however, in negotiations with German foreign minister Ribbentrop in April 1941, Ribbentrop insisted on Hitler's demand that Germany be allocated eastern Slovenia while Italy would be allocated western Slovenia. Italy conceded to this German demand and Slovenia was partitioned between Germany and Italy. Internal opposition by Italians to the war and the Fascist regime accelerated by 1942, though significant opposition to the war had existed at the outset in 1940, as police reports indicated that many Italians were secretly listening to the BBC rather than Italian media in 1940. Underground Catholic, Communist, and socialist newspapers began to become prominent by 1942. By January 1943, King Victor Emmanuel III was persuaded by the Minister of the Royal Household, the Duke of Acquarone, that Mussolini had to be removed from office. On 25 July 1943, King Victor Emmanuel III dismissed Mussolini, placed him under arrest, and began secret negotiations with the Allies.
An armistice was signed on 8 September 1943, and Italy joined the Allies as a co-belligerent. On 12 September 1943, Mussolini was rescued by the Germans in Operation Oak and placed in charge of a puppet state called the Italian Social Republic (Repubblica Sociale Italiana/RSI, or Repubblica di Salò) in northern Italy. He was killed by Communist partisans on 28 April 1945.

Colonies and dependencies

In Europe

Albania was an Italian protectorate and dependency from 1939 to 1943. In spite of Albania's long-standing protection by and alliance with Italy, Italian troops invaded Albania on 7 April 1939, five months before the start of the Second World War. Following the invasion, Albania became a protectorate under Italy, with King Victor Emmanuel III of Italy being awarded the crown of Albania, and an Italian governor controlled Albania. Albanian troops under Italian control were sent to participate in the Italian invasion of Greece and the Axis occupation of Yugoslavia. Following Yugoslavia's defeat, Kosovo was annexed to Albania by the Italians. When the Fascist regime of Italy fell, Albania came under German occupation in September 1943. The Dodecanese Islands were an Italian dependency from 1912 to 1943. Montenegro was an Italian protectorate and dependency from 1941 to 1943 that was under the control of an Italian military governor.

In Africa

Italian East Africa was an Italian colony existing from 1936 to 1943. Prior to the invasion and annexation of Ethiopia into this united colony in 1936, Italy had held two colonies, Eritrea and Somalia, since the 1880s. Libya was an Italian colony existing from 1912 to 1943. The northern portion of Libya was incorporated directly into Italy in 1939; however the region remained united as a colony under a colonial governor.

Self-governing sovereign dominions or protectorates

Croatia (Independent State of Croatia)

The Independent State of Croatia (NDH), established in 1941, was officially a self-governing sovereign protectorate under King Tomislav II, an Italian monarch from Italy's House of Savoy. The NDH was also under strong German influence, and after Italy capitulated in 1943, the NDH was no longer a monarchy and became a German client state.

Military-contributing minor powers

Political instability plagued Hungary until Miklós Horthy, a Hungarian nobleman and Austro-Hungarian naval officer, became regent in 1920. Hungarian nationalists desired to recover territories lost through the Trianon Treaty. The country drew closer to Germany and Italy largely because of a shared desire to revise the peace settlements made after World War I. Many people sympathized with the anti-Semitic policy of the Nazi regime. Due to its pro-German stance, Hungary received favourable territorial settlements when Germany annexed Czechoslovakia in 1938–1939 and received Northern Transylvania from Romania via the Vienna Awards of 1940. Hungarians permitted German troops to transit through their territory during the invasion of Yugoslavia, and Hungarian forces took part in the invasion. Parts of Yugoslavia were annexed to Hungary; the United Kingdom immediately broke off diplomatic relations in response. Although Hungary did not initially participate in the German invasion of the Soviet Union, Hungary declared war on the Soviet Union on 27 June 1941. Over 500,000 soldiers served on the Eastern Front. All five of Hungary's field armies ultimately participated in the war against the Soviet Union; a significant contribution was made by the Hungarian Second Army.
On 25 November 1941, Hungary was one of thirteen signatories to the revived Anti-Comintern Pact. Hungarian troops, like their Axis counterparts, were involved in numerous actions against the Soviets. By the end of 1943, the Soviets had gained the upper hand and the Germans were retreating. The Hungarian Second Army was destroyed in fighting on the Voronezh Front, on the banks of the Don River. In 1944, with Soviet troops advancing toward Hungary, Horthy attempted to reach an armistice with the Allies. However, the Germans replaced the existing regime with a new one. After fierce fighting, Budapest was taken by the Soviets. A number of pro-German Hungarians retreated to Italy and Germany, where they fought until the end of the war. When war erupted in Europe in 1939, the Kingdom of Romania was pro-British and allied to the Poles. Following the invasion of Poland by Germany and the Soviet Union, and the German conquest of France and the Low Countries, Romania found itself increasingly isolated; meanwhile, pro-German and pro-Fascist elements began to grow. The August 1939 Molotov–Ribbentrop Pact between Germany and the Soviet Union contained a secret protocol assigning Bessarabia, then part of Romania, to the Soviet sphere of influence. On June 28, 1940, the Soviet Union occupied and annexed Bessarabia, as well as Northern Bukovina and the Hertza region. On 30 August 1940, Germany forced Romania to cede Northern Transylvania to Hungary as a result of the second Vienna Award. Southern Dobruja was ceded to Bulgaria in September 1940. In an effort to appease the Fascist elements within the country and obtain German protection, King Carol II appointed General Ion Antonescu as Prime Minister on September 6, 1940. Two days later, Antonescu forced the king to abdicate and installed the king's young son Michael (Mihai) on the throne, then declared himself Conducător ("Leader") with dictatorial powers. Under King Michael I and the military government of Antonescu, Romania signed the Tripartite Pact on November 23, 1940. German troops entered the country in 1941 and used it as a platform for the invasions of Yugoslavia and the Soviet Union. Romania was a key supplier of resources, especially oil and grain. Romania joined the German-led invasion of the Soviet Union on June 22, 1941; nearly 800,000 Romanian soldiers fought on the Eastern front. Areas annexed by the Soviets were reincorporated into Romania, along with the newly established Transnistria Governorate. After suffering devastating losses at Stalingrad, Romanian officials began secretly negotiating peace conditions with the Allies. By 1943, the tide began to turn. The Soviets pushed further west, retaking Ukraine and eventually launching an unsuccessful invasion of eastern Romania in the spring of 1944. Foreseeing the fall of Nazi Germany, Romania switched sides during King Michael's Coup on August 23, 1944. Romanian troops then fought alongside the Soviet Army until the end of the war, reaching as far as Czechoslovakia and Austria. The Kingdom of Bulgaria was ruled by Tsar Boris III when it signed the Tripartite Pact on 1 March 1941. Bulgaria had been on the losing side in the First World War and sought a return of lost ethnically and historically Bulgarian territories, specifically in Macedonia and Thrace. During the 1930s, because of traditional right-wing elements, Bulgaria drew closer to Nazi Germany.
In 1940 Germany pressured Romania to sign the Treaty of Craiova, returning to Bulgaria the region of Southern Dobrudja, which it had lost in 1913. The Germans also promised Bulgaria, in case it joined the Axis, an enlargement of its territory to the borders specified in the Treaty of San Stefano. Bulgaria participated in the Axis invasion of Yugoslavia and Greece by letting German troops attack from its territory, and sent troops to Greece on April 20. As a reward, the Axis powers allowed Bulgaria to occupy parts of both countries: southern and south-eastern Yugoslavia (Vardar Banovina) and north-eastern Greece (parts of Greek Macedonia and Greek Thrace). The Bulgarian forces in these areas spent the following years fighting various nationalist groups and resistance movements. Despite German pressure, Bulgaria did not take part in the Axis invasion of the Soviet Union and never actually declared war on the Soviet Union. The Bulgarian Navy was nonetheless involved in a number of skirmishes with the Soviet Black Sea Fleet, which attacked Bulgarian shipping. Following the Japanese attack on Pearl Harbor in December 1941, the Bulgarian government declared war on the Western Allies. This action remained largely symbolic (at least from the Bulgarian perspective) until August 1943, when Bulgarian air defence and air force units attacked Allied bombers returning, heavily damaged, from a mission over the Romanian oil refineries. This turned into a disaster for the citizens of Sofia and other major Bulgarian cities, which were heavily bombed by the Allies in the winter of 1943–1944. On 2 September 1944, as the Red Army approached the Bulgarian border, a new Bulgarian government came to power and sought peace with the Allies, expelled the few remaining German troops, and declared neutrality. These measures, however, did not prevent the Soviet Union from declaring war on Bulgaria on 5 September, and on 8 September the Red Army marched into the country, meeting no resistance. This was followed by the coup d'état of 9 September 1944, which brought to power a government of the pro-Soviet Fatherland Front. After this, the Bulgarian army (as part of the Red Army's Third Ukrainian Front) fought the Germans in Yugoslavia and Hungary, sustaining numerous casualties. Despite this, the Paris Peace Treaty treated Bulgaria as one of the defeated countries. Bulgaria was allowed to keep Southern Dobrudja, but had to give up all claims to Greek and Yugoslav territory. 150,000 ethnic Bulgarians were expelled from Greek Thrace alone. Various countries fought side by side with the Axis powers for a common cause. These countries were not signatories of the Tripartite Pact and thus not formal members of the Axis. Japanese forces invaded Thailand's territory on the morning of 8 December 1941, the day after the attack on Pearl Harbor. Only hours after the invasion, prime minister Field Marshal Phibunsongkhram ordered the cessation of resistance against the Japanese. On 21 December 1941, a military alliance with Japan was signed, and on 25 January 1942 Sang Phathanothai read over the radio Thailand's formal declaration of war on the United Kingdom and the United States. The Thai ambassador to the United States, Mom Rajawongse Seni Pramoj, did not deliver his copy of the declaration of war. Therefore, although the British reciprocated by declaring war on Thailand and considered it a hostile country, the United States did not. On 21 March, the Thais and Japanese also agreed that Shan State and Kayah State were to be under Thai control.
The rest of Burma was to be under Japanese control. On 10 May 1942, the Thai Phayap Army entered Burma's eastern Shan State, which had been claimed by Siamese kingdoms. Three Thai infantry divisions and one cavalry division, spearheaded by armoured reconnaissance groups and supported by the air force, engaged the retreating Chinese 93rd Division. Kengtung, the main objective, was captured on 27 May. Renewed offensives in June and November evicted the Chinese into Yunnan. The area containing the Shan States and Kayah State was annexed by Thailand in 1942. The areas were ceded back to Burma in 1946. The Free Thai Movement ("Seri Thai") was established during these first few months. Parallel Free Thai organizations were also established in the United Kingdom. Queen Ramphaiphanni was the nominal head of the British-based organization, and Pridi Phanomyong, the regent, headed its largest contingent, which was operating within Thailand. Aided by elements of the military, secret airfields and training camps were established, while Office of Strategic Services and Force 136 agents slipped in and out of the country. As the war dragged on, the Thai population came to resent the Japanese presence. In June 1944, Phibun was overthrown in a coup d'état. The new civilian government under Khuang Aphaiwong attempted to aid the resistance while maintaining cordial relations with the Japanese. After the war, U.S. influence prevented Thailand from being treated as an Axis country, but the British demanded three million tons of rice as reparations and the return of areas annexed from Malaya during the war. Thailand also returned the portions of British Burma and French Indochina that had been annexed. Phibun and a number of his associates were put on trial on charges of having committed war crimes and of collaborating with the Axis powers. However, the charges were dropped due to intense public pressure. Public opinion was favourable to Phibun, since he was thought to have done his best to protect Thai interests. Although Finland never signed the Tripartite Pact and legally (de jure) was not a part of the Axis, it was Axis-aligned in its fight against the Soviet Union. Finland signed the revived Anti-Comintern Pact of November 1941. The August 1939 Molotov-Ribbentrop Pact between Germany and the Soviet Union contained a secret protocol dividing much of eastern Europe and assigning Finland to the Soviet sphere of influence. After unsuccessfully attempting to force territorial and other concessions on the Finns, the Soviet Union tried to invade Finland in November 1939 during the Winter War, intending to establish a communist puppet government in Finland. The conflict threatened Germany's iron-ore supplies and offered the prospect of Allied interference in the region. Despite Finnish resistance, a peace treaty was signed in March 1940, wherein Finland ceded some key territory to the Soviet Union, including the Karelian Isthmus, containing Finland's second-largest city, Viipuri, and the critical defensive structure of the Mannerheim Line. After this war, Finland sought protection and support from the United Kingdom and neutral Sweden, but was thwarted by Soviet and German actions. This resulted in Finland being drawn closer to Germany, first with the intent of enlisting German support as a counterweight to thwart continuing Soviet pressure, and later to help regain lost territories.
In the opening days of Operation Barbarossa, Germany's invasion of the Soviet Union, Finland permitted German planes returning from mine-dropping runs over Kronstadt and the Neva River to refuel at Finnish airfields before returning to bases in East Prussia. In retaliation, the Soviet Union launched a major air offensive against Finnish airfields and towns, which resulted in a Finnish declaration of war against the Soviet Union on 25 June 1941. The Finnish conflict with the Soviet Union is generally referred to as the Continuation War. Finland's main objective was to regain territory lost to the Soviet Union in the Winter War. However, on 10 July 1941, Field Marshal Carl Gustaf Emil Mannerheim issued an Order of the Day that contained a formulation understood internationally as a Finnish territorial interest in Russian Karelia. Diplomatic relations between the United Kingdom and Finland were severed on 1 August 1941, after the British bombed German forces in the Finnish village and port of Petsamo. The United Kingdom repeatedly called on Finland to cease its offensive against the Soviet Union, and declared war on Finland on 6 December 1941, although no other military operations followed. War was never declared between Finland and the United States, though relations were severed between the two countries in 1944 as a result of the Ryti-Ribbentrop Agreement. Finland maintained command of its armed forces and pursued its war objectives independently of Germany. Germans and Finns did work closely together during Operation Silverfox, a joint offensive against Murmansk. Finland refused German requests to participate actively in the Siege of Leningrad, and also granted asylum to Jews, while Jewish soldiers continued to serve in its army. The relationship between Finland and Germany more closely resembled an alliance during the six weeks of the Ryti-Ribbentrop Agreement, which was presented as a German condition for help with munitions and air support, as the Soviet offensive coordinated with D-Day threatened Finland with complete occupation. The agreement, signed by President Risto Ryti but never ratified by the Finnish Parliament, bound Finland not to seek a separate peace. After Soviet offensives were fought to a standstill, Ryti's successor as president, Marshal Mannerheim, dismissed the agreement and opened secret negotiations with the Soviets, which resulted in a ceasefire on 4 September and the Moscow Armistice on 19 September 1944. Under the terms of the armistice, Finland was obliged to expel German troops from Finnish territory, which resulted in the Lapland War. Finland signed a peace treaty with the Allied powers in 1947.

San Marino

San Marino, ruled by the Sammarinese Fascist Party (PFS) since 1923, was closely allied to Italy. On 17 September 1940, San Marino declared war on Britain; Britain did not reciprocate. A day earlier, San Marino had restored diplomatic relations with Germany, which had not been re-established earlier because San Marino did not attend the 1919 Paris Peace Conference. San Marino's 1,000-man army remained garrisoned within the country. The war declaration was intended for propaganda purposes, to further isolate and demoralize Britain. Three days after the fall of Mussolini, the PFS was removed from power and the new government declared neutrality in the conflict. The Fascists regained power on 1 April 1944, but kept the policy of neutrality intact. On 26 June, the Royal Air Force accidentally bombed the country, killing 63 people. Germany used this tragedy in propaganda about Allied aggression against a neutral country.
Retreating Axis forces occupied San Marino on 17 September, but were forced out by the Allies in less than three days. The Allied occupation removed the Fascists from power, and San Marino declared war on Germany on 21 September. The newly elected government banned the Fascists on 16 November. Anti-British sentiment was widespread in Iraq prior to 1941. Seizing power on 1 April 1941, the nationalist government of Prime Minister Rashid Ali repudiated the Anglo-Iraqi Treaty of 1930 and demanded that the British abandon their military bases and withdraw from the country. Ali sought support from Germany and Italy in expelling British forces from Iraq. On 9 May 1941, Mohammad Amin al-Husayni, the Mufti of Jerusalem and an associate of Ali, declared holy war against the British and called on Arabs throughout the Middle East to rise up against British rule. On 25 May 1941, the Germans stepped up offensive operations in the Middle East. Hitler issued Order 30: "The Arab Freedom Movement in the Middle East is our natural ally against England. In this connection special importance is attached to the liberation of Iraq ... I have therefore decided to move forward in the Middle East by supporting Iraq." Hostilities between the Iraqi and British forces began on 2 May 1941, with heavy fighting at the RAF air base in Habbaniyah. The Germans and Italians dispatched aircraft and aircrew to Iraq utilizing Vichy French bases in Syria, which would later provoke fighting between Allied and Vichy French forces in Syria. The Germans planned to coordinate a combined German-Italian offensive against the British in Egypt, Palestine, and Iraq. Iraqi military resistance ended by 31 May 1941. Rashid Ali and the Mufti of Jerusalem fled to Iran, then Turkey, Italy, and finally Germany, where Ali was welcomed by Hitler as head of the Iraqi government-in-exile in Berlin. In propaganda broadcasts from Berlin, the Mufti continued to call on Arabs to rise up against the British and aid German and Italian forces. He also helped recruit Muslim volunteers in the Balkans for the Waffen-SS.

Client states

The Empire of Japan created a number of client states in the areas occupied by its military, beginning with the creation of Manchukuo in 1932. These puppet states achieved varying degrees of international recognition.

Manchukuo (Manchuria)

Manchukuo, in the northeast region of China, had been a Japanese puppet state in Manchuria since the 1930s. It was nominally ruled by Puyi, the last emperor of the Qing Dynasty, but was in fact controlled by the Japanese military, in particular the Kwantung Army. While Manchukuo ostensibly was a state for ethnic Manchus, the region had a Han Chinese majority. Following the Japanese invasion of Manchuria in 1931, the independence of Manchukuo was proclaimed on 18 February 1932, with Puyi as head of state. He was proclaimed the Emperor of Manchukuo a year later. The new Manchu nation was recognized by 23 of the League of Nations' 80 members. Germany, Italy, and the Soviet Union were among the major powers that recognised Manchukuo. Other countries that recognized the state were the Dominican Republic, Costa Rica, El Salvador, and Vatican City. Manchukuo was also recognised by the other Japanese allies and puppet states, including Mengjiang, the Burmese government of Ba Maw, Thailand, the Wang Jingwei regime, and the Indian government of Subhas Chandra Bose. The League of Nations later declared in 1934 that Manchuria lawfully remained a part of China.
This precipitated Japanese withdrawal from the League. The Manchukuoan state ceased to exist after the Soviet invasion of Manchuria in 1945.

Mengjiang (Inner Mongolia)

Mengjiang was a Japanese puppet state in Inner Mongolia. It was nominally ruled by Prince Demchugdongrub, a Mongol nobleman descended from Genghis Khan, but was in fact controlled by the Japanese military. Mengjiang's independence was proclaimed on 18 February 1936, following the Japanese occupation of the region. The Inner Mongolians had several grievances against the central Chinese government in Nanking, including its policy of allowing unlimited migration of Han Chinese to the region. Several of the young princes of Inner Mongolia began to agitate for greater freedom from the central government, and it was through these men that the Japanese saw their best chance of exploiting Pan-Mongol nationalism and eventually seizing control of Outer Mongolia from the Soviet Union. Japan created Mengjiang to exploit tensions between ethnic Mongolians and the central government of China, which in theory ruled Inner Mongolia. When the various puppet governments of China were unified under the Wang Jingwei government in March 1940, Mengjiang retained its separate identity as an autonomous federation. Although under the firm control of the Japanese Imperial Army, which occupied its territory, Prince Demchugdongrub had his own independent army. Mengjiang vanished in 1945 following Japan's defeat in World War II. As Soviet forces advanced into Inner Mongolia, they met limited resistance from small detachments of Mongolian cavalry, which, like the rest of the army, were quickly overwhelmed.

Reorganized National Government of China

During the Second Sino-Japanese War, Japan advanced from its bases in Manchuria to occupy much of East and Central China. Several Japanese puppet states were organized in areas occupied by the Japanese Army, including the Provisional Government of the Republic of China at Beijing, which was formed in 1937, and the Reformed Government of the Republic of China at Nanjing, which was formed in 1938. These governments were merged into the Reorganized National Government of China at Nanjing on 29 March 1940. Wang Jingwei became head of state. The government was to be run along the same lines as the Nationalist regime and adopted its symbols. The Nanjing Government had no real power; its main role was to act as a propaganda tool for the Japanese. The Nanjing Government concluded agreements with Japan and Manchukuo, authorising Japanese occupation of China and recognising the independence of Manchukuo under Japanese protection. The Nanjing Government signed the Anti-Comintern Pact of 1941 and declared war on the United States and the United Kingdom on 9 January 1943. The government had a strained relationship with the Japanese from the beginning. Wang's insistence on his regime being the true Nationalist government of China and on replicating all the symbols of the Kuomintang led to frequent conflicts with the Japanese, the most prominent being the issue of the regime's flag, which was identical to that of the Republic of China. The worsening situation for Japan from 1943 onwards meant that the Nanking Army was given a more substantial role in the defence of occupied China than the Japanese had initially envisaged. The army was almost continuously employed against the communist New Fourth Army. Wang Jingwei died on 10 November 1944, and was succeeded by his deputy, Chen Gongbo.
Chen had little influence; the real power behind the regime was Zhou Fohai, the mayor of Shanghai. Wang's death dispelled what little legitimacy the regime had. The state limped on for another year, keeping up the display and trappings of a fascist regime. On 9 September 1945, following the defeat of Japan, the area was surrendered to General He Yingqin, a Nationalist general loyal to Chiang Kai-shek. The Nanking Army generals quickly declared their allegiance to the Generalissimo, and were subsequently ordered to resist Communist attempts to fill the vacuum left by the Japanese surrender. Chen Gongbo was tried and executed in 1946.

Philippines (Second Republic)

After the surrender of the Filipino and American forces on the Bataan Peninsula and Corregidor Island, the Japanese established a puppet state in the Philippines in 1942. The following year, the Philippine National Assembly declared the Philippines an independent republic and elected José Laurel as its president. There was never widespread civilian support for the state, largely because of the general anti-Japanese sentiment stemming from atrocities committed by the Imperial Japanese Army. The Second Philippine Republic ended with the Japanese surrender in 1945, and Laurel was arrested and charged with treason by the US government. He was granted amnesty by President Manuel Roxas, and remained active in politics, ultimately winning a seat in the post-war Senate.

India (Provisional Government of Free India)

The Arzi Hukumat-e-Azad Hind, the Provisional Government of Free India, was a government in exile led by Subhas Chandra Bose, an Indian nationalist who rejected Mohandas K. Gandhi's nonviolent methods for achieving independence. One of the most prominent leaders of the Indian independence movement of the time and former president of the Indian National Congress, Bose was arrested by British authorities at the outset of the Second World War. In January 1941 he escaped from house arrest, eventually reaching Germany. He arrived in 1942 in Singapore, base of the Indian National Army, made up largely of Indian prisoners of war and Indian residents in south east Asia who joined on their own initiative. Bose and local leader A.M. Sahay received ideological support from Mitsuru Toyama, chief of the Dark Ocean Society, along with Japanese Army advisers. Other Indian thinkers in favour of the Axis cause were Asit Krishna Mukherji, a friend of Bose, and Mukherji's wife, Savitri Devi, a French writer who admired Hitler. Bose was helped by Rash Behari Bose, founder of the Indian Independence League in Japan. Bose declared India's independence on October 21, 1943. The Japanese Army assigned to the Indian National Army a number of military advisors, among them Hideo Iwakuro and Saburo Isoda. The provisional government formally controlled the Andaman and Nicobar Islands; these islands had fallen to the Japanese and been handed over by Japan in November 1943. The government created its own currency, postage stamps, and national anthem. The government would last two more years, until 18 August 1945, when it officially became defunct. During its existence it received recognition from nine governments: Germany, Japan, Italy, Croatia, Manchukuo, China (under the Nanking Government of Wang Jingwei), Thailand, Burma (under the regime of Burmese nationalist leader Ba Maw), and the Philippines under de facto (and later de jure) president José Laurel.
Vietnam (Empire of Vietnam)

The Empire of Vietnam was a short-lived Japanese puppet state that lasted from 11 March to 23 August 1945. When the Japanese seized control of French Indochina, they allowed Vichy French administrators to remain in nominal control. This French rule ended on 9 March 1945, when the Japanese officially took control of the government. Soon after, Emperor Bảo Đại voided the 1884 treaty with France and Trần Trọng Kim, a historian, became prime minister. The Kingdom of Cambodia was a short-lived Japanese puppet state that lasted from 9 March 1945 to 15 April 1945. The Japanese had entered Cambodia in mid-1941, but allowed Vichy French officials to remain in administrative posts. The Japanese calls for an "Asia for the Asiatics" won over many Cambodian nationalists. This policy changed during the last months of the war. The Japanese wanted to gain local support, so they dissolved French colonial rule and pressured Cambodia to declare its independence within the Greater East Asia Co-Prosperity Sphere. Four days later, King Sihanouk declared Kampuchea (the original Khmer pronunciation of Cambodia) independent. Co-editor of the Nagaravatta, Son Ngoc Thanh, returned from Tokyo in May and was appointed foreign minister. On the date of the Japanese surrender, a new government was proclaimed with Son Ngoc Thanh as prime minister. When the Allies occupied Phnom Penh in October, Son Ngoc Thanh was arrested for collaborating with the Japanese and was exiled to France. Some of his supporters went to northwestern Cambodia, which had been under Thai control since the French-Thai War of 1940, where they banded together as one faction in the Khmer Issarak movement, originally formed with Thai encouragement in the 1940s. Fears of Thai irredentism led to the formation of the first Lao nationalist organization, the Movement for National Renovation, in January 1941. The group was led by Prince Phetxarāt and supported by local French officials, though not by the Vichy authorities in Hanoi. This group wrote the current Lao national anthem and designed the current Lao flag, while paradoxically pledging support for France. The country declared its independence in 1945. The liberation of France in 1944, bringing Charles de Gaulle to power, meant the end of the alliance between Japan and the Vichy French administration in Indochina. The Japanese had no intention of allowing the Gaullists to take over, and in late 1944 they staged a military coup in Hanoi. Some French units fled over the mountains to Laos, pursued by the Japanese, who occupied Viang Chan in March 1945 and Luang Phrabāng in April. King Sīsavāngvong was detained by the Japanese, but his son Crown Prince Savāngvatthanā called on all Lao to assist the French, and many Lao died fighting against the Japanese occupiers. Prince Phetxarāt opposed this position. He thought that Lao independence could be gained by siding with the Japanese, who made him Prime Minister of Luang Phrabāng, though not of Laos as a whole. The country was in chaos, and Phetxarāt's government had no real authority. Another Lao group, the Lao Sēri (Free Lao), received unofficial support from the Free Thai movement in the Isan region.

Burma (Ba Maw regime)

The Japanese Army and Burmese nationalists, led by Aung San, seized control of Burma from the United Kingdom during 1942. A State of Burma was formed on 1 August 1943 under the Burmese nationalist leader Ba Maw.
The Ba Maw regime established the Burma Defence Army (later renamed the Burma National Army), which was commanded by Aung San. Italy occupied several nations and set up client regimes in those regions to carry out administrative tasks and maintain order. Politically and economically dominated by Italy from its creation in 1913, Albania was occupied by Italian military forces in 1939 as the Albanian king, Zog, fled the country with his family. The Albanian parliament voted to offer the Albanian throne to the King of Italy, resulting in a personal union between the two countries. The Albanian army, having been trained by Italian advisors, was reinforced by 100,000 Italian troops. A Fascist militia was organized, drawing its strength principally from Albanians of Italian descent. Albania served as the staging area for the Italian invasions of Greece and Yugoslavia. Albania annexed Kosovo in 1941 when Yugoslavia was dissolved, achieving the dream of a Greater Albania. Albanian troops were dispatched to the Eastern Front to fight the Soviets as part of the Italian Eighth Army. Albania declared war on the United States in 1941. Montenegro, a former kingdom which was merged into Serbia to form Yugoslavia after the First World War, had long ties to Italy. When Yugoslavia came under Axis occupation, Montenegrin nationalists jumped at the opportunity to create a new Montenegro. Sekula Drljević and the core of the Montenegrin Federalist Party formed the Provisional Administrative Committee of Montenegro on 12 July 1941, and proclaimed at the Saint Peter's Congress the "Kingdom of Montenegro" under the protection of Italy. The country served Italy as part of its goal of fragmenting the former Kingdom of Yugoslavia and expanding the Italian Empire throughout the Adriatic. The country was caught up in the rebellion of the Yugoslav Army in the Fatherland. Drljević was expelled from Montenegro in October 1941. The country came under direct Italian control. With the Italian capitulation of 1943, Montenegro came directly under the control of Nazi Germany. In 1944 Drljević formed a pro-Ustaše Montenegrin State Council in exile based in the Independent State of Croatia, with the aim of restoring his rule over Montenegro. The Montenegrin People's Army was formed out of various Montenegrin nationalist troops. By then the Partisans had already liberated most of Montenegro, which became a federal state of the new Democratic Federal Yugoslavia. Montenegro endured intense air bombing by the Allied air forces in 1944. The Principality of Monaco was officially neutral during the war. The population of the country was largely of Italian descent and sympathized with Italy. Its prince was a close friend of the Vichy French leader, Marshal Philippe Pétain, an Axis collaborator. A fascist regime was established under the nominal rule of the prince when the Italian Fourth Army occupied the country on November 10, 1942 as part of Case Anton. Monaco's military forces, consisting primarily of police and palace guards, collaborated with the Italians during the occupation. German troops occupied Monaco in 1943, and Monaco was liberated by Allied forces in 1944. The collaborationist administrations of German-occupied countries in Europe had varying degrees of autonomy, and not all of them qualified as fully recognized sovereign states. The General Government in occupied Poland did not qualify as a legitimate Polish government and was essentially a German administration.
In occupied Norway, the National Government headed by Vidkun Quisling, whose name came to symbolize pro-Axis collaboration in several languages, was subordinate to the Reichskommissariat Norwegen. It was never allowed to have any armed forces, be a recognized military partner, or have autonomy of any kind. In the occupied Netherlands, Anton Mussert was given the symbolic title of "Führer of the Netherlands' people". His National Socialist Movement formed a cabinet assisting the German administration, but was never recognized as a real Dutch government.

Slovakia (Tiso regime)

Slovakia had been closely aligned with Germany almost immediately from its declaration of independence from Czechoslovakia on 14 March 1939. Slovakia entered into a treaty of protection with Germany on 23 March 1939. Slovak troops joined the German invasion of Poland, having an interest in Spiš and Orava. Those two regions, along with Cieszyn Silesia, had been disputed between Poland and Czechoslovakia since 1918. The Poles fully annexed them following the Munich Agreement. After the invasion of Poland, Slovakia reclaimed control of those territories. Slovakia invaded Poland alongside German forces, contributing 50,000 men at this stage of the war. Slovakia declared war on the Soviet Union in 1941 and signed the revived Anti-Comintern Pact of 1941. Slovak troops fought on Germany's Eastern Front, furnishing Germany with two divisions totaling 80,000 men. Slovakia declared war on the United Kingdom and the United States in 1942. Slovakia was spared German military occupation until the Slovak National Uprising, which began on 29 August 1944 and was almost immediately crushed by the Waffen-SS and Slovak troops loyal to Josef Tiso. After the war, Tiso was executed and Slovakia was rejoined with Czechoslovakia. The border with Poland was shifted back to the pre-war state. Slovakia and the Czech Republic finally separated into independent states in 1993.

Croatia (Independent State of Croatia)

On 10 April 1941, the Independent State of Croatia (Nezavisna Država Hrvatska, or NDH) was declared a member of the Axis, co-signing the Tripartite Pact. The NDH remained a member of the Axis until the end of the Second World War, its forces fighting for Germany even after the NDH had been overrun by the Yugoslav Partisans. On 16 April 1941, Ante Pavelić, a Croatian nationalist and one of the founders of the Ustaša – Croatian Liberation Movement, was proclaimed Poglavnik (leader) of the new regime. Initially the Ustaše had been heavily influenced by Italy; the movement was actively supported by Mussolini's Fascist regime, which gave it training grounds to prepare for war against Yugoslavia, as well as accepting Pavelić as an exile and allowing him to reside in Rome. Italy intended to use the movement to destroy Yugoslavia, which would allow Italy to expand its power through the Adriatic. Hitler did not want to engage in a war in the Balkans until the Soviet Union was defeated. The Italian occupation of Greece was not going well; Mussolini wanted Germany to invade Yugoslavia to save the Italian forces in Greece. Hitler reluctantly submitted; Yugoslavia was invaded and the Independent State of Croatia was created. Pavelić led a delegation to Rome and offered the crown of Croatia to an Italian prince of the House of Savoy, who was crowned Tomislav II, King of Croatia, Prince of Bosnia and Herzegovina, Voivode of Dalmatia, Tuzla and Knin, Prince of Cisterna and of Belriguardo, Marquess of Voghera, and Count of Ponderano.
The next day, Pavelić signed the Contracts of Rome with Mussolini, ceding Dalmatia to Italy and fixing the permanent borders between the NDH and Italy. Italian armed forces were allowed to control all of the coastline of the NDH, effectively giving Italy total control of the Adriatic coastline. However, German influence over the NDH became strong soon after it was founded. After the King of Italy ousted Mussolini from power and Italy capitulated, the NDH came completely under German influence. The platform of the Ustaše movement proclaimed that Croatians had been oppressed by the Serb-dominated Kingdom of Yugoslavia, and that Croatians deserved to have an independent nation after years of domination by foreign empires. The Ustaše perceived Serbs to be racially inferior to Croats and saw them as infiltrators who were occupying Croatian lands. They saw the extermination of Serbs as necessary to racially purify Croatia. While part of Yugoslavia, many Croatian nationalists violently opposed the Serb-dominated Yugoslav monarchy, and assassinated Alexander I of Yugoslavia together with the Internal Macedonian Revolutionary Organization. The regime enjoyed support amongst radical Croatian nationalists. Ustaše forces fought against Serbian Chetnik and communist Yugoslav Partisan guerrillas throughout the war. Upon coming to power, Pavelić formed the Croatian Home Guard (Hrvatsko domobranstvo) as the official military force of the NDH. Originally authorized at 16,000 men, it grew to a peak fighting force of 130,000. The Croatian Home Guard included an air force and navy, although its navy was restricted in size by the Contracts of Rome. In addition to the Croatian Home Guard, Pavelić was also the supreme commander of the Ustaše militia, although all NDH military units were generally under the command of the German or Italian formations in their area of operations. Many Croats volunteered for the German Waffen-SS. The Ustaše government declared war on the Soviet Union, signed the Anti-Comintern Pact of 1941, and sent troops to Germany's Eastern Front. Ustaše militia were garrisoned in the Balkans, battling the Chetniks and communist Partisans. The Ustaše government applied racial laws to Serbs, Jews, and Roma, and after June 1941 deported them to the Jasenovac concentration camp or to German camps in Poland. The racial laws were enforced by the Ustaše militia. The exact number of victims of the Ustaše regime is uncertain due to the destruction of documents and the varying numbers given by historians. Estimates range from between 56,000 and 97,000 to 700,000 or more. The Ustaše never had widespread support among the population of the NDH. Their own estimates put the number of sympathizers, even in the early phase, at around 40,000 out of a total population of 7 million. However, they were able to rely on the passive acceptance of much of the Croat population of the NDH.

Serbia (Government of National Salvation)

In April 1941 Germany invaded and occupied Yugoslavia. On 30 April a pro-German Serbian administration was formed under Milan Aćimović. Forces loyal to the Yugoslav government-in-exile organized a resistance movement on Ravna Gora mountain on 13 May 1941, under the command of Colonel Dragoljub Draža Mihailović. In 1941, after the invasion of the Soviet Union, a guerrilla campaign against the Germans and Italians was also launched by the communist Partisans under Josip Broz Tito. The uprising became a serious concern for the Germans, as most of their forces were deployed to Russia; only three divisions were in the country.
On 13 August 1941, 546 Serbs, including some of the country's prominent and influential leaders, issued an appeal to the Serbian nation that condemned the partisan and royalist resistance as unpatriotic. Two weeks after the appeal, with the partisan and royalist insurgency beginning to gain momentum, 75 prominent Serbs convened a meeting in Belgrade and formed a Government of National Salvation under Serbian General Milan Nedić to replace the existing Serbian administration. The former Yugoslav army general and minister of defence agreed to take the position of Prime Minister only after the Germans let him know that the rest of Serbia would be divided between the Independent State of Croatia, Bulgaria, Hungary and Greater Albania. On 29 August the German authorities installed General Nedić and his government in power.

The Germans were short of police and military forces in Serbia, and came to rely on poorly armed Serbian formations to maintain order. Those forces were not able to defeat the royalist forces, and for most of the war large parts of Serbia were under the control of the Yugoslav Army in the Fatherland. Much of the administration helped the resistance movement, and some officials, such as Colonel Milan Kalabić of the Serbian State Guard, were shot by the Gestapo as British agents and supporters of the royalist forces. Because of the large-scale resistance taking place on Serbian soil, Germany installed a brutal regime of reprisals, shooting 100 Serbs for every German soldier killed and 50 for every one wounded. Large-scale shootings took place in the Serbian towns of Kraljevo and Kragujevac during October 1941.

Nedić's forces included the Serbian State Guard and the Serbian Volunteer Corps, which initially drew largely on members of the Yugoslav National Movement "Zbor" (Jugoslovenski narodni pokret "Zbor", or ZBOR) party. Some of these formations wore the uniform of the Royal Yugoslav Army as well as helmets and uniforms purchased from Italy, while others were equipped by Germany, mostly with obsolete equipment taken from occupied European states such as Belgium. German forces conducted mass killings of the Serbian Jews, who mostly lived in Belgrade and Šabac. By the spring of 1942 most of the Serbian Jews had been killed by the SS, SD and Gestapo in the Sajmište concentration camp (on the territory of the Independent State of Croatia) and at Jajinci near Belgrade. By June 1942 the Germans proclaimed Belgrade Judenfrei.

Italy (Italian Social Republic)

Mussolini had been removed from office and arrested by King Victor Emmanuel III on 25 July 1943. After the Italian armistice, in a raid led by German paratrooper Otto Skorzeny, Mussolini was rescued from arrest. Once restored to power, Mussolini declared that Italy was a republic and that he was the new head of state. He was subject to German control for the duration of the war.

Albania (under German control)

After the Italian armistice, a power vacuum opened up in Albania. The Italian occupying forces could do nothing, as the National Liberation Movement took control of the south and the National Front (Balli Kombëtar) took control of the north. Albanians in the Italian army joined the guerrilla forces. In September 1943 the guerrillas moved to take the capital of Tirana, but German paratroopers dropped into the city. Soon after the fight, the German High Command announced that it would recognize the independence of a greater Albania. The Germans organized an Albanian government, police, and military together with the Balli Kombëtar.
The Germans did not exert heavy control over Albania's administration, but instead attempted to gain popular appeal by giving the Albanians what they wanted. Several Balli Kombëtar leaders held positions in the regime. The joint forces incorporated Kosovo, western Macedonia, southern Montenegro, and Presevo into the Albanian state. A High Council of Regency was created to carry out the functions of a head of state, while the government was headed mainly by Albanian conservative politicians. Albania was the only European country occupied by the Axis powers that ended World War II with a larger Jewish population than before the war. The Albanian government had refused to hand over its Jewish population; it provided Jewish families with forged documents and helped them disperse in the Albanian population. Albania was completely liberated on November 29, 1944.

Hungary (Szálasi regime)

Relations between Germany and the regency of Miklós Horthy collapsed in Hungary in 1944. Horthy was forced to abdicate after German armed forces held his son hostage as part of Operation Panzerfaust. Following Horthy's abdication in October 1944, Hungary was reorganized into a totalitarian fascist regime called the Government of National Unity, led by Ferenc Szálasi, who had been Prime Minister of Hungary since October 1944 and was leader of the anti-Semitic fascist Arrow Cross Party. In power, his government was a Quisling regime with little authority other than to obey Germany's orders. By the end of December 1944, the capital of Budapest was surrounded by the Soviet Red Army. German and fascist Hungarian forces tried to hold off the Soviet advance but failed. In March 1945, Szálasi fled to Germany to run the state in exile, until the surrender of Germany in May 1945.

Macedonia

Ivan Mihailov, leader of the Internal Macedonian Revolutionary Organization (IMRO), wanted to solve the Macedonian Question by creating a pro-Bulgarian state on the territory of the region of Macedonia in the Kingdom of Yugoslavia. Romania left the Axis and declared war on Germany on 23 August 1944, and the Soviets declared war on Bulgaria on 5 September. While these events were taking place, Mihailov came out of hiding in the Independent State of Croatia and traveled to re-occupied Skopje. The Germans gave Mihailov the green light to create a Macedonian state. Negotiations were undertaken with the Bulgarian government, and contact was made with Hristo Tatarchev in Resen, who offered Mihailov the Presidency. Bulgaria switched sides on 8 September, and on the 9th the Fatherland Front staged a coup and overthrew the government. Mihailov refused the leadership and fled to Italy. Spiro Kitanchev took Mihailov's place and became Premier of Macedonia. He cooperated with the pro-Bulgarian authorities, the Wehrmacht, the Bulgarian Army, and the Yugoslav Partisans for the rest of September and October. In the middle of November, the communists won control over the region.

Joint German-Italian puppet states

Greece (Hellenic State)

Following the German invasion of Greece and the flight of the Greek government to Crete and then Egypt, the Hellenic State was formed in May 1941 as a puppet state of both Italy and Germany. Initially, Italy had wished to annex Greece, but was pressured by Germany to avoid civil unrest such as had occurred in Bulgarian-annexed areas. The result was Italy accepting the creation of a puppet regime with the support of Germany. Italy had been assured by Hitler of a primary role in Greece.
Most of the country was held by Italian forces, but strategic locations (Central Macedonia, the islands of the northeastern Aegean, most of Crete, and parts of Attica) were held by the Germans, who seized most of the country's economic assets and effectively controlled the collaborationist government. The puppet regime never commanded any real authority, and did not gain the allegiance of the people. It was somewhat successful in preventing secessionist movements like the Principality of the Pindus from establishing themselves. By mid-1943, the Greek Resistance had liberated large parts of the mountainous interior ("Free Greece"), setting up a separate administration there. After the Italian armistice, the Italian occupation zone was taken over by the German armed forces, who remained in charge of the country until their withdrawal in autumn 1944. In some Aegean islands, German garrisons were left behind, and surrendered only after the end of the war.

Controversial cases

The states listed in this section were not officially members of the Axis, but at some point during the war engaged in cooperation with one or more Axis members on a level that makes their neutrality disputable.

Denmark

On 31 May 1939, Denmark and Germany signed a treaty of non-aggression, which did not contain any military obligations for either party. On April 9, 1940, citing the intended laying of mines in Norwegian and Danish waters as a pretext, Germany invaded both countries. King Christian X and the Danish government, worried about German bombings if they resisted occupation, accepted "protection by the Reich" in exchange for nominal independence under German military occupation. Three successive Prime Ministers, Thorvald Stauning, Vilhelm Buhl, and Erik Scavenius, maintained this samarbejdspolitik ("cooperation policy") of collaborating with Germany.

Denmark coordinated its foreign policy with Germany, extending diplomatic recognition to Axis collaborator and puppet regimes, and breaking diplomatic relations with the governments-in-exile formed by countries occupied by Germany. Denmark broke diplomatic relations with the Soviet Union and signed the Anti-Comintern Pact of 1941. In 1941 a Danish military corps, the Frikorps Danmark, was created at the initiative of the SS and the Danish Nazi Party, to fight alongside the Wehrmacht on Germany's Eastern Front. The government's statement on the matter was widely interpreted as a sanctioning of the corps. Frikorps Danmark was open to members of the Danish Royal Army and those who had completed their service within the last ten years. Between 4,000 and 10,000 Danish citizens joined the Frikorps Danmark, including 77 officers of the Royal Danish Army. An estimated 3,900 of these soldiers died fighting for Germany during the Second World War.

Denmark transferred six torpedo boats to Germany in 1941, although the bulk of its navy remained under Danish command until the declaration of martial law in 1943. Denmark supplied agricultural and industrial products to Germany as well as loans for armaments and fortifications. The German presence in Denmark, including the construction of the Danish part of the Atlantic Wall fortifications, was paid for from an account in Denmark's central bank, Nationalbanken. The Danish government had been promised that these costs would be repaid, but this never happened. The construction of the Atlantic Wall fortifications in Jutland cost 5 billion Danish kroner.
The Danish protectorate government lasted until 29 August 1943, when the cabinet resigned following a declaration of martial law by the occupying German military authorities. From then on, Denmark effectively sided with the Allies. Germany declared war on Denmark and attacked the Danish military bases, and 13 Danish soldiers died in the fighting. The Danish navy scuttled 32 of its larger ships to prevent their use by Germany. Germany seized 14 larger and 50 smaller vessels, and later raised and refitted 15 of the sunken vessels. During the scuttling of the Danish fleet, a number of vessels attempted an escape to Swedish waters, and 13 vessels succeeded, four of which were larger ships. By the autumn of 1944, these ships officially formed a Danish naval flotilla in exile. In 1943 Swedish authorities allowed 500 Danish soldiers in Sweden to train as police troops. By the autumn of 1944, Sweden raised this number to 4,800 and recognized the entire unit as a Danish military brigade in exile. Danish collaboration continued on an administrative level, with the Danish bureaucracy functioning under German command. Active resistance to the German occupation among the populace, virtually nonexistent before 1943, increased after the declaration of martial law. The intelligence operations of the Danish resistance were described as "second to none" by Field Marshal Bernard Law Montgomery after the liberation of Denmark.

France (Vichy government)

The invading German army entered Paris on June 14, 1940, during the Battle of France. Marshal Philippe Pétain became the last Prime Minister of the French Third Republic on 16 June 1940. He sued for peace with Germany, and on 22 June 1940 his government concluded an armistice with Hitler. Under the terms of the agreement, Germany occupied two-thirds of France, including Paris. Pétain was permitted to keep an "armistice army" of 100,000 men within the unoccupied southern zone. This number included neither the army based in the French colonial empire nor the French fleet. In French North Africa and French Equatorial Africa, the Vichy French were permitted to maintain 127,000 men under arms after the colony of Gabon defected to the Free French. The French also maintained substantial garrisons at the French-mandated territory of Syria and Lebanon, the French colony of Madagascar, and in Djibouti.

After the armistice, relations between the Vichy French and the British quickly deteriorated. Fearful that the powerful French fleet might fall into German hands, the British launched several naval attacks, the most notable of which was against the Algerian harbour of Mers el-Kebir on 3 July 1940. Though Churchill defended his controversial decision to attack the French Fleet, the French people were less accepting. German propaganda trumpeted these attacks as an absolute betrayal of the French people by their former allies. France broke relations with the United Kingdom and considered declaring war.

On 10 July 1940, Pétain was given emergency "full powers" by a majority vote of the French National Assembly. The following day, approval of the new constitution by the Assembly effectively created the French State (l'État Français), replacing the French Republic with what became unofficially known as Vichy France, named for the resort town of Vichy, where Pétain maintained his seat of government. The new government continued to be recognised as the lawful government of France by the United States until 1942. Racial laws were introduced in France and its colonies, and many French Jews were deported to Germany.
Albert Lebrun, the last President of the Republic, did not resign from the presidential office when he moved to Vizille on 10 July 1940. By 25 April 1945, during Pétain's trial, Lebrun argued that he had thought he would be able to return to power after the fall of Germany, since he had not resigned.

In September 1940, Vichy France allowed Japan to occupy French Indochina, a federation of French colonial possessions and protectorates roughly encompassing the territory of modern-day Vietnam, Laos, and Cambodia. The Vichy regime continued to administer the colony under Japanese military occupation. French Indochina was the base for the Japanese invasions of Thailand, Malaya, and Borneo. In 1945, under Japanese sponsorship, the Empire of Vietnam and the Kingdom of Cambodia were proclaimed as Japanese puppet states.

French General Charles de Gaulle headquartered his Free French movement in London in a largely unsuccessful effort to win over the French colonial empire. On 26 September 1940, de Gaulle led an attack by Allied forces on the Vichy port of Dakar in French West Africa. Forces loyal to Pétain fired on de Gaulle and repulsed the attack after two days of heavy fighting. Public opinion in Vichy France was further outraged, and Vichy France drew closer to Germany. Vichy France assisted Iraq in the Anglo–Iraqi War of 1941, allowing Germany and Italy to utilize air bases in the French mandate of Syria to support the Iraqi revolt against the British. Allied forces responded by attacking Syria and Lebanon in 1941. In 1942 Allied forces attacked the French colony of Madagascar.

There were considerable anti-communist movements in France, and as a result volunteers joined the German forces in their war against the Soviet Union. Almost 7,000 volunteers joined the anti-communist Légion des Volontaires Français (LVF) from 1941 to 1944, and some 7,500 formed the Division Charlemagne, a Waffen-SS unit, from 1944 to 1945. Both the LVF and the Division Charlemagne fought on the Eastern Front. Hitler never accepted that France could become a full military partner, and constantly prevented the buildup of Vichy's military strength.

Vichy's collaboration with Germany was industrial as well as political, with French factories providing many vehicles to the German armed forces. In November 1942 Vichy French troops briefly but fiercely resisted the landing of Allied troops in French North Africa, but were unable to prevail. Admiral François Darlan negotiated a local ceasefire with the Allies. In response to the landings and Vichy's inability to defend itself, German troops occupied southern France and Tunisia, a French protectorate that formed part of French North Africa. The rump French army in mainland France was disbanded by the Germans. The Bey of Tunis formed a government friendly to the Germans.

In mid-1943, former Vichy authorities in North Africa came to an agreement with the Free French and set up a temporary French government in Algiers, known as the French Committee of National Liberation (Comité Français de Libération Nationale, CFLN), initially led by Darlan. After Darlan's assassination, de Gaulle emerged as the French leader. The CFLN raised more troops and re-organized, re-trained and re-equipped the French military under Allied supervision. Though deprived of armed forces, the Vichy government continued to function in mainland France until the summer of 1944, but it had lost most of its territorial sovereignty and military assets, with the exception of the forces stationed in French Indochina.
In 1943 the Vichy government founded the Milice, a paramilitary force which assisted the Germans in rounding up opponents and Jews, as well as fighting the French Resistance.

Soviet Union

Relations between the Soviet Union and the major Axis powers were generally hostile before 1938. In the Spanish Civil War, the Soviet Union gave military aid to the Second Spanish Republic, against Spanish Nationalist forces, which were assisted by Germany and Italy. However, the Nationalist forces were victorious. The Soviets suffered another political defeat when their ally Czechoslovakia was partitioned and partially annexed by Germany and Hungary via the Munich Agreement. In 1938 and 1939, the USSR fought and defeated Japan in two separate border wars, at Lake Khasan and Khalkhin Gol, the latter being a major Soviet victory.

In 1939 the Soviet Union considered forming an alliance with either Britain and France or with Germany. The Molotov-Ribbentrop Pact of August 1939 between the Soviet Union and Germany included a secret protocol whereby the independent countries of Finland, Estonia, Latvia, Lithuania, Poland, and Romania were divided into spheres of interest of the parties. The Soviet Union had been forced to cede Western Belarus and Western Ukraine to Poland after losing the Soviet-Polish War of 1919–1921, and it sought to regain those territories.

On 1 September, barely a week after the pact had been signed, Germany invaded Poland. The Soviet Union invaded Poland from the east on 17 September and on 28 September signed a secret treaty with Nazi Germany to arrange coordination of the fight against Polish resistance. The Soviets targeted the intelligentsia, entrepreneurs, and officers, committing a string of atrocities that culminated in the Katyn massacre and mass relocation to Siberian concentration camps (Gulags). Soon after that, the Soviet Union occupied the Baltic countries of Estonia, Latvia, and Lithuania, and annexed Bessarabia and Northern Bukovina from Romania. The Soviet Union attacked Finland on 30 November 1939, which started the Winter War. Finnish defences prevented an all-out invasion, resulting in an interim peace, but Finland was forced to cede strategically important border areas near Leningrad.

The Soviet Union supported Germany in the war effort against Western Europe through the 1939 German-Soviet Commercial Agreement and the 1940 German-Soviet Commercial Agreement, with exports of raw materials (phosphates, chromium and iron ore, mineral oil, grain, cotton, and rubber). These and other export goods transported through Soviet and occupied Polish territories allowed Germany to circumvent the British naval blockade. In October and November 1940, German-Soviet talks about the Soviet Union potentially joining the Axis took place in Berlin. Joseph Stalin personally countered with a separate proposal in a letter later that November that contained several secret protocols, including that "the area south of Batum and Baku in the general direction of the Persian Gulf is recognized as the center of aspirations of the Soviet Union", referring to an area approximating present-day Iraq and Iran, and a Soviet claim to Bulgaria. Hitler never responded to Stalin's letter. Shortly thereafter, Hitler issued a secret directive on the eventual invasion of the Soviet Union. Germany then revived its Anti-Comintern Pact, enlisting many European and Asian countries in opposition to the Soviet Union.
The Soviet Union and Japan remained neutral towards each other for most of the war under the Soviet-Japanese Neutrality Pact. The Soviet Union ended the Soviet-Japanese Neutrality Pact by invading Manchukuo on 8 August 1945, due to agreements reached at the Yalta Conference with Roosevelt and Churchill.

Spain

Caudillo Francisco Franco's Spanish State gave moral, economic, and military assistance to the Axis powers, while nominally maintaining neutrality. Franco described Spain as a member of the Axis and signed the Anti-Comintern Pact of 1941 with Hitler and Mussolini. Members of the ruling Falange party in Spain held irredentist designs on Gibraltar. Falangists also supported Spanish colonial acquisition of Tangier, French Morocco and northwestern French Algeria. Spain also held ambitions on former Spanish colonies in Latin America. In June 1940 the Spanish government approached Germany to propose an alliance in exchange for Germany recognizing Spain's territorial aims: the annexation of the Oran province of Algeria, the incorporation of all Morocco, the extension of Spanish Sahara southward to the twentieth parallel, and the incorporation of French Cameroons into Spanish Guinea. In 1940 Spain invaded and occupied the Tangier International Zone, maintaining its occupation until 1945. The occupation caused a dispute between Britain and Spain in November 1940; Spain agreed to protect British rights in the area and promised not to fortify the area. The Spanish government secretly held expansionist plans towards Portugal that it made known to the German government. In a communiqué with Germany on 26 May 1942, Franco declared that Portugal should be annexed into Spain.

Franco won the Spanish Civil War with the help of Nazi Germany and Fascist Italy, which were both eager to establish another fascist state in Europe. Spain owed Germany over $212 million for supplies of matériel during the Spanish Civil War, and Italian combat troops had actually fought in Spain on the side of Franco's Nationalists. From 1940 to 1941, Franco had endorsed a Latin Bloc of Italy, Vichy France, Spain, and Portugal, with support from the Vatican, in order to balance those countries' power against that of Germany. Franco discussed the Latin Bloc alliance with Pétain of Vichy France in Montpellier, France in 1940, and with Mussolini in Bordighera, Italy. When Germany invaded the Soviet Union in 1941, Franco immediately offered to form a unit of military volunteers to join the invasion. This was accepted by Hitler and, within two weeks, there were more than enough volunteers to form a division – the Blue Division (División Azul) under General Agustín Muñoz Grandes.

The possibility of Spanish intervention in World War II was of concern to the United States, which investigated the activities of Spain's ruling Falange party in Latin America, especially Puerto Rico, where pro-Falange and pro-Franco sentiment was high, even amongst the ruling upper classes. The Falangists promoted the idea of supporting Spain's former colonies in fighting against American domination. Prior to the outbreak of war, support for Franco and the Falange was high in the Philippines. The Falange Exterior, the international department of the Falange, collaborated with Japanese forces against US forces in the Philippines.

Sweden

The official policy of Sweden before, during, and after World War II was neutrality. It had held this policy for over a century, since the end of the Napoleonic Wars.
However, Swedish neutrality during World War II has been much debated and challenged. In contrast to many other neutral countries, Sweden was not directly attacked during the war. It was subject to British and Nazi German naval blockades, which led to problems with the supply of food and fuels. From spring 1940 to summer 1941 Sweden and Finland were surrounded by Nazi Germany and the Soviet Union. This led to difficulties with maintaining the rights and duties of neutral states under the Hague Convention. Sweden violated these obligations, as German troops were allowed to travel through Swedish territory between July 1940 and August 1943. Although it was allowed by the Hague Convention, Sweden has also been criticized for exporting iron ore to Nazi Germany via the Baltic and the Norwegian port of Narvik. German dependence on Swedish iron ore shipments was the primary reason for Great Britain to launch Operation Wilfred and, together with France, the Norwegian Campaign in early April 1940. By early June 1940, the Norwegian Campaign stood as a failure for the Allies. Nazi Germany could obtain the Swedish iron ore supply it needed for war production, despite the British naval blockade, by forcefully securing access to Norwegian ports.

Yugoslavia

On 25 March 1941, fearing that Yugoslavia would be invaded otherwise, Prince Paul signed the Tripartite Pact with significant reservations. Unlike other Axis powers, Yugoslavia was not obligated to provide military assistance, nor to provide its territory for the Axis to move military forces during the war. Yugoslavia's inclusion in the Axis was not openly welcomed; Italy did not desire Yugoslavia to be a partner in the Axis alliance because Italy had territorial claims on Yugoslavia. Germany, on the other hand, initially wanted Yugoslavia to participate in Germany's then-planned Operation Marita in Greece by providing military access for German forces to travel from Germany through Yugoslavia to Greece.

Two days after signing the alliance in 1941, after demonstrations in the streets of Belgrade, Prince Paul was removed from office by a coup d'état. Seventeen-year-old Prince Peter was proclaimed to be of age and was declared king, though he was not crowned nor anointed (a custom of the Serbian Orthodox Church). The new Yugoslav government under King Peter II, still fearful of invasion, stated that it would remain bound by the Tripartite Pact. Hitler, however, suspected that the British were behind the coup against Prince Paul and vowed to invade the country. The German invasion began on 6 April 1941. The Royal Yugoslav Army was thoroughly defeated in less than two weeks, and an unconditional surrender was signed in Belgrade on 17 April. King Peter II and much of the Yugoslav government had left the country because they did not want to cooperate with the Axis.

While Yugoslavia was no longer capable of being a member of the Axis, several Axis-aligned puppet states emerged after the kingdom was dissolved. Local governments were set up in Serbia, Croatia, and Montenegro. The remainder of Yugoslavia was divided among the other Axis powers. Germany annexed parts of the Drava Banovina. Italy annexed south-western Drava Banovina, coastal parts of Croatia (Dalmatia and the islands), and attached Kosovo to Albania (occupied since 1939). Hungary annexed several border territories of Vojvodina and Baranja. Bulgaria annexed Macedonia and parts of southern Serbia.
German, Japanese and Italian World War II cooperation

German-Japanese Axis-cooperation

Germany's and Italy's declaration of war against the United States

On 7 December 1941, Japan attacked the naval bases in Pearl Harbor, Hawaii. According to the stipulation of the Tripartite Pact, Nazi Germany was required to come to the defense of her allies only if they were attacked. Since Japan had made the first move, Germany and Italy were not obliged to aid her until the United States counterattacked. Hitler ordered the Reichstag to formally declare war on the United States. Italy also declared war. Hitler made a speech in the Reichstag on 11 December, saying that:

The fact that the Japanese Government, which has been negotiating for years with this man (President Roosevelt), has at last become tired of being mocked by him in such an unworthy way, fills us all, the German people, and all other decent people in the world, with deep satisfaction ... Germany and Italy have been finally compelled, in view of this, and in loyalty to the Tri-Partite Pact, to carry on the struggle against the U. S. A. and England jointly and side by side with Japan for the defense and thus for the maintenance of the liberty and independence of their nations and empires ... As a consequence of the further extension of President Roosevelt's policy, which is aimed at unrestricted world domination and dictatorship, the U. S. A. together with England have not hesitated from using any means to dispute the rights of the German, Italian and Japanese nations to the basis of their natural existence ... Not only because we are the ally of Japan, but also because Germany and Italy have enough insight and strength to comprehend that, in these historic times, the existence or non-existence of the nations, is being decided perhaps forever.

Historian Ian Kershaw suggests that this declaration of war against the United States was one of the most disastrous mistakes made by the Axis powers, as it allowed the United States to join the United Kingdom and the Soviet Union in war against Germany without any limitation. Americans played a key role in the strategic bombardment of Germany and the invasion of the continent, ending German domination in Western Europe. The Germans were aware that the Americans had drawn up a series of war plans based on a plethora of scenarios, and expected war with the United States no later than 1943.

"You gave the right declaration of war. This method is the only proper one. Japan pursued it formerly and it corresponds with his own system, that is, to negotiate as long as possible. But if one sees that the other is interested only in putting one off, in shaming and humiliating one, and is not willing to come to an agreement, then one should strike as hard as possible, and not waste time declaring war."

See also

- Axis leaders of World War II
- Axis of evil
- Axis power negotiations on the division of Asia during World War II
- Axis victory in World War II
- Expansion operations and planning of the Axis Powers
- Foreign relations of the Axis of World War II
- Greater Germanic Reich
- Imperial Italy
- Greater Japanese Empire
- Hakkō ichiu
- List of pro-Axis leaders and governments or direct control in occupied territories
- New Order (Nazism)
- Participants in World War II
- Zweites Buch
Prime number theorem

In number theory, the prime number theorem (PNT) describes the asymptotic distribution of the prime numbers. The prime number theorem gives a general description of how the primes are distributed amongst the positive integers. It formalizes the intuitive idea that primes become less common as they become larger. Informally speaking, the prime number theorem states that if a random integer is selected in the range of zero to some large integer N, the probability that the selected integer is prime is about 1 / ln(N), where ln(N) is the natural logarithm of N. Consequently, a random integer with at most 2n digits (for large enough n) is about half as likely to be prime as a random integer with at most n digits. For example, among the positive integers of at most 1000 digits, about one in 2300 is prime (ln 10^1000 ≈ 2302.6), whereas among positive integers of at most 2000 digits, about one in 4600 is prime (ln 10^2000 ≈ 4605.2). In other words, the average gap between consecutive prime numbers among the first N integers is roughly ln(N).

Statement of the theorem

Let π(x) be the prime-counting function that gives the number of primes less than or equal to x, for any real number x. For example, π(10) = 4 because there are four prime numbers (2, 3, 5 and 7) less than or equal to 10. The prime number theorem then states that x / ln(x) is a good approximation to π(x), in the sense that the limit of the quotient of the two functions π(x) and x / ln(x) as x approaches infinity is 1:

$$\lim_{x\to\infty} \frac{\pi(x)}{x/\ln(x)} = 1,$$

known as the asymptotic law of distribution of prime numbers. Using asymptotic notation this result can be restated as

$$\pi(x) \sim \frac{x}{\ln x}.$$

This notation (and the theorem) does not say anything about the limit of the difference of the two functions as x approaches infinity. Instead, the theorem states that x/ln(x) approximates π(x) in the sense that the relative error of this approximation approaches 0 as x approaches infinity. The prime number theorem is equivalent to the statement that the nth prime number $p_n$ is approximately equal to n ln(n), again with the relative error of this approximation approaching 0 as n approaches infinity.

History of the asymptotic law of distribution of prime numbers and its proof

Based on the tables by Anton Felkel and Jurij Vega, Adrien-Marie Legendre conjectured in 1797 or 1798 that π(a) is approximated by the function a/(A ln(a) + B), where A and B are unspecified constants. In the second edition of his book on number theory (1808) he then made a more precise conjecture, with A = 1 and B = −1.08366. Carl Friedrich Gauss considered the same question: "Im Jahr 1792 oder 1793", according to his own recollection nearly sixty years later in a letter to Encke (1849), he wrote in his logarithm table (he was then 15 or 16) the short note "Primzahlen unter $a$ ($= \infty$) $\frac{a}{\ln a}$". But Gauss never published this conjecture. In 1838 Peter Gustav Lejeune Dirichlet came up with his own approximating function, the logarithmic integral li(x) (under the slightly different form of a series, which he communicated to Gauss). Both Legendre's and Dirichlet's formulas imply the same conjectured asymptotic equivalence of π(x) and x / ln(x) stated above, although it turned out that Dirichlet's approximation is considerably better if one considers the differences instead of quotients. In two papers from 1848 and 1850, the Russian mathematician Pafnuty L'vovich Chebyshev attempted to prove the asymptotic law of distribution of prime numbers.
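These early approximations are easy to examine numerically. The following Python sketch is purely illustrative: the sieve limit of 10^6 and the sample points are arbitrary choices, and the helper names (primes_up_to, prime_count) are ad hoc rather than part of any standard library. It tabulates π(x) against x/ln(x) and against Legendre's x/(ln(x) − 1.08366), and checks that the n-th prime is roughly n ln(n).

```python
from bisect import bisect_right
from math import log

def primes_up_to(n):
    """Return all primes <= n using a simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i, is_prime in enumerate(sieve) if is_prime]

LIMIT = 10 ** 6                  # arbitrary bound for this illustration
PRIMES = primes_up_to(LIMIT)

def prime_count(x):
    """pi(x): the number of primes <= x, read off the sieved list."""
    return bisect_right(PRIMES, x)

for x in (10 ** 3, 10 ** 4, 10 ** 5, 10 ** 6):
    pnt = x / log(x)                      # the PNT approximation x / ln(x)
    legendre = x / (log(x) - 1.08366)     # Legendre's 1808 approximation
    print(f"x = {x:>7}  pi(x) = {prime_count(x):>6}  x/ln x = {pnt:9.1f}  "
          f"Legendre = {legendre:9.1f}  pi(x)/(x/ln x) = {prime_count(x) / pnt:.4f}")

# The n-th prime p_n is roughly n ln(n); the relative error shrinks as n grows.
for n in (10 ** 3, 10 ** 4, 5 * 10 ** 4):
    p_n = PRIMES[n - 1]
    print(f"n = {n:>6}  p_n = {p_n:>7}  n ln n = {n * log(n):9.1f}  "
          f"ratio = {p_n / (n * log(n)):.4f}")
```

The printed ratios approach 1 only slowly, reflecting the fact that the theorem is a statement about the limit and says nothing about the speed of convergence.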
Chebyshev's work is notable for the use of the zeta function ζ(s) (for real values of the argument "s", as in works of Leonhard Euler as early as 1737), predating Riemann's celebrated memoir of 1859, and he succeeded in proving a slightly weaker form of the asymptotic law, namely, that if the limit of π(x)/(x/ln(x)) as x goes to infinity exists at all, then it is necessarily equal to one. He was able to prove unconditionally that this ratio is bounded above and below by two explicitly given constants near 1 for all x. Although Chebyshev's paper did not prove the Prime Number Theorem, his estimates for π(x) were strong enough for him to prove Bertrand's postulate that there exists a prime number between n and 2n for any integer n ≥ 2.

Without doubt, the single most significant paper concerning the distribution of prime numbers was Riemann's 1859 memoir On the Number of Primes Less Than a Given Magnitude, the only paper he ever wrote on the subject. Riemann introduced revolutionary ideas into the subject, the chief of them being that the distribution of prime numbers is intimately connected with the zeros of the analytically extended Riemann zeta function of a complex variable. In particular, it is in this paper of Riemann that the idea to apply methods of complex analysis to the study of the real function π(x) originates. Extending these deep ideas of Riemann, two proofs of the asymptotic law of the distribution of prime numbers were obtained independently by Jacques Hadamard and Charles Jean de la Vallée-Poussin and appeared in the same year (1896). Both proofs used methods from complex analysis, establishing as a main step of the proof that the Riemann zeta function ζ(s) is non-zero for all complex values of the variable s that have the form s = 1 + it with t > 0.

During the 20th century, the theorem of Hadamard and de la Vallée-Poussin also became known as the Prime Number Theorem. Several different proofs of it were found, including the "elementary" proofs of Atle Selberg and Paul Erdős (1949). The original proofs of Hadamard and de la Vallée-Poussin are long and elaborate; later proofs introduced various simplifications through the use of Tauberian theorems but remained difficult to digest. A surprisingly short proof was discovered in 1980 by the American mathematician Donald J. Newman. Newman's proof is arguably the simplest known proof of the theorem, although it is non-elementary in the sense that it uses Cauchy's integral theorem from complex analysis.

Proof methodology

In a lecture on prime numbers for a general audience, Fields medalist Terence Tao described one approach to proving the prime number theorem in poetic terms: listening to the "music" of the primes. We start with a "sound wave" that is "noisy" at the prime numbers and silent at other numbers; this is the von Mangoldt function. Then we analyze its notes or frequencies by subjecting it to a process akin to the Fourier transform; this is the Mellin transform. Then we prove, and this is the hard part, that certain "notes" cannot occur in this music. This exclusion of certain notes leads to the statement of the prime number theorem. According to Tao, this proof yields much deeper insights into the distribution of the primes than the "elementary" proofs discussed below.

Proof sketch

Here is a sketch of the proof referred to in Tao's lecture mentioned above. Like most proofs of the PNT, it starts out by reformulating the problem in terms of a less intuitive, but better-behaved, prime-counting function.
The idea is to count the primes (or a related set such as the set of prime powers) with weights to arrive at a function with smoother asymptotic behavior. The most common such generalized counting function is the Chebyshev function ψ(x), defined by

ψ(x) = Σ_{p^k ≤ x} ln(p).

Here the summation is over all prime powers up to x. This is sometimes written as ψ(x) = Σ_{n ≤ x} Λ(n), where Λ(n) is the von Mangoldt function, namely

Λ(n) = ln(p) if n = p^k for some prime p and integer k ≥ 1, and Λ(n) = 0 otherwise.

It is now relatively easy to check that the PNT is equivalent to the claim that ψ(x) ~ x. Indeed, this follows from the easy estimates

ψ(x) = Σ_{p ≤ x} ⌊ln(x)/ln(p)⌋ ln(p) ≤ Σ_{p ≤ x} ln(x) = π(x) ln(x)

and (using big O notation) for any ε > 0,

ψ(x) ≥ Σ_{x^(1−ε) ≤ p ≤ x} ln(p) ≥ Σ_{x^(1−ε) ≤ p ≤ x} (1 − ε) ln(x) = (1 − ε)(π(x) + O(x^(1−ε))) ln(x).

The next step is to find a useful representation for ψ(x). Let ζ(s) be the Riemann zeta function. It can be shown that ζ(s) is related to the von Mangoldt function Λ(n), and hence to ψ(x), via the relation

−ζ′(s)/ζ(s) = Σ_{n ≥ 1} Λ(n) n^(−s)   for Re(s) > 1.

A delicate analysis of this equation and related properties of the zeta function, using the Mellin transform and Perron's formula, shows that for non-integer x the equation

ψ(x) = x − Σ_ρ x^ρ/ρ − ln(2π)

holds, where the sum is over all zeros (trivial and non-trivial) of the zeta function. This striking formula is one of the so-called explicit formulas of number theory, and is already suggestive of the result we wish to prove, since the term x (claimed to be the correct asymptotic order of ψ(x)) appears on the right-hand side, followed by (presumably) lower-order asymptotic terms.

The next step in the proof involves a study of the zeros of the zeta function. The trivial zeros −2, −4, −6, −8, ... can be handled separately: their contribution to the explicit formula is

Σ_{n ≥ 1} 1/(2n x^(2n)) = −(1/2) ln(1 − 1/x^2),

which vanishes for large x. The nontrivial zeros, namely those in the critical strip 0 ≤ Re(s) ≤ 1, can potentially be of an asymptotic order comparable to the main term x if Re(ρ) = 1, so we need to show that all zeros have real part strictly less than 1. To do this, we take for granted that ζ(s) is meromorphic in the half-plane Re(s) > 0, and is analytic there except for a simple pole at s = 1, and that there is a product formula

ζ(s) = Π_p 1/(1 − p^(−s))   for Re(s) > 1.

This product formula follows from the existence of unique prime factorization of integers, and shows that ζ(s) is never zero in this region, so that its logarithm is defined there and

ln ζ(s) = −Σ_p ln(1 − p^(−s)) = Σ_{p, n} p^(−ns)/n.

Write s = σ + it; then

|ζ(σ + it)| = exp( Σ_{p, n} cos(n t ln(p)) / (n p^(nσ)) ).

Now observe the identity

3 + 4 cos(φ) + cos(2φ) = 2(1 + cos(φ))^2 ≥ 0,

so that

ζ(σ)^3 |ζ(σ + it)|^4 |ζ(σ + 2it)| ≥ 1   for all σ > 1.

Suppose now that ζ(1 + it) = 0. Certainly t is not zero, since ζ(s) has a simple pole at s = 1. Suppose that σ > 1 and let σ tend to 1 from above. Since ζ(s) has a simple pole at s = 1 and ζ(σ + 2it) stays analytic, the left hand side in the previous inequality tends to 0, a contradiction.

Finally, we can conclude that the PNT is "morally" true. To rigorously complete the proof there are still serious technicalities to overcome, due to the fact that the summation over zeta zeros in the explicit formula for ψ(x) does not converge absolutely but only conditionally and in a "principal value" sense. There are several ways around this problem but all of them require rather delicate complex-analytic estimates that are beyond the scope of this article. Edwards's book provides the details.

Prime-counting function in terms of the logarithmic integral

In a handwritten note on a reprint of his 1838 paper "Sur l'usage des séries infinies dans la théorie des nombres", which he mailed to Carl Friedrich Gauss, Peter Gustav Lejeune Dirichlet conjectured (under a slightly different form appealing to a series rather than an integral) that an even better approximation to π(x) is given by the offset logarithmic integral function Li(x), defined by

Li(x) = ∫_2^x dt/ln(t) = li(x) − li(2).

Indeed, this integral is strongly suggestive of the notion that the 'density' of primes around t should be 1/ln(t). This function is related to the logarithm by the asymptotic expansion

Li(x) ~ (x/ln(x)) Σ_{k ≥ 0} k!/(ln(x))^k = x/ln(x) + x/(ln(x))^2 + 2x/(ln(x))^3 + ⋯

So, the prime number theorem can also be written as π(x) ~ Li(x). In fact, it follows from the proof of Hadamard and de la Vallée Poussin that

π(x) = Li(x) + O(x e^(−a √ln(x)))

for some positive constant a, where O(…) is the big O notation.
This has been improved to Because of the connection between the Riemann zeta function and π(x), the Riemann hypothesis has considerable importance in number theory: if established, it would yield a far better estimate of the error involved in the prime number theorem than is available today. More specifically, Helge von Koch showed in 1901 that, if and only if the Riemann hypothesis is true, the error term in the above relation can be improved to for all x ≥ 2657. He also derived a similar bound for the Chebyshev prime-counting function ψ: for all x ≥ 73.2. This latter bound has been shown to express a variance to mean power law (when regarded as a random function over the integers), 1/f noise and to also correspond to the Tweedie compound Poisson distribution. Parenthetically, the Tweedie distributions represent a family of scale invariant distributions that serve as foci of convergence for a generalization of the central limit theorem. The logarithmic integral Li(x) is larger than π(x) for "small" values of x. This is because it is (in some sense) counting not primes, but prime powers, where a power pn of a prime p is counted as 1/n of a prime. This suggests that Li(x) should usually be larger than π(x) by roughly Li(x1/2)/2, and in particular should usually be larger than π(x). However, in 1914, J. E. Littlewood proved that this is not always the case. The first value of x where π(x) exceeds Li(x) is probably around x = 10316; see the article on Skewes' number for more details. Elementary proofs In the first half of the twentieth century, some mathematicians (notably G. H. Hardy) believed that there exists a hierarchy of proof methods in mathematics depending on what sorts of numbers (integers, reals, complex) a proof requires, and that the prime number theorem (PNT) is a "deep" theorem by virtue of requiring complex analysis. This belief was somewhat shaken by a proof of the PNT based on Wiener's tauberian theorem, though this could be set aside if Wiener's theorem were deemed to have a "depth" equivalent to that of complex variable methods. There is no rigorous and widely accepted definition of the notion of elementary proof in number theory. One definition is "a proof that can be carried out in first order Peano arithmetic." There are number-theoretic statements (for example, the Paris–Harrington theorem) provable using second order but not first order methods, but such theorems are rare to date. In March 1948, Atle Selberg established, by elementary means, the asymptotic formula for primes . By July of that year, Selberg and Paul Erdős had each obtained elementary proofs of the PNT, both using Selberg's asymptotic formula as a starting point. These proofs effectively laid to rest the notion that the PNT was "deep," and showed that technically "elementary" methods (in other words Peano arithmetic) were more powerful than had been believed to be the case. In 1994, Charalambos Cornaros and Costas Dimitracopoulos proved the PNT using only , a formal system far weaker than Peano arithmetic. On the history of the elementary proofs of the PNT, including the Erdős–Selberg priority dispute, see Dorian Goldfeld. Computer verifications In 2005, Avigad et al. employed the Isabelle theorem prover to devise a computer-verified variant of the Erdős–Selberg proof of the PNT. This was the first machine-verified proof of the PNT. 
Avigad chose to formalize the Erdős–Selberg proof rather than an analytic one because while Isabelle's library at the time could implement the notions of limit, derivative, and transcendental function, it had almost no theory of integration to speak of (Avigad et al. p. 19). In 2009, John Harrison employed HOL Light to formalize a proof employing complex analysis. By developing the necessary analytic machinery, including the Cauchy integral formula, Harrison was able to formalize “a direct, modern and elegant proof instead of the more involved ‘elementary’ Erdös–Selberg argument.” Prime number theorem for arithmetic progressions Let denote the number of primes in the arithmetic progression a, a + n, a + 2n, a + 3n, … less than x. Lejeune Dirichlet and Legendre conjectured, and Vallée-Poussin proved, that, if a and n are coprime, then where φ(·) is the Euler's totient function. In other words, the primes are distributed evenly among the residue classes [a] modulo n with gcd(a, n) = 1. This can be proved using similar methods used by Newman for his proof of the prime number theorem. The Siegel–Walfisz theorem gives a good estimate for the distribution of primes in residue classes. Prime number race Although we have in particular empirically the primes congruent to 3 are more numerous and are nearly always ahead in this "prime number race"; the first reversal occurs at x = 26,861.:1–2 However Littlewood showed in 1914:2 that there are infinitely many sign changes for the function so the lead in the race switches back and forth infinitely many times. The phenomenon that π4,3(x) is ahead most of the time is called Chebyshev's bias. The prime number race generalizes to other moduli and is the subject of much research; Pál Turán asked whether it is always the case that π(x;a,c) and π(x;b,c) change places when a and b are coprime to c. Granville and Martin give a thorough exposition and survey. Bounds on the prime-counting function The prime number theorem is an asymptotic result. Hence, it cannot be used to bound π(x). However, some bounds on π(x) are known, for instance Pierre Dusart's The first inequality holds for all x ≥ 599 and the second one for x ≥ 355991. A weaker but sometimes useful bound is for x ≥ 55. In Dusart's thesis there are stronger versions of this type of inequality that are valid for larger x. The proof by de la Vallée-Poussin implies the following. For every ε > 0, there is an S such that for all x > S, Approximations for the nth prime number As a consequence of the prime number theorem, one gets an asymptotic expression for the nth prime number, denoted by pn: A better approximation is Table of π(x), x / ln x, and li(x) The table compares exact values of π(x) to the two approximations x / ln x and li(x). The last column, x / π(x), is the average prime gap below x. 
x π(x) π(x) − x / ln x π(x) / (x / ln x) li(x) − π(x) x / π(x) 10 4 −0.3 0.921 2.2 2.500 102 25 3.3 1.151 5.1 4.000 103 168 23 1.161 10 5.952 104 1,229 143 1.132 17 8.137 105 9,592 906 1.104 38 10.425 106 78,498 6,116 1.084 130 12.740 107 664,579 44,158 1.071 339 15.047 108 5,761,455 332,774 1.061 754 17.357 109 50,847,534 2,592,592 1.054 1,701 19.667 1010 455,052,511 20,758,029 1.048 3,104 21.975 1011 4,118,054,813 169,923,159 1.043 11,588 24.283 1012 37,607,912,018 1,416,705,193 1.039 38,263 26.590 1013 346,065,536,839 11,992,858,452 1.034 108,971 28.896 1014 3,204,941,750,802 102,838,308,636 1.033 314,890 31.202 1015 29,844,570,422,669 891,604,962,452 1.031 1,052,619 33.507 1016 279,238,341,033,925 7,804,289,844,393 1.029 3,214,632 35.812 1017 2,623,557,157,654,233 68,883,734,693,281 1.027 7,956,589 38.116 1018 24,739,954,287,740,860 612,483,070,893,536 1.025 21,949,555 40.420 1019 234,057,667,276,344,607 5,481,624,169,369,960 1.024 99,877,775 42.725 1020 2,220,819,602,560,918,840 49,347,193,044,659,701 1.023 222,744,644 45.028 1021 21,127,269,486,018,731,928 446,579,871,578,168,707 1.022 597,394,254 47.332 1022 201,467,286,689,315,906,290 4,060,704,006,019,620,994 1.021 1,932,355,208 49.636 1023 1,925,320,391,606,803,968,923 37,083,513,766,578,631,309 1.020 7,250,186,216 51.939 1024 18,435,599,767,349,200,867,866 339,996,354,713,708,049,069 1.019 17,146,907,277 54.243 OEIS A006880 A057835 A057752 Analogue for irreducible polynomials over a finite field There is an analogue of the prime number theorem that describes the "distribution" of irreducible polynomials over a finite field; the form it takes is strikingly similar to the case of the classical prime number theorem. To state it precisely, let F = GF(q) be the finite field with q elements, for some fixed q, and let Nn be the number of monic irreducible polynomials over F whose degree is equal to n. That is, we are looking at polynomials with coefficients chosen from F, which cannot be written as products of polynomials of smaller degree. In this setting, these polynomials play the role of the prime numbers, since all other monic polynomials are built up of products of them. One can then prove that If we make the substitution x = qn, then the right hand side is just which makes the analogy clearer. Since there are precisely qn monic polynomials of degree n (including the reducible ones), this can be rephrased as follows: if a monic polynomial of degree n is selected randomly, then the probability of it being irreducible is about 1/n. One can even prove an analogue of the Riemann hypothesis, namely that The proofs of these statements are far simpler than in the classical case. It involves a short combinatorial argument, summarised as follows. Every element of the degree n extension of F is a root of some irreducible polynomial whose degree d divides n; by counting these roots in two different ways one establishes that where μ(k) is the Möbius function. (This formula was known to Gauss.) The main term occurs for d = n, and it is not difficult to bound the remaining terms. The "Riemann hypothesis" statement depends on the fact that the largest proper divisor of n can be no larger than n/2. See also - Abstract analytic number theory for information about generalizations of the theorem. - Landau prime ideal theorem for a generalization to prime ideals in algebraic number fields. - Riemann hypothesis - Hoffman, Paul (1998). The Man Who Loved Only Numbers. Hyperion. p. 227. ISBN 0-7868-8406-1. - N. Costa Pereira (August–September 1985). 
"A Short Proof of Chebyshev's Theorem". American Mathematical Monthly 92 (7): 494–495. doi:10.2307/2322510. JSTOR 2322510. - M. Nair (February 1982). "On Chebyshev-Type Inequalities for Primes". American Mathematical Monthly 89 (2): 126–129. doi:10.2307/2320934. JSTOR 2320934. - Ingham, A.E. (1990). The Distribution of Prime Numbers. Cambridge University Press. pp. 2–5. ISBN 0-521-39789-8. - D. J. Newman (1980). "Simple analytic proof of the prime number theorem". American Mathematical Monthly 87 (9): 693–696. doi:10.2307/2321853. JSTOR 2321853. - D. Zagier (1997). "Newman's short proof of the prime number theorem". American Mathematical Monthly 104 (8): 705–708. doi:10.2307/2975232. JSTOR 2975232. - Video and slides of Tao's lecture on primes, UCLA January 2007. - Edwards, Harold M. (2001). Riemann's zeta function. Courier Dover Publications. ISBN 0-486-41740-9. - Helge von Koch (December 1901). "Sur la distribution des nombres premiers". Acta Mathematica 24 (1): 159–182. doi:10.1007/BF02403071. (French) - Schoenfeld, Lowell (1976). "Sharper Bounds for the Chebyshev Functions θ(x) and ψ(x). II". Mathematics of Computation 30 (134): 337–360. doi:10.2307/2005976. JSTOR 2005976.. - Kendal, WS (2013). "Fluctuation scaling and 1/f noise: shared origins from the Tweedie family of statistical distributions". J Basic Appl Phys 2: 40––49. - Jørgensen, B; Martinez, JR & Tsao, M (1994). "Asymptotic behaviour of the variance function". Scandinavian Journal of Statistics 21: 223–243. - D. Goldfeld The elementary proof of the prime number theorem: an historical perspective. - Baas, Nils A.; Skau, Christian F. (2008). "The lord of the numbers, Atle Selberg. On his life and mathematics". Bull. Amer. Math. Soc. 45 (4): 617–649. doi:10.1090/S0273-0979-08-01223-8 - Cornaros, Charalambos; Dimitracopoulos, Costas (1994). "The prime number theorem and fragments of PA". Archive for Mathematical Logic 33 (4): 265–281. doi:10.1007/BF01270626. - Jeremy Avigad, Kevin Donnelly, David Gray, Paul Raff (2005). "A formally verified proof of the prime number theorem". arXiv:cs.AI/0509025 [cs.AI]. - "Formalizing an analytic proof of the Prime Number Theorem". Journal of Automated Reasoning. 2009, volume = 43, pages = 243––261. Unknown parameter - Ivan Soprounov (1998). A short proof of the Prime Number Theorem for arithmetic progressions. - Granville, Andrew; Martin, Greg (January 2006). "Prime Number Races". American Mathematical Monthly 113 (1): 1–33. doi:10.2307/27641834. JSTOR 27641834. - Guy, Richard K. (2004). Unsolved problems in number theory (3rd ed.). Springer-Verlag. A4. ISBN 978-0-387-20860-2. Zbl 1058.11001. - Dusart, Pierre (1998). "Autour de la fonction qui compte le nombre de nombres premiers". PhD Thesis. (French) - Barkley Rosser (January 1941). "Explicit Bounds for Some Functions of Prime Numbers". American Journal of Mathematics 63 (1): 211–232. doi:10.2307/2371291. JSTOR 2371291. - Ernest Cesàro (1894). "Sur une formule empirique de M. Pervouchine". Comptes rendus hebdomadaires des séances de l'Académie des sciences 119: 848–849. (French) - Eric Bach, Jeffrey Shallit (1996). Algorithmic Number Theory 1. MIT Press. p. 233. ISBN 0-262-02405-5. - Pierre Dusart (1999). "The kth prime is greater than k(ln k + ln ln k−1) for k ≥ 2". Mathematics of Computation 68: 411–415. - "Conditional Calculation of pi(1024)". Chris K. Caldwell. Retrieved 2010-08-03. - "Computing π(x) Analytically)". Retrieved Jul. 25, 2012. - Chebolu, Sunil; Ján Mináč (December 2011). 
"Counting Irreducible Polynomials over Finite Fields Using the Inclusion-Exclusion Principle". Mathematics Magazine 84 (5): 369–371. doi:10.4169/math.mag.84.5.369. - Hardy, G. H. & Littlewood, J. E. (1916). "Contributions to the Theory of the Riemann Zeta-Function and the Theory of the Distribution of Primes". Acta Mathematica 41: 119–196. doi:10.1007/BF02422942. - Granville, Andrew (1995). "Harald Cramér and the distribution of prime numbers". Scandinavian Actuarial Journal 1: 12–28. - Table of Primes by Anton Felkel. - Short video visualizing the Prime Number Theorem. - Prime formulas and Prime number theorem at MathWorld. - Prime number theorem, PlanetMath.org. - How Many Primes Are There? and The Gaps between Primes by Chris Caldwell, University of Tennessee at Martin. - Tables of prime-counting functions by Tomás Oliveira e Silva
http://en.wikipedia.org/wiki/Prime_number_theorem
- How Pressure Sensors Work - Sensor Technology - Selection Criteria - Performance Specifications - Mechanical Considerations - Electrical Specifications - Environmental Considerations - Special Requirements How to Select Pressure Sensor Image Credit: GE | Kobold | Measurement Specialties Pressure sensors include all sensors, transducers and elements that produce an electrical signal proportional to pressure or changes in pressure. The device reads the changes in pressure, and then relays this data to recorders or switches. How Pressure Sensors Work Pressure instruments monitor the amount of pressure applied to a part of the process. There are several types of pressure instruments: Sensors-Pressure sensors convert a measured pressure into an electrical output signal. They are typically simple devices that do not include a display or user interface. Elements are the portions of a pressure instrument which are moved or temporarily deformed by the gas or liquid of the system to which the gage is connected. These include the Bourdon tube which is a sealed tube that deflects in response to applied pressure, as well as bellows, capsule elements and diaphragm1 elements. The basic pressure sensing element can be configured as a C-shaped Bourdon tube (A); a helical Bourdon tube (B); flat diaphragm (C); a convoluted diaphragm (D); a capsule (E); or a set of bellows (F). Image Credit: sensorsmag Transducers- Pressure transducers are pressure-sensing devices. It converts an applied pressure into an electrical signal. The output signal is generated by the primary sensing element and the device maintains the natural characteristics of the sensing technology. A transducer is also a sensor but a transducer always converts the non-electric pressure signal into an electrical signal. Therefore, a transducer is always a sensor but a sensor is not always a transducer. In industry the terms are often interchanged. There are several types of transducers including: Thin film sensors have an extremely thin layer of material deposited on a substrate by sputtering, chemical vapor deposition, or other technique. This technology incorporates a compact design with good temperature stability. There are a variety of materials used in thin film technology, such as titanium nitride and polysilicon. These gauges are most suitable for long-term use and harsh measurement conditions. Semiconductor strain gauge There are numerous technologies by which pressure transducers and sensors function. Some of the most widely used technologies include, Piston technology uses a sealed piston/Cylinder to measure changes in pressure. Mechanical deflection uses an elastic or flexible element to mechanically deflect with a change in pressure, for example a diaphragm, Bourdon tube, or bellows. Diaphragm Pressure Sensor. Image Credit: machinedesign.com Piezoelectric pressure sensors measure dynamic and quasi-static pressures. The bi-directional transducers consist of metalized quartz or ceramic materials which have naturally occurring electrical properties. They are capable of converting stress into an electric potential and vice versa. The common modes of operation are charge mode, which generates a high-impedance charge output; and voltage mode, which uses an amplifier to convert the high-impedance charge into a low-impedance output voltage. The sensors can only be used for varying pressures. They are very rugged but require amplification circuitry and are susceptible to shock and vibration. Piezoelectric Pressure Transducer. 
Image Credit: National Instruments MicroElectroMechanical systems (MEMS) are typically micro systems manufactured by silicon surface micromachining for use in very small industrial or biological systems. Vibrating elements (silicon resonance) use a vibrating element technology, such as silicon resonance. Variable capacitance pressure instruments use the capacitance change results from the movement of a diaphragm element to measure pressure. Depending on the type of pressure, the capacitive transducer can be either an absolute, gauge, or differential pressure transducer. The device uses a thin diaphragm as one plate of a capacitor. The applied pressure causes the diaphragm to deflect and the capacitance to change. The deflection of the diaphragm causes a change in capacitance that is detected by a bridge circuit. Design Tip: The electronics for signal conditioning should be located close to the sensing element to prevent errors due to stray capacitance The capacitance of two parallel plates is given by the following equation, µ = dielectric constant of the material between the plates A = area of the plates d = spacing between the plates These pressure transducers are generally very stable, linear and accurate, but are sensitive to high temperatures and are more complicated to setup than most pressure sensors. Capacitive absolute pressure sensors with a vacuum between the plates are ideal for preventing error by keeping the dielectric constant of the material constant. - Strain gauges (strain-sensitive variable resistors) are bonded to parts of the structure that deform as the pressure changes. Four strain gages are typically used in series in a Wheatstone bridge circuit, which is used to make the measurement. When voltage is applied to two opposite corners of the bridge, an electrical output signal is developed proportional to the applied pressure. The output signal is collected at the remaining two corners of the bridge. Strain gauges are rugged, accurate, and stable, they can operate in severe shock and vibration environments as well as in a variety of pressure media. Strain gauge pressure transducers come in several different varieties: the bonded strain gauge, the sputtered strain gauge, and the semiconductor strain gauge. Strain gauge pressure transducer. Image Credit: openticle.com Semiconductor piezoresistive sensors are based on semiconductor technology. The change in resistance is not only because of a change in the length and width (as it is with strain gage) but because of a shift of electrical charges within the resistor. There are four piezoresistors within the diagram area on the sensor connected to an element bridge. When the diaphragm is deflected, two resistors are subjected to tangential stress and two to radial stress. Piezoresistive semiconductor pressure sensors incorporate four piezoresistors in the diaphragm. Image Credit: sensorsmag The output is described by the following equation: Vout/ Vcc = ΔR/R Vcc = supply voltage R = base resistance of the piezoresistor ΔR = change with applied pressure and it typically 2.5% of the full R. These are very sensitive devices. The GlobalSpec SpecSearch database allows industrial buyers to select pressure sensors by performance specifications, mechanical considerations, electrical specifications, environmental considerations, and special requirements. The performance of the sensor is based on several factors intrinsic to the system in which the sensor will be used. 
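Before turning to those selection factors, here is a quick numerical sketch of the two sensing relations quoted above: the parallel-plate capacitance C = εA/d and the piezoresistive bridge relation Vout/Vcc = ΔR/R. Every component value below (diaphragm size, gap, supply voltage, bridge resistance) is a made-up example for illustration, not data for any particular sensor.

```python
# Illustrative numbers for C = epsilon * A / d and Vout / Vcc = dR / R.
# All component values are made-up examples.
import math

EPS_0 = 8.854e-12          # permittivity of free space, F/m

def parallel_plate_capacitance(area_m2, gap_m, relative_permittivity=1.0):
    """C = epsilon * A / d for a parallel-plate capacitor."""
    return EPS_0 * relative_permittivity * area_m2 / gap_m

# A 5 mm diameter diaphragm 25 um from the fixed electrode, vacuum dielectric.
area = math.pi * (2.5e-3) ** 2
c_rest = parallel_plate_capacitance(area, 25e-6)
c_deflected = parallel_plate_capacitance(area, 24e-6)   # 1 um of deflection
print(f"capacitance change: {(c_deflected - c_rest) * 1e12:.3f} pF")

def bridge_output(vcc, r_base, delta_r):
    """Output of an idealized bridge following Vout = Vcc * dR / R."""
    return vcc * delta_r / r_base

# Using the typical figure quoted above: dR of about 2.5% of R at full scale.
print(f"bridge output: {bridge_output(10.0, 3500.0, 0.025 * 3500.0) * 1000:.1f} mV")
```

The point of the sketch is only to show the orders of magnitude involved: fractions of a picofarad for the capacitive element, and a few hundred millivolts of full-scale bridge output that then needs signal conditioning.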
These include maximum pressure, pressure reference, engineering units, accuracy required, and pressure conditions. Static pressure is defined as P = F/A; where P is pressure, F is applied force and A is the area of application. This equation can be used on liquid and gas that is not flowing. Pressure in moving fluids can be calculated using the equation P1 = ρVO2/2; where ρ is fluid density, and VO is the fluid velocity. Impact pressure is the pressure a moving fluid exerts parallel to the flow direction. Dynamic pressure measures more "real-life" applications. Pressure is in all directions in a fluid. Image Credit: schoolforchampions.com Maximum Pressure Range is the maximum allowable pressure at which a system or piece of equipment is designed to operate safely. The extremes of this range should be determined in accordance with the expected pressure range the device must operate within. It is common practice that this value should not exceed 75% of the device's maximum rated range. For example: if the device has a maximum rated range of 100 psi then the working range should not exceed 75 psi. Design Tip: Figure out what the anticipated pressure spikes will be and then pick a transducer rated 25% higher than the highest spike. An additional margin is suggested where "high cycling" may occur. Absolute pressure sensors measure the pressure of a system relative to a perfect vacuum. These sensors incorporate sensing elements which are completely evacuated and sealed; the high pressure port is not present and input pressure is applied through the low port. The measurement is done in pounds per square inch absolute. Differential pressure is measured by reading the difference between the inputs of two or more pressure levels. The sensor must have two separate pressure ports; the higher of the two pressures is applied through the high port and the lower through the low port. It is commonly measured in units of pounds per square inch. An example of a differential pressure sensor is filter monitors; when the filter starts to clog the flow resistance and therefore the pressure drop across the filter will increase. Bidirectional sensors are able to measure positive and negative pressure differences i.e. p1>p2 and p1Unidirectional sensors only operate in the positive range i.e. p1> p2 and the highest pressure has to be applied to the pressure port defined as "high pressure"Gauge sensors are the most common type of pressure sensors. The pressure is measured relative to ambient pressure which is the atmospheric pressure at a given location. The average atmospheric pressure at sea level is 1013.25 mbar but changes in weather and altitude directly influence the output of the pressure sensor. In this device, the input pressure is through the high port and the ambient pressure is applied through the open low port. Vacuum sensors are gauge sensors used to measure the pressure lower than the localized atmospheric pressure. A vacuum is a volume of space that is essentially empty of matter. Vacuum sensors are divided into different ranges of low, high and ultra-high vacuum. Sealed gauged sensors measure pressure relative to one atmosphere at sea level (14.7 PSI) regardless of local atmospheric pressure. The same sensor can be used for all three types of pressure measurement; only the references differ. Image Credit: sensorsmag Pressure is a measure of force per unit area. A variety of units are used depending on the application; a conversion table is below. 
1psi = 51.714 mmHg = 2.0359 in.Hg = 27.680 in.H2O = 6.8946 kPa 1 bar = 14.504 psi 1 atm. = 14.696 psi Accuracy is defined as the difference (error) between the true value and the indicated value expressed as percent of the span. It includes the combined deviations resulting from the method, observer, apparatus and environment. Accuracy is observed in three different areas; static, thermal, and total. Static accuracy is the combined effects of linearity, hysteresis, and repeatability. It is expressed as +/- percentage of full scale output. The static error band is a good measure of the accuracy that can be expected at constant temperature. Linearity is the deviation of a calibration curve from a specified straight line. One way to measure linearity is to use the least squares method, which gives a best fit straight line. The best straight line (BSL) is a line between two parallel lines that enclose all output vs. pressure values on the calibration curve. Hysteresis is the maximum difference in output at any pressure within the specified range, when the value is first approached with increasing and then with decreasing pressure. Temperatures hysteresis is the sensor's ability to give the same output at a given temperature before and after a temperature cycle. Hysteresis is a sensor's ability to give the same output at a given temperature before and after a temperature cycle. Image Credit: sensorsmag Repeatability is the ability of a transducer to reproduce output readings when the same pressure is applied to the transducer repeatedly, under the same conditions and in the same direction. Thermal accuracy observes how temperature affects the output. It is expressed as a percentage of full scale output or as a percentage of full scale per degree Celsius, degree Fahrenheit or Kelvin. Total accuracy is the combination of static and thermal accuracy. In cases where the accuracy differs between middle span and the first and last quarters of the scale, the largest % error is reported. ASME2 B40.1 and DIN accuracy grades are frequently used: Grade 4A (0.1% Full Scale) Grade 3A (0.25% Full Scale) Grade 2A (0.5% Full Scale) Grade 1A (1% Full Scale) Grade A (1% middle half, 2% first and last quarters) Grade B (2% middle half, 3% first and last quarters) Grade C (3% middle half, 4% first and last quarters) Grade D (5% Full Scale) Industrial buyers should consider the pressure conditions that the sensor will be exposed to and ask the following questions: Over pressure: Will pressure ever exceed the maximum pressure? If so, by how much? Burst pressure: The designed safety limit which should not be exceeded. If this pressure is exceeded it may lead to mechanical breach and permanent loss of pressure containment. Are additional safety features needed? Dynamic loading: Dynamic loads can exceed expected static loads. Is the system experiencing dynamic pressure loading? Fatigue loading: Will the system experience high cycle rates? Vacuum Range is the span of pressures from the lowest vacuum pressure to the highest vacuum pressure (e.g., from 0 to 30 inches of mercury VAC). Mechanical conditions of the device determine how the sensor operates within the system. Consideration should be given to the physical constraints of the system, the media into which the sensor will be incorporated, process connectors, and configurations of the system and sensor. Physical constraints depend on the system that the sensor will be incorporated in to and should be considered when selecting a pressure sensor. 
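A small selection helper can tie together the conversion table at the start of this subsection and the working-range guidance above (working pressure below roughly 75% of the rated range, rating roughly 25% above the highest expected spike). The sketch below assumes those two rules of thumb; the candidate ratings and example pressures are arbitrary values.

```python
# Unit conversions taken from the table above, plus the 75% working-range
# and 25%-over-spike rules of thumb. Candidate ratings are example values.

PSI_TO = {
    "mmHg": 51.714,
    "inHg": 2.0359,
    "inH2O": 27.680,
    "kPa": 6.8946,
    "bar": 1.0 / 14.504,
    "atm": 1.0 / 14.696,
}

def convert_from_psi(value_psi, unit):
    return value_psi * PSI_TO[unit]

def pick_rating(max_working_psi, expected_spike_psi, candidate_ratings_psi):
    """Return the smallest candidate rating satisfying both rules of thumb."""
    for rating in sorted(candidate_ratings_psi):
        if max_working_psi <= 0.75 * rating and rating >= 1.25 * expected_spike_psi:
            return rating
    return None

print(convert_from_psi(100.0, "kPa"))          # 689.46
print(pick_rating(max_working_psi=75.0,
                  expected_spike_psi=120.0,
                  candidate_ratings_psi=[100, 150, 200, 300]))   # 150
```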
Understanding the media of the system is critical when selecting a pressure sensor. The media environments for the sensor could be: Hydrogen and gases are very compressible, and they completely fill any closed vessels in which they are places. Abrasive or corrosive liquids and gases such as hydrogen sulfide, hydrochloric acid, bleach, bromides, and waste water. Pressure sensors made of Inconel X, phosphor bronze, beryllium copper or stainless steel are the most corrosion resistant materials to be used in the sensor. However, these materials require internal temperature compensation, in the form of a bi-metallic member, to offset the change in deflection of the sensor resulting from a change in temperature. Radioactive systems should include highly sensitive sensors which have explosion proof mechanisms. The temperature of the media should also be considered when selecting a pressure sensor to ensure the sensor can function in the range of the system. Pressure port and process connection options generally have male and female options and the standard connection depends on the application. British Standard Pipe (BSP)- Large diameter pressure connectors are needed for lower pressure ranges National Pipe Thread (NPT)- Commonly used in automotive and aerospace industries Unified Fine Thread (UNF)- Commonly used in automotive and aerospace industries Metric Threads- Meet ISO specifications. They are denoted with an M and a number which is the outside diameter in millimeters. Flush connectors- Used to provide a crevice free interface which is ideal for biotechnology, pharmaceutical or food process applications. Diary Pipe Standard- Used with hygienic pressure transmitters. Autoclave Engineers- Used in high pressure applications. Mechanical considerations include several application-driven device configurations. Differential systems measure the pressure difference between two points. Small diameter flow systems allow for flowing liquid or gas to be measured as it moves through the system. Flush diaphragms measure pressure in systems which have either completely flush or semi-flush exposed diaphragms to prevent buildup of material on the diaphragm and facilitate easy cleaning. Exposed diaphragm sensors are useful for measuring viscous fluids or media that are processed in a clean environment. Replaceable diaphragms are easily replaceable within the system to ensure high accuracy. Secondary containment houses the sensor to protect the device from environmental conditions. Explosion proof sensors are used in hazardous conditions. The electrical components of the pressure sensor are extremely important to consider and are specific to the application the sensor will be used in. Such specifications include electrical output, display, connections, signal conditioning and electrical features. Industrial buyers should consider the electrical output needed for seamless integration into the system controller. Analog- The output voltage is a simple (usually linear) function of the measurement. Pressure sensors generally have an output of mV/V. Most sensors operate from 10 V to 32 V, unregulated supply. The device will also have internal regulators to provide a stabilized input to the electronic circuitry under varying supply voltages. Industrial sensors can have high-level voltage outputs of 0-5 VDC, and 0-10 VDC. The output signal will lose its amplitude and accuracy due to resistance from the cable when transmitting voltages between a few inches and 30ft depending on the level. 
Design Tip: A zero- based output signal, such as 0-5VDC does not offer constant feedback at zero pressure because the controller is unaware if the system is operating or if there is a problem. Analog current levels or transmitters such as 4 - 20 mA are suitable for sending signals over long distances. 4-20mA current: is popular for long distances. Frequency: The output signal is encoded via amplitude modulation (AM), frequency modulation (FM), or some other modulation scheme such as sine wave or pulse train; however, the signal is still analog in nature. RS485(MODbus): RS232 and RS485 are serial communication protocols that transmit data one bit at a time. RS232 provides a standard interface between data terminal and data communications equipment. CANbus, J1939, CAN open: connects industrial devices such as limit switches, photoelectric cells, etc. to programmable logic controllers (PLCs) and personal computers (PCs). FOUNDATION Fieldbus a serial, all-digital, two-way communication system that serves as a local area network (LAN) for factory instrumentation and control devices. Special Digital (TTL) devices produce digital outputs other than standard serial or parallel signals. Examples include transistor-transistor logic (TTL) outputs. Combination includes analog and digital outputs The display is the interface the user interacts with to observe the pressure sensor reading. - Analog Meter-The device has an analog meter or simple visual indicator. - Digital- The device has a display for numerical values. Video- The device has a CRT, LCD or other multi-line display. Connectors are considered for the electrical termination of the sensor. The use of connectors adds benefits to pressure sensor installation such as easy removal from the system for recalibration or system maintenance. - Connector or integral cables connect the sensor to the rest of the system. Integral cables are used for submersed applications such as on pumps or hose down situations. - Mating connectors and cable accessories are needed for applications when sealing the sensor is important. Threads are very common for low to medium pressures. National Pipe Threads (NPT) are tapered in nature and require some form of Teflon® tape or putty to seal the thread to a piece of equipment. - Connector/cable orientation allows the sensor to be unplugged and the senor un-threaded. High-vibration environments require an inline connector at the end of a length of wire to reduce the loading on the connector pins. Inline connectors increase the life of the sensor. When selecting a cable option, the outer jacket material and inner conductor insulators must be selected to match the application. Wiring codes and pin-outs External zero and span potentiometers are used to compensate stray current in the measuring circuit to prevent distortion. DIN Rail mount or In-line signal conditioning for mV/V units will amplify output signals. These devices are used for applications requiring high level analog outputs and where the pressure transducer is exposed to conditions detrimental to internal signal conditioning or the required pressure transducer configuration will not accommodate an internal amplifier. Wireless sensors allow the information to be transmitted via a wireless signal to the host. Typically, the wireless signal is a radio frequency (RF) signal. Switch sensors change the output to a switch or relay closure to turn the system on or off with changes in pressure. 
Temperature output devices provide temperature measurement outputs in addition to pressure. Negative pressure output are available with devices that provide differential pressure measurements Alarm indicator devices have a built-in audible or visual alarm to warn operators of changes and/or danger in the system. Frequency response identifies the highest frequency that the sensor will measure without distortion or attenuation. The sensor's frequency response should be 5-10 times the highest frequency component in the pressure signal. Sometimes this feature is given as response time and the relation is: FB = ½ πτ FB = frequency where the response is reduced by 50% τ = time constant the output rises to 63% of its final value following a step input change. The environment the sensor will operate in should be considered when selecting a pressure sensor. Environmental considerations such as temperature, indoor/outdoor use and use in hazardous locations can affect the accuracy of the sensor. Changes in temperature are directly related to changes in pressure. A plot of the vapor pressure of water versus the water's temperature. Image Credit: purdue.edu Operating temperature is important to consider. Buyers should be aware of the ambient and media temperatures in the environment of the sensor. If the sensor is not compensated correctly the reading can change drastically. Temperature compensation devices include built-in factors that prevent pressure measurement errors due to temperature changes. A material such as a nickel alloy called Ni Span "C", requires no internal temperature compensation because they are relatively insensitive to temperature. Electromagnetic and radio frequency interference (EMI/RFI) have been identified as environmental conditions that affect the performance of safety-related electrical equipment. Ingress Protection or National Electrical Manufacturing Association rating required. IP protection is used in Europe and follows three parameters; protects the equipment, protects the personal, and protects the equipment against penetration of water with harmful effects. IP does not specify degrees of protection against mechanical damage, explosions, moisture, corrosive vapors or vermin. The NEMA standard for the environments surrounding the electrical equipment tests environmental conditions such as corrosion, rust, icing, oil, and coolants. A full explanation of IP and NEMA standards can be found at Solid Applied Technologies. If the pressure sensor is being used outdoors, the sensor may be sealed or vented depending on the pressure range and the accuracy needed. Environmental exposure can include: Animals and rodent tampering If the pressure sensor is going to be used in a hazardous area, the class type and group type must be known in order for the product to comply with NEC or CEC codes in North America. Some systems may require special calibration and approvals. Standard 11-point calibration means the sensor is calibrated to 11 pressure points spanning the full scale range of the pressure sensor. Points such as 0% 20% 40% 60% 80% 100% 80% 60% 40% 20% 0% can be used and going up and down the pressure range will check for hysteresis. Special calibration with additional calibration points Pressure sensors may need special approvals or certifications for operation in certain environments to protect the user and the environment. Specific testing, cleaning procedures and labeling may also need to be implemented for applications. 
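Two of the specifications discussed above lend themselves to quick calculation: the scaling of a 4-20 mA transmitter output onto its calibrated span, and the bandwidth implied by a sensor's response time. The sketch below uses the usual linear loop-current convention and assumes the standard single-pole relation F_B = 1/(2πτ) between bandwidth and time constant; the transducer span and the 1 ms time constant are made-up example values.

```python
# 4-20 mA scaling and a bandwidth estimate assuming F_B = 1/(2*pi*tau).
# Span and time constant below are made-up example values.
import math

def pressure_from_loop_current(i_ma, p_min, p_max):
    """Map a 4-20 mA loop current onto the calibrated pressure span."""
    if not 4.0 <= i_ma <= 20.0:
        raise ValueError("loop current outside the nominal 4-20 mA range")
    return p_min + (i_ma - 4.0) / 16.0 * (p_max - p_min)

def bandwidth_hz(time_constant_s):
    """Half-power bandwidth of a first-order sensor, F_B = 1/(2*pi*tau)."""
    return 1.0 / (2.0 * math.pi * time_constant_s)

# A 0-100 psi transmitter reading 12 mA is at mid-span (50 psi).
print(pressure_from_loop_current(12.0, 0.0, 100.0))

# A 1 ms time constant gives roughly 159 Hz of bandwidth; following the
# 5-10x rule above, that suits pressure signals up to about 16-32 Hz.
f_b = bandwidth_hz(1e-3)
print(f_b, f_b / 10.0, f_b / 5.0)
```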
Some additional considerations when installing or budgeting for a sensor in a system are: How accessible will the sensor be? How often will it need to be serviced? Fluid level in a tank: A gauge pressure sensor can be used to measure the pressure at the bottom of a tank. Fluid level can be calculated using the relation: h = P/ρg h= depth below the water surface ρ= water density g= acceleration of gravity Fluid flow: Placing an orifice plate in a pipe section results in a pressure drop which can be used to measure flow. This method is commonly used because it does not cause clogging and the pressure drop is small compared to many other flow meters. The relation is: V0 = √2 (Ps-P0)/ρ In some cases, differential pressures of only a few inches of water are measured in the presence of common-mode pressures of thousands of pounds per square inch. Automotive. A wide variety of pressure applications exist in the modern electronically controlled auto. Among the most important are: Manifold absolute pressure (MAP). Many engine control systems use the speed-density approach to intake air mass flow rate measurement. The mass flow rate must be known so that the optimum amount of fuel can be injected. Engine oil pressure. Engine lubrication requires pressures of 10-15 psig. Evaporative purge system leak detection. To reduce emissions, modern fuel systems are not vented to the atmosphere. This means that fumes resulting from temperature-induced pressure changes in the fuel tank are captured in a carbon canister and later recycled through the engine. Tire pressure. Recent development of the "run-flat" tire has prompted development of a remote tire pressure measurement system. Tank level measurement Differential pressure application. Image Credit: futek Brief explanation of a how a strain gauge diaphragm works Design Essentials: How to select a pressure sensor for a specific application Fundamentals of Pressure Sensor Technology IP/NEMA Rating Introduction Pressure Measurement Glossary Pressure Transducer Tutorial 1Diaphragm- A strain gauge diaphragm typically consists of a flat circular piece of uniform elastic material which is manufactured into a variety of different surface areas and thickness' to optimize performance at lower and higher pressure ranges. 2ASME- American Society of Mechanical Engineers Related Products & Services Digital Pressure Gauges Digital pressure gauges use electronic components to convert applied pressure into usable signals. The gauge readout has a digital numerical display. Mechanical Pressure Gauges Analog pressure gauges are mechanical devices that include bellows, Bourdon tubes, capsule elements and diaphragm element gauges. Pressure gauges are used for a variety of industrial and application-specific pressure monitoring applications. Uses include visual monitoring of air and gas pressure for compressors, vacuum equipment, process lines and specialty tank applications such as medical gas cylinders and fire extinguishers. Pressure instruments are used to measure, monitor, record, transmit or control pressure. Pressure switches are actuated by a change in the pressure of a liquid or gas. They activate electromechanical or solid-state switches upon reaching a specific pressure level. Pressure transmitters translate the low level output of a sensor or transducer to a higher level signal suitable for transmission to a site where it can be further processed. These devices include pressure sensors, transducers, elements, and instruments. 
Vacuum sensors are devices for measuring vacuum or sub-atmospheric pressures.
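As a closing worked example, the two application relations given earlier in this section, tank level h = P/(ρg) and flow velocity V0 = √(2(Ps − P0)/ρ), translate directly into a few lines of code. The pressures and fluid properties below are illustrative values only.

```python
# Worked numbers for h = P / (rho * g) and V0 = sqrt(2 * dP / rho).
# The pressures and fluid properties are illustrative values only.
import math

G = 9.81            # m/s^2
RHO_WATER = 1000.0  # kg/m^3
PSI_TO_PA = 6894.6

def level_from_gauge_pressure(p_pa, rho=RHO_WATER, g=G):
    """Depth of liquid above the sensor, h = P / (rho * g)."""
    return p_pa / (rho * g)

def velocity_from_differential(dp_pa, rho=RHO_WATER):
    """Ideal (loss-free) velocity from a differential pressure reading."""
    return math.sqrt(2.0 * dp_pa / rho)

# A gauge reading of 2 psi at the bottom of a water tank ~ 1.4 m of water.
print(level_from_gauge_pressure(2.0 * PSI_TO_PA))

# A 5 kPa drop across an orifice corresponds to roughly 3.2 m/s of water.
print(velocity_from_differential(5000.0))
```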
http://www.globalspec.com/learnmore/sensors_transducers_detectors/pressure_sensing/pressure_sensors_instruments
How Computers Work: The CPU and Memory Figure 0 shows the parts of a computer: - The Central Processing Unit: - Ports and controllers, - Main Memory (RAM); - Input Devices; - Output Devices; - Secondary Storage; - floppy disks, - hard disk, |Figure 0: Inside The Computer| This part of the reading will examine the CPU, Buses, Controllers, and Main Memory. Other sections will examine input devices, output devices, and secondary memory. The Central Processing Unit (CPU) The computer does its primary work in a part of the machine we cannot see, a control center that converts data input to information output. This control center, called the central processing unit (CPU), is a highly complex, extensive set of electronic circuitry that executes stored program instructions. All computers, large and small, must have a central processing unit. As Figure 1 shows, the central processing unit consists of two parts: The control unit and the arithmetic/logic unit. Each part has a specific function. |Figure 1: The Central Processing Unit| Before we discuss the control unit and the arithmetic/logic unit in detail, we need to consider data storage and its relationship to the central processing unit. Computers use two types of storage: Primary storage and secondary storage. The CPU interacts closely with primary storage, or main memory, referring to it for both instructions and data. For this reason this part of the reading will discuss memory in the context of the central processing unit. Technically, however, memory is not part of the CPU. Recall that a computer's memory holds data only temporarily, at the time the computer is executing a program. Secondary storage holds permanent or semi-permanent data on some external magnetic or optical medium. The diskettes and CD-ROM disks that you have seen with personal computers are secondary storage devices, as are hard disks. Since the physical attributes of secondary storage devices determine the way data is organized on them, we will discuss secondary storage and data organization together in another part of our on-line readings. Now let us consider the components of the central processing unit. The Control Unit The control unit of the CPU contains circuitry that uses electrical signals to direct the entire computer system to carry out, or execute, stored program instructions. Like an orchestra leader, the control unit does not execute program instructions; rather, it directs other parts of the system to do so. The control unit must communicate with both the arithmetic/logic unit The Arithmetic/Logic Unit The arithmetic/logic unit (ALU) contains the electronic circuitry that executes all arithmetic and logical operations. The arithmetic/logic unit can perform four kinds of arithmetic operations, or mathematical calculations: addition, subtraction, multiplication, and division. As its name implies, the arithmetic/logic unit also performs logical operations. A logical operation is usually a comparison. The unit can compare numbers, letters, or special characters. The computer can then take action based on the result of the comparison. This is a very important capability. It is by comparing that a computer is able to tell, for instance, whether there are unfilled seats on airplanes, whether charge- card customers have exceeded their credit limits, and whether one candidate for Congress has more votes than another. Logical operations can test for three conditions: - Equal-to condition. 
In a test for this condition, the arithmetic/logic compares two values to determine if they are equal. For example: If the number of tickets sold equals the number of seats in the auditorium, then the concert is declared sold out. - Less-than condition. To test for this condition, the computer compares values to determine if one is less than another. For example: If the number of speeding tickets on a driver's record is less than three, then insurance rates are $425; otherwise, the rates are $500. - Greater-than condition. In this type of comparison, the computer determines if one value is greater than another. For example: If the hours a person worked this week are greater than 40, then multiply every extra hour by 1.5 times the usual hourly wage to compute overtime pay. A computer can simultaneously test for more than one condition. In fact, a logic unit can usually discern six logical relationships: equal to, less than, greater than, less than or equal to, greater than or equal to, and not The symbols that let you define the type of comparison you want the computer to perform are called relational operators. The most common relational operators are the equal sign(=), the less-than symbol(<), and the - Registers: Temporary Storage Areas Registers are temporary storage areas for instructions or data. They are not a part of memory; rather they are special additional storage locations that offer the advantage of speed. Registers work under the direction of the control unit to accept, hold, and transfer instructions or data and perform arithmetic or logical comparisons at high speed. The control unit uses a data storage register the way a store owner uses a cash register-as a temporary, convenient place to store what is used in transactions. Computers usually assign special roles to certain registers, including these - An accumulator, which collects the result of computations. - An address register, which keeps track of where a given instruction or piece of data is stored in memory. Each storage location in memory is identified by an address, just as each house on a street has an address. - A storage register, which temporarily holds data taken from or about to be sent to memory. - A general-purpose register, which is used for several functions. Memory and Storage Memory is also known as primary storage, primary memory, main storage, internal storage, main memory, and RAM (Random Access Memory); all these terms are used interchangeably by people in computer circles. Memory is the part of the computer that holds data and instructions for processing. Although closely associated with the central processing unit, memory is separate from it. Memory stores program instructions or data for only as long as the program they pertain to is in operation. Keeping these items in memory when the program is not running is not feasible for three reasons: - Most types of memory only store items while the computer is turned on; data is destroyed when the machine is turned off. - If more than one program is running at once (often the case on large computers and sometimes on small computers), a single program can not lay exclusive claim to memory. - There may not be room in memory to hold the processed data. How do data and instructions get from an input device into memory? The control unit sends them. Likewise, when the time is right, the control unit sends these items from memory to the arithmetic/logic unit, where an arithmetic operation or logical operation is performed. 
After being processed, the information is sent to memory, where it is hold until it is ready to he released to an output unit. The chief characteristic of memory is that it allows very fast access to instructions and data, no matter where the items are within it. We will discuss the physical components of memory-memory chips-later in this To see how registers, memory, and second storage all work together, let us use the analogy of making a salad. In our kitchen we have: The process of making the salad is then: bring the veggies from the fridge to the counter top; place some veggies on the chopping board according to the recipe; chop the veggies, possibly storing some partially chopped veggies temporarily on the corners of the cutting board; place all the veggies in the bowl to either put back in the fridge or put directly on the dinner table. - a refrigerator where we store our vegetables for the salad; - a counter where we place all of our veggies before putting them on the cutting board for chopping; - a cutting board on the counter where we chop the vegetables; - a recipe that details what veggies to chop; - the corners of the cutting board are kept free for partially chopped piles of veggies that we intend to chop more or to mix with other partially chopped - a bowl on the counter where we mix and store the salad; - space in the refrigerator to put the mixed salad after it is made. The refrigerator is the equivalent of secondary (disk) storage. It can store high volumes of veggies for long periods of time. The counter top is the equivalent of the computer's motherboard - everything is done on the counter (inside the computer). The cutting board is the ALU - the work gets done there. The recipe is the control unit - it tells you what to do on the cutting board (ALU). Space on the counter top is the equivalent of RAM memory - all veggies must be brought from the fridge and placed on the counter top for fast access. Note that the counter top (RAM) is faster to access than the fridge (disk), but can not hold as much, and can not hold it for long periods of time. The corners of the cutting board where we temporarily store partially chopped veggies are equivalent to the registers. The corners of the cutting board are very fast to access for chopping, but can not hold much. The salad bowl is like a temporary register, it is for storing the salad waiting to take back to the fridge (putting data back on a disk) or for taking to the dinner table (outputting the data to an output device). Now for a more technical example. let us look at how a payroll program uses all three types of storage. Suppose the program calculates the salary of an employee. The data representing the hours worked and the data for the rate of pay are ready in their respective registers. Other data related to the salary calculation-overtime hours, bonuses, deductions, and so forth-is waiting nearby in memory. The data for other employees is available in secondary storage. As the CPU finishes calculations about one employee, the data about the next employee is brought from secondary storage into memory and eventually into the registers. The following table summarizes the characteristics of the various kinds of data storage in the storage hierarchy. Modern computers are designed with this hierarchy due to the in the table. It has been the cheapest way to get the functionality. However, as RAM becomes cheaper, faster, and even permanent, we may see disks disappear as an internal storage device. 
Removable disks, like Zip disks or CDs (we describe these in detail in the online reading on storage devices) will probably remain in use longer as a means to physically transfer large volumes of data into the computer. However, even this use of disks will probably be supplanted by the Internet as the major (and eventually only) way of transferring data. Floppy disks drives are already disappearing: the new IMac Macintosh from Apple does not come with one. Within the next five years most new computer designs will only include floppy drives as an extra for people with old floppy disks that they must use. | Storage|| Speed || Relative Cost ($)|| | Registers|| Fastest || Lowest|| Highest | RAM|| Very Fast || Low/Moderate|| High | Floppy Disk|| Very Slow || Low|| Low | Hard Disk|| Moderate || Very High|| Very Low For more detail on the computer's memory hierarchy, see the How Stuff Works pages on computer memory.. This is optional reading. How the CPU Executes Program Instructions Let us examine the way the central processing unit, in association with memory, executes a computer program. We will be looking at how just one instruction in the program is executed. In fact, most computers today can execute only one instruction at a time, though they execute it very quickly. Many personal computers can execute instructions in less than one-millionth of a second, whereas those speed demons known as supercomputers can execute instructions in less than one-billionth of a second. Before an instruction can be executed, program instructions and data must be placed into memory from an input device or a secondary storage device (the process is further complicated by the fact that, as we noted earlier, the data will probably make a temporary stop in a register). As Figure 2 shows, once the necessary data and instruction are in memory, the central processing unit performs the following four steps for each instruction: |Figure 2: The Machine Cycle| - The control unit fetches (gets) the instruction from memory. - The control unit decodes the instruction (decides what it means) and directs that the necessary data be moved from memory to the arithmetic/logic unit. These first two steps together are called instruction time, or I-time. - The arithmetic/logic unit executes the arithmetic or logical instruction. That is, the ALU is given control and performs the actual operation on the data. - Thc arithmetic/logic unit stores the result of this operation in memory or in a register. Steps 3 and 4 together are called execution time, The control unit eventually directs memory to release the result to an output device or a secondary storage device. The combination of I-time and E-time is called the machine cycle. Figure 3 shows an instruction going through the machine cycle. Each central processing unit has an internal clock that produces pulses at a fixed rate to synchronize all computer operations. A single machine-cycle instruction may be made up of a substantial number of sub-instructions, each of which must take at least one clock cycle. Each type of central processing unit is designed to understand a specific group of instructions called the instruction set. Just as there are many different languages that people understand, so each different type of CPU has an instruction set it understands. Therefore, one CPU-such as the one for a Compaq personal computer-cannot understand the instruction set from another CPU-say, for a Macintosh. 
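To make the four steps of the machine cycle concrete, here is a toy sketch of a fetch-decode-execute-store loop. The three-field instruction format and the mnemonics LOAD/ADD/STORE/HALT are invented purely for this example; a real instruction set is far richer.

```python
# A toy illustration of the machine cycle: fetch, decode, execute, store.
# The instruction format and mnemonics are invented for this example.
def run(program, memory):
    accumulator = 0
    pc = 0                                   # program counter
    while True:
        op, addr = program[pc]               # 1. fetch the instruction
        pc += 1
        if op == "LOAD":                     # 2. decode ...
            accumulator = memory[addr]       # 3. ... and execute
        elif op == "ADD":
            accumulator += memory[addr]
        elif op == "STORE":
            memory[addr] = accumulator       # 4. store the result
        elif op == "HALT":
            return memory
        else:
            raise ValueError(f"unknown instruction {op}")

memory = {0: 10, 1: 32}                      # two values already in memory
program = [("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", None)]
print(run(program, memory))                  # {0: 10, 1: 32, 2: 42}
```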
|Figure 3: The Machine Cycle in Action|
It is one thing to have instructions and data somewhere in memory and quite another for the control unit to be able to find them. How does it do this? The location in memory for each instruction and each piece of data is identified by an address. That is, each location has an address number, like the mailboxes in front of an apartment house. And, like the mailboxes, the address numbers of the locations remain the same, but the contents (instructions and data) of the locations may change. That is, new instructions or new data may be placed in the locations when the old contents no longer need to be stored in memory. Unlike a mailbox, however, a memory location can hold only a fixed amount of data; an address can hold only a fixed number of bytes - often two bytes in a modern computer.
|Figure 4: Memory Addresses Like Mailboxes|
Figure 4 shows how a program manipulates data in memory. A payroll program, for example, may give instructions to put the rate of pay in location 3 and the number of hours worked in location 6. To compute the employee's salary, then, instructions tell the computer to multiply the data in location 3 by the data in location 6 and move the result to location 8. The choice of locations is arbitrary - any locations that are not already spoken for can be used. Programmers using programming languages, however, do not have to worry about the actual address numbers, because each data address is referred to by a name. The name is called a symbolic address. In this example, the symbolic address names are Rate, Hours, and Salary.
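As a rough illustration of numeric addresses versus symbolic addresses, the following sketch treats memory as a small Python list and reuses the Rate/Hours/Salary names from the payroll example; the particular values and the size of the "memory" are made up.

```python
# A small sketch of numbered memory locations versus symbolic addresses.
# Locations 3, 6, and 8 follow the payroll example above; the values are illustrative.

memory = [None] * 16          # a tiny "memory": each index is an address

# The raw, numeric view: put the rate of pay in location 3 and the hours in location 6,
# then place the product in location 8.
memory[3] = 18.50             # Rate
memory[6] = 40                # Hours
memory[8] = memory[3] * memory[6]   # Salary

# The programmer's view: symbolic addresses give each location a name,
# so the program never has to mention 3, 6, or 8 directly.
symbols = {"Rate": 3, "Hours": 6, "Salary": 8}
print(memory[symbols["Salary"]])    # 740.0
```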
http://homepage.cs.uri.edu/faculty/wolfe/book/Readings/Reading04.htm
In geometry and linear algebra, a rotation is a transformation in a plane or in space that describes the motion of a rigid body around a fixed point. A rotation is different from a translation, which has no fixed points, and from a reflection, which "flips" the bodies it is transforming. A rotation and the above-mentioned transformations are isometries; they leave the distance between any two points unchanged after the transformation.
It is important to know the frame of reference when considering rotations, as all rotations are described relative to a particular frame of reference. In general, for any orthogonal transformation on a body in a coordinate system there is an inverse transformation which, if applied to the frame of reference, results in the body being at the same coordinates. For example, in two dimensions rotating a body clockwise about a point keeping the axes fixed is equivalent to rotating the axes counterclockwise about the same point while the body is kept fixed.
Two dimensions
Only a single angle is needed to specify a rotation in two dimensions – the angle of rotation. To calculate the rotation two methods can be used, either matrix algebra or complex numbers. In each the rotation is acting to rotate an object counterclockwise through an angle θ about the origin.
Matrix algebra
To carry out a rotation using matrices, the point (x, y) to be rotated is written as a vector, then multiplied by a matrix calculated from the angle θ, like so:
$$\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix},$$
where (x′, y′) are the co-ordinates of the point after rotation, and the formulae for x′ and y′ can be seen to be
$$x' = x\cos\theta - y\sin\theta, \qquad y' = x\sin\theta + y\cos\theta.$$
The vectors (x, y) and (x′, y′) have the same magnitude and are separated by an angle θ, as expected.
Complex numbers
Points can also be rotated using complex numbers, as the set of all such numbers, the complex plane, is geometrically a two-dimensional plane. The point (x, y) in the plane is represented by the complex number
$$z = x + iy.$$
This can be rotated through an angle θ by multiplying it by e^{iθ}, then expanding the product using Euler's formula as follows:
$$e^{i\theta} z = (\cos\theta + i\sin\theta)(x + iy) = (x\cos\theta - y\sin\theta) + i(x\sin\theta + y\cos\theta) = x' + iy',$$
which gives the same result as before,
$$x' = x\cos\theta - y\sin\theta, \qquad y' = x\sin\theta + y\cos\theta.$$
Like complex numbers, rotations in two dimensions are commutative, unlike in higher dimensions. They have only one degree of freedom, as such rotations are entirely determined by the angle of rotation.
Three dimensions
Rotations in ordinary three-dimensional space differ from those in two dimensions in a number of important ways. Rotations in three dimensions are generally not commutative, so the order in which rotations are applied is important. They have three degrees of freedom, the same as the number of dimensions.
A three-dimensional rotation can be specified in a number of ways. The most usual methods are as follows.
Matrix algebra
As in two dimensions, a matrix can be used to rotate a point (x, y, z) to a point (x′, y′, z′). The matrix used is a 3 × 3 matrix,
$$A = \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix}.$$
This is multiplied by a vector representing the point to give the result
$$A \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} x' \\ y' \\ z' \end{pmatrix}.$$
The matrix A is a member of the three-dimensional special orthogonal group, SO(3), that is, it is an orthogonal matrix with determinant 1. That it is an orthogonal matrix means that its rows are a set of orthogonal unit vectors (so they are an orthonormal basis), as are its columns, making it simple to spot and check if a matrix is a valid rotation matrix. The determinant of a rotation matrix must be 1. The only other possibility for the determinant of an orthogonal matrix is -1, and this result means the transformation is a reflection, improper rotation or inversion in a point, i.e. not a rotation.
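As a quick check of the two-dimensional methods above, here is a short Python sketch that rotates a point both ways; it is an illustration only, not part of the original article.

```python
import math
import cmath

# Rotate the point (x, y) counterclockwise through an angle theta about the origin,
# first with the 2x2 rotation matrix and then with complex multiplication by e^(i*theta).

def rotate_matrix(x, y, theta):
    c, s = math.cos(theta), math.sin(theta)
    return (x * c - y * s, x * s + y * c)

def rotate_complex(x, y, theta):
    z = complex(x, y) * cmath.exp(1j * theta)
    return (z.real, z.imag)

theta = math.pi / 2                      # 90 degrees
print(rotate_matrix(1.0, 0.0, theta))    # approximately (0.0, 1.0)
print(rotate_complex(1.0, 0.0, theta))   # approximately (0.0, 1.0)
```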
Matrices are often used for doing transformations, especially when a large number of points are being transformed, as they are a direct representation of the linear operator. Rotations represented in other ways are often converted to matrices before being used. They can be extended to represent rotations and translations at the same time using homogeneous coordinates. Transformations in this space are represented by 4 × 4 matrices, which are not rotation matrices but which have a 3 × 3 rotation matrix in the upper left corner. The main disadvantage of matrices is that they are more expensive to calculate and to do calculations with. Also, in calculations where numerical instability is a concern, matrices can be more prone to it, so calculations to restore orthonormality, which are expensive to do for matrices, need to be done more often.
Mobile frame rotations
One way of generalising the two-dimensional angle of rotation is to specify three rotation angles, carried out in turn about the three principal axes. They can individually be labelled yaw, pitch, and roll, but in mathematics are more often known by their mathematical name, Euler angles. They have the advantage of modelling a number of physical systems such as gimbals and joysticks, so are easily visualised, and are a very compact way of storing a rotation. But they are difficult to use in calculations, as even simple operations like combining rotations are expensive to do, and they suffer from a form of gimbal lock where the angles cannot be uniquely calculated for certain rotations.
Euler rotations
Euler rotations are a set of three rotations defined as the movement obtained by changing one of the Euler angles while leaving the other two constant. Euler rotations are never expressed in terms of the external frame, or in terms of the co-moving rotated body frame, but in a mixture. They constitute a mixed axes of rotation system, where the first angle moves the line of nodes around the external axis z, the second rotates around the line of nodes and the third one is an intrinsic rotation around an axis fixed in the body that moves.
Axis angle
A second way of generalising the two-dimensional angle of rotation is to specify an angle together with the axis about which the rotation takes place. It can be used to model motion constrained by hinges and axles, and so is easily visualised, perhaps even more so than Euler angles. There are two ways to represent it:
- as a pair consisting of the angle and a unit vector for the axis, or
- as a vector obtained by multiplying the angle with this unit vector, called the rotation vector.
Usually the angle and axis pair is easier to work with, while the rotation vector is more compact, requiring only three numbers like Euler angles. But like Euler angles it is usually converted to another representation before being used.
Quaternions
Quaternions are in some ways the least intuitive representation of three-dimensional rotations. They are not the three-dimensional instance of a general approach, like matrices; nor are they easily related to real-world models, like Euler angles or axis angles. But they are more compact than matrices and easier to work with than all other methods, so are often preferred in real-world applications. A rotation quaternion consists of four real numbers, constrained so the length of the quaternion considered as a vector is 1. This constraint limits the degrees of freedom of the quaternion to three, as required. It can be thought of as a generalisation of the complex numbers, by e.g.
the Cayley–Dickson construction, and generates rotations in a similar way by multiplication. But unlike matrices and complex numbers two multiplications are needed:
$$\mathbf{x}' = q\,\mathbf{x}\,q^{-1},$$
where q is the rotation quaternion, q⁻¹ is its inverse, and x is the vector treated as a quaternion. The quaternion can be related to the rotation vector form of the axis angle rotation by the exponential map over the quaternions,
$$q = e^{\mathbf{v}/2},$$
where v is the rotation vector treated as a quaternion.
Four dimensions
A general rotation in four dimensions has only one fixed point, the centre of rotation, and no axis of rotation. Instead the rotation has two mutually orthogonal planes of rotation, each of which is fixed in the sense that points in each plane stay within the planes. The rotation has two angles of rotation, one for each plane of rotation, through which points in the planes rotate. If these are ω1 and ω2 then all points not in the planes rotate through an angle between ω1 and ω2. If ω1 = ω2 the rotation is a double rotation and all points rotate through the same angle, so any two orthogonal planes can be taken as the planes of rotation. If one of ω1 and ω2 is zero, one plane is fixed and the rotation is simple. If both ω1 and ω2 are zero the rotation is the identity rotation.
Rotations in four dimensions can be represented by 4 × 4 orthogonal matrices, as a generalisation of the rotation matrix. Quaternions can also be generalised into four dimensions, as even multivectors of the four-dimensional geometric algebra. A third approach, which only works in four dimensions, is to use a pair of unit quaternions.
Rotations in four dimensions have six degrees of freedom, most easily seen when two unit quaternions are used, as each has three degrees of freedom (they lie on the surface of a 3-sphere) and 2 × 3 = 6.
One application of this is special relativity, as it can be considered to operate in a four-dimensional space, spacetime, spanned by three space dimensions and one of time. In special relativity this space is linear and the four-dimensional rotations, called Lorentz transformations, have practical physical interpretations. If a rotation is only in the three space dimensions, i.e. in a plane that is entirely in space, then this rotation is the same as a spatial rotation in three dimensions. But a rotation in a plane spanned by a space dimension and a time dimension is a hyperbolic rotation, a transformation between two different reference frames, which is sometimes called a "Lorentz boost". These transformations, which are not actual rotations but squeeze mappings, are sometimes described with Minkowski diagrams. The study of relativity is concerned with the Lorentz group generated by the space rotations and hyperbolic rotations.
Orthogonal matrices
More generally, coordinate rotations in any dimension are represented by orthogonal matrices. The set of all orthogonal matrices of the n-th dimension which describe proper rotations (determinant = +1), together with the operation of matrix multiplication, forms the special orthogonal group SO(n).
Orthogonal matrices have real elements. The analogous complex-valued matrices are the unitary matrices. The set of all unitary matrices in a given dimension n forms a unitary group of degree n, U(n); and the subgroup of U(n) representing proper rotations forms a special unitary group of degree n, SU(n). The elements of SU(2) are used in quantum mechanics to rotate spin.
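To illustrate the quaternion formula x′ = qxq⁻¹ from the quaternion section above, here is a small self-contained Python sketch; the helper names are arbitrary, and a real application would normally use an existing library rather than hand-written quaternion code.

```python
import math

# Rotate a 3D vector with a unit quaternion, x' = q x q^-1.
# Plain tuples (w, x, y, z) are used for clarity.

def quat_mul(a, b):
    # Hamilton product of two quaternions.
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def rotate(vector, axis, angle):
    # Build the rotation quaternion q = (cos(angle/2), sin(angle/2) * unit_axis).
    norm = math.sqrt(sum(c*c for c in axis))
    ux, uy, uz = (c / norm for c in axis)
    s = math.sin(angle / 2)
    q = (math.cos(angle / 2), s*ux, s*uy, s*uz)
    q_inv = (q[0], -q[1], -q[2], -q[3])         # inverse of a unit quaternion is its conjugate
    x = (0.0,) + tuple(vector)                  # treat the vector as a pure quaternion
    return quat_mul(quat_mul(q, x), q_inv)[1:]  # x' = q x q^-1, keep the vector part

# Rotate (1, 0, 0) by 90 degrees about the z-axis: expect approximately (0, 1, 0).
print(rotate((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2))
```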
Notes
- Lounesto 2001, p. 30.
- Lounesto 2001, pp. 85, 89.
- Hestenes 1999, pp. 580–588.
References
- Hestenes, David (1999). New Foundations for Classical Mechanics. Dordrecht: Kluwer Academic Publishers. ISBN 0-7923-5514-8.
- Lounesto, Pertti (2001). Clifford Algebras and Spinors. Cambridge: Cambridge University Press. ISBN 978-0-521-00551-7.
- Brannon, Rebecca M. (2002). "A Review of Useful Theorems Involving Proper Orthogonal Matrices Referenced to Three-Dimensional Physical Space." Albuquerque: Sandia National Laboratories.
http://en.wikipedia.org/wiki/Rotation_(mathematics)
Physics 2083 - Complete Study Guide
Updated through Wednesday, May 5. Current study questions can be found below.
- Explain how we can use parallax to find the distances to stars. How does the distance depend on the parallax angle? If star A has a parallax angle 10 times smaller than star B, then how does the distance to star A compare with the distance to star B?
- Why can we only use the parallax technique to find the distances to stars fairly close (about 100 parsecs) to the Sun?
- How were parallax observations used to "disprove" the heliocentric model of the solar system by contemporaries of Kepler and Copernicus? Why were these observations invalid?
- Given the equation of the inverse square law, be able to answer questions such as "Star A has twice the absolute luminosity of star B and is twice as far away as star B. How does the apparent luminosity of star A compare with that of star B?"
- What are the two main functions of the telescope that we have discussed in class? How and why does increased aperture diameter affect these two things?
- Explain the concept of frequency and how it relates to wavelength. Know how the wavelength, speed and frequency of light are related, and be able to identify from two wave patterns which has the higher/lower frequency.
- Explain how energy and wavelength are related for light. An object with more energy to emit will tend to emit light that has higher energy. Use this fact to explain why hotter objects tend to be blue.
- Explain qualitatively and graphically why the wavelength of maximum emission is inversely proportional to the temperature of an object emitting continuous radiation.
- Given that light is more easily scattered at short wavelengths, use this fact to explain why the sky is blue and why the Sun appears red at sunset. On a spectrum graph, show how the distribution of light from the Sun changes between noon and sunset.
- How does scattering affect the apparent surface temperature of the star that is derived from the color of the star? Explain how your temperature estimate will be wrong (too high or too low?) if you fail to take scattering effects into account.
- Be sure you understand how electrons change energy levels within an atom. Given a simple energy level diagram, be able to identify which electron transition corresponds to the longest/shortest wavelength photon emission/absorption.
- Explain how absorption/emission line spectra are generated with the help of a simple diagram.
- How can we use line spectra to determine the composition of a cloud of gas?
- Explain how we can calculate the total luminosity (amount of energy given off per second) of the Sun just by knowing how much energy per second falls on a square meter of the Earth's surface.
- Explain how scientists discounted the possibility of chemical or gravitational forces as the source of the Sun's energy.
- What is the source of energy in the proton-proton chain? Why does nuclear fusion require high temperatures and densities?
- Explain how (and why) spectral line strengths are related to the abundance of the particular atom that is creating the spectral line.
- Use a simple diagram to show how the bulk motion of an object results in spectral line shifts due to the Doppler effect. Be sure you understand the difference between radial motion (which causes shifts) and transverse motion (which does not cause shifts).
- Explain how and why internal motions in a cloud of gas are related to the line widths.
Understand why larger motions lead to larger line widths.
- Explain why the temperature and density of an object are both considered to be sources of internal motion, leading to broader lines.
- Explain how rotational motion of a cloud of gas might lead to line broadening. Would a rotating star or cloud exhibit rotational line broadening if viewed pole-on? Explain. (TQ #2)
- What is ionization? How are the rules of light absorption by electrons different for simple absorption compared to the process of ionization?
- Is it possible for light to pass through a cloud of Hydrogen without suffering the effects of absorption? Explain the circumstances under which this might occur. (TQ #1)
- Explain why the inner envelope of the Sun transports energy radiatively from the core outward toward the surface (instead of transporting the energy mechanically).
- Use the temperature structure of the outer layers of the Sun to explain why the chromosphere is thought to be the source of most of the absorption lines in the Sun's spectrum.
- What is the physical mechanism that we think causes the gas in the corona to be so hot?
- Explain how observations of emission line widths from coronal gases could confirm the theory of the temperature structure of the corona stated in question 26.
- Explain how observations of different ionization species at various altitudes above the photosphere could confirm the temperature structure of the corona stated in question 26.
- Explain how observations of limb darkening confirm the theoretical model of the photosphere that states temperatures in the photosphere increase the deeper one observes below the outer edge of the photosphere.
- The solar corona has temperatures at least as high as the core of the Sun, so why doesn't nuclear fusion occur in the corona? (TQ #3)
- Briefly explain the "solar neutrino problem". Why can we use neutrinos to observe the Sun's core instead of just using light, and why are neutrinos so hard to observe?
- Given the conflict of observation with theory, why do Astronomers still believe that the proton-proton chain is the fundamental source of the Sun's energy?
- Given the equation for the absolute luminosity of a star, describe how we can use the radius (size) of a star along with the star's surface temperature to determine the distance to that star.
- Explain why stars with the same atmospheric composition but different surface temperatures will have spectra showing different patterns of absorption lines.
- Suppose a star has no lines of a particular element, like Calcium. How can we tell whether this is due to a lack of Calcium or a stellar temperature that is too high or too low for Calcium lines?
- How and why do spectral line widths correlate with stellar sizes?
- Be able to answer questions such as, "Star X is twice as large as the Sun and has a surface temperature of 3000 K (the Sun's surface temperature is 6000 K). How does the absolute luminosity of star X compare to that of the Sun?" Be able to answer these kinds of questions quantitatively and graphically (plotting the position of X on a typical H-R Diagram relative to the Sun). A worked example appears at the end of this study guide.
- Use the Copernican Principle to help explain why a collection of the stars nearest to the Sun is more representative than a collection of stars with the largest apparent brightness.
- Describe how Astronomers can use either spectral line observations or light curves to deduce that a given star on the sky is actually some kind of eclipsing binary system.
- Understand what a light curve is and what causes the dips in the curve for an eclipsing binary system. Be able to identify the period of a binary system from a light curve. - Given the equation of orbital velocity, how could you deduce the mass of the Sun by knowing only Earth's orbital distance from the Sun and Earth's orbital period (just describe how it is done; you do not need to actually repeat the calculation). - Be able to manipulate the two equations of orbits in order to answer questions such as "The orbital distance for binary system A is the same as the orbital distance for binary system B, but the companion star in binary system A has only half the period of system B. Which system's companion star has a higher orbital velocity? Which system's central star has a higher mass?" - Explain how we find the masses for central stars in edge-on binary systems. What two quantities are easy to measure directly, and how do we use that information to find the central star's mass? - Explain how we find the masses for central stars in face-on binary systems. What two quantities are easy to measure directly? Why is it that we can only easily find the orbital distance for such a system if that system is relatively close to us? - By determining the mass of several hundred stars in binary systems, Astronomers have found that there is a correlation between mass and absolute luminosity. Explain why theorists believe this is true. - Explain why a star's mass is inversely proportional to its lifetime, both quantitatively (with equations) and qualitatively (with words). - Given the relationship between mass and absolute luminosity, be able to answer questions such as, "Star X is twice as massive as our Sun. How does the luminosity of star X compare with our Sun? How does the lifetime of star X compare with our Sun?" - Be able to discuss the concept of ISM extinction of starlight. If, due to ISM extinction, a star appears dimmer than it would otherwise, how does that affect our estimate of the distance? - Just by looking at star counts on the sky (e.g. page 207 in the text), explain how we can tell where an interstellar cloud is found. - Be able to discuss the concept of ISM reddening of starlight. If, due to ISM reddening, a star appears redder than it would otherwise, how does that affect our estimate of the distance? - How and why do ISM absorption lines differ from stellar absorption lines? Given a spectrum with both kinds of lines, be able to identify which set belongs to the ISM and which belongs to the star (as well as relative Doppler shifts due to bulk motion of either or both). - Hydrogen's 21-cm radiation is considered to be a "forbidden line" because we can't reproduce it in laboratory conditions. Why is this emission line only seen in interstellar Hydrogen? - We probably wouldn't be able to detect the presence of most of the cold Hydrogen gas in our galaxy if it weren't for the fine splitting of the first energy level that corresponds to a wavelength of 21 cm. Explain why. - Explain why it is easiest to see the dust component of the ISM by observing at infrared wavelengths. - Explain the concept of pressure equilibrium in the ISM. Understand what the heliosphere is, and explain how this boundary represents an example of pressure equilibrium. - Explain what happens to the temperature and density of an interstellar cloud as it collapses due to its own self-gravity. Explain what happens to the internal (outward-pushing) pressure of the cloud during a collapse. 
- Why is there a minimum mass for stars?
- What is hydrostatic equilibrium? Use the HSE cycle to describe what happens to a star after its gravity is somehow increased (by adding mass) or after its core temperature is somehow increased.
- Describe how the core of a star changes during the course of its main sequence lifetime (a few simple diagrams would help). Use the concept of differentiation to explain why the inert Helium "ash" collects at the center of the star's core.
- What happens to the overall size of a star near the end of its main sequence lifetime? Use HSE to help explain your answer.
- After the main sequence lifetime of a star ends, what happens to the star's size, core temperature and core density? Again, use HSE to help explain your answer.
- Why is the minimum stellar mass for Helium core burning larger than the minimum stellar mass for Hydrogen core burning? As part of your answer, explain three reasons why Helium fusion requires higher core temperatures and higher core densities than Hydrogen fusion.
- Why do stellar colors change to redder shades when they expand into giants near the end of the main sequence?
- After Helium fusion begins, Hydrogen in a shell around the core ignites to fuse into Helium. This Hydrogen was part of the envelope during the main sequence. Why is it now a part of the core?
- Explain the pulsations of a variable star in the context of the Hydrostatic Equilibrium cycle. When a star is at its smallest, explain what is happening to the outward-pushing pressure and the inward-pushing gravity. Likewise, when a star is at its largest.
- Explain how to calibrate the Cepheid Period-Luminosity relationship. Why is it best to calibrate using a sample of nearby Cepheids? Because we're limited to using nearby Cepheids, is our sample likely to be representative? Explain.
- Explain how to use the P-L relationship to find the distance to a galaxy in which a Cepheid is located. Given a couple of Cepheid light curves, be able to tell which has the longer period and therefore the higher absolute luminosity.
- Why do stars have a maximum possible mass? (TQ #1)
- Explain how to find the distance to a planetary nebula with the following pieces of information: angular size, time since the original explosion, and the velocity of the expanding gaseous shell. (TQ #2)
- Explain what a planetary nebula is. Why do stars with less than about four solar masses undergo this experience?
- What is different about iron fusion as opposed to the fusion of lighter elements? Explain how this difference leads to the collapse of the star's core.
- Every atom in your body with an atomic number higher than that of Helium comes from the cycle of stellar evolution, and every atom with an atomic number higher than that of iron comes from a supernova explosion. Explain why we can make these statements with confidence.
- Use conservation of angular momentum (mass*size*rotation speed) to explain why neutron stars spin so quickly.
- Explain how the gravity of black holes is similar to and different from the gravity of ordinary matter. For example, if the Sun were replaced by a one solar mass black hole, would the Earth's orbit change? Also, is it possible for the gravitational force of a one solar mass black hole to be greater than that of the Sun? Explain each answer. (TQ #4)
- Explain how to use binary star system techniques to find the mass of a black hole around which a companion star is orbiting.
- When a black hole is part of a binary system, the region around the black hole contains an accretion disk of material spiraling into the black hole. Explain how this accretion disk radiates energy.
- Explain how we can "prove" the existence of black holes indirectly.
- Why does a representative sample of stars contain mostly stars that are burning Hydrogen in their cores (in other words, mostly stars that lie along the main sequence strip of the H-R Diagram)?
- Why does a representative sample of stars contain more stars with absolute luminosities less than the Sun compared to stars with absolute luminosities greater than the Sun?
- Explain the Malmquist bias. Why are stars with low absolute luminosities typically excluded from samples that are limited by apparent brightness?
- How and why has the metallicity of the ISM changed over time?
- Explain the relationship between age and metallicity for stars.
- Describe how the H-R Diagram for stars in a globular cluster evolves over time. Given a couple of different cluster diagrams, be able to identify and explain which cluster is older and which has a higher metallicity.
- Discuss how astronomers use the main sequence turnoff point of a globular cluster to estimate the age of the cluster.
- Discuss how astronomers find the distance to a globular cluster based on the properties of a star at the main sequence turnoff point.
- Explain how the distribution of globular clusters on the sky is seen as evidence that our Sun is not at the center of our galaxy. How are the distances to globular clusters used to estimate the size of our galaxy?
- Why are star-forming regions usually blue? Why are regions devoid of gas and dust (e.g. the bulge and the halo of our galaxy) typically red?
- If you see a blue star, can you say conclusively that it is young (that it formed recently)? Explain. If you see a red star, can you say conclusively that it is old? Explain.
- Use the rotation of the galaxy and the concept of centrifugal force to explain why the galaxy collapsed into a disk shape.
- Use the fact that there are no zero-metallicity stars in the disk to argue that there must have been a generation of star formation either before or during the collapse of the galaxy into a disk.
- Explain why the gas and dust originally in the halo of the galaxy is now a part of the disk while stars originally in the halo of the galaxy are still there.
- Describe how astronomers discovered that there is a super-massive black hole at the center of our galaxy (and most if not all other galaxies).
- Why are the spiral arms so bright and blue compared to the rest of the material in the disk?
- What is a Keplerian rotation curve? How does the rotation curve of our galaxy differ from a Keplerian curve, and why is this considered to be evidence for the existence of dark matter?
- Given the equation of orbital velocity, explain how we can determine the amount of dark matter in the galaxy. How is the "M" in the equation of orbital velocity different from the visible mass of the galaxy, obtained by counting up all the stars, gas and dust?
- Why are WIMPs (Weakly Interacting Massive Particles) so difficult to study? Even though the mass of a WIMP may be extremely small (even compared to the mass of an atom), they still may constitute a large part of the dark matter. Explain why.
- Explain how MACHOs use gravitational lensing to make starlight appear brighter for a short period of time. A simple diagram would help.
- How and why does the light curve of a MACHO-lensed star differ from the light curve of a Cepheid variable or a star that periodically brightens or dims due to instability? - The colors of MACHO-lensed stars typically do not change during a brightening while other variable stars change colors when they change brightnesses. Explain the difference. - Summarize the scientific results of the two photographs of page 343 of the text, in which astronomers predicted and counted the number of very low mass stars in a representative region. In other words, explain how much these stars contribute to the dark matter in the Milky Way, and how we know this. - Black holes are also thought to be a candidate for the dark matter. Why can we use x-ray observations to detect black holes? If black holes don't have accompanying accretion disks, why can't we detect them easily via lensing like MACHOs? - Explain the concept of lookback time. Why is lookback time only significant for objects that are very far away (more than a few hundred million light years)? - The average size of elliptical galaxies has been found to be inversely proportional to their distances from Earth (when we look at very large, cosmological-scale distances). If this is so, explain how the average size of elliptical galaxies has changed as the Universe has gotten older. - The ratio of spirals to elliptical galaxies has been found to be inversely proportional to the distance from Earth. Based on this, explain how the number of elliptical galaxies in the Universe has changed as the Universe has gotten older. - The observation that elliptical galaxies tend to be located near the centers of galactic clusters is also seen as evidence that ellipticals are formed via galaxy mergers. Explain why the observation leads to this conclusion. - How does the "merger hypothesis" for the origin of elliptical galaxies explain why elliptical galaxies have very small amounts of gas and dust today? Why does a small amount of gas and dust tend to imply that an elliptical galaxy will have an overall red color? - If we measure the radial velocity of a galaxy that is part of a galaxy cluster, explain how this velocity compares to the true velocity of the galaxy. - Why do we assume that the average velocity of galaxies in a cluster must be less than or equal to the escape velocity of the cluster? - Given the equation of escape velocity and knowledge about the angular size and distance to the galaxy cluster, explain how we can use the average galaxy velocity to deduce the existence of dark matter in a galaxy cluster. - How would you expect the metallicity of the ISM of galaxies (seen by way of their absorption lines in quasar spectra) to correlate with redshift? Explain. (TQ #2) - Cepheid variables are very reliable distance indicators. Unfortunately, we can only use them to find distances to galaxies that are very close to our own. Why are we unable to utilize the Cepheid P-L relation for galaxies very far away? - Explain how the standard ruler method of distance determination works, using a simple diagram to help. How does the angular size of an object correlate with its distance from Earth? - Why is the standard ruler method not a reliable method of distance determination? Is there any way to improve this reliability? - Explain how the standard candle method of distance determination works. Assuming all galaxies have the same absolute luminosity, how would the apparent luminosity of galaxies correlate with distance from Earth? 
- Why is the standard candle method (using galaxies as standards) not a reliable distance indicator? Is there any way to improve this reliability?
- Why are the brightest globular clusters in a given galaxy a better "candle" for use with the standard candle method? Why are supernovae peak brightnesses even better, besides the fact that they are often as bright as entire galaxies?
- Given a graph that shows the relationship between absolute luminosity and rotation velocity for a galaxy, explain how to use the Tully-Fisher relationship to determine the distance to a galaxy.
- Why can the TF method only be used on relatively nearby galaxies?
- Given a Hubble diagram that shows the relationship between distance and redshift, explain how to use the redshift of a galaxy's absorption lines to determine the distance to that galaxy. A worked example appears at the end of this study guide.
- How would the Hubble diagram look in a Universe that weren't expanding away from us? (TQ #1)
- Why do some galaxies close to us have radial velocity components that are in the direction toward Earth?
- How do we know that quasars are so far away? Explain.
- How do we know that quasars are much brighter than most galaxies? Explain.
- How do we know that quasars are much smaller than galaxies? Explain the relationship between size and time variability.
- Be able to graph a simple Hubble's Law analogy, like the car race described in class. Be able to calculate the "age" of any car race based on your graph.
- How does the slope of the Hubble relation change as the race time (or the age of the Universe, by analogy) grows longer? Explain with the help of a simple graph.
- Why does the Hubble relation imply that the Universe has a finite age (in other words, why does it imply some kind of initial "big bang")?
- We observe that virtually every galaxy in the sky seems to be moving away from our location in the Milky Way galaxy. Explain why this is not a violation of the Copernican Principle.
- Define the critical density of the Universe. Explain how and why the relationship between the observed density of the Universe and the critical density of the Universe will dictate the ultimate fate of the Universe (i.e. whether it will collapse or expand forever).
- Why is the Anthropic Principle compelling philosophically (what makes us think it may be true)? Use observed vs. critical density as an example to argue this (i.e. what would happen if the observed density were very different from the critical density?).
- Explain two different versions of the Anthropic Principle in your own words and justify your belief in one or the other. (TQ #4)
- Explain how the observation that the night sky is dark leads us to believe that the Universe is finite in some way (start by assuming that the Universe is infinite in space and time and explain how it would differ from what we see today).
- Explain how the observation that the night sky is dark also leads us to the conclusion that the Universe is expanding.
- If the Universe is slowing down due to gravity, how would that change the appearance of the Hubble relation? Explain.
- How does the Hubble relation change if the Universe is accelerating? What does this imply about the age of the Universe compared to the case where gravity slows the Universe down? Explain.
- Explain how the idea of an accelerating Universe helps solve the "age discrepancy", which is the idea that, under the old theory (that the Universe was slowing down), the ages of the oldest globular clusters were apparently older than the age of the Universe.
- What is the Microwave Background Radiation? Explain how it leads us to believe that the early Universe was smaller, hot and dense.
- Why does the Microwave Background look bluer (hotter) in one direction and cooler (redder) in the opposite direction? (TQ #3)
- Explain why observations of a lumpy structure of galaxies in the Universe as a whole led theorists to predict the presence of similar "lumps" in the Microwave Background Radiation (which were subsequently observed by later, more sensitive observations).
- Conditions in the early Universe were ripe for nucleosynthesis to occur prior to a time when the Universe was about 3 minutes old. Why wasn't nucleosynthesis an important process in the Universe prior to a time of about 1 second? Why was it limited to only occurring between times of about 1 second and 3 minutes?
- How does the Big Bang theory predict the current composition of the Universe? In other words, how does it explain the fact that the Universe is mostly Hydrogen, instead of something like Carbon?
- Given problems with the Big Bang theory like the age discrepancy, why don't scientists abandon the theory as incompatible with observations, like the scientific method says we should?
- Explain the importance of the observation that there seems to be an upper limit to the ages of all things in the Universe. What can we conclude from this?
- What is the horizon problem? Explain why the observation of this problem requires a mechanism like inflation.
- Explain why the number of intelligent, communicative civilizations in our galaxy (the "N" of the Drake equation) depends on L, the average lifetime of an intelligent civilization.
- Why do we think that searching for and communicating via radio signals is a more effective way to find out about the possibility of extraterrestrial life as opposed to actually visiting other stellar systems (you don't need to go through all the math, just summarize the reason)?
- Explain the concept of bandwidth in broadcasting. How does the bandwidth of a signal correlate with the amount of energy it takes to broadcast the signal? How does it correlate with the number of possible channels one would have to search to find the signal?
- Explain the tradeoff an observer must make between sensitivity and sky coverage when trying to detect signals. If an observer takes the time to observe with very high sensitivity in a given direction, how does that affect the amount of time it will take to cover the whole sky?
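Worked example for the luminosity question above ("Star X is twice as large as the Sun and has a surface temperature of 3000 K"). Using the relation L ∝ R²T⁴ with the numbers quoted in the question:
$$\frac{L_X}{L_\odot} = \left(\frac{R_X}{R_\odot}\right)^{2}\left(\frac{T_X}{T_\odot}\right)^{4} = 2^{2}\left(\frac{3000\ \mathrm{K}}{6000\ \mathrm{K}}\right)^{4} = 4 \times \frac{1}{16} = \frac{1}{4}.$$
So star X, despite being larger, is only one quarter as luminous as the Sun, and it would be plotted below and to the right of the Sun on a typical H-R Diagram.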
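Worked example for the Hubble diagram question above. The values of the Hubble constant and the recession velocity here are chosen purely for illustration (they are not taken from the course): with H₀ = 70 km/s/Mpc and a redshift corresponding to a recession velocity of 7,000 km/s,
$$d = \frac{v}{H_0} = \frac{7000\ \mathrm{km/s}}{70\ \mathrm{km/s/Mpc}} = 100\ \mathrm{Mpc}.$$
A larger measured redshift (and hence velocity) implies a proportionally larger distance, which is how the Hubble diagram is read.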
http://personal.tcu.edu/dingram/phys20083/sp99_sg.html
Arithmetic for Praxis II ParaPro Test Prep Study Guide (page 4)
The practice quiz for this study guide can be found at:
This section covers the basics of mathematical operations and their sequence. It also reviews whole numbers, integers, fractions, decimals, percents, and estimation.
The Number Line
The number line is a graphical representation of the order of numbers. As you move to the right, the value increases. As you move to the left, the value decreases. A number line can show the approximate values of certain numbers. For example, the following number line shows that –4.5 is halfway between –5 and –4 and that 2.2 is closer to 2 than it is to 3.
The following table will illustrate some comparison symbols:
In addition, the numbers being added are called addends. The result is called a sum. The symbol for addition is called a plus sign. In the following example, 4 and 5 are addends and 9 is the sum:
- 4 + 5 = 9
In subtraction, the number being subtracted is called the subtrahend. The number being subtracted from is called the minuend. The answer to a subtraction problem is called a difference. The symbol for subtraction is called a minus sign. In the following example, 15 is the minuend, 4 is the subtrahend, and 11 is the difference:
- 15 – 4 = 11
When two or more numbers are being multiplied, they are called factors. The answer that results is called the product. In the following example, 5 and 6 are factors and 30 is their product:
5 × 6 = 30
There are several ways to represent multiplication in this mathematical statement. A dot between factors indicates multiplication: 5 · 6 = 30. Parentheses around any one or more factors indicate multiplication: (5)6 = 30, 5(6) = 30, and (5)(6) = 30. Multiplication is also indicated when a number is placed next to a variable: 5a = 30. In this equation, 5 is being multiplied by a.
In division, the number being divided by is called the divisor. The number being divided into is called the dividend. The answer to a division problem is called the quotient. There are a few different ways to represent division with symbols. In each of the following equivalent expressions, 3 is the divisor and 8 is the dividend: 8 ÷ 3 and the fraction 8/3.
Prime and Composite Numbers
A positive integer that is greater than the number 1 is either prime or composite, but not both.
- A prime number is a number that has exactly two factors: 1 and itself. For example, 2, 3, 5, 7, 11, 13, 17, 19, and 23 are all prime numbers.
- A composite number is a number that has more than two factors. For example, 4, 6, 8, 9, 10, 12, 14, 15, and 16 are all composite numbers.
- The number 1 is neither prime nor composite since it has only one factor.
Whole Numbers
Whole numbers are the set of nonnegative numbers that are not expressed as a fraction or a decimal. For example, 0, 4, 39, and 3,318 are all whole numbers. For the ParaPro Assessment, you will be expected to know how to add, subtract, multiply, divide, compare, and order them.
Comparing and Ordering Whole Numbers
To compare and order whole numbers, it is essential that you are familiar with the place value system. The following table shows the place values for a very large number: 3,294,107.
- To compare or order whole numbers, you need to look at the digits in the largest place value of a number first.
- Compare 3,419 and 3,491. Begin by comparing the two numbers in their largest place value. They both have the digit 3 in the thousands place. Therefore, you do not know which number is larger.
Move to the smaller place values (to the right) of each number and continue comparing. The digit in the hundreds place for each number is 4. You still do not know which number is larger. However, when you compare the digits in the tens places, you see that the 9 is greater than the 1. That means 3,491 is greater than 3,419. This can be represented with the greater than symbol: 3,491 > 3,419.
- Put the following numbers in order from greatest to least: 307, 319, 139, 301. To order these numbers, the digits in their place values must be compared. Three of the numbers have a 3 in the hundreds place, but one number has a 1 in the hundreds place. Therefore, 139 is the smallest number. Next, the digits in the tens places of the remaining numbers must be compared. The tens digit in 319 is 1, and the tens digit in 307 and 301 is 0. Therefore, 319 is the largest number. To order 307 and 301, compare the digits in the ones place: 7 is greater than 1, so 307 is greater than 301. The correct order of the numbers, from greatest to least, is 319, 307, 301, and 139.
Adding Whole Numbers
Addition is used when it is necessary to combine amounts. It is easiest to add when the addends are stacked in a column with the place values aligned. Work from right to left, starting with the ones column.
- Add 40 + 129 + 24.
- Align the addends in the ones column. Because it is necessary to work from right to left, begin to add starting with the ones column. The ones column totals 13, and 13 equals 1 ten and 3 ones, so write the 3 in the ones column of the answer, and regroup, or "carry," the 1 ten to the next column as a 1 over the tens column, so that it gets added with the other tens:
- Add the tens column, including the regrouped 1.
- Then add the hundreds column. Because there is only one value, write the 1 in the answer. The sum is 193.
Subtracting Whole Numbers
Subtraction is used to find the difference between amounts. It is easiest to subtract when the minuend and subtrahend are in a column with the place values aligned. Again, just as in addition, work from right to left. It may be necessary to regroup.
- If Becky has 52 clients and Claire has 36, how many more clients does Becky have?
- Find the difference between their client numbers by subtracting. Start with the ones column. Because 2 is less than the number being subtracted (6), regroup, or "borrow," a ten from the tens column. Add the regrouped amount to the ones column. Now subtract 12 – 6 in the ones column.
- Regrouping 1 ten from the tens column left 4 tens. Subtract 4 – 3 and write the result in the tens column of the answer. Becky has 16 more clients than Claire. Check by addition: 16 + 36 = 52.
Multiplying Whole Numbers
In multiplication, the same amount is combined multiple times. For example, instead of adding 30 three times, 30 + 30 + 30, it is easier to simply multiply 30 by 3. If a problem asks for the product of two or more numbers, the numbers should be multiplied to arrive at the answer.
- A school auditorium contains 54 rows, each containing 34 seats. How many seats are there in total?
- In order to solve this problem, you could add 34 to itself 54 times, but we can solve this problem more easily with multiplication. Line up the place values vertically, writing the problem in columns. Multiply the number in the ones place of the top factor (4) by the number in the ones place of the bottom factor (4): 4 × 4 = 16. Because 16 = 1 ten and 6 ones, write the 6 in the ones place in the first partial product and regroup the 1 ten above the tens column of the top factor.
- Multiply the number in the tens place of the top factor (3) by the number in the ones place of the bottom factor (4): 4 × 3 = 12. Then add the regrouped amount: 12 + 1 = 13. Write the 3 in the tens column and the 1 in the hundreds column of the partial product.
- The last calculations to be done require multiplying by the tens place of the bottom factor. Multiply 5 (tens from bottom factor) by 4 (ones from top factor); 5 × 4 = 20, but because the 5 really represents a number of tens, the actual value of the answer is 200 (50 × 4 = 200). Therefore, write the two zeros under the ones and tens columns of the second partial product and regroup, or carry, the 2 hundreds by writing a 2 above the tens place of the top factor.
- Multiply 5 (tens from bottom factor) by 3 (tens from top factor); 5 × 3 = 15, but because the 5 and the 3 each represent a number of tens, the actual value of the answer is 1,500 (50 × 30 = 1,500). Add the two additional hundreds carried over from the last multiplication: 15 + 2 = 17 (hundreds). Write the 17 in front of the zeros in the second partial product.
- Add the partial products to find the total product: 136 + 1,700 = 1,836. The auditorium has 1,836 seats in total.
Note: It is easier to perform multiplication if you write the factor with the greater number of digits in the top row. In this example, both factors have an equal number of digits, so it does not matter which is written on top.
Dividing Whole Numbers
In division, the same amount is subtracted multiple times. For example, instead of subtracting 5 from 25 as many times as possible, 25 – 5 – 5 – 5 – 5 – 5, it is easier to simply divide, asking how many 5s are in 25: 25 ÷ 5.
- At a road show, three artists sold their beads for a total of $54. If they share the money equally, how much money should each artist receive?
- Divide the total amount ($54) by the number of ways the money is to be split (3). Work from left to right. How many times does 3 divide into 5? Write the answer, 1, directly above the 5 in the dividend, because both the 5 and the 1 represent a number of tens. Now multiply: 1 (ten) × 3 = 3 (tens), so write the 3 under the 5, and subtract: 5 (tens) – 3 (tens) = 2 (tens).
- Continue dividing. Bring down the 4 from the ones place in the dividend. How many times does 3 divide into 24? Write the answer, 8, directly above the 4 in the dividend. Because 3 × 8 = 24, write 24 below the other 24 and subtract: 24 – 24 = 0. Each artist should receive $18. If you get a number other than zero after your last subtraction, this number is your remainder.
- What is 9 divided by 4?
- 1 is the remainder. The answer is 2 R1. This answer can also be written as 2 1/4, because there was one part left over out of the four parts needed to make a whole.
Estimating with Whole Numbers
Some questions on the ParaPro Assessment will ask you for an estimate. That means you will not need to find the actual answer, but should instead find an answer that is close to the actual answer. One way to solve estimation problems with whole numbers is to use numbers that are easy to work with, and that are close to the actual numbers.
- A television set weighs 21 pounds. About how much will a case weigh if it carries 46 television sets?
- The number 21 is close to 20, and 20 is much easier to work with than 21. The number 46 is close to 50, and 50 is much easier to work with than 46. To find the approximate weight of the 46 television sets, you can just multiply 20 by 50. A proper estimate would be 1,000 pounds.
Integers
An integer is a whole number or its opposite.
Here are some rules for performing operations with integers:
Adding numbers with the same sign results in a sum of the same sign:
- (positive) + (positive) = positive
- (negative) + (negative) = negative
When adding numbers of different signs, follow this two-step process:
- Subtract the positive values of the numbers. Positive values are the values of the numbers without any signs.
- Keep the sign of the number with the larger positive value.
- –2 + 3 =
- Subtract the positive values of the numbers: 3 – 2 = 1.
- The number 3 is the larger of the two positive values. Its sign in the original example was positive, so the sign of the answer is positive. The answer is positive 1.
- 8 + –11 =
- Subtract the positive values of the numbers: 11 – 8 = 3.
- The number 11 is the larger of the two positive values. Its sign in the original example was negative, so the sign of the answer is negative. The answer is negative 3.
When subtracting integers, change the subtraction sign to an addition sign, and change the sign of the number being subtracted to its opposite. Then follow the rules for addition.
- (+10) – (+12) = (+10) + (–12) = –2
- (–5) – (–7) = (–5) + (+7) = +2
Multiplying and Dividing Integers
A simple method for remembering the rules of multiplying and dividing is that if the signs are the same when multiplying or dividing two quantities, the answer will be positive. If the signs are different, the answer will be negative.
- (positive) × (positive) = positive
- (positive) × (negative) = negative
- (negative) × (negative) = positive
- (10)(–12) = –120
- –5 × –7 = 35
- 12 ÷ –3 = –4
- 15 ÷ 3 = 5
An exponent indicates the number of times a base is used as a factor to attain a product.
- Evaluate 2⁵. In this example, 2 is the base and 5 is the exponent. Therefore, 2 should be used as a factor 5 times to attain a product:
- 2⁵ = 2 × 2 × 2 × 2 × 2 = 32
Any nonzero number raised to the zero power equals 1.
- 5⁰ = 1, 70⁰ = 1, 29,874⁰ = 1
The number 5² is read "5 to the second power," or, more commonly, "5 squared." Perfect squares are numbers that are second powers of other numbers. Perfect squares are always zero or positive, because when you multiply a number by itself, the result is never negative. The perfect squares are 0², 1², 2², 3² … Therefore, the perfect squares are 0, 1, 4, 9, 16, 25, 36, 49, 64, 81, 100 …
The number 5³ is read as "5 to the third power," or, more commonly, "5 cubed." (Powers higher than three have no special name.) Perfect cubes are numbers that are third powers of other numbers. Perfect cubes, unlike perfect squares, can be either positive or negative. This is because when a negative is multiplied by itself three times, the result is negative. The perfect cubes are 0³, 1³, 2³, 3³ … Therefore, the perfect cubes are 0, 1, 8, 27, 64, 125 …
The Order of Operations
There is an order in which a sequence of mathematical operations must be performed, known as PEMDAS:
P: Parentheses/Grouping Symbols. Perform all operations within parentheses first. If there is more than one set of parentheses, begin to work with the innermost set and work toward the outside. If more than one operation is present within the parentheses, use the remaining rules of order to determine which operation to perform first.
E: Exponents. Evaluate exponents.
M/D: Multiply/Divide. Work from left to right in the expression.
A/S: Add/Subtract. Work from left to right in the expression.
This order is known by the acronym PEMDAS, which can be remembered by using the first letter of each of the words in the phrase Please Excuse My Dear Aunt Sally.
Properties of Arithmetic
While ETS says that the ParaPro Assessment will not test your knowledge of the properties of mathematics, they are very important to know.
Commutative Property: This property states that the result of an arithmetic operation is not affected by reversing the order of the numbers. Multiplication and addition are operations that satisfy the commutative property.
- 5 × 2 = 2 × 5
- (5)a = a(5)
- b + 3 = 3 + b
However, neither subtraction nor division is commutative, because reversing the order of the numbers does not yield the same result.
- 5 – 2 ≠ 2 – 5
- 6 ÷ 3 ≠ 3 ÷ 6
Associative Property: If parentheses can be moved to group different numbers in an arithmetic problem without changing the result, then the operation is associative. Addition and multiplication are associative.
- 2 + (3 + 4) = (2 + 3) + 4
- 2(ab) = (2a)b
Distributive Property: When a value is being multiplied by a sum or difference, multiply that value by each quantity within the parentheses. Then, take the sum or difference to yield an equivalent result.
- 5(a + b) = 5a + 5b
- 5(100 – 6) = (5 × 100) – (5 × 6)
This second example can be proved by performing the calculations:
- 5(94) = 5(100 – 6)
- 470 = 500 – 30
- 470 = 470
Additive and Multiplicative Identities and Inverses
The additive identity is the value that, when added to a number, does not change the number. For all integers, the additive identity is 0.
- 5 + 0 = 5
- –3 + 0 = –3
- Adding 0 does not change the values of 5 and –3, so 0 is the additive identity.
The additive inverse of a number is the number that, when added to the number, gives you the additive identity.
- What is the additive inverse of –3?
- This means, "What number can I add to –3 to give me the additive identity (0)?"
- –3 + ___ = 0
- –3 + 3 = 0
- The answer is 3.
The multiplicative identity is the value that, when multiplied by a number, does not change the number. For all integers, the multiplicative identity is 1.
- 5 × 1 = 5
- –3 × 1 = –3
- Multiplying by 1 does not change the values of 5 and –3, so 1 is the multiplicative identity.
The multiplicative inverse of a number is the number that, when multiplied by the number, gives you the multiplicative identity.
- What is the multiplicative inverse of 5?
- This means, "What number can I multiply 5 by to give me the multiplicative identity (1)?"
- 5 × ___ = 1
- 1/5 × 5 = 1
- The answer is 1/5.
There is an easy way to find the multiplicative inverse. It is the reciprocal, which is obtained by reversing the numerator and denominator of a fraction. In the preceding example, the answer is the reciprocal of 5; 5 can be written as 5/1, so the reciprocal is 1/5.
Note: Reciprocals do not change signs.
Note: The additive inverse of a number is the opposite of the number; the multiplicative inverse is the reciprocal.
Factors and Multiples
Factors are numbers that can be divided into a larger number without a remainder.
- 12 ÷ 3 = 4
The number 3 is, therefore, a factor of the number 12. Other factors of 12 are 1, 2, 4, 6, and 12. The common factors of two numbers are the factors that both numbers have in common.
- The factors of 24 = 1, 2, 3, 4, 6, 8, 12, and 24.
- The factors of 18 = 1, 2, 3, 6, 9, and 18.
From the examples, you can see that the common factors of 24 and 18 are 1, 2, 3, and 6. From this list it can also be determined that the greatest common factor of 24 and 18 is 6.
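If it helps to check the factor examples above, here is a small optional Python sketch (not part of the ParaPro material) that verifies the greatest common factor of 24 and 18 and the least common multiple discussed next.

```python
import math

# Check the greatest common factor (GCF) example from the text: 24 and 18.
print(math.gcd(24, 18))              # 6

# The least common multiple (LCM) can be built from the GCF:
# lcm(a, b) = a * b / gcd(a, b).
def lcm(a, b):
    return a * b // math.gcd(a, b)

print(lcm(4, 6))                     # 12, matching the common-multiples example
```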
Determining the greatest common factor (GCF) is useful for simplifying fractions.
- Simplify 16/20.
The factors of 16 are 1, 2, 4, 8, and 16. The factors of 20 are 1, 2, 4, 5, 10, and 20. The common factors of 16 and 20 are 1, 2, and 4. The greatest of these, the GCF, is 4. Therefore, to simplify the fraction, both numerator and denominator should be divided by 4: 16/20 = 4/5.

Multiples are numbers that can be obtained by multiplying a number x by a positive integer.
- 5 × 7 = 35
The number 35 is, therefore, a multiple of the number 5 and of the number 7. Other multiples of 5 are 5, 10, 15, 20, and so on. Other multiples of 7 are 7, 14, 21, 28, and so on.
The common multiples of two numbers are the multiples that both numbers share.
- Some multiples of 4 are: 4, 8, 12, 16, 20, 24, 28, 32, 36 …
- Some multiples of 6 are: 6, 12, 18, 24, 30, 36, 42, 48 …
Some common multiples are 12, 24, and 36. From the above it can also be determined that the least common multiple of the numbers 4 and 6 is 12, since this number is the smallest number that appeared in both lists. The least common multiple, or LCM, is used when performing addition and subtraction of fractions to find the least common denominator. For example, using the denominators 4 and 6 and their LCM of 12: 1/4 + 1/6 = 3/12 + 2/12 = 5/12.

It is very important to remember the place values of a decimal. The first place value to the right of the decimal point is the tenths place. In the number 1,268.3457, the place values from thousands to ten thousandths are as follows: 1 is in the thousands place, 2 is in the hundreds place, 6 is in the tens place, 8 is in the ones place, 3 is in the tenths place, 4 is in the hundredths place, 5 is in the thousandths place, and 7 is in the ten thousandths place.
- In expanded form, this number can also be expressed as:
- 1,268.3457 = (1 × 1,000) + (2 × 100) + (6 × 10) + (8 × 1) + (3 × 0.1) + (4 × 0.01) + (5 × 0.001) + (7 × 0.0001)

Comparing and Ordering Decimals
To compare or order decimals, compare the digits in their place values. It's the same process as comparing or ordering whole numbers. You just need to pay careful attention to the decimal point.
- Compare 0.2 and 0.05.
Compare the numbers by the digits in their place values. Both decimals have a 0 in the ones place, so you need to look at the place value to the right. 0.2 has a 2 in the tenths place while 0.05 has a 0 in the tenths place. Because 2 is bigger than 0, 0.2 is bigger than 0.05. You can show this as 0.2 > 0.05.
- Order 2.32, 2.38, and 2.29 in order from greatest to least.
Again, look at the place values of the numbers. All three numbers have a 2 in the ones place, so you cannot order them yet. Looking at the next place value to the right, tenths, reveals that 2.29 has the number 2 in the tenths place whereas the other numbers have a 3. So 2.29 is the smallest number. To order 2.32 and 2.38 correctly, compare the digits in the hundredths place. 8 > 2, so 2.38 > 2.32. The correct order from greatest to least is 2.38, 2.32, and 2.29.

It is often inconvenient to work with decimals. It is much easier to have an approximate value for a decimal. In this case, you can round decimals to a certain number of decimal places. The most common ways to round are as follows:
- To the nearest integer: zero digits to the right of the decimal point
- To the nearest tenth: one digit to the right of the decimal point (tenths unit)
- To the nearest hundredth: two digits to the right of the decimal point (hundredths unit)
In order to round, look at the digit to the immediate right of the digit you are rounding to. If that digit is less than 5, leave the digit you are rounding to alone, and omit all the digits to its right. If that digit is 5 or greater, increase the digit you are rounding to by one, and omit all the digits to its right.
- Round 14.38 to the nearest whole number.
- The digit to the right of the ones place is 3. Therefore, you can leave the digit you are rounding to alone, which is the 4 in the ones place. Omit all the digits to the right.
- 14.38 is 14 when rounded to the nearest whole number.
- Round 1.084 to the nearest tenth.
- The digit to the right of the tenths place is 8. Therefore, you need to increase the digit you are rounding to by 1. That means the 0 in the tenths place becomes a 1. Then all of the digits to the right can be omitted.
- 1.084 is 1.1 to the nearest tenth.

Adding and Subtracting Decimals
Adding and subtracting decimals is very similar to adding and subtracting whole numbers. The most important thing to remember is to line up the numbers to be added or subtracted by their decimal points. Zeros may be filled in as placeholders when all numbers do not have the same number of decimal places.
- What is the sum of 0.45, 0.8, and 1.36? Line up the decimal points and add: 0.45 + 0.80 + 1.36 = 2.61.
- Take away 0.35 from 1.06. Line up the decimal points and subtract: 1.06 – 0.35 = 0.71.

The process for multiplying decimals is exactly the same as multiplying whole numbers. Multiply the numbers, ignoring the decimal points in the factors. Then add the decimal point in the final product later.
- What is the product of 0.14 and 4.3? Ignoring the decimal points, 14 × 43 = 602.
Now, to figure out where the decimal point goes in the product, count how many decimal places are in each factor. 4.3 has one decimal place and 0.14 has two decimal places. Add these in order to determine the total number of decimal places the answer must have to the right of the decimal point. In this problem, there are a total of three (1 + 2) decimal places. Therefore, the decimal point needs to be placed three decimal places from the right side of the answer. In this example, 602 turns into 0.602. If there are not enough digits in the answer, add zeros in front of the answer until there are enough.
- Multiply 0.03 × 0.2. Ignoring the decimal points, 3 × 2 = 6. There are three total decimal places in the two numbers being multiplied. Therefore, the answer must contain three decimal places. Starting to the right of 6 (because 6 is equal to 6.0), move left three places. The answer becomes 0.006.

To divide decimals, you need to change the divisor so that it does not have any decimals in it. In order to do that, simply move the decimal place to the right as many places as necessary to make the divisor a whole number. The decimal point must also be moved in the dividend the same number of places to keep the answer the same as the original problem. Moving a decimal point in a division problem is equivalent to multiplying a numerator and denominator of a fraction by the same quantity, which is the reason the answer will remain the same. If there are not enough decimal places in the dividend (the number being divided) to accommodate the required move, simply add zeros at the end of the number. Add zeros after the decimal point to continue the division until the decimal terminates, or until a repeating pattern is recognized. The decimal point in the quotient belongs directly above the decimal point in the dividend.
- What is 1.53 ÷ 0.425?
- To make 0.425 a whole number, move the decimal point three places to the right: 0.425 becomes 425. Now move the decimal point three places to the right for 1.53: You need to add a zero, but 1.53 becomes 1,530. The problem is now a simple long division problem: 1,530 ÷ 425 = 3.6.

A fraction is a part of a whole, represented with one number over another number. The number on the bottom, the denominator, shows how many parts there are in the whole in total. The number on the top, the numerator, shows how many parts there are of the whole. For example, in the fraction 3/4, the denominator 4 shows that the whole is divided into four equal parts, and the numerator 3 shows that three of those parts are being considered.
To perform operations with fractions, it is necessary to understand some basic concepts.

To simplify fractions, identify the greatest common factor (GCF) of the numerator and denominator and divide both the numerator and denominator by this number.
- The GCF of 16 and 24 is 8, so divide 16 and 24 each by 8 to simplify the fraction: 16/24 = 2/3.

Adding and Subtracting Fractions
To add or subtract fractions with like denominators, just add or subtract the numerators and keep the denominator. To add or subtract fractions with unlike denominators, first find the least common denominator or LCD. The LCD is the smallest number divisible by each of the denominators. For example, for the denominators 8 and 12, 24 would be the LCD because 24 is the smallest number that is divisible by both 8 and 12: 8 × 3 = 24, and 12 × 2 = 24. Using the LCD, convert each fraction to its new form by multiplying both the numerator and denominator by the appropriate factor to get the LCD, and then follow the directions for adding/subtracting fractions with like denominators.

To multiply fractions, simply multiply the numerators and the denominators.

Dividing fractions is similar to multiplying fractions. You just need to flip the numerator and denominator of the divisor, the fraction you are dividing by. Then multiply across, like you would when multiplying fractions.
- Flip the numerator and denominator of the divisor and change the symbol to multiplication.
- Now multiply the numerators and the denominators, and simplify if necessary.
- If both the numerator and the denominator of the result can be divided by a common factor such as 2, the fraction can be reduced.

Sometimes it is necessary to compare the sizes of fractions. This is very simple when the fractions have a common denominator. All you have to do is compare the numerators.
- Compare two fractions with the same denominator whose numerators are 3 and 5. Because 3 is smaller than 5, the fraction with the numerator 3 is the smaller fraction.

If the fractions do not have a common denominator, multiply the numerator of the first fraction by the denominator of the second fraction. Write this answer under the first fraction. Then multiply the numerator of the second fraction by the denominator of the first one. Write this answer under the second fraction. Compare the two numbers. The larger number represents the larger fraction.
- Which is larger: 7/11 or 4/9?
- Cross multiply.
- 7 × 9 = 63   4 × 11 = 44
- 63 > 44; therefore, 7/11 > 4/9.
- Compare 6/18 and 2/6.
- Cross multiply.
- 6 × 6 = 36   2 × 18 = 36
- 36 = 36; therefore, 6/18 = 2/6.

Percents are always "out of 100": 45% means 45 out of 100. Therefore, to write percents as decimals, move the decimal point two places to the left (to the hundredths place).
- Here are some common conversions:
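- 1/4 = 0.25 = 25%
- 1/2 = 0.50 = 50%
- 3/4 = 0.75 = 75%
- 1/10 = 0.10 = 10%
- 1 = 1.00 = 100%
These are standard equivalences; any other fraction can be converted the same way by dividing the numerator by the denominator to get a decimal and then moving the decimal point two places to the right to get the percent.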
http://www.education.com/reference/article/arithmetic1/?page=4
On the vector page we saw that vectors take elements with a field structure and create an array of these field elements which produces a vector space structure. This is described as a vector 'over' a field.

- combined structure: vector space; its operations are addition and scalar multiplication; its elements are scalars and vectors.
- element structure: field; its operations are add, subtract, multiply and divide; its elements are numbers such as integers, reals and so on.

A covector is the dual of this:
- vector: field → vector space
- covector: vector space → field

that is: if we think of a vector as a mapping from a field to a vector space then a covector represents a mapping from a vector space to a field.

So how can we construct a covector? A linear function like:

ƒ(x,y,z) = 3*x + 4*y + 2*z

has similarities to vectors; for instance we can add them. If:

ƒ1(x,y,z) = 3*x + 4*y + 2*z
ƒ2(x,y,z) = 6*x + 5*y + 3*z

then we can add them by adding corresponding terms:

ƒ1(x,y,z) + ƒ2(x,y,z) = 9*x + 9*y + 5*z

We can also apply scalar multiplication, for instance:

2*ƒ1(x,y,z) = 6*x + 8*y + 4*z

We can also multiply them together, for instance:

ƒ1(x,y,z) * ƒ2(x,y,z) = 18x² + 39xy + 20y² + 21xz + 22yz + 6z²

but this product is no longer linear but is quadratic.

So these functions have the same properties as vectors (that is, they are isomorphic to vectors); however we now want to reverse this and make the functions ƒ1 and ƒ2 the unknowns and the vector the known:

(a*ƒ1 + b*ƒ2 + c*ƒ3)(x,y,z) = (a*ƒ1)(x,y,z) + (b*ƒ2)(x,y,z) + (c*ƒ3)(x,y,z)

So if we supply values for the vector, what is the unknown function (or the function multipliers a, b and c)?

We can generalise this duality between vectors and covectors to tensors. One of the aims of this type of approach is to analyze geometry and physics in a way that is independent of the coordinate system. The duality shows itself in various ways:
- If vectors are related to columns of a matrix then covectors are related to the rows.
- The dot product of a vector and its corresponding covector gives a scalar.
- When the coordinate system is changed then the covectors move in the opposite way to vectors (Contravariant and Covariant).
- If a vector is made from a linear combination of basis vectors then a covector is made by combining the normals to planes.
- When we take an infinitesimally small part of a manifold the vectors form the tangent space and the covectors form the cotangent space.
- Vector elements are represented by superscripts and covector elements are represented by subscripts.

So let's start with a 3D global orthogonal coordinate system. First we will start with a coordinate system based on a linear combination of orthogonal basis vectors. The physical vector 'p' can be represented by either:

p = ∑ vⁱeᵢ in the red coordinate system, and
p = ∑ v'ⁱe'ᵢ in the green coordinate system.

- p = physical vector being represented in tensor terms
- vⁱ = tensor in the red coordinate system
- eᵢ = basis in the red coordinate system
- v'ⁱ = tensor in the green coordinate system
- e'ᵢ = basis in the green coordinate system

So we can transform between the two using:

v'ᵏ = ∑ tᵏᵢ vⁱ
eₖ = ∑ t'ᵏᵢ e'ᵢ

- t = a matrix tensor which rotates the vector v to the vector v'
- t' = a matrix tensor which rotates the basis e to the basis e'

We are considering the situation where a vector is measured as a linear combination of a number of basis vectors. We now add an additional condition that the basis vectors are mutually at 90° to each other.
In this case we have:

eᵢ • eⱼ = δᵢⱼ

- eᵢ = a unit length basis vector
- eⱼ = another unit length basis vector perpendicular to the first.
- δᵢⱼ = Kronecker Delta as described here.

If we choose a different set of basis vectors, but still perpendicular to each other, say e'ᵢ and e'ⱼ, then we have:

e'ᵢ • e'ⱼ = δᵢⱼ

To add more dimensions we can use:

eₖᵢ • eₖⱼ = δᵢⱼ

This is derived from the above expression using the substitution property of the Kronecker Delta. We could express the above in matrix notation:

e eᵀ = [I]

For example, in the simple two dimensional case, since the basis vectors are orthogonal then:

e₀•e₀ = e₁•e₁ = 1
e₁•e₀ = e₀•e₁ = 0

eᵀ e = 1

but a scalar multiplication by 1 is the same as a matrix multiplication by [I], so we have:

eᵀ e = e eᵀ = [I] = 1 = δᵢⱼ

Now instead of looking at the basis vectors we will look at:

aᵀ a = [I]

- a = a vector
- aᵀ = transpose of 'a'
- [I] = the identity matrix

We can combine these terminologies to give:

(aᵀ a)ᵢⱼ = eₖᵢ • eₖⱼ = δᵢⱼ

Any of these equations defines an orthogonal transformation. So far we have assumed the coordinates are linear and orthogonal, but what if the coordinates are curvilinear?
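To make the ideas above concrete, here is a minimal NumPy sketch (the array names f1, f2, v and e are illustrative, not from the original page). It treats the covectors ƒ1 and ƒ2 as row vectors acting on a vector through the dot product, shows that they add and scale like vectors, and checks that a set of mutually perpendicular unit basis vectors satisfies e eᵀ = [I].

```python
import numpy as np

# Covectors f1(x,y,z) = 3x + 4y + 2z and f2(x,y,z) = 6x + 5y + 3z,
# represented as row vectors; applying a covector to a vector is a dot product.
f1 = np.array([3.0, 4.0, 2.0])
f2 = np.array([6.0, 5.0, 3.0])
v = np.array([1.0, 2.0, 3.0])            # an arbitrary vector (x, y, z)

print(f1 @ v)                            # 3*1 + 4*2 + 2*3 = 17.0
print((f1 + f2) @ v)                     # covectors add termwise: (9x + 9y + 5z)(v) = 42.0
print((2 * f1) @ v)                      # scalar multiplication: 2*f1 applied to v = 34.0

# An orthonormal basis, stored as the rows of a matrix, satisfies
# e_i . e_j = delta_ij, which in matrix notation is e @ e.T = [I].
e = np.eye(3)                            # the standard basis e0, e1, e2
print(np.allclose(e @ e.T, np.eye(3)))   # True: an orthogonal transformation
```

The same check can be run on any rotated basis (any matrix whose rows are mutually perpendicular unit vectors); e @ e.T will still come out as the identity, which is exactly the defining property of an orthogonal transformation mentioned above.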
http://www.euclideanspace.com/maths/algebra/vectors/related/covector/index.htm
Your Digestive System And How It Works The digestive system is a series of hollow organs joined in a long, twisting tube from the mouth to the anus. Inside this tube is a lining called the mucosa. In the mouth, stomach, and small intestine, the mucosa contains tiny glands that produce juices to help digest food. Two solid organs, the liver and the pancreas, produce digestive juices that reach the intestine through small tubes. In addition, parts of other organ systems (for instance, nerves and blood) play a major role in the digestive system. Why Digestion Is Important When we eat such things as bread, meat, and vegetables, they are not in a form that the body can use as nourishment. Our food and drink must be changed into smaller molecules of nutrients before they can be absorbed into the blood and carried to cells throughout the body. Digestion is the process by which food and drink are broken down into their smallest parts so that the body can use them to build and nourish cells and to provide energy. How Food Is Digested Digestion involves the mixing of food, its movement through the digestive tract, and the chemical breakdown of the large molecules of food into smaller molecules. Digestion begins in the mouth, when we chew and swallow, and is completed in the small intestine. The chemical process varies somewhat for different kinds of food. Movement Of Food Through The System The large, hollow organs of the digestive system contain muscle that enables their walls to move. The movement of organ walls can propel food and liquid and also can mix the contents within each organ. Typical movement of the esophagus, stomach, and intestine is called peristalsis. The action of peristalsis looks like an ocean wave moving through the muscle. The muscle of the organ produces a narrowing and then propels the narrowed portion slowly down the length of the organ. These waves of narrowing push the food and fluid in front of them through each hollow organ. The first major muscle movement occurs when food or liquid is swallowed. Although we are able to start swallowing by choice, once the swallow begins, it becomes involuntary and proceeds under the control of the nerves. The esophagus is the organ into which the swallowed food is pushed. It connects the throat above with the stomach below. At the junction of the esophagus and stomach, there is a ringlike valve closing the passage between the two organs. However, as the food approaches the closed ring, the surrounding muscles relax and allow the food to pass. The food then enters the stomach, which has three mechanical tasks to do. First, the stomach must store the swallowed food and liquid. This requires the muscle of the upper part of the stomach to relax and accept large volumes of swallowed material. The second job is to mix up the food, liquid, and digestive juice produced by the stomach. The lower part of the stomach mixes these materials by its muscle action. The third task of the stomach is to empty its contents slowly into the small intestine. Several factors affect emptying of the stomach, including the nature of the food (mainly its fat and protein content) and the degree of muscle action of the emptying stomach and the next organ to receive the contents (the small intestine). As the food is digested in the small intestine and dissolved into the juices from the pancreas, liver, and intestine, the contents of the intestine are mixed and pushed forward to allow further digestion. 
Finally, all of the digested nutrients are absorbed through the intestinal walls. The waste products of this process include undigested parts of the food, known as fiber, and older cells that have been shed from the mucosa. These materials are propelled into the colon, where they remain, usually for a day or two, until the feces are expelled by a bowel movement.

Production Of Digestive Juices
The glands that act first are in the mouth: the salivary glands. Saliva produced by these glands contains an enzyme that begins to digest the starch from food into smaller molecules. The next set of digestive glands is in the stomach lining. They produce stomach acid and an enzyme that digests protein. One of the unsolved puzzles of the digestive system is why the acid juice of the stomach does not dissolve the tissue of the stomach itself. In most people, the stomach mucosa is able to resist the juice, although food and other tissues of the body cannot. After the stomach empties the food and juice mixture into the small intestine, the juices of two other digestive organs mix with the food to continue the process of digestion. One of these organs is the pancreas. It produces a juice that contains a wide array of enzymes to break down the carbohydrate, fat, and protein in food. Other enzymes that are active in the process come from glands in the wall of the intestine or even a part of that wall. The liver produces yet another digestive juice: bile. The bile is stored between meals in the gallbladder. At mealtime, it is squeezed out of the gallbladder into the bile ducts to reach the intestine and mix with the fat in our food. The bile acids dissolve the fat into the watery contents of the intestine, much like detergents that dissolve grease from a frying pan. After the fat is dissolved, it is digested by enzymes from the pancreas and the lining of the intestine.

Absorption And Transport Of Nutrients
Digested molecules of food, as well as water and minerals from the diet, are absorbed from the cavity of the upper small intestine. Most absorbed materials cross the mucosa into the blood and are carried off in the bloodstream to other parts of the body for storage or further chemical change. As already noted, this part of the process varies with different types of nutrients.

Carbohydrates. It is recommended that about 55 to 60 percent of total daily calories be from carbohydrates. Some of our most common foods contain mostly carbohydrates. Examples are bread, potatoes, legumes, rice, spaghetti, fruits, and vegetables. Many of these foods contain both starch and fiber. The digestible carbohydrates are broken into simpler molecules by enzymes in the saliva, in juice produced by the pancreas, and in the lining of the small intestine. Starch is digested in two steps: First, an enzyme in the saliva and pancreatic juice breaks the starch into molecules called maltose; then an enzyme in the lining of the small intestine (maltase) splits the maltose into glucose molecules that can be absorbed into the blood. Glucose is carried through the bloodstream to the liver, where it is stored or used to provide energy for the work of the body. Table sugar is another carbohydrate that must be digested to be useful. An enzyme in the lining of the small intestine digests table sugar into glucose and fructose, each of which can be absorbed from the intestinal cavity into the blood.
Milk contains yet another type of sugar, lactose, which is changed into absorbable molecules by an enzyme called lactase, also found in the intestinal lining.

Protein. Foods such as meat, eggs, and beans consist of giant molecules of protein that must be digested by enzymes before they can be used to build and repair body tissues. An enzyme in the juice of the stomach starts the digestion of swallowed protein. Further digestion of the protein is completed in the small intestine. Here, several enzymes from the pancreatic juice and the lining of the intestine carry out the breakdown of huge protein molecules into small molecules called amino acids. These small molecules can be absorbed from the hollow of the small intestine into the blood and then be carried to all parts of the body to build the walls and other parts of cells.

Fats. Fat molecules are a rich source of energy for the body. The first step in digestion of a fat such as butter is to dissolve it into the watery content of the intestinal cavity. The bile acids produced by the liver act as natural detergents to dissolve fat in water and allow the enzymes to break the large fat molecules into smaller molecules, some of which are fatty acids and cholesterol. The bile acids combine with the fatty acids and cholesterol and help these molecules to move into the cells of the mucosa. In these cells the small molecules are formed back into large molecules, most of which pass into vessels (called lymphatics) near the intestine. These small vessels carry the reformed fat to the veins of the chest, and the blood carries the fat to storage depots in different parts of the body.

Vitamins. Another vital part of our food that is absorbed from the small intestine is the class of chemicals we call vitamins. The two different types of vitamins are classified by the fluid in which they can be dissolved: water-soluble vitamins (all the B vitamins and vitamin C) and fat-soluble vitamins (vitamins A, D, E, and K).

Water and salt. Most of the material absorbed from the cavity of the small intestine is water in which salt is dissolved. The salt and water come from the food and liquid we swallow and the juices secreted by the many digestive glands.

How The Digestive Process Is Controlled
A fascinating feature of the digestive system is that it contains its own Hormone Regulators. The major hormones that control the functions of the digestive system are produced and released by cells in the mucosa of the stomach and small intestine. These hormones are released into the blood of the digestive tract, travel back to the heart and through the arteries, and return to the digestive system, where they stimulate digestive juices and cause organ movement. The hormones that control digestion are gastrin, secretin, and cholecystokinin (CCK):
- Gastrin causes the stomach to produce an acid for dissolving and digesting some foods. It is also necessary for the normal growth of the lining of the stomach, small intestine, and colon.
- Secretin causes the pancreas to send out a digestive juice that is rich in bicarbonate. It stimulates the stomach to produce pepsin, an enzyme that digests protein, and it also stimulates the liver to produce bile.
- CCK causes the pancreas to grow and to produce the enzymes of pancreatic juice, and it causes the gallbladder to empty.

Additional Hormones In The Digestive System Regulate Appetite:
- Ghrelin is produced in the stomach and upper intestine in the absence of food in the digestive system and stimulates appetite.
- Peptide YY is produced in the GI tract in response to a meal in the system and inhibits appetite. Both of these hormones work on the brain to help regulate the intake of food for energy. Two types of nerves help to control the action of the digestive system. Extrinsic (outside) nerves come to the digestive organs from the unconscious part of the brain or from the spinal cord. They release a chemical called acetylcholine and another called adrenaline. Acetylcholine causes the muscle of the digestive organs to squeeze with more force and increase the "push" of food and juice through the digestive tract. Acetylcholine also causes the stomach and pancreas to produce more digestive juice. Adrenaline relaxes the muscle of the stomach and intestine and decreases the flow of blood to these organs. Even more important, though, are the intrinsic (inside) nerves, which make up a very dense network embedded in the walls of the esophagus, stomach, small intestine, and colon. The intrinsic nerves are triggered to act when the walls of the hollow organs are stretched by food. They release many different substances that speed up or delay the movement of food and the production of juices by the digestive organs. Overview Of The Digestive System Posted By: Jon Barron Edited To Fit This Format Introduction And Getting Food Into The Digestive Tract Effectively, this system is a continuous tube from the mouth to the anus -- something you probably don't want to think about the next time you kiss someone. Over the course of the next half dozen or so newsletters, I'm going to walk you through the digestive system -- from the tip of your tongue to the outer edge of your rectum. We're going to cover the anatomy and physiology of everything from your teeth to your bowel, plus the organs of digestion including the liver, gallbladder, and pancreas. All of this will help you understand the nature of diseases of the digestive tract (everything from hiatal hernia to acid reflux, from peptic ulcers to irritable bowel syndrome) and how to treat them naturally by working with your body, not against it. Along the way, I'm going to be challenging a number of medical assumptions. How can that be? Aren't anatomy and physiology pretty much cut and dried? And the answer is: "Not necessarily." As it turns out, the body responds differently according to what you eat, how you eat it, and how that food is prepared. Virtually, all physiological assumptions used by the medical community are based on observation of people eating the typical high speed modern diet. Change the diet, and you change the physiology. And in fact, these differences are critical. It has been said that we dig our graves one forkful at a time. By understanding exactly how our body processes what we eat and how what we eat affects those processes, we can change our health outcomes. Effectively, we can delay the digging of our graves for years. And maybe even more importantly, we can enjoy those years with a much higher level of health and vitality. I'm sorry, but people who tell me they are perfectly healthy because they are successfully "managing" their acid reflux and Crohn's disease with medications are not actually healthy. They are merely suppressing the symptoms of unhealth temporarily. Obviously, this is a huge topic and can't be covered in one newsletter. 
Effectively, I'm going to break the discussion into several pieces, including: - Getting food into the digestive tract -- the mouth and esophagus - The organs that support digestion -- the liver, gallbladder, and pancreas - Absorbing nutrients -- the small intestine - Processing and eliminating the waste -- the colon Digestive System Overview Before we launch into the focus of today's newsletter, the mouth and esophagus, Let's take a quick overview of the entire system. The digestive system is also known as the gastrointestinal (GI) tract and the alimentary canal and covers everything from the digestive tract itself to the organs that support it. It is a continuous tube like structure that develops outpouchings, which in turn evolve into those aforementioned attached digestive organs such as the pancreas, liver, and gallbladder. The entire system is about 40 feet in length from the mouth to the anus and is designed to transport food and water, modify it, and make it suitable for absorption and excretion. There are storage sites, excretion sites, and detoxifying sites along the way. And, according to the medical community, it has six primary functions. - Ingesting food. - Preparing food for digestion by physically grinding it and breaking it down into small pieces and unwinding proteins so they can be separated into their component amino acids. - Actually breaking the food into molecular pieces that your body can use as nourishment. - Transporting the food during its various stages of breakdown along the digestive tract in a measured, "manageable" flow. - Absorbing the nutrients into the body. Absorption is the movement of broken down nutrients across the digestive tract wall and into the bloodstream for use by the cells of the body. Only water and alcohol are absorbed through the mucosa of the stomach and only in special circumstances such as severe dehydration. All the rest of absorption happens in the small intestine. - Eliminating the unused waste products of digestion and absorption from the body. - Digested waste products go to the kidneys - Undigested waste products pass out through the colon and rectum. - Ingested material that might otherwise be toxic is rendered harmless, primarily by the liver, and excreted from the body. But that said, I now have my first disagreement with the medical community. I submit to you that the above list is incomplete, and that these omissions are not unimportant. For example, medicine has no understanding of the role your digestive system plays in maintaining an optimal environment for beneficial bacteria and why that's essential. Therefore, they both allow and, in fact, encourage by their treatments many diseases to manifest that should never appear -- and have no idea how to treat them when they do. And that's just one example that we'll explore in more detail later on. So, from a holistic point of view, the digestive system, in addition to the functions listed above, also performs the following functions: - It is the first line of defense in the body's immune system. It both identifies and eliminates viruses and unhealthy bacteria ingested with our food and water. - It plays a key role in helping remove, not just food waste from the body, but also metabolic waste, heavy metals, and drug residues. - It also serves as a drain for toxic substances absorbed through the skin and lungs. 
- And, of course, as mentioned above, it is designed to serve as a hospitable breeding ground for trillions of beneficial bacteria that do everything from aiding in digestion, waste elimination, and immune function. In fact, as much of 60% of your immune function comes from beneficial bacteria living in your intestinal tract. Carnivore, Omnivore, Frugivore? There is one other piece of overview information we need to cover. The medical community bases its assumptions concerning the human digestive system on the fact that it is essentially designed as an omnivore system. Only people at tailgate parties and gladiator games actually believe that we are pure carnivores. But, as I discussed in detail in "Lessons from the Miracle Doctors", this is simply not supported by the evidence at hand. And once again, this distinction is not subtle; and not insignificant. Yes, the human body has an amazing ability to adapt to any diet we throw at it -- but not without consequences. And, in fact, many of the diseases we face today are the direct result of not understanding what our systems are designed to handle and the consequences we face as a result. How could the medical community be so wrong on this issue? Actually, it's very simple, and it's the same old problem. As usual, the medical community views the body as separate pieces, not as an integrated whole. It looks at things in isolation. In this particular case it looks at the diet of the 99% of population that passes through their doors in need of their care, and those people eat everything from cotton candy to slabs of grilled beef -- an omnivore diet. Given this context, for medical anatomists, the digestive system is undeniably designed for an omnivore diet. However, it takes only a slightly more holistic viewpoint to make a casual comparison of the structures of the human digestive system (teeth, stomach, and intestines) to other animals living in the wild to see how unsupportable that point of view is. And in fact, we will cover those differences in detail as we move through the digestive system and discuss each relevant organ. And with all that said, let's now begin our trip through the digestive system. Getting Food Into The Digestive Tract, The Mouth And Esophagus Let's begin our exploration of the digestive system by examining the structures that play a key role in getting the food into the stomach. And since this is not an actual anatomy course, but a series of newsletters about how anatomy and physiology relate to alternative health, we will focus our discussion on the specific parts of the system relevant to our discussion and brush lightly over the rest. The mouth is the portal to the digestive system. Food enters the body through the mouth, where it is cut and ground by the teeth and moistened by saliva for ease in swallowing and to start the digestive process. The tongue assists in moving food around during chewing and swallowing and also contains the taste buds. Most medical texts suggest that our teeth are designed to eat all kinds of food from meat to fruit, thus proving that man is an omnivore. But as I mentioned earlier, the facts do not bear this out. The first thing you notice about carnivores is that their teeth are nothing like those found in humans. They have huge canines for striking and seizing prey, pointed incisors for removing meat from bones, and molars and premolars with cusps for shredding muscle fiber. 
In carnivores, the teeth of the upper jaw slide past the outside of the lower jaw so that prey is caught in a vice-like grip. In general, carnivores don't chew much; mostly, they just tear chunks off and swallow them whole. All in all, nothing like human teeth. But the claim in medical texts is that we are omnivores, not carnivores. How does that claim stand up? Well, first of all, no animal is really adapted to eat all things, but if any animal comes close, it would be the bear. Typical foods consumed by bears include ants, bees, seeds, roots, nuts, berries, insect larvae such as grubs, and even flowers. Some meat, of course, is eaten by bears, including rodents, fish, deer, pigs, and lambs. Grizzlies and Alaskan brown bears are well-known salmon eaters. Polar bears feed almost exclusively on seals, but then, what vegetation is there for them to eat in the frozen wastes of the Arctic? And, of course, anyone who has read Winnie the Pooh knows that many bears love honey. So, other than the ants, grubs, and rodents, the bear diet sounds a lot like the typical Western diet. Among the great apes (the gorilla, the orangutan, the bonobo, and the chimpanzee) and ourselves, only humans and chimpanzees hunt and eat meat on a frequent basis. Gorillas have never been observed hunting or feeding on any animals other than invertebrates such as termites and ants. Nevertheless, chimpanzees are largely fruit eaters, and meat comprises only about 3 percent of their diet -- far less than is found in the typical Western diet. Bottom line: at least as defined by our teeth, we do not qualify as carnivores or omnivores. So, at least as judged by our teeth, meat should comprise no more than 3% of our diet. But teeth do not comprise the end of the issues. Later on, we'll compare stomachs and intestinal tracts to see if we match up any better there. The tongue is the largest muscle in the mouth. It functions in chewing, swallowing, and forming words. The extrinsic muscles of the tongue (those muscles that originate outside the tongue itself) attach to the skull and neck, and they move from side to side and in and out. The intrinsic muscles attach to the tongue itself, and they alter the tongue's shape (for swallowing and speech). The most interesting parts of the tongue in terms of our discussion are the papillae, the bumps on the tongue that contain the taste buds. Taste buds are composed of groups of about 40 column shaped epithelial cells bundled together along their long axes. Taste cells within a bud are arranged such that their tips form a small taste pore. Minute, hair-like threads called microvilli extend through this pore from the actual taste cells. The microvilli of the taste cells bear the actual taste receptors, and it appears that most taste buds contain cells that bear receptors for two or three of the basic tastes. There are four tastes we normally associate with taste buds: sweet, salty, sour, and bitter. However, research has identified a fifth taste our buds can identify. The fifth taste is umami, the taste of monosodium glutamate (no kidding), and has recently been recognized as a unique taste, as it cannot be elicited by any combination of the other four taste types. Glutamate is present in a variety of protein-rich foods, and particularly abundant in aged cheese. Unless artificially disrupted, our sense of taste will guide us to the foods necessary for our survival. And, in fact, our taste preferences change according to our body's needs. Just ask the husband of any pregnant woman. 
Or more scientifically: - Removal of the adrenal glands without replacement of mineralocorticoids leads rapidly to death due to massive loss of sodium from the body. Adrenalectomized animals (animals whose adrenal glands have been surgically removed) show a clear preference for salty water over pure water, and if provided with salt water, can actually survive. - If the parathyroid glands are removed, animals lose calcium and cannot maintain blood calcium levels appropriately due to deficiency in parathyroid hormone. Following parathyroidectomy (removal of the parathyroid glands), animals choose drinking water that contains calcium chloride over pure water or water containing equivalent concentrations of sodium chloride. - Injection of excessive doses of insulin results in hypoglycemia (low blood sugar). Following such treatment, animals will preferentially pick out and consume the sweetest among a group of foods. Now, there are three tastes I want to focus on. The sweet taste was designed to cause us to desire natural carbohydrates essential for our survival. As we discussed above, our teeth match those of the frugivores, largely fruit eaters. However, technology has allowed food manufacturers to exploit our desire for sweet things -- to our detriment. For the most part, concentrated sugars, other than honey, are not naturally available for us to consume. Table sugar is a manufactured creation, as is maple syrup, agave syrup, not to mention high fructose corn syrup, glucose, dextrose and all of the other concentrated sweeteners added to our food. If living in nature, our desire for sweets would lead us to low concentrations of sugar bound to fiber, not 32 oz Big Gulp sodas containing almost a cup of concentrated sugar. The bottom line is that these concentrated sweeteners feed an addiction because, based on evolution, our taste buds never expected to find concentrated sweeteners -- only natural foods, with a far less concentrated character. And to make matters even worse, the more concentrated sweeteners we eat, the more we crave. A similar situation exists with umami, also known as "savory." In nature, this taste is never concentrated, and exists only in very small amounts in selected foods. Concentrating it as a food additive, confuses the system and allows us to consume glutamate in far higher levels than our bodies were ever designed to handle -- with highly disruptive health effects for sensitive people. And then there's bitter! Bitterness is the most sensitive of the five tastes. It has been suggested that the evolutionary purpose of "bitter" is to warn us against ingesting toxic substances, many of which have a bitter character. Unfortunately, this association between bitter and unhealthy is not entirely true, and our current culinary desire to avoid bitter tastes causes us to miss the health benefits associated with many bitters. Common bitter foods and beverages include coffee, unsweetened chocolate, bitter melon, beer, bitters, olives, citrus peel. But how many people eat them in their unadulterated form any more. Bitters are almost always masked by added sugar. In any case, whereas at one time people regularly consumed bitters as part of their diet, we pretty much completely avoid them now. When's the last time you saw a fast food or soda pop based on bitter? This has major health consequences for your liver. 
The body has a number of built-in feedback loops, a number of which we'll cover as we move through the digestive system, such as the triggers that both stimulate and shut off the production of stomach acid. But the simple fact is that the taste of bitter in the mouth is stimulating to the liver. There is a direct feedback loop from the tongue to the liver. Every time you taste something bitter, your liver gets a positive jolt that stimulates it to put out more essential bio-chemicals and expel accumulated toxic waste. If you never taste any bitter, your liver tends to become sluggish over time and retain toxic build-up. This is one of the key reasons that the Liver Tincture and Blood Support formulas I use during detoxing have such a pronounced bitter taste. In fact, all of the great liver herbs, milk thistle, dandelion root, and Picrorhiza Kurrooa are decidedly bitter. Although not usually considered, anatomically, as part of the digestive system, the nose really does qualify. After all, up to 75% of what we perceive as taste is due to smell. And the mere smell of certain foods can stimulate hunger and the production of digestive juices. Thus, simple nasal maintenance, such as daily nasal cleansing, is an important part of good intestinal health -- not to mention the fact that it washes out vast quantities of bacteria and viruses, thus preventing them from entering the digestive tract. Incidentally, the primary role of the uvula, the fleshy piece that hangs from the back of the throat, is to detect food that passes over it, and rise up during swallowing to close off the nose from the food so it can't back up into the nose. There are three pairs of salivary glands that secrete saliva, the first of the digestive juices to contact the food in the mouth. They are: - The parotid glands, which are located high up in each cheek, just below the ears. Incidentally, these are the glands that gets infected and swell up when you have the mumps. - The submandibular glands, which are located in the floor of the mouth just below the parotid glands. - And the sublingual glands, which are located on the floor of the mouth, upfront. Saliva performs several key functions. It moistens the mucous membrane, moistens food for easy swallowing, lubricates the esophagus for swallowing, washes the mouth, kills bacteria, dilutes poisonous substances, and contains enzymes that begin the digestion process. Your body produces from 1-1.5 liters of saliva per day (about a quart). More than 99% of that saliva is water, and almost all of it is reabsorbed in the digestive tract. The tiny bit of saliva that is not water contains about 0.05% enzymes: - Lysozyme kills bacteria in the mouth. Incidentally, your mouth is remarkably dirty and infested with bacteria -- some good, but most not so much. It is really true that the mouth of a dog that drinks from the toilet is cleaner than yours. And if you must be bitten, better to be bitten by a dog than a person. - Lingual lipase breaks triglycerides down into far healthier and more easily digested fatty acids and monoglycerides. - And then there's salivary amylase Salivary Amylase Begins The Breakdown Of Carbohydrates Your digestive system is remarkably adaptable; after all, it can handle pepperoni pizza, beer, and Ding Dongs. But there are consequences if you abuse it. There are two forms of abuse. First, there's eating a diet high in cooked and processed food that has destroyed all of the enzymes naturally present in the food. In this particular case, we're talking about amylase. 
All natural carbohydrates contain the amylase needed to digest them. In fact, the amylase found in wheat and other grains will actually work in the stomach at high acid pH levels of 3 to 4. If natural amylase is present, it will handle a great deal of the digestive process required to break down the carbohydrates you eat. Second, you need to chew your food thoroughly. If you chew your food well enough, it slows down the entire eating process, which spreads out the glycemic response. It also allows the amylase in the saliva to effectively start breaking down the carbohydrates, which takes a huge burden off your pancreas. And it allows time for your stomach to signal your brain that you're full (it normally takes twenty minutes for your brain to catch up with your stomach), so you end up eating less. So, how much do you need to chew your food? There's an old saying: "You should drink your solids and chew your liquids." What that means is that you should chew the dry food you eat until it turns to liquid in your mouth (about forty chews per mouthful), and that you should swish liquids back and forth in your mouth (chew them as it were) an equal number of times. This helps mix enzymes into the food or liquid and begins the digestive process. The more you chew, the more effective these enzymes are. And if you don't do these things, how much does the body have to compensate? Amylase levels in the saliva of people eating the typical western cooked/processed diet are as much as 40 times higher than that found in people eating a more natural diet! Note: During dehydration, the brain signals the mouth to stop the flow of saliva to impel us to drink more water and to conserve fluids. Once you start chewing your food and mixing it with saliva, it picks up a technical name; the wad of chewed food is called a bolus. During the voluntary stage of swallowing, the tongue moves the bolus of food upward and backward. Once the bolus reaches the back of the throat, all actions become involuntary -- they happen outside of your conscious control. During the first of these involuntary phases, the muscles move the food down and back into the esophagus. And finally, the food is actively moved through the esophagus to the stomach. By actively, I'm referring to the fact that movement through the esophagus is the result of series of active, coordinated movements by constrictor muscles lining the esophagus -- not the result of gravity. Specifically, longitudinal muscles pull the esophagus up and relax lower portions so that the circular bands of muscle lining the esophagus can constrict and move the bolus down into the stomach. In fact, although it is not advisable, you can easily swallow when hanging upside down. As we discussed in our series on breathing, aspiration (entry of food or water) into the lungs and nasopharynx is prevented in a series of involuntary actions. - The uvula and soft palate move upward to close off the nasopharynx. - The larynx is pulled forward and upward under the protection of the tongue. - The epiglottis moves back and down to close the opening of the trachea and airway. - Food slides over the epiglottis into the esophagus. - Vocal cords close to further block the airway. - Breathing ceases for about 2 seconds while this process takes place, then resumes. Esophagus ("Carries Food") Although there are a number of things that can go wrong with the esophagus, they are mostly medical and fall outside the scope of our discussion. 
For our purposes, the only function of the esophagus is to carry food from the mouth to the stomach. No digestion or absorption of nutrients takes place in the esophagus. Liquids pass through quickly -- in about a second. A food bolus, on the other hand, will take about five to nine seconds to make its way through the esophagus. In fact, there is little to interest us from an alternative health point of view until we reach the lower esophageal sphincter, which is located at the end of the esophagus just above the diaphragm. The sphincter is not actually an anatomical structure. It's just an area at the end of the esophagus that is capable of constricting to effectively separate the stomach from the esophagus. When functioning properly, it allows food to enter the stomach while at the same time preventing stomach acids and bile from refluxing back into the esophagus. From a medical point of view, there are a number of things that can go wrong with the lower esophageal sphincter, such as achalasia (inability to relax), which prevents food from entering the stomach. But for the purposes of our discussion, two conditions stand out: GERD and hiatal hernia. These conditions used to be handled surgically, but with rather poor results. Antacids provided temporary relief, but as we will learn when we discuss the stomach, actually aggravated the problems. Now, new drugs called proton pump inhibitors are the treatment of choice. They work by cutting the ability of the body to produce stomach acid and are more effective, from a medical point of view, than either surgery or antacids.

GERD (Gastro Esophageal Reflux Disease) is also known as acid reflux disease. It is a condition in which the sphincter fails to prevent acid from backing up into the esophagus. This causes inflammation, scarring, and can lead to esophageal cancer. We will talk more about GERD when we talk about acid production in the stomach, which is the primary contributing factor in this disease. We will also discuss why Prilosec, Prevacid, and Nexium may not be the best answers to this problem.

One other note on acid reflux at this time is that hiatal hernia is often a contributing factor. Hiatal hernia is a condition in which part of the stomach moves above the diaphragm, into the chest. These hernias are much more common than generally recognized and can produce a wide variety of symptoms that make diagnosis difficult. Hiatal hernias can manifest as severe chest pains that mimic a heart attack, pressure in the chest, or severe stomach pain. And most notably, as mentioned above, a hiatal hernia can significantly aggravate acid reflux as it pushes the esophageal sphincter out of position, thereby seriously compromising its ability to prevent stomach acid from moving into the esophagus.

There are very few medical options for treating a hiatal hernia. As I mentioned earlier, surgical intervention is only marginally effective. The common medical approach today is to reduce the amount of acid the stomach produces with proton pump inhibitor drugs. But the use of these drugs is even more questionable for a hiatal hernia than for standard GERD as it does nothing at all to alleviate the underlying condition -- the fact that part of your stomach is now up in your chest cavity. It merely helps control one symptom. Fortunately, there are alternatives.
- Self Massage
- Chiropractic Adjustment
- Then, once you've corrected the initial hiatal hernia, you might want to do some yoga exercises to strengthen your diaphragm so that your stomach won't slip back up through the opening again. For example:
- Uddyiana Bandha

That concludes our introduction to the digestive system -- getting food into the stomach. In our next issue, we will cover the stomach in detail. Areas of interest will include:
- The need for enzymatic digestion
- Why proton pump inhibitor drugs create at least as many problems as they resolve
- How stomach acid is produced in your body and how to use that feedback loop to your advantage
- Why antacids create more acid than they get rid of
- Peptic ulcers and how to eliminate them
- The proper way to eat to control appetite
- The types of food your stomach is anatomically designed to handle
- The feedback loop that drains your body of enzymes
- And much more

Posted By: Jon Barron Edited To Fit This Format

Anatomy Of The Small Intestine
We return to our exploration of the intestinal tract from a natural health perspective, but this time we shift gears a bit. So far, we've covered everything from the mouth through the duodenum (taking time to discuss the ancillary outpouchings along the way: the pancreas, liver, and gallbladder). And throughout, the emphasis has been on digestion. But now as we reach the small intestine, things change. Absorption becomes the dominant issue. Yes, a great deal of digestion still occurs in the small intestine, and we will cover that, but the overall emphasis is on absorption. In fact, if you ignore exceptions like the direct absorption of alcohol from an empty stomach, close to 100% of all nutrient absorption in the human body takes place in the small intestine. Obviously then, its proper functioning is crucial to our health. In this issue, we will explore the anatomy of the small intestine to give us a functional understanding of how it is constructed to do its job and also provide us with a shared vocabulary that we can subsequently use as we explore exactly how the small intestine completes digestion of food and selectively absorbs the nutrients your body needs.

Macro Anatomy Of The Small Intestine
The small intestine, also called the small bowel, serves two primary functions in the body.
- If the diet consists primarily of cooked and refined carbohydrates and fats, and if no supplemental enzymes are taken with your meals, these compounds will be mostly intact when they reach the small intestine. Digestive juices in the stomach work on proteins, not carbs and fats. That means that for most people, the small intestine is the final stage for the enzymatic digestion of carbohydrates and fats, keeping in mind that oftentimes they are never fully digested and pass unabsorbed into the bowel where they contribute to gas and bloating as bacteria begin to work on them.
- That said, the primary role of the small intestine is the absorption of nutrients broken down by digestion. These include the absorption of:
- Proteins (amino acids)
- Carbohydrates (monosaccharides)
- Fats (lipids)

Technically, the small intestine begins at the pylorus valve that separates the stomach from the duodenum and ends at the ileocecal valve that separates the ileum from the large intestine. The bulk of the small intestine is suspended from the body wall by an extension of the peritoneum called the mesentery.
The small intestine is approximately 20 to 23 feet long, depending on how and when it's measured, and it is divided into three sections: the duodenum, the jejunum, and the ileum. Although precise boundaries between these three segments of bowel are not readily observed, there are microscopic structural differences among them.

The name duodenum actually derives from its length and literally means twelve inches. It runs from the pylorus valve to the ligament of Treitz (a band of smooth muscle that extends to the diaphragm and works to hold the small intestine in place). Although technically part of the small intestine, the duodenum is almost 100% involved in digestion, not absorption. As such, we have discussed it in great detail already and will not focus on it in this newsletter.

The jejunum runs from the ligament of Treitz to the mid small bowel and encompasses roughly 40% of the length of the small intestine. It has numerous muscular folds called plicae circulares, and we will explore it in some detail in the next newsletter. The term "jejunum" derives from the Latin and means "empty of food." The name, however, actually came from the ancient Greeks who noticed that at death this part of the intestine was always "empty of food." Hence, the name jejunum.

The third division of the small intestine is the ileum, which runs from the mid small bowel to the ileocecal valve at the entrance to the large bowel (colon) and encompasses roughly 60% of the length of the small intestine. The word "ileum" comes from the ancient Greek and means "twisted," which actually has a dual meaning. First, when viewed during surgery (or after a Trojan sword has slit open your midsection), the ileum actually looks twisted. The second reference is that the ileum is most often the site of twists that can cause obstructions in the small intestine.

As mentioned above, when referencing the jejunum, the small intestine is not flat internally, but is thrown into circular folds. These folds are known as "plicae circulares" and are prominent inside the small intestine from the duodenum to the mid ileum. They serve a dual purpose:
- They increase surface area for enhanced absorption.
- They cause the chyme to move through the small intestine in a corkscrew motion, which aids in mixing the chyme. Effectively, the folds act as baffles.

Blood Supply Of The Small Intestine
Identifying the blood supply of the small intestine is more important for surgeons than for our discussion of the small intestine as it relates to natural health. Nevertheless, very quickly:
- The duodenum is supplied by the gastroduodenal artery and by branches of the superior mesenteric artery.
- The jejunum is supplied by jejunal branches of the superior mesenteric artery.
- The ileum is supplied by the ileal, right colic, ileocolic, and appendiceal branches of the superior mesenteric artery.

The Microstructure Of The Small Intestine
If examined closely, the surface of the small intestine has the appearance of soft velvet. This is because it's covered by millions of small projections called villi which extend about 1 mm into the lumen (the empty space inside the small intestine). But villi are only the most obvious feature of the intestinal wall. As we've already discussed, the mucosa (the innermost layer of the intestinal wall) contains a number of different cells including: a self-renewing population of epithelial cells, secretory cells, and endocrine cells. Let's look at the intestinal wall in a little more detail.
The small intestine has the same four layers as the rest of the GI tract, but they are modified for maximal absorptive power.

- Serosa -- the peritoneal covering of the external surface of the small intestine.
- Muscularis -- the muscle layer that governs peristalsis. In particular, it contains:
  - A thin layer of longitudinal muscles that stretches the intestine.
  - A thicker layer of circular muscles that closes off sections of the intestine as required to allow the intestine to work, move, and grind the chyme in that section over and over before it releases it into the next section of the small intestine, where the process repeats again. We will explore this action in more detail in the next newsletter. (Note: paralytic ileus is the absence of normal GI tract muscle contractions (peristalsis) and can be caused by anything that irritates the peritoneum sufficiently.)
  - The myenteric plexi of Auerbach, which coordinate peristalsis. Specifically, the plexi (intersecting groups of nerve cells) are located between the longitudinal and circular muscle layers of the small intestine. The nerve cells in each plexus primarily project to the circular muscle layer and play an important role in regulating gut motility.
- Submucosa -- connective tissue. The submucosa consists of dense connective tissue, although fat cells may be present. In fact, all three sections of the small intestine (the duodenum, the jejunum, and the ileum) are characterized by modifications of the submucosa. The submucosa in the small intestine contains:
  - Arterioles, venules, and lymphatic vessels (lacteals) that regulate the flow of blood and lymph fluids going to and from the mucosa of the small intestine. As a side note, the lymphatic vessels also play a key role in the absorption of fats from the small intestine, something we will talk more about a bit later.
- Mucosa -- villi. This is the grand prize, where most of the action in the small intestine takes place. Accordingly, we will now focus on this layer.

Villi are projections into the lumen covered predominantly with mature, absorptive enterocytes, along with a smattering of mucus-secreting goblet cells. These cells live only for a few days, then die and are shed into the lumen to become part of the chyme, where they are digested and absorbed. And yes, if you wish to think of it that way, we are all cannibals eating our own intestinal walls. The word villi literally means "tuft of hair," which is exactly what the villi look like. In fact, they are fingerlike projections of the mucosa, with approximately 40 villi per sq mm inside the wall of the small intestine. As discussed earlier, each single villus contains an arterial and venous capillary (arteriole and venule) and a lacteal (the lymphatic equivalent of a capillary). Note: the lymphatic system is a circulatory system that exchanges fluid between cells, drains into veins in the neck, and can absorb fat. In the small intestine, the lacteals transport fat from the digestive tract into the circulatory system.

Microstructure Of A Single Villus

Each villus contains multiple absorptive cells on its surface. And protruding from the surface of these absorptive cells on each villus are a vast multitude of microvilli. Microvilli are minute, hair-like projections that serve to increase the surface area of each villus. (Illustration: microvilli lined up along the edge of a villus.) How many microvilli are we talking about? Hold your breath. Each villus has approximately 200 million microvilli per sq mm.
This creates a velvety surface on the walls of the small intestine known as the brush border. And how much does the brush border of microvilli increase the surface area of the wall of the small intestine involved in nutrient absorption? Again, hold your breath. All in all, if the small intestine is viewed as a simple pipe, its surface area totals about half a square meter. But it is not a simple pipe. Factor in the mucosal folds, the villi, and the microvilli, and the absorptive surface area of the small intestine is in fact approximately 250 square meters -- the size of a tennis court! In other words, these structures multiply the absorptive surface of the small intestine roughly 500-fold (250 square meters divided by half a square meter).

Intestinal glands are located in the crypts of Lieberkuhn at the base of the villi. The cells/glands here secrete intestinal juices. Toward the base of the crypts are stem cells, which continually divide and provide the source of all the epithelial cells in the crypts and on the villi. The way they divide is actually quite interesting. One daughter cell from each stem cell division is retained as a stem cell, thus perpetuating the untainted original source. The other daughter cell differentiates along one of four pathways to become either an enterocyte, an enteroendocrine cell, a goblet cell, or a Paneth cell. Enterocytes migrate up the crypts and onto the villi, where they become the mature epithelial absorptive cells essential for extracting nutrients from the chyme. Virtually all nutrients, including all amino acids and sugars, enter the body across these absorptive cells that form the epithelium covering the villi.

Note: After crossing the epithelium of the villi, most nutritional molecules diffuse into the capillary network inside the villus, and then into the bloodstream. Some molecules, fats in particular, are transported not into capillaries, but rather into the lymphatic vessels (lacteals), which drain from the intestine and rapidly flow into blood via the thoracic duct.

Specifically, the cells/glands found in the crypts of Lieberkuhn, at the base of the villi, include:

- Paneth cells, which sit in the deepest part of the glands. They secrete lysozyme (a bactericidal enzyme), and they are phagocytes. Their purpose is to protect against invaders that have made their way into the intestinal tract along with the food we eat.
- Enteroendocrine cells, which are also found deep in the glands. The cells here secrete three hormones: secretin (S-cells), CCK (CCK-cells), and gastric inhibitory peptide (K-cells).
- Brunner's glands, which lie in the deepest part of the duodenal mucosa. They secrete alkaline mucus to neutralize acid.
- Goblet cells, which secrete lubricating mucus.
- Peyer's patches, which are sections of lymphatic tissue that detect foreign elements in the GI tract and signal the immune system. (Again, you can bring a lot of bad stuff in through your mouth that needs to be dealt with.)

The ileocecal valve is a small muscle located on the right side of the body (left side on most illustrations) between the small and large intestine, thus marking the end of the small intestine. It is essentially a one-way check valve that allows the final stage of chyme to pass into the large intestine for final water extraction and stool formation. (Note: once chyme enters the large intestine, it is called fecal matter.) If functioning properly, the valve will open and close as required. Unfortunately, it does not always function properly.
Sometimes it sticks in the open position, which allows fecal matter to back up into the small intestine, where it can then contaminate the nutrient extraction process. And sometimes the valve sticks in the closed position, which can lead to constipation. Both of these conditions are very toxic to the body and are easily triggered by bad diet (heavy alcohol consumption in particular), dehydration, and stress. It should be noted that problems with the ileocecal valve are, for the most part, not acknowledged by the medical community; they are almost never diagnosed, and no effective treatments are offered. Fortunately, there are highly effective natural health options:

- Chiropractic And Homeopathic Treatments
- Self Massage
- Dietary Changes

In our next issue of the newsletter, we will begin an exploration of exactly how the small intestine (based on its anatomy) does its job -- both mechanically and chemically. We will also discuss its physiology, what can go wrong, and how we can fix it without the need for surgery or debilitating pharmaceutical drugs.

Posted By: Jon Barron
Edited To Fit This Format

Physiology Of The Small Intestine, Part 1

And now we reach the heart of the intestinal tract. Everything so far has been preparation for this discussion. Digestion, or breaking food down into smaller bits, is certainly important -- crucial even -- but to what purpose? The purpose, quite simply, is to get the nutrition inherent in the food you ate ready so that it can be absorbed into your body, where it can be used by each and every single cell to survive and carry on its individual function. When it comes to the intestinal tract, the key is absorption. It's not what you eat or digest that matters; it's what you absorb. And when it comes to absorption, the small intestine is the portal for virtually all nutrients that enter the bloodstream.

Note: much of this discussion is easy to understand, but the core of it, the actual act of absorption, is quite technical and involves some chemistry. As always, I will only deal with as much chemistry as is absolutely necessary -- and will present it in such a way as to make it comprehensible.

Digestion -- Setting Up Absorption

Before we can get to absorption, we have to cover the final stages of digestion that take place in the small intestine. In fact, you get a combination of mechanical and chemical digestion and some absorption in the small intestine. Early in the intestine it is mostly digestion, very little absorption. However, the further you move down the digestive tract, the more the ratio swings in favor of absorption. Effectively, the entire small bowel (duodenum, jejunum, and ileum) is devoted to these two processes: digestion and absorption.

Digestion itself is divided into mechanical and chemical phases. Mechanical digestion, as we alluded to in our exploration of the anatomy of the small intestine, is the result of two very different, but complementary, actions:

- Segmentation contractions chop, mix, and roll the chyme (the mixture of food and digestive juices).
- Peristalsis slowly propels the chyme forward toward the large intestine.

Segmentation represents localized activity in the small intestine, whereas peristalsis represents the more global movement that takes place throughout the entire intestinal tract. In segmentation, circular muscles constrict and divide the small bowel into segments -- each about 3-4 inches long. A muscle then contracts between the two other muscles and subdivides the segment.
This is repeated many times per minute so that the chyme is moved back and forth in the same area of the segment. Localized contractions crush and mix food within that segment alone. This action mixes the chyme with intestinal juices and prolongs its contact with the absorptive surface of the small intestine. Relaxation allows the segments to coalesce, thus allowing chyme to move on down the intestinal tract -- pushed by peristalsis.

Peristaltic contractions represent a global movement that is designed to move chyme through the entire length of the small intestine and ultimately complement the mechanical process of segmentation that holds chyme in individual segments of the intestinal tract. Peristalsis is completely under the control of the autonomic nervous system and is coordinated by the myenteric plexi (plexuses). The myenteric plexus, also known as Auerbach's plexus, is a network of nerves between the circular and longitudinal layers of the muscles surrounding the intestinal tract. (Illustration: segmentation in a sheep's small intestine.)

It should be noted that peristaltic activity is weak (as opposed to segmentation), which means that food stays in the small bowel for a relatively long time (4-6 hours). And it should also be noted that peristalsis can be fairly easily slowed or even stopped by outside factors. Culprits include appendicitis, surgery, medication, and even very large meals. On the other hand, there are certain things that can increase peristalsis, such as laxatives and certain kinds of illness or toxicity. As anyone who has experienced food poisoning or stomach flu would know, peristalsis is quite capable of shooting food through the intestinal tract when required. In simple terms, the body responds to toxins in the intestinal tract by adhering to the old bromide, "The solution to pollution is dilution." In effect, the body pours fluid into the intestines and increases peristalsis to dilute and eject toxins in cases such as bacterial contamination. In extreme situations, such as presented by cholera, victims may actually die of dehydration from massive diarrhea. Note: in cases of massive diarrhea, you cannot drink enough water to compensate for the loss of fluids. Without the use of massive IVs, you will die of dehydration.

It should also be noted that in the period between meals, when the small intestine is for the most part empty, peristaltic contractions continue throughout the entire small intestine. Think of it as housekeeping activity, designed to sweep the small bowel clear of debris. This movement is the cause of the "growling" that can be heard when people have not eaten for a while.

By the time chyme reaches the small bowel, it is a mix of partially digested carbohydrates, lipids, and proteins -- not yet ready for absorption. Digestion must be completed in the small intestine, because the colon will not absorb nutrients to any significant degree. As I mentioned earlier, the ratio of digestion to absorption changes dramatically as the chyme moves through the small intestine and is exposed to ever more chemical digestion. Specifically, digestion for each type of nutrient proceeds as follows.

Proteins are denatured (unwound) by acid and broken down by pepsin in the stomach. For the most part, they arrive in the small intestine as polypeptides (short chains of amino acids).
The extent of breakdown into polypeptides is dependent on several factors, such as:

- The amount of proteases that arrive undamaged with the food to significantly break down proteins before being neutralized by the release of stomach acid (about 45 minutes after food enters the stomach) -- or the use of supplemental digestive enzymes to make up the difference.
- The ability of the stomach to produce sufficient stomach acid to denature the protein. If the protein is not unwound from its tight ball-like structure into a long chain, pepsin won't be able to work on it.
- Sufficient pepsin production to chop up the protein into its smaller component chunks.
- Any use of antacids or proton pump inhibitor drugs, which, of course, totally compromises the ability of the body to break down proteins in the stomach, since these drugs suppress the stomach acid required to unwind the protein.

Any breakdown not accomplished in the stomach must now be compensated for in the small intestine -- in addition to the small intestine's role in breaking down those short peptide chains into even smaller molecules capable of being absorbed into the bloodstream. In either case, after proteins leave the stomach, breakdown continues in the small bowel by activated pancreatic enzymes, including trypsin, chymotrypsin, and elastase (which breaks down elastin fibers). All three are necessary because they each act at different places in the amino acid sequences. In addition, brush border cells of the small bowel secrete more peptidases -- enzymes such as aminopeptidase and dipeptidase -- that complete the splitting of the peptides into ever smaller components. Ultimately, this creates molecules small enough to transport across the brush border cells and into the bloodstream.

Some lingual and gastric lipases (fat digesting enzymes) have already been at work, but the major job of fat digestion takes place in the small bowel. Again, if fats are consumed uncooked or unprocessed, or if supplemental digestive enzymes are consumed with the meal, the equation changes. But in the absence of that, at this point in the process, fats are composed mainly of triglycerides (three fatty acids bound to glycerol). It is the action of pancreatic lipase in the small bowel that breaks them down into smaller, potentially absorbable components. Specifically, pancreatic lipase splits off two of the fatty acids, leaving a monoglyceride -- a single fatty acid still attached to the glycerol. To a significant degree, the ability of pancreatic lipase to break down lipids is regulated by how soluble those fats have become. It should be noted that lipids in their natural state are not water-soluble (that is, they do not dissolve in water). This is where bile, regulated by the gallbladder, comes into play. Bile salts (from the liver and gallbladder) emulsify the fat -- break it into small droplets -- for easier entry into water solutions, or more technically, into water suspensions. If you have gallstones, or have had your gallbladder removed, you will tend to have incomplete breakdown of lipids in your small intestine, resulting in fatty stools and a tendency to intestinal discomfort. In addition, and even more important, malabsorption of lipids prevents the body from receiving any of the nutrients dissolved in the fat. We are talking about vitamins A, E, and D, tocotrienols, and omega-3 fatty acids, to name some of the more familiar ones.
Unless you chewed your food properly (to pick up amylase from your saliva) or took supplemental enzymes with your meal, carbohydrates, for the most part, enter the small intestine intact. Once there, however, they are cleaved into sugars by pancreatic amylase. Further down the small bowel, maltase, sucrase, lactase, isomaltase, and alpha-dextrinase, secreted by the brush border cells, act on the remaining carbohydrates, cleaving off the component simple sugars one sugar at a time. For example:

- Maltase acts on maltose -- cleaving it into its component parts, two molecules of glucose.
- Sucrase acts on sucrose -- cleaving it into glucose and fructose.
- Lactase acts on lactose -- cleaving it into its component parts, glucose and galactose.

Note: if lactase levels are insufficient, lactose intolerance develops. Bacteria ferment the undigested lactose, and excess gas is produced.

Note: blood levels of pancreatic lipase and amylase are used to detect abnormal function of, or damage to, pancreatic cells.

Again, everything we've talked about so far is about preparing the chyme for absorption into the bloodstream. Ninety to ninety-five percent of nutrition is absorbed in the small bowel. By the time chyme has reached the small intestine, it has been mechanically broken down and reduced to a liquid by chewing and by mechanical grinding in the stomach. In addition, partial chemical digestion may already have taken place as the result of enzymes in the food itself and enzymes found in saliva. As discussed previously, the effect of those enzymes can be extensive (up to 70% of total digestion) or virtually non-existent, depending on how cooked and processed the food is and how much it is chewed. The use of supplemental digestive enzymes, of course, can change that equation dramatically. And finally, the action of stomach acid and pepsin serves to denature proteins and begin the process of breaking them down, making them readily amenable to final breakdown in the small intestine. Thus, once inside the small intestine, the "partially" digested chyme is exposed to pancreatic enzymes and bile, which ultimately break down the chyme into "component" forms of protein, carbohydrates, and fats capable of being absorbed.

By the end of its passage through the small intestine, virtually everything of value to the body has been extracted from the chyme. We're talking about:

- Electrolytes (sodium, chloride, potassium)
- Proteins, carbohydrates, and fats (which have been broken down respectively into amino acids, glucose, and fatty acids)
- Vitamins, minerals, antioxidants, and phytochemicals

Let's now look at this process in detail.

The Absorption Of Water In The Intestinal Tract

Virtually all of the water that enters your intestinal tract, in whatever form, is absorbed into the body across the walls of the small intestine -- primarily through the action of osmosis. Incidentally, osmosis is defined as the movement of water across a semi-permeable membrane from an area of high water potential (closer to distilled water) to an area of low water potential (water that contains a lot of dissolved, osmotically active molecules such as electrolytes and some nutrients). Since its molecules are so large, the chyme that enters the intestinal tract from the stomach has only a minimal impact on osmotic pressure. However, as it is progressively broken down, its ability to increase osmotic pressure rises dramatically.
For example, undigested starch has little effect on osmotic pressure, but as it is digested, each starch molecule breaks down into thousands of molecules of maltose, each of which is as osmotically active as the single original starch molecule. The net effect is to increase the osmotic pressure by a factor of several thousand over what the original starch exerted. Thus, as digestion proceeds, the osmotic pressure increases dramatically, thereby pulling water into the small intestine. In addition, crypt cells at the base of each villus (in the duodenum and jejunum) secrete electrolytes (chloride, sodium, and potassium) into the small intestine, which further increases the osmotic pressure and pulls water into the lumen (the empty space in the small intestine). On the other hand, as the osmotically active molecules (maltose, glucose, amino acids, and electrolytes) are absorbed out of the lumen and into the bloodstream, osmotic pressure in the lumen decreases relative to the electrolyte-rich water of the bloodstream, and water is thus reabsorbed back into the body. The bottom line is that if the secretion and absorption of water don't balance, we become either bloated or dehydrated. With that in mind, we can take a look at a water balance sheet.

Water entering the small intestine each day (average 154 lb man):

- Swallowed liquids: 2.3 liters (most contained in the food we eat)
- Gastric juice: 2.0 liters
- Pancreatic juice: 2.0 liters
- Intestinal juice: 1.0 liter (primarily from brush border cells)
- Total: 9.3 liters

Water leaving the digestive tract each day (average 154 lb man):

- Small intestine reabsorption: 8.3 liters
- Colon reabsorption: 1.0 liter
- Excreted in feces: 0.1 liter
- Total: 9.3 liters

Thus we can see that the water that enters the digestive tract and that is used in the digestive process is matched to a remarkable degree by the water that is recycled and excreted. In a healthy body, they are perfectly balanced, give or take a tenth of a liter.

Keep in mind that the water lost through other means needs to be accounted for in balancing intake and outflow for the entire body. Sweat, for example, can account for anywhere from 100 to 8,000 ml (about 8.5 quarts) lost per day. You lose another quart as water vapor that passes out of your body as you breathe each day -- as anyone knows who has watched their breath on a cold day. The amount lost in your urine will pretty much make up the difference between what you take in above and beyond the bare 2.3 liters you consume in your drink and food, and the tenth of a liter lost in your feces plus what you lose in perspiration and breath. The bottom line is that your body will seek to balance the intake and outflow of the water it deals with every day -- to prevent bloating or dehydration. If at any point it fails to do so, you will end up visiting your doctor.

Keep in mind that even small imbalances between fluid intake and output can cause major problems. Diarrhea is a common symptom of disease and can kill patients through dehydration. On the other hand, rapid over-consumption of water or other liquids, though rare, can cause a rapid drop in sodium and electrolyte levels in the bloodstream and can cause death. Or if your body loses the ability to effectively pass water through your kidneys, you suffer from edema (swelling in your legs), which puts an added burden on your heart. So, how much water should you drink in addition to what you get in your food? Despite some medical claims to the contrary, I'm still a big fan of 64 ounces a day -- give or take, as circumstances dictate (body weight, temperature, how much you perspire, etc.).
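To make the balance sheet concrete, here is a minimal arithmetic sketch in Python. It simply tallies the figures quoted above. Note one assumption: the secretion rows listed in this excerpt sum to only 7.3 liters, so the script adds roughly 1 liter each of saliva and bile -- values taken from standard physiology references rather than from this newsletter -- to reach the stated 9.3-liter total.

```python
# Rough daily water balance for the small intestine, using the figures
# quoted above (liters per day, "average 154 lb man").
# ASSUMPTION: saliva and bile are not listed in this excerpt; the 1.0 L
# values below come from standard physiology references and are included
# only so the inputs reach the stated 9.3 L total.

water_in = {
    "swallowed liquids": 2.3,
    "saliva (assumed)": 1.0,
    "gastric juice": 2.0,
    "bile (assumed)": 1.0,
    "pancreatic juice": 2.0,
    "intestinal juice": 1.0,
}

water_out = {
    "small intestine reabsorption": 8.3,
    "colon reabsorption": 1.0,
    "excreted in feces": 0.1,
}

total_in = sum(water_in.values())
total_out = sum(water_out.values())

print(f"Total entering the small intestine: {total_in:.1f} L")   # 9.3 L
print(f"Total reabsorbed or excreted:       {total_out:.1f} L")  # 9.4 L

# The two sides agree to within about a tenth of a liter, which is the
# "give or take" margin the newsletter mentions for a healthy body.
print(f"Difference: {abs(total_in - total_out):.1f} L")
```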
Posted By: Jon Barron
Edited To Fit This Format

Physiology Of The Small Intestine, Part 2

In our last issue, we explored the physiology of digestion in the small intestine and started our discussion of nutrient absorption. In this issue, we conclude that discussion. Effectively, this is the heart and soul of our entire series on the digestive tract. Ultimately, everything that happens in the digestive tract is designed to get nutrients into the bloodstream. The final step in the process, absorption, is in many ways the most fascinating part of the discussion. Stomach acid unwinding proteins and pepsin breaking them down -- that is simple stuff. How the body actually recognizes amino acids and peptides and then transports them across the wall of the small intestine -- that's remarkably complex and fascinating, and important to understand in terms of optimizing your nutritional uptake and, ultimately, your health.

Note: this is a fairly technical discussion. However, my goal is to make sure you understand enough of it so that:

- You are never overwhelmed by the technical for very long.
- You walk away with an overall understanding of how nutrients are absorbed in the small intestine.

As we discussed in our newsletter on the anatomy of the small intestine, virtually all nutrients, including all amino acids and sugars, enter the body by crossing the enterocytes (the absorptive cells found in the small intestine) that make up the epithelium covering each and every villus (the hair-like extensions that project from the wall of the small intestine). There are two routes by which molecules make their way from the small intestine into the bloodstream:

- The transcellular route -- across the plasma membrane of the enterocytes.
- The paracellular route -- across the tight junctions between the enterocytes.

For the most part, the tight junctions of the paracellular route are impermeable to large organic molecules such as dietary amino acids and glucose. Those types of molecules are transported exclusively by the transcellular route. Transcellular absorption of nutrients can take place by active transport or by diffusion. Active transport involves the expenditure of body energy, whereas diffusion occurs simply through random molecular movement and, therefore, without the use of body energy. Water, for example, is transported through the intestinal mucosa by diffusion (isosmotic absorption); on the other hand, the absorption of amino acids and sugars involves active transport. This is one of the main reasons that eating a large meal can put you to sleep. You literally exhaust your body digesting and absorbing nutrients -- until down the road, those same nutrients ultimately make their way into your body's individual cells, thus once again energizing you. Depending on the food you eat, you gain on the exchange -- deriving more energy as the cells absorb the nutrients than was lost in digesting those nutrients and getting them into the bloodstream.

In any case, after passing through the epithelium into the villi, most of these molecules then cross over into the capillary network found inside each villus, thus making their way into the bloodstream. Fats, as we discussed when exploring the anatomy of the small intestine, behave differently. Instead of diffusing into the capillaries, they make their way into the lacteals, the lymphatic vessels present in each villus.
From there, they drain from the intestine and rapidly flow through the lymphatic system and ultimately into the bloodstream by way of the thoracic duct.

The process of crossing the epithelium into the villus, however, is not simple. In fact, the process varies for each nutrient. Or to put it another way, the epithelial tissue covering the villi is not uniform throughout the small intestine -- or for that matter, from top to bottom on a single villus. Individual epithelial cells vary in both their makeup and functionality. In fact, each villus has a multitude of different receptor sites, specific for each nutrient. Each type of protein fragment and each type of carbohydrate fraction has its own particular receptor site it uses for absorption. In addition, as mentioned earlier, some nutrients diffuse through the spaces between the epithelial cells (the paracellular route) -- spaces that vary throughout the intestinal tract, which has a significant impact on permeability. This becomes particularly important when we talk about the absorption of supplemental proteolytic enzymes (which are protein molecules). Unlike food proteins, proteolytic enzymes can actually use the larger spaces between cells to transport themselves out of the small intestine.

The bottom line is that as chyme (the mixture of broken down food and digestive juices) travels through the small intestine, it is exposed to a wide variety of absorption sites, each with very different characteristics. These absorption/receptor sites differ in the number and type of transporter molecules found in the plasma membranes of each individual cell. And once again, keep in mind, each individual villus is comprised of multiple enterocytes, each with a multitude of receptor sites. In other words, there are a vast number of receptor sites in the small intestine.

The Chemistry Of Absorption

The key to the absorption of most nutrients in the small intestine is the electrochemical pump, powered by electrolytes (primarily sodium), which works across the epithelial cell boundary of the villi. In fact, this is not unique to cells in the small intestine. Every single cell in the body is required to maintain a low concentration of sodium inside the cell (with a correspondingly high concentration of sodium outside the cell), which is required for the movement of nutrients into the cell and waste out of the cell. Correspondingly, potassium levels tend to be high inside cells and low in the areas just outside them. In addition, the sodium pump requires the presence of a large number of Na+/K+ ATPases (ATP-powered enzymes) to regulate and power the reaction. This means that the cells of the body require the expenditure of energy (in the form of ATP) to power the sodium pump. The purpose of the sodium pump is to maintain that sodium gradient, so that nutrients are pulled into the cell as sodium flows back in through the nutrient transporters, and waste can be moved out of the cell.

With that said, it's now time to bite the bullet and get specific as to how nutrients move in and out of cells. Every cell in the small intestine has three types of gateways that combine to move nutrients in and waste out.

- The actual sodium pump (the Na+/K+ ATPase), which moves three sodium ions out of the cell and two potassium ions in with each action of the pump, keeping sodium low inside the cell. (Don't panic; we'll explain this in more detail in a moment.)
- The leak channels for both potassium and sodium. If ions only ever moved in one direction, then in short order the cell would become electrically and chemically unbalanced.
To help maintain the electrical and chemical balance of the cell, there are sodium and potassium ion "leak" channels in the membrane of each cell. These channels are normally closed, but even when closed, they "leak," allowing excess potassium ions to leak back out of the cell and small amounts of sodium to leak back in, as needed, down their respective concentration gradients. In other words, the leak channels work in conjunction with the sodium pump and are used to maintain the electrical differential that drives nutrient transport. This is known as the cell membrane potential.

- The receptor sites make use of this electrical potential to carry nutrients (specific to each receptor site) into each cell. Let me repeat that one more time: each receptor site is specific to a particular nutrient. One receptor site transports glucose. Another site transports a specific type of amino acid. And so on. (A little later, we will discuss exactly how this works.)

By the way, there are approximately 150,000 sodium pumps per small intestinal enterocyte (cell). Each single cell is thus able to cycle about 4.5 billion sodium ions per minute -- sodium that flows in alongside nutrients and is pumped back out again. (For a rough check of what those numbers imply, see the sketch after this passage.) So with that in mind, let's explore these specialized means of absorption in some detail.

Most dietary carbohydrates (even most simple sugars such as sucrose and lactose) cannot be absorbed intact in the intestinal tract. The monosaccharides (glucose and galactose), on the other hand, are actively transported with sodium. Monosaccharides, however, are only rarely found in normal diets. Rather, as described in Part 1 of our discussion of the Physiology of the Small Intestine, they are derived by enzymatic digestion of more complex carbohydrates in the small intestine. In summary, glucose and galactose are taken into receptor sites found on the villi by co-transport with sodium, using the same transporter.

Now, for the briefest of moments, let's get technical. (Hang in there; it's actually understandable.) The specific transporter molecule that carries glucose and galactose into the absorbing cell on the intestinal wall is SGLT-1, also known as the sodium-dependent hexose transporter. This molecule will only transport the combination of a glucose and a sodium ion into the cell together; it will not transport either molecule alone. It works as follows:

- The transporter molecule is initially oriented facing into the small intestine. At this point, it can only bind sodium -- not glucose.
- The act of binding sodium inside the transporter molecule triggers the opening of the glucose-binding pocket.
- This causes glucose found in the small intestine to also bind inside the transporter molecule. The binding of the glucose molecule triggers the transporter molecule to reorient so that the pockets holding sodium and glucose face the inside of the cell.
- The sodium now moves off into the cell's cytoplasm, which triggers the glucose to also unbind and move off into the cytoplasm.
- The emptying of the transporter molecule triggers it to reorient back to its original, outward-facing position. And the cycle starts again.
- The transport of galactose works in exactly the same way.

Once inside the enterocyte, glucose, galactose, and fructose are transported out of the cell through another hexose transporter called GLUT-2 and on into the capillaries that are found within each villus.
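As a quick sanity check on the pump figures quoted above, here is a minimal back-of-the-envelope sketch in Python. The inputs (150,000 pumps per enterocyte, roughly 4.5 billion sodium ions cycled per minute, and 3 sodium ions per pump cycle) come from the text; the implied per-pump turnover rate is a derived quantity, not a figure from the newsletter.

```python
# Back-of-the-envelope check of the sodium pump figures quoted above.
pumps_per_cell = 150_000          # Na+/K+ ATPases per enterocyte
sodium_ions_per_minute = 4.5e9    # sodium ions cycled per cell per minute
sodium_per_pump_cycle = 3         # each pump cycle moves 3 Na+ (and 2 K+)

# Derived: how fast would each individual pump have to cycle?
cycles_per_minute = sodium_ions_per_minute / (pumps_per_cell * sodium_per_pump_cycle)
cycles_per_second = cycles_per_minute / 60

print(f"Implied turnover per pump: {cycles_per_second:.0f} cycles per second")
# Roughly 167 cycles per second -- on the order of the turnover rates
# commonly cited for the Na+/K+ ATPase, so the quoted figures hang together.
```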
As we've already discussed, this transport of sugars is called active transport because it requires the use of ATP -- some energy is expended both in pulling the sugar molecules into the enterocyte and then in moving them out of the cell and into the bloodstream. However, some time later, after the sugars are used to power the body's cells, the end result is a net gain of energy.

Fructose, of course, is the other simple sugar readily absorbed in the small intestine. The transport of fructose, though, involves an entirely different process. It is absorbed through something called facilitated diffusion (facilitated by GLUT-5) and requires no added energy (ATP) to cross into the bloodstream. The ability of fructose to be absorbed so easily into the system is indicative of its high reactivity in the body -- and therefore also indicative of some of the problems it can present when consumed in a "pure" form such as high fructose corn syrup. When bound with fruit fiber, it behaves differently. It breaks down more slowly and is absorbed more slowly -- thus presenting fewer problems.

As we mentioned earlier, the receptor sites for sugars are specific for sugars. This allows for an interesting option. Certain forms of fiber (which are also carbohydrates) can actually fill these receptor sites, making them unavailable for use by the sugars for about an hour. Now, although these fibers can fill the sites, they are not transported into the cell. Instead, they occupy the site for up to an hour (again making those sites unavailable to any sugars for that period of time) until they are eventually rejected by the gateway and move out of the receptor site, then on down the digestive tract and out through the large bowel. Why is this important? Because the use of a sugar metabolic enhancement formula based on these fibers can modulate sugar uptake -- slowing down and evening out the absorption of sugar -- thus helping to avoid insulin spikes. The health benefits can be profound.

After digestion, the proteins consumed in our food have been broken down into single amino acids, dipeptides, and tripeptides. These protein "pieces" are actively transported across the duodenum and jejunum. In fact, the mechanism by which amino acids are absorbed is virtually identical to that of monosaccharides, but it takes place at different receptor sites. Amino acids are co-transported with sodium through nutrient gateways built into the cell walls of enterocytes. Dipeptides and tripeptides, on the other hand, are transported in a similar manner, but with hydrogen ions, rather than sodium, as the co-transported ion. Again, since we're talking about active transport involving the use of ATP, varying amounts of energy are required in the absorption of proteins. It should be noted that, as with carbohydrates, the transporter receptor sites are specific -- in this case, to different types of amino acids. In fact, there are several sodium-dependent amino acid transporters -- including one each for acidic, basic, and neutral amino acids. Once again, these transporters bind their specific amino acids only after binding sodium. The fully loaded transporter then dumps sodium and the amino acid into the cell's cytoplasm, followed by its reorientation back to its outward-facing position.

After digestion, the fats in our meal have been broken down into fatty acids, monoglycerides, and glycerol.
They are absorbed primarily by simple diffusion of small particles across the brush border (the name for the microvilli-covered surface of the epithelial cells that line the small intestine) and by a small amount of active transport. The key here is the size of the fatty particles; they must be small in order to be absorbed. That's where bile salts come in. The presence of a controlled flow of bile salts, which break up the fats into tiny particles, is essential for proper absorption of fats. If your gallbladder is not functioning properly or has been removed, you will have a problem absorbing fats. If you have a problem digesting fats for any reason, an option is to use ox bile tablets, available at most health food stores. Supplemental digestive enzymes with lipase will also assist.

Another lipid of importance that is absorbed in the small intestine is cholesterol. As it turns out, cholesterol is readily absorbed in the small intestine. Specifically, a transport protein (NPC1L1) has been identified that transports cholesterol from the lumen (the interior space) of the small intestine into the enterocytes.

Note: unlike proteins and sugars, fats do not go directly into the bloodstream. They transport into the lacteals (tiny lymphatic ducts) found in the villi, and then travel through the lymphatic system and ultimately into the bloodstream. And in fact, fats do not enter the bloodstream in the form in which they were absorbed into the enterocyte. Once inside the enterocyte, fatty acids and monoglycerides are synthesized back into triglycerides. These triglycerides are then packaged with cholesterol, phospholipids, and specialized proteins (apolipoproteins) into particles called chylomicrons. It is the chylomicrons that are actually transported into the lacteals and on into the bloodstream. Many doctors believe that a high triglyceride count in your bloodstream is actually more indicative of potential heart problems than a high cholesterol number.

Okay, we need to revert to a little anatomy for a moment and talk about the omentum. It's not really an organ, and it doesn't really relate to digestion or absorption, so it hasn't made any sense to talk about it so far in our series on the intestinal tract. It does, however, relate to fat storage, and in that regard it makes sense to talk about it in terms of what happens to a large chunk of the fat we absorb. The omentum actually has two parts -- the greater and the lesser. To keep things simple, we'll focus on the greater omentum, which hangs from the bottom of the stomach and extends down the abdominal cavity, then back up to the posterior abdominal wall after connecting with the transverse colon. The greater omentum is mostly made up of fat. It stores fat and provides a rich blood supply to the stomach. Specifically, it plays the following roles:

- Fat deposition: it contains varying amounts of adipose tissue and is one of your body's primary storage sites for fat.
- Immune contribution: it contains milky spots of macrophage collections.
- Infection and wound isolation: it may physically limit the spread of intraperitoneal infections. The greater omentum can often be found wrapped around areas of infection and trauma.

For the most part, these are "medical" considerations, but one aspect of the omentum will ring a bell for many readers. Sometimes when people lose a lot of weight, they wonder why their stomachs are still large and fatty. It's often because of the fat stored in the omentum. The fat in the omentum is often the last fat to go when losing weight.
If you want to lose the gut, you have to lose the fat from the omentum too. Note: the lesser omentum is an attachment of the peritoneum that lies between the liver and the upper edge of the stomach. It carries the vessels that run to the stomach and liver.

Vitamins And Minerals

The thing to understand about vitamins and minerals is that, for the most part, your body doesn't like isolates, can't absorb them, and considers them toxic if by chance they are absorbed. In general, your body prefers its vitamins and minerals bound to food -- in their natural form, primarily bound to carbohydrates and some proteins. In fact, as might be guessed from all that we've learned about absorption in the small intestine, it's actually the small lipids, sugars, and amino acids attached to the vitamins and minerals that the individual cells of your body recognize and absorb, not so much the vitamins and minerals themselves. Effectively, they just tag along for the ride into the cells. All that said, there are still important differences in how the different vitamins and minerals are absorbed.

Fat Soluble Vitamins

Assuming that your liver and gallbladder are working properly and that bile salts are breaking fats down into smaller, more absorbable particles, there is little problem absorbing the fat soluble vitamins -- even when in an isolated form -- such as d-alpha-tocopherol vitamin E. The bottom line is that the fat soluble vitamins (including vitamins A, D, E, and K, plus beta-carotene) are diffused right along with their lipid carriers across the brush border of the cells found in the ileum. Likewise, they then travel with their associated fats on into the lymph system and then into the bloodstream.

The problem with using vitamin isolates when supplementing the fat soluble vitamins is not one of absorption or even one of toxicity (where the body thinks the isolate is a toxin). Rather, the problem is one of completeness. For example, consuming vitamin E as d-alpha-tocopherol leaves behind the seven other components of vitamin E (gamma, beta, and delta tocopherol -- plus the four tocotrienols: alpha, beta, gamma, and delta). Likewise, supplementing with beta carotene or vitamin A leaves behind the several hundred other carotenoids that usually accompany them in nature -- such as alpha carotene. Is that important? Studies have shown that alpha carotene is one of the most powerful carotenoids and has a strong inhibitory effect on the proliferation of various types of cancer cells, such as those affecting the lungs, stomach, cervix, breast, bladder, and mouth. It works by allowing normal cells to send growth-regulating signals to premalignant cells. Carrots, for that matter, contain approximately 400 different carotenoids in addition to beta carotene, and many of those carotenoids are far more powerful than beta carotene itself. If all you're getting is beta carotene, you're missing out. And if all you're getting is synthetic beta carotene, you may actually be hurting yourself.

Water Soluble Vitamins

The water soluble vitamins, such as vitamin C and most of the B vitamins, are mainly absorbed in the jejunum. They are taken into receptor sites found on the villi by co-transport with sodium, using the same transporter system used to carry monosaccharides into the bloodstream. These vitamins do present a problem when allowed to enter the bloodstream as isolates, no longer bound to their appropriate carbohydrates.
First, not being bound to the carbohydrates severely limits the amount of absorption that can take place (much of the supplement is wasted and passed on out through the rectum). Second, if absorbed in an isolated form, they are treated as toxic by the body and are carried to the liver as "poisons." The liver then neutralizes their toxicity through a process called conjugation, which combines them with proteins. Although conjugation of water soluble vitamins stresses the liver (forcing it to do extra work), it does neutralize the toxic effect of the isolated water soluble vitamins and makes them usable by the cells of your body.

Minerals are absorbed in a small area at the top of the duodenum, next to the pyloric valve where chyme passes out of the stomach. This is the primary absorption site for the bivalent minerals, including iron, calcium, magnesium, and zinc. The problem with minerals is that they are not easily absorbed in their raw, isolated state (think oyster shells and iron filings) because of their electrical charge, which is opposite that of the intestinal wall. At first glance, this might seem like a good thing, since opposite charges attract. Unfortunately, they attract to the extent that the minerals "stick" to the intestinal wall and do not get absorbed into the bloodstream. Eventually, the chyme moving through the intestinal tract pushes these "sticky" minerals down through the small intestine and on out through the rectum. Absorption of isolated minerals is about 3-5%. In a non-isolated state, when bound to food, the charge is hidden, and absorption will be some ten times greater.

Manufacturers selling vitamin isolates use a compromise. They chelate their minerals by wrapping amino acids around them. The amino acids "cover" the electrical charge and allow the minerals to be absorbed in the duodenum. Unfortunately, although the charge is obscured, isolates are not user friendly when it actually comes to utilization by the individual cells. In this case, absorption and utilization by individual cells are not the same thing, and the rate of cell utilization is significantly less with chelated minerals. Food-bound minerals, on the other hand, are easily absorbed through the small intestine AND they are readily utilized by every cell in the body. An exception to this rule is what some marketers call "ionic minerals." This is just a fancy way of saying that the mineral particles in the supplement (usually in a liquid form) are so small that the electric charge they generate is not strong enough to prevent their absorption. The bottom line is that good ionic mineral supplements (or their equivalent) are readily absorbed.

One other factor to consider is that the bivalent minerals are competitively absorbed, because the area of absorption in the duodenum is relatively small. This means that an excessively high intake of one bivalent mineral in particular may occupy the entire absorption area and make the absorption of other bivalent minerals difficult. It also means that you need to supplement your minerals in an evenly balanced form rather than mega-dosing on one mineral. To look at it another way, taking regular high doses of iron will impede the absorption of calcium, magnesium, and zinc, leading to a series of other nutrition problems.

Many so-called experts say that you cannot absorb proteolytic enzymes. First, they claim that as proteins, they are broken down by stomach acid and pepsin in the stomach unless they are enterically coated.
Then other experts say that even if they did survive, their molecules are too big to pass through the walls of the small intestine. Whenever I hear these arguments, I'm always reminded of the apocryphal story of the engineer who proved that bumblebees can't fly. Applying the principles of aerodynamics, he PROVED, based on their size, weight, the size of their wings, and the physiological limits of how fast they could flap them, that bumblebees could not fly. Of course, how valid is a proof when the evidence before your eyes demonstrates it's nonsense? The absorption of proteolytic enzymes is a lot like the story of the bumblebees. In the end, it doesn't matter how many ways you try and prove that they can't be absorbed; you can both measure them in the bloodstream and, more importantly, quantify the results of their presence in your own body.

In any case, let's deal with the digestive juice issue first. There are two rebuttals:

- Not all enzymes are destroyed by stomach acid and pepsin. Many are merely inactivated until they reach a friendlier pH environment, such as that found in the small intestine. Want an example of an enzyme that not only survives stomach acid and digestive juices but in fact thrives in a high acid environment? How about pepsin itself! Pepsin is an enzyme. Not only is it not destroyed by stomach acid; it's actually activated by it. So much for the statement that all enzymes are destroyed in the stomach. (Really! Who are these people?)
- And even if all proteolytic enzymes were destroyed by digestive juices, instructions for using most such formulas tell you to take them between meals, when no digestive juices are present. Thus the issue is moot -- and so is the need for enteric coating, at least in a well designed formula used properly.

When I designed my own proteolytic formula, pHi-Zymes, I specifically selected enzymes that survive the stomach environment. It's actually not that hard to do. The key is to use non-animal derived enzymes. Non-animal derived enzymes, such as microbial enzymes manufactured by a fermentation process of Aspergillus, for example, possess unusually high stability and activity throughout a wide range of pH conditions (from a pH of 2 to 10). This enables them to be more consistently active and functional for a longer distance as they are transported through the digestive tract. Bottom line: they are not destroyed by stomach acid or pepsin.

Now let's address the issue of absorption. The standard medical assumption is that no dietary protein is absorbed in an undigested form -- pretty much without exception. Rather, since their molecules are too large, dietary proteins first must be digested into amino acids or di- and tripeptides before they can be absorbed. At first blush, that seems to exclude undigested enzymes (which are indeed proteins) from absorption. The clincher, though, is that enzymes, although they are proteins, are not dietary proteins. They are very different in function and structure; they are biochemical catalysts. In fact, enzyme molecules are much smaller than dietary proteins -- smaller even than DNA molecules. They are indeed small enough to be absorbed. The bottom line is that supplemental proteolytic enzymes can cross the intestinal wall. How exactly, then, are they transported across the mucosal membrane of the small intestine? The definitive answer appears to be unknown at this time.
Nevertheless, studies indicate that proteolytic enzymes are able to increase the permeability of the mucosal epithelium and, hence, facilitate their own absorption by a mechanism of self-enhanced paracellular diffusion (i.e., across the tight junctions between the epithelial cells). At this point, it is probably worth abandoning our attempt to argue against the critics, returning to the bumblebee analogy, and examining what's before our eyes. The bottom line is that if we can demonstrate that proteolytic enzymes consumed orally can later be found in the bloodstream, then we know they are absorbed, no matter how many experts tell us they can't get there -- even if we don't know exactly how they got there. And in fact, there are a plethora of studies that prove they reach the bloodstream.

- There are at least 17 studies that prove that nattokinase enters the bloodstream.
- Seaprose-S has at least six studies proving its efficacy on individuals with bronchial and sinus mucus as well as inflammatory issues.
- As for others, there have been a number of studies over the years that substantiate their efficacy in the treatment of inflammatory disorders of the musculoskeletal system.

When summarizing the argument pro and con on the absorption of non-enterically coated proteolytic enzymes in the intestinal tract, I'm reminded of the movie "Chicago." The husband of Kitty (Lucy Liu), when caught in bed with two women, says to his wife, "Are you going to believe what you see or what I say?" In the end, it doesn't matter what some experts say: proteolytic enzyme supplements can be seen in the bloodstream, and their benefits can be seen by anyone who uses them.

Fatigue After Eating

And now let me touch on one final topic before concluding this newsletter on the absorption of nutrients in the small intestine: fatigue after eating. This appears to be one of those apparent paradoxes that people have a hard time understanding. How can eating sometimes exhaust us? We know that we can drink Gatorade or have a Snickers bar for quick energy in the middle of the day. But why is it that when we eat a larger, healthy, full spectrum meal (proteins, carbohydrates, and fats), we actually feel enervated and sleepy for some time after eating, before the energy kicks in? And the answer is actually quite simple. Digesting and absorbing food is energy intensive and exhausts the body. It takes energy for the body to produce stomach acid and pepsin. It takes energy for the body to produce the pancreatic enzymes that assist in digestion in the small intestine. And as we've seen in this newsletter, it takes energy to actually absorb proteins and carbohydrates across the enterocytes, into the villi, and on into the bloodstream. All in all, the body expends a great amount of energy getting nutrients into your bloodstream -- enough energy so that you feel exhausted after eating a large meal. And the larger the meal, the more exhausted you feel. It is not until the digested and absorbed nutrients actually make their way through the bloodstream and on into every single cell in your body that you get your energy back. In the end, you gain more energy than you expended, and it is that energy that is used to power your body. But it can take several hours after eating to go from a negative expenditure of energy to a positive intake of energy and balance the scales out.
As a side note, taking supplemental digestive enzymes with your meals significantly decreases the fatigue experienced after eating large meals, since they relieve your body of so much of its digestive work.

Okay, that concludes our exploration of the small intestine, both digestion and absorption. In our next newsletter, we will pick up with the ileocecal valve, the gateway between the small and large intestines. From there we will explore how chyme (actually called fecal matter at this point) moves on through the large intestine and on out through the rectum. We will also explore all of the problems that can occur, including colorectal cancer, and some of the options you have in dealing with them -- both medical and alternative.

Posted By: Jon Barron
Edited To Fit This Format

And now we are ready to conclude our series on the intestinal tract. Several months ago, we began at the top of the tract, in the mouth. We followed our meal step-by-step as it moved on down the esophagus into the stomach, where initial digestion began. We then moved into the duodenum and the small intestine, where digestion was completed and absorption took place. Now, in this newsletter, we turn to the large intestine, or colon, which absorbs any remaining water from the feces and transfers them to the rectum for excretion. As part of our exploration, we will also explore the various reflexes that move feces into and through the colon. And finally, we will conclude by examining the complicated anal sphincter muscle that controls passage through the anus and then discussing the physiology of defecation. Along the way, we will also explore those things that can go wrong in the colon -- from colon cancer to diverticular disease -- and the options you have to correct them. Let's begin by looking at the anatomy of the colon, rectum, and anus.

The Colon, Rectum, And Anus

The large intestine (aka the colon or large bowel) is the last part of the digestive system and has two primary functions:

- It extracts water and salt from solid wastes before they are eliminated from the body. It should be noted that by the time chyme enters the large intestine, 90% of its water has already been absorbed in the small intestine. On the other hand, as we saw earlier, absorbing that final ten percent is essential for maintaining proper hydration in the body. If the secretion and absorption of water don't balance, we become either bloated or dehydrated. It's also essential for firming up the stools and preventing diarrhea. (The large intestine does not play a major role in the absorption of nutrients in the body.)
- It uses bacteria that reside in the colon to ferment and break down any food material that passed through the small intestine unabsorbed. These materials consist largely of amylose (a form of starch), undigested protein, and "indigestible" carbohydrates. The bacteria break down some of these materials for their own nourishment and create acetate, propionate, and butyrate as waste products, which in turn are used by the cell lining of the colon for nourishment. Fermentation by bacteria also produces methane gas and hydrogen gas, and assists in the breakdown of bile salts.

Note: intestinal gas is primarily swallowed air. Only about 20% consists of the methane and hydrogen produced from fermentation by bacteria.
Anatomically, the large intestine begins with an area called the cecum (caecum), which extends on up through the ascending colon, across the body through the transverse colon, then down towards the anus through the descending colon. It ends in an S-shaped "trap" area called the sigmoid colon, which leads to the rectum, and then on out through the anus. In total, it is about five feet (1.5 meters) in length. On average, it is about 2.5 inches wide, but it generally starts much wider in the ascending colon and narrows by the time it reaches the sigmoid colon. The pH in the colon varies between 5.5 and 7 (slightly acidic to neutral). Structurally, the walls of the colon are similar to those of the small intestine. All of the underlying layers are virtually identical. The serosa (outside covering), muscularis (layer of muscles that control peristalsis), and submucosa (connective tissue) are all the same. The mucosa, the actual surface on the inside of the large intestine, however, is different. Since nutrient absorption is not a factor, there are no villi. Instead, we find a smooth, velvety surface with pits dropping deep into the mucosa. The pits are for absorbing water. Note: mucus is secreted by the mucosa to lubricate the colon, but enzymes are not secreted.

The Ileocecal Valve
The ileocecal valve is actually a fold of muscle-controlled mucosa located in the cecum, between the small and large intestines, that serves as the inlet valve of the colon. It acts as a one-way valve to allow food wastes to flow from the small intestine into the first part of the colon, the cecum, but prevents waste in the colon from leaking back into the small intestine. It is the distension of the cecum, caused by the chyme entering from the small intestine, that actually triggers the closing of the ileocecal valve. The ileocecal valve also has a second, related function -- to prevent the contents of the ileum from passing into the cecum prematurely. Note: once chyme (food mixed with digestive juices) passes through the ileocecal valve and enters the cecum, it picks up a new name. It is now designated as fecal matter, and it is still fecal matter if it backs up through a malfunctioning ileocecal valve and reenters the small intestine. The proper function of the ileocecal valve is to open and close upon demand. When this muscle sticks in the open position, it allows fecal matter back into the small intestine. Not healthy! When the muscle is stuck in the closed position, it causes constipation. The main causes of these two conditions are improper diet and stress, and either condition can seriously affect the body. Alcohol in particular can cause the valve to stick in the open position, resulting in the toxic feeling associated with hangovers.

Shaped like a pouch, the cecum (also spelled caecum) is where the colon begins. It sits on the right side of your body (on the left when viewed from the front, as in anatomical diagrams) and, as already mentioned, is connected to the small intestine through the ileocecal valve. Its sole function is to receive waste from the small intestine as it pours through the ileocecal valve. Located below the ileocecal valve are the vermiform and retrocecal appendixes. The retrocecal appendix is located inside the cecum and rarely causes a problem. The vermiform appendix ("wormlike add-on") is the familiar one that dangles from the cecum and can frequently become inflamed or infected and require surgery.
As with the gallbladder, the medical community considers the appendix to be vestigial -- an evolutionary holdover primarily used by ruminants for hard-to-digest foods, particularly woody foods. The thinking is that in people, it's become less and less important over time -- shriveling to a wormlike vestigial organ that gets infected. However, thanks to surgeons who now save anyone with appendicitis, there's no evolutionary imperative for the appendix to disappear, so it continues. At least that's the medical thinking. But as with the gallbladder, that thinking may be a misapprehension, and the vermiform appendix may not be as vestigial as is medically assumed. There is now evidence that the appendix may be of significant importance -- that it plays a powerful role in the functioning of the immune system and that it serves as a storage area for beneficial bacteria. According to a paper published in the Journal of Evolutionary Biology, the appendix serves a dual function. First, it makes, trains, and directs white blood cells. Second, it serves as a type of warehouse or storage compartment for "good bacteria" that boost the immune system when help is required. According to the research, the appendix holds on to reserves of "good bacteria" so that when bad bacteria flourish or a nasty case of diarrhea reduces the colonies of good bacteria, the appendix can send in reinforcements. These bacteria may also influence white blood cells to clear up any infections in the gut. The studies cited in the paper clearly indicate that the appendix does indeed influence white cell function. So once again, it appears medical science may have "vestigialized" an important functioning organ.

The Traffic Junction
The three organs just discussed, the cecum, the ileocecal valve, and the appendix, form what can be described as a traffic junction designed to control the flow of waste into the large intestine. Ideally, they should be cleared of waste on a continual basis -- daily at the very least. This can most easily be achieved by using the squatting position when evacuating your bowels. (If you are not presently visiting a rural village in India where the toilet is a hole in the ground, you can always use a toilet footstool.) In the squatting position, the left thigh supports the descending and sigmoid colons so as to minimize straining and help squeeze fecal matter on into the rectum for imminent evacuation. In addition, the squatting position helps relax the rectal muscles to facilitate evacuation. Meanwhile, the right thigh presses against the lower abdomen on the right side of the body, thereby "squeezing" the cecum to force waste upwards into the ascending colon and away from the appendix, ileocecal valve, and small intestine. As a result of waste being pushed up out of the cecum, the appendix is kept free of waste and is unlikely to ever get infected. In addition, pressure from the right thigh also helps the ileocecal valve stay securely closed to guard against any leakage of waste into the small intestine. Finally, as a result of the reduced pressure required for evacuation, the squatting position is a highly effective treatment/preventative for hemorrhoids.

The Large Intestine
Once fecal matter arrives in the cecum, it begins its journey through the rest of the large intestine and on out of the body. The ascending colon, on the right side of the abdomen, is about 25 cm (10 inches) long in humans.
It extends from the cecum straight up the right side of your abdominal cavity to just under the liver, where it makes a sharp right-angle bend to the left (in what is known as the hepatic flexure) and becomes the transverse colon. The ascending colon receives fecal material as a liquid. The muscles of the colon then move the watery waste material forward and slowly begin the absorption of all excess water. The transverse colon runs straight across the body from right to left, from the hepatic flexure to what is called the splenic flexure (the right-angle bend on the left side of the body just below the spleen). As you may remember from our last newsletter, the transverse colon hangs off the stomach, attached to it by the greater omentum. It is about 18 inches long. The transverse colon is unique among the parts of the large intestine in one important way: it is mobile. The ascending, descending, and sigmoid colons are pretty much locked into place and do not move noticeably. Not so for the transverse colon. This becomes particularly important later in the newsletter when we talk about prolapsed colons. It should also be noted that colon cancer starts to become more frequent as we enter the transverse colon, with its incidence steadily increasing as we move further along the bowel, peaking when we reach the sigmoid colon and the rectum. One other note on the transverse colon: in some people who are not evacuating their bowels properly, it can become a major storage area for fecal matter. Again, this will be a factor when we talk about prolapsed colons.

The descending colon runs down the left side of the body, from the splenic flexure at the end of the transverse colon to the beginning of the sigmoid colon, and is about 12 inches in length. The function of the descending colon in the digestive system is to store waste that will be emptied into the rectum. It is also in the descending colon that stools start to become semi-solid as they move on to the sigmoid colon. The sigmoid colon is about 18 inches long and is S-shaped. In fact, sigmoid means S-shaped. It begins just after the descending colon and ends just before the rectum. Stools more or less complete their solidification in the sigmoid colon. Additionally, the walls of the sigmoid colon are muscular and contract to forcefully "move" stools into the rectum.

The rectum begins at the end of the sigmoid colon and is about four to six inches in length. It is defined by its powerful muscles and by the fact that it sits outside the peritoneal lining (the lining of the abdominal cavity). Essentially, the rectum serves as a holding area for fecal matter. Internally, the rectum contains little transverse folds that serve to keep the stool in place until you're ready to go to the bathroom. When you're ready, the stool enters the lower rectum, moves into the anal canal, and then passes through the anus on its way out. Stimulation of the rectum (giving you the urge to go to the bathroom) occurs both internally (an involuntary stimulus) and externally (when you voluntarily squeeze the muscles). Note: by the time they reach the rectum, feces are composed of water, salts, desquamated (peeled off or shed) epithelial cells, bacterial decay products, and undigested food (fiber, etc.). Also, the rectum is an excellent absorber. It can be used to instill (insufflate) water, salts, medication, and/or herbs rapidly -- almost as fast as if administered intravenously. The anus is the end of the trail.
Its function is to control the expulsion of feces. The flow of fecal matter through the anus is controlled by the anal sphincter muscle.

Physiology Of Defecation
The feces end up in the rectum via mass peristalsis. Receptors signal distension of the rectum to the brain. This is a conscious perception. The defecation reflex is initiated when parasympathetic (involuntary) stimulation from the spinal cord contracts the longitudinal rectal muscles. This causes pressure to increase in the rectum. Pressure is added to the rectum by voluntary contraction of the abdominal muscles. Parasympathetic stimulation (again involuntary) relaxes the internal sphincter of the anus. This increases the urge to defecate. Finally, the external sphincter is opened by voluntary relaxation, which allows the feces to pass out of the body. This can be postponed by voluntary contraction. This is useful since it allows us to wait for an appropriate time/place to go to the bathroom. However, continually postponing defecation begins to dull the evacuation response over time -- leading to chronic constipation. Then again, voluntary postponement can be overwhelmed by conditions such as diarrhea or long-term weakening of the muscles. And finally, sphincter muscles weakened by age, disease, or trauma can cause incontinence (the inability to hold feces in). Note: bulky, indigestible fiber acts like a "colonic broom" to move feces through the system more quickly, carrying fat, cholesterol, and carcinogens with it.

Things That Can Go Wrong
According to medical doctors, digestion time (from entering your mouth to passing through your anus) varies depending on the individual. For healthy adults, according to the Mayo Clinic, "It's usually between 24 and 72 hours. After you eat, it takes about six to eight hours for food to pass through your stomach and small intestine. Food then enters your large intestine (colon) for further digestion and absorption of water. Elimination of undigested food residue through the large intestine usually begins after 24 hours. Complete elimination from the body may take several days." That means that, medically speaking, constipation is defined as anything fewer than three bowel movements per week. Or, put another way, "normal" could be defined as slightly less than one bowel movement every other day. Quite simply, that's nonsense. It's merely the average elimination time that most doctors see in their patients. But keep in mind, 99% of those patients are eating the standard fast-food, highly processed, low-fiber modern diet. That's neither healthy nor "normal." It's merely what most people do, and most people are unhealthy -- or rapidly moving in that direction. In fact, normal digestion/elimination time is about 24 hours. You literally should have one major bowel movement for every meal you had the day before. You should be passing the waste from yesterday's breakfast when you get up in the morning, or shortly after today's breakfast. Yesterday's lunch should pass around lunchtime and dinner around dinnertime. Holding waste in the colon for longer periods of time is one of the single biggest factors in the onset of many major diseases -- not just the colon-specific diseases we will discuss below. Other than eating a healthy, high-fiber, largely raw food diet, the single best thing you can do for your overall health and the health of your colon is a semi-annual colon cleanse.
Any program designed to improve our health or to eliminate disease from our bodies must begin with intestinal cleansing and detoxification. It is the "sine qua non" of health (literally, "without which, there is not"). Look for a program that addresses all of the following aspects of intestinal health:
- Remove all old fecal matter and waste from the colon (to clear the drain, if you will).
- Help remove all the heavy metals and drug residues that have accumulated in the body as a result of having your drain plugged.
- Strengthen the colon muscle so that it works again.
- Repair any damage, such as herniations and inflammations of the colon and small intestine.
- Eliminate the presence of polyps and other abnormal growths that have been allowed to flourish because of an unhealthy intestinal environment.
- Rebuild and replenish the various "friendly" bacteria cultures that ideally should line virtually every square inch of the tube -- again, from mouth to anus.
A minor digression before we continue! It probably would make sense to define a handful of surgical terms that you are likely to hear from your doctor if you ever have to visit her for any of the conditions below.
- -tome -- to cut
- -ectomy -- to cut out, as in appendectomy and cholecystectomy
- -otomy -- to cut open and then close again, as in colotomy
- -ostomy -- to cut open and make (semi-)permanent, as in colostomy
The most obvious place we see problems associated with not regularly evacuating the bowels is when it comes to colon cancer. Feces remain in the colon for a long time, and carcinogens in feces (which are concentrated to their maximum degree at that point) are currently assumed to explain the prevalence of colon cancer -- second only to lung cancer in the number of deaths it causes each year in the US. Fecal matter maintains contact with the wall of the large intestine for many hours (sometimes for many days if you are not effectively clearing your bowels on a daily basis). The longer the contact, the greater the problem. The more severe the constipation, the greater the problem. If this fecal matter contains carcinogens ingested with the diet, those carcinogens (some of which are found in grilled meat) have an excellent chance of affecting the wall of the colon -- particularly at the places of longest contact. Not surprisingly, the longest contact and the highest incidence of colon cancer occur in the sigmoid colon, just above the rectum, and in the rectum itself. Societies that eat high-fiber, unprocessed diets (which move through the colon more quickly) have far lower incidences of colon cancer, diverticulitis, appendicitis, and coronary artery disease. That said, high-fiber diets and proper elimination are not the only factors involved in colon cancer. You can still get colorectal cancer even if you do everything right. Genetics may play a role in up to 10% of colon cancers, for example. Exposure to toxins may also be a factor. Rancid fats in the diet (vegetarian included), too many omega-6 fatty acids as found in most vegetable oils, and, of course, a weakened immune system can all contribute to a higher risk of colon cancer. As always with issues of health, it's a question of odds, not guarantees. A polyp is a projecting mass of overgrown tissue. It looks a lot like an inflated balloon, with the part you tie off attached to wherever it's growing from. Although it is not cancerous itself, virtually all colorectal cancer develops from polyps.
When identified during a colonoscopy, polyps are snipped out on the spot, thereby eliminating the risk of cancer from that particular polyp. The same things that cause colon cancer are the things that cause polyps.

Prolapsed Colon (Ptosis)
Ptosis is defined as the abnormal descent (prolapse) of the transverse colon in the abdominal cavity. It is usually associated with the downward displacement of other viscera. It is actually quite common, although the degree to which the transverse colon may prolapse can vary wildly -- from very mild to a full V shape, with the middle of the colon actually dropping down all the way to the pelvis. It also should be noted that it is rare for the transverse colon to prolapse by itself without being accompanied by the prolapse of other abdominal organs. In fact, the term now most commonly used to refer to the condition is enteroptosis (entero referring to the entire intestinal area), which reflects this multi-organ reality. The condition will place pressure on all of the organs under it -- uterus, ovaries, prostate, gonads, and bladder. It will exacerbate any tendency towards constipation and will decrease circulation to all of the organs in the lower half of the abdominal cavity. Also, the more pronounced the condition is, the more likely it is to produce a lower "belly bulge" that won't go away no matter how much weight you lose or how scrawny the rest of your body becomes. The condition is more common in women than men and, in fact, frequent pregnancy is sometimes hypothesized as a contributing factor. But the truth is that although many causes (congenital anomalies, weakness of abdominal muscles from lack of exercise, heavy lifting, etc.) are suspected, no definitive cause has been found. But there can be no doubt that storing undefecated fecal matter in the transverse colon while awaiting the slow evacuation of the bowels cannot help. In some people, pounds of old fecal matter can be found in the transverse colon waiting for a chance to exit the body. And considering that constipation is far more common in women than in men, this would also account for the prevalence of ptosis in women.

How do you treat a prolapsed colon? Actually, medical science has little to offer in the way of help. Surgery is problematic and only rarely helpful. Instead, you need to rebuild your intestinal foundation so as to once again fully support the transverse colon. It is difficult to "fully" reverse a prolapsed colon once it has occurred, but it is possible to "mostly" reverse it -- at least to the point that it is no longer visible and no longer noticeably impacts your overall health. The protocol includes:
- An intestinal cleanse to remove any accumulated fecal matter in the transverse colon, thereby decreasing the weight of the organ and therefore its tendency to prolapse.
- Using a toilet footstool to get your feet up into a squatting position and optimize your posture for more effective evacuation.
- Exercising your abdominal muscles -- all of them. This means not just things like sit-ups, but more yoga-based exercises such as uddiyana bandha that actually lift the internal organs.
- Incorporating inverted postures such as a yoga shoulder stand, or using an inversion machine to hang upside down and let gravity do the work. Or just use a slant board to get your feet and lower body higher than your head.
- Deep massage that incorporates intestinal work can also help.

Crohn's, IBS, Ulcerative Colitis
More Americans are hospitalized for digestive diseases than for any other type of illness.
In fact, digestive diseases cost the United States alone an estimated $91 billion annually in health care costs, lost work days, and premature deaths. And the bottom line is that virtually every single American will suffer from some form of chronic digestive disorder if they live long enough -- and the rest of the world is following close behind. Four years ago, I wrote a newsletter on Crohn's disease, IBS, and ulcerative colitis. The information and recommendations still apply today.

Diverticular disease represents one of the great conflicts between the alternative health community and the medical community. For several decades, from the early 1900s to the 1940s, the alternative health community vehemently argued that the "modern" diet was creating outpouchings, or herniations, of the colon. The medical community's equally vehement response was that this was utter nonsense. After all, they argued, "We perform numerous autopsies and never see any evidence of it." And they called alternative health practitioners quacks. Nevertheless, starting in the 1950s, they began to take possession of the problem and named it diverticulosis. And as is typical, they gave no acknowledgement to the members of the alternative health community, such as John Harvey Kellogg, M.D., who identified the disease almost a half century before they did. Nor was there any acknowledgement that they had missed identifying the condition throughout almost a half century of autopsies -- something worth keeping in mind the next time you hear the medical community say that today's autopsies never provide any evidence of people retaining large amounts of old fecal matter in their colons. Bragging rights aside, it is now understood by all concerned that many people have small pouches in the lining of their colon that bulge outward through weak spots. Each pouch is called a diverticulum. Multiple pouches are called diverticula. The condition of having diverticula is called diverticulosis. About 10 percent of Americans older than 40 have diverticulosis. About half of all people older than 60 have diverticulosis. The incidence of diverticulosis has increased dramatically, from just 10 percent of the adult population over the age of 45 who had this disease in 1952 to an astounding "every person will have many" diverticula, if they live long enough, according to the 1992 edition of the Merck Manual. We've certainly come a long way since the medical community's denial of the first half century.

Comparison Of Digestive Tract Length
Back in September, when we started this series on the digestive tract, I announced that as we proceeded, we would be comparing the digestive systems of humans to those of other animals to see what conclusions could be drawn as to what diet we should eat. And we have done that. We've compared teeth and seen that human teeth are nothing like the teeth of carnivores. We've compared stomachs and seen that, once again, the human stomach is very different from that of carnivores and omnivores. In fact, when it comes to teeth and stomachs, humans most closely resemble animals that eat a diet composed mostly of fresh fruit, vegetables, and nuts -- with, in some instances, a bit of raw meat thrown in for good measure. Is this important? Yes! The medical community bases its assumptions concerning the human digestive system on the "fact" that it is essentially designed as an omnivore system.
But as I discussed in detail in Lessons from the Miracle Doctors (and so far in this series on the digestive tract), this is simply not supported by the evidence at hand. This distinction is not subtle, and it is not insignificant. Yes, the human body has an amazing ability to adapt to any diet we throw at it -- but not without consequences. And, in fact, many of the diseases we face today are the direct result of not understanding what our systems are designed to handle and the consequences we face as a result. So, in this newsletter, we reach the last point of comparison: the length of the alimentary canal compared to the length of the body.

An examination of the carnivore intestinal tract reveals a short (relative to the length of the body) tract for fast transit of waste out of the body. The actual length of the carnivore bowel (small and large combined) is approximately 3-5 times the length of the body -- measured from mouth to anus -- a ratio less than half that found in humans. Fast transit of waste is essential for carnivores for two reasons. The faster the transit, the less opportunity for parasites to take hold. Also, meat tends to putrefy in the intestinal tract, so fast transit limits exposure to the byproducts of putrefaction. As for the herbivore (cows, sheep, etc.) bowel, at 20-28 times the length of the body (from mouth to anus), it usually runs almost eight times longer than a carnivore's, since plant matter (unlike meat) is not prone to putrefaction, thus rendering quick elimination moot. Again, not much like us. As for the bowel of the frugivore (gorilla, orangutan, chimpanzee, etc.), it runs about 10-12 times the length of the body from mouth to anus. So which intestinal tract does the human alimentary canal most closely resemble? As we discussed in our Digestive System Overview, the entire system runs about 30 feet in length from mouth to anus. Let's total up the lengths we have identified so far:
- Esophagus: one foot
- Small intestine: 23 feet
- Bowel: five feet (as cited above)
That's 29 feet. Add in the mouth, stomach, and rectum and you have a total length of approximately 30 feet. Now compare that to the length of the body (mouth to anus). Why mouth to anus and not head to toe? Because when calculating the body length of four-legged animals, we don't stretch out the legs and add them in. We measure from mouth to tail, and so, for a valid comparison, we need to do the same with humans. In any case, mouth to anus is about 2.5 to 3 feet. That gives you a ratio of 10-12 to one. Bingo! It's an absolute match to the frugivore intestinal tract.
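For those who like to see the arithmetic, here is a tiny sketch of the ratio calculation just described. All of the numbers are the figures quoted above; nothing new is assumed beyond rounding the total tract length to 30 feet.

```python
# Quick check of the gut-length-to-body-length ratio described above.
# Every figure here is taken from the text; this is arithmetic, not new data.
segment_feet = {"esophagus": 1, "small intestine": 23, "bowel": 5}
tract_feet = sum(segment_feet.values()) + 1   # ~1 extra foot for mouth, stomach, rectum

for body_feet in (2.5, 3.0):
    print(f"body length {body_feet} ft -> ratio {tract_feet / body_feet:.0f} to one")
# Prints 12 and 10 -- squarely in the frugivore range (10-12 times body length),
# versus roughly 3-5 times for carnivores and 20-28 times for herbivores.
```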
What Should We Eat?
So, are we restricted to fruits and nuts? No. In fact, the frugivores we most closely resemble, the wild chimpanzees, periodically eat live insects and raw meat. Among the great apes (the gorilla, the orangutan, the bonobo, and the chimpanzee) and ourselves, only humans and chimpanzees hunt and eat meat on a frequent basis. Nevertheless, chimpanzees are largely fruit eaters, and meat comprises only about 3 percent of their diet -- far less than is found in the typical Western diet. Is a vegetarian diet automatically healthier? Not necessarily. Some people actually do better when they include small amounts of meat in their diet -- although, to be sure, a balanced vegetarian diet appears to offer some protection against cancer and heart disease. Other factors in our diet, however, affect our health to a much greater degree than whether or not we eat meat. The bottom line is that, ethical questions aside, eating small amounts of meat, chicken, or fish probably comes down mostly to a personal choice. If you choose to, you can include meat in your diet without any significant health problems -- with the following provisos:
- Keep the amount small: three ounces a day or less.
- If you're going to eat meat, eat organic. Eat grass-fed beef, free-range chicken and eggs, and wild-caught fish.
- Avoid or minimize dairy. And if you must have it, have it raw -- or at the very least free of growth hormones. Remember, heat (pasteurization) denatures proteins, specifically making several dairy proteins relatively indigestible and highly allergenic.
- Include lots of water-soluble fiber in your diet to keep the unabsorbed proteins moving through the digestive tract. If nothing else, incorporate a tablespoon of psyllium as part of your daily regimen.
We've covered the intestinal tract from mouth to anus over the last five-plus months. Specifically, we've explored how we get food into the digestive tract, which organs support digestion, how nutrients are absorbed, and how we process and eliminate waste. So what useful things have we learned?
- It's important to chew food thoroughly so that it mixes completely with the amylase in your saliva.
- Eat raw foods as much as possible so that your food is packed with live enzymes.
- Use digestive enzyme supplements with your meals to compensate for any shortage of live enzymes in your food. Any shortage causes the body to produce excess stomach acid to compensate.
- Do not drink a large amount of fluids (water, soda, beer) with your meals, as that dilutes digestive juices, thus forcing the body to produce more excess stomach acid to compensate.
- How to correct excess stomach acid without using antacids or proton pump inhibitors.
- Why antacids ultimately lead to more stomach acid.
- Why proton pump inhibitors ultimately lead to nutrition problems.
- How to use self-massage, chiropractic adjustment, and special exercises to correct hiatal hernias.
- Why it makes sense to regularly run a liver detox program to clean out your liver, pancreas, gallbladder, and kidneys.
- How to make sure you absorb the vitamins and minerals you eat or supplement with.
- How to rebuild and replenish the various "friendly" bacteria cultures that ideally should line virtually every square inch of the tube -- again, from mouth to anus.
- Why it makes sense to regularly run a colon detox program to clean out your intestinal tract -- particularly the large intestine.
Digestion Information: Natural News 6/30/2012 - Improve Your Digestion Naturally
Digestive System Information: Natural News 8/15/2012 - Digestive System 101: Here's How It Really Works
Digestion Information: Dr. Kaslow - 5 Reasons To Eat Small Meals More Frequently
Digestion Information: Cancer Checklist - Digestion
Digestion Information: Dr. Ben Balzer, M.D. - Introduction To The Paleolithic Diet
Digestion Information: Dr. Mercola 1/6/2011 - Problems With Digestion?
Digestion Information: Dr. Mercola 1/1/2013 - Your Digestive System Dictates Whether You're Sick or Well
http://www.alternative-health-group.org/digestion.php
A parabola (plural parabolas or parabolae; adjective parabolic; from Greek: παραβολή) is a two-dimensional, mirror-symmetrical curve, which is approximately U-shaped when oriented with its opening upward, but which can be in any orientation in its plane. It fits any of several superficially different mathematical descriptions, which can all be proved to define curves of exactly the same shape. One description of a parabola involves a point (the focus) and a line (the directrix). The focus does not lie on the directrix. The locus of points in that plane that are equidistant from both the directrix and the focus is the parabola. Another description of a parabola is as a conic section, created from the intersection of a right circular conical surface and a plane which is parallel to a straight line on the conical surface and perpendicular to another plane which includes both the axis of the cone and also the same straight line on its surface. A third description is algebraic. A parabola is the graph of a quadratic function, such as y = x².

The line perpendicular to the directrix and passing through the focus (that is, the line that splits the parabola through the middle) is called the "axis of symmetry". The point on the axis of symmetry that intersects the parabola is called the "vertex", and it is the point where the curvature is greatest. The distance between the vertex and the focus, measured along the axis of symmetry, is the "focal length". The "latus rectum" is the chord of the parabola which is parallel to the directrix and passes through the focus. Parabolas can open up, down, left, right, or in some other arbitrary direction. Any parabola can be repositioned and rescaled to fit exactly on any other parabola — that is, all parabolas are geometrically similar.

Parabolas have the property that, if they are made of material that reflects light, then light which enters a parabola travelling parallel to its axis of symmetry is reflected to its focus, regardless of where on the parabola the reflection occurs. Conversely, light that originates from a point source at the focus is reflected ("collimated") into a parallel beam, leaving the parabola parallel to the axis of symmetry. The same effects occur with sound and other forms of energy. This reflective property is the basis of many practical uses of parabolas.

The parabola has many important applications, from a parabolic antenna or parabolic microphone to automobile headlight reflectors to the design of ballistic missiles. They are frequently used in physics, engineering, and many other areas. Strictly, the adjective parabolic should be applied only to things that are shaped as a parabola, which is a two-dimensional shape. However, as shown in the last paragraph, the same adjective is commonly used for three-dimensional objects, such as parabolic reflectors, which are really paraboloids. Sometimes, the noun parabola is also used to refer to these objects. Though not perfectly correct, this usage is generally understood.
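As a quick, informal illustration of the focus-directrix description above, the following sketch checks numerically that points on a parabola are equidistant from the focus and the directrix. It assumes the standard upward-opening form y = x²/(4f) with focus (0, f) and directrix y = −f; the focal length and sample points are arbitrary choices, not values from the article.

```python
import math

# Check that points on y = x**2 / (4*f) are equidistant from the focus (0, f)
# and the directrix y = -f. The focal length f is an arbitrary assumption.
f = 1.5
for x in [-3.0, -0.5, 0.0, 1.0, 2.7]:
    y = x * x / (4 * f)
    to_focus = math.hypot(x, y - f)   # distance from (x, y) to the focus
    to_directrix = y + f              # perpendicular distance to the line y = -f
    assert abs(to_focus - to_directrix) < 1e-12
print("each sampled point is equidistant from the focus and the directrix")
```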
The name "parabola" is due to Apollonius, who discovered many properties of conic sections. It means "application", referring to "application of areas" concept, that has a connection with this curve, as Apollonius had proved. The focus–directrix property of the parabola and other conics is due to Pappus. Galileo showed that the path of a projectile follows a parabola, a consequence of uniform acceleration due to gravity. The idea that a parabolic reflector could produce an image was already well known before the invention of the reflecting telescope. Designs were proposed in the early to mid seventeenth century by many mathematicians including René Descartes, Marin Mersenne, and James Gregory. When Isaac Newton built the first reflecting telescope in 1668 he skipped using a parabolic mirror because of the difficulty of fabrication, opting for a spherical mirror. Parabolic mirrors are used in most modern reflecting telescopes and in satellite dishes and radar receivers. Equation in Cartesian coordinates Let the directrix be the line x = −p and let the focus be the point (p, 0). If (x, y) is a point on the parabola then, by Pappus' definition of a parabola, it is the same distance from the directrix as the focus; in other words: Squaring both sides and simplifying produces as the equation of the parabola. By interchanging the roles of x and y one obtains the corresponding equation of a parabola with a vertical axis as The equation can be generalized to allow the vertex to be at a point other than the origin by defining the vertex as the point (h, k). The equation of a parabola with a vertical axis then becomes The last equation can be rewritten so the graph of any function which is a polynomial of degree 2 in x is a parabola with a vertical axis. More generally, a parabola is a curve in the Cartesian plane defined by an irreducible equation — one that does not factor as a product of two not necessarily distinct linear equations — of the general conic form with the parabola restriction that where all of the coefficients are real and where A and C are not both zero. The equation is irreducible if and only if the determinant of the 3×3 matrix is non-zero: that is, if (AC − B2/4)F + BED/4 − CD2/4 − AE2/4 ≠ 0. The reducible case, also called the degenerate case, gives a pair of parallel lines, possibly real, possibly imaginary, and possibly coinciding with each other. Conic section and quadratic form Cone with cross-sections The diagram represents a cone with its axis vertical. The point A is its apex. A horizontal cross-section of the cone passes through the points B, E, C, and D. This cross-section is circular, but appears elliptical when viewed obliquely, as is shown in the diagram. An inclined cross-section of the cone, shown in pink, is inclined from the vertical by the same angle, θ, as the side of the cone. According to the definition of a parabola as a conic section, the boundary of this pink cross-section, EPD, is a parabola. The cone also has another horizontal cross-section, which passes through the vertex, P, of the parabola, and is also circular, with a radius which we will call r. Its centre is V, and PK is a diameter. The chord BC is a diameter of the lower circle, and passes through the point M, which is the midpoint of the chord ED. Let us call the lengths of the line segments EM and DM x, and the length of PM y. - (The triangle BPM is isosceles.) - (PMCK is a parallelogram.) 
Using the intersecting chords theorem on the chords BC and DE, we get:
BM · CM = EM · DM, i.e. (2y sin θ)(2r) = x², so x² = 4ry sin θ.
For any given cone and parabola, r and θ are constants, but x and y are variables which depend on the arbitrary height at which the horizontal cross-section BECD is made. This last equation is a simple quadratic one which describes how x and y are related to each other, and therefore defines the shape of the parabolic curve. This shows that the definition of a parabola as a conic section implies its definition as the graph of a quadratic function. Both definitions produce curves of exactly the same shape.

Focal length
It is proved below that if a parabola has an equation of the form y = ax², where a is a constant, then a = 1/(4f), where f is its focal length. Comparing this with the last equation above shows that the focal length of the above parabola is r sin θ.

Position of the focus
If a line is perpendicular to the plane of the parabola and passes through the centre, V, of the horizontal cross-section of the cone passing through P, then the point where this line intersects the plane of the parabola is the focus of the parabola, which is marked F on the diagram. Angle VPF is complementary to θ, and angle PVF is complementary to angle VPF, therefore angle PVF is θ. Since the length of PV is r, this construction correctly places the focus on the axis of symmetry of the parabola, at a distance r sin θ from its vertex.

Other geometric definitions
A parabola may also be characterized as a conic section with an eccentricity of 1. As a consequence of this, all parabolae are similar, meaning that while they can be different sizes, they are all the same shape. A parabola can also be obtained as the limit of a sequence of ellipses where one focus is kept fixed as the other is allowed to move arbitrarily far away in one direction. In this sense, a parabola may be considered an ellipse that has one focus at infinity. The parabola is an inverse transform of a cardioid. A parabola has a single axis of reflective symmetry, which passes through its focus and is perpendicular to its directrix. The point of intersection of this axis and the parabola is called the vertex. A parabola spun about this axis in three dimensions traces out a shape known as a paraboloid of revolution. The parabola is found in numerous situations in the physical world (see below).

In the following equations h and k are the coordinates of the vertex, (h, k), of the parabola, and p is the distance from the vertex to the focus and from the vertex to the directrix.

Vertical axis of symmetry
(x − h)² = 4p(y − k)

Horizontal axis of symmetry
(y − k)² = 4p(x − h)

General parabola
The general form for a parabola is
(αx + βy)² + γx + δy + ε = 0
This result is derived from the general conic equation given below:
Ax² + Bxy + Cy² + Dx + Ey + F = 0
and the fact that, for a parabola,
B² = 4AC
The equation for a general parabola with a focus point F(u, v), and a directrix in the form ax + by + c = 0, is
(ax + by + c)² / (a² + b²) = (x − u)² + (y − v)²

Latus rectum, semilatus rectum, and polar coordinates
In polar coordinates, a parabola with the focus at the origin and the directrix parallel to the y-axis is given by the equation
r (1 + cos θ) = l
where l is the semilatus rectum: the distance from the focus to the parabola itself, measured along a line perpendicular to the axis of symmetry. Note that this equals the perpendicular distance from the focus to the directrix, and is twice the focal length, which is the distance from the focus to the vertex of the parabola. The latus rectum is the chord that passes through the focus and is perpendicular to the axis of symmetry. It has a length of 2l.

Gauss-mapped form
A Gauss-mapped form: on the parabola y² = 4x, the point (tan² φ, 2 tan φ) has normal (cos φ, −sin φ).
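The statements above that y = ax² has focal length f = 1/(4a), and that the latus rectum has length 2l = 4f, can be spot-checked with a few lines of arithmetic. This is a minimal sketch; the value of a below is an arbitrary assumption for illustration.

```python
# For y = a*x**2: focal length f = 1/(4a); the latus rectum is the focal chord
# parallel to the directrix, i.e. the chord along y = f. The value of a is
# an arbitrary illustration, not a number taken from the article.
a = 0.7
f = 1 / (4 * a)                    # focal length
half_chord = (f / a) ** 0.5        # positive solution of a*x**2 = f
latus_rectum = 2 * half_chord
semilatus_rectum = latus_rectum / 2

print(latus_rectum, 4 * f)         # both ~1.4286 (= 1/a), so latus rectum = 4f
print(semilatus_rectum, 2 * f)     # semilatus rectum l = 2f, twice the focal length
```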
Proof of the reflective property
The reflective property states that, if a parabola can reflect light, then light which enters it travelling parallel to the axis of symmetry is reflected to the focus. This is derived from the wave nature of light in the caption to a diagram near the top of this article. This derivation is valid, but may not be satisfying to readers who would prefer a mathematical approach. In the following proof, the fact that every point on the parabola is equidistant from the focus and from the directrix is taken as axiomatic. Consider the parabola y = x². Since all parabolas are similar, this simple case represents all others. The right-hand side of the diagram shows part of this parabola.

Construction and definitions
The point E is an arbitrary point on the parabola, with coordinates (x, x²). The focus is F, the vertex is A (the origin), and the line FA (the y-axis) is the axis of symmetry. The line EC is parallel to the axis of symmetry, and intersects the x-axis at D. The point C is located on the directrix (which is not shown, to minimize clutter). The point B is the midpoint of the line segment FC. Measured along the axis of symmetry, the vertex, A, is equidistant from the focus, F, and from the directrix. Correspondingly, since C is on the directrix, the y-coordinates of F and C are equal in absolute value and opposite in sign. B is the midpoint of FC, so its y-coordinate is zero, so it lies on the x-axis. Its x-coordinate is half that of E, D, and C, i.e. x/2. The slope of the line BE is the quotient of the lengths of ED and BD, which is x²/(x/2), which comes to 2x. But 2x is also the slope (first derivative) of the parabola at E. Therefore the line BE is the tangent to the parabola at E. The distances EF and EC are equal because E is on the parabola, F is the focus and C is on the directrix. Therefore, since B is the midpoint of FC, triangles FEB and CEB are congruent (three sides), which implies that the angles marked are equal. (The angle above E is vertically opposite angle BEC.) This means that a ray of light which enters the parabola and arrives at E travelling parallel to the axis of symmetry will be reflected by the line BE so it travels along the line EF, as shown in red in the diagram (assuming that the lines can somehow reflect light). Since BE is the tangent to the parabola at E, the same reflection will be done by an infinitesimal arc of the parabola at E. Therefore, light that enters the parabola and arrives at E travelling parallel to the axis of symmetry of the parabola is reflected by the parabola toward its focus. The point E has no special characteristics. This conclusion about reflected light applies to all points on the parabola, as is shown on the left side of the diagram. This is the reflective property.

Tangent bisection property
The above proof, and the accompanying diagram, show that the tangent BE bisects the angle FEC. In other words, the tangent to the parabola at any point bisects the angle between the lines joining the point to the focus and perpendicularly to the directrix.

Alternative proofs
The above proofs of the reflective and tangent bisection properties use a line of calculus. For readers who are not comfortable with calculus, the following alternative is presented. In this diagram, F is the focus of the parabola, and T and U lie on its directrix. P is an arbitrary point on the parabola. PT is perpendicular to the directrix, and the line MP bisects angle FPT. Q is another point on the parabola, with QU perpendicular to the directrix.
We know that FP = PT and FQ = QU. Clearly, QT > QU, so QT > FQ. All points on the bisector MP are equidistant from F and T, but Q is closer to F than to T. This means that Q is to the "left" of MP, i.e. on the same side of it as the focus. The same would be true if Q were located anywhere else on the parabola (except at the point P), so the entire parabola, except the point P, is on the focus side of MP. Therefore MP is the tangent to the parabola at P. Since it bisects the angle FPT, this proves the tangent bisection property. The logic of the last paragraph can be applied to modify the above proof of the reflective property. It effectively proves the line BE to be the tangent to the parabola at E if the angles are equal. The reflective property follows as shown previously.

Two tangent properties
Let the line of symmetry intersect the parabola at point Q, and denote the focus as point F and its distance from point Q as f. Let the perpendicular to the line of symmetry, through the focus, intersect the parabola at a point T. Then (1) the distance from F to T is 2f, and (2) a tangent to the parabola at point T intersects the line of symmetry at a 45° angle.

Orthoptic property
If two tangents to a parabola are perpendicular to each other, then they intersect on the directrix. Conversely, two tangents which intersect on the directrix are perpendicular. Without loss of generality, consider the parabola y = ax², with a ≠ 0. Suppose that two tangents contact this parabola at the points (p, ap²) and (q, aq²). Their slopes are 2ap and 2aq, respectively. Thus the equation of the first tangent is of the form y = 2apx + C, where C is a constant. In order to make the line pass through (p, ap²), the value of C must be −ap², so the equation of this tangent is y = 2apx − ap². Likewise, the equation of the other tangent is y = 2aqx − aq². At the intersection point of the two tangents, 2apx − ap² = 2aqx − aq². Thus 2x(p − q) = p² − q². Factoring the difference of squares, cancelling, and dividing by 2 gives x = (p + q)/2. Substituting this into one of the equations of the tangents gives an expression for the y-coordinate of the intersection point: y = 2ap(p + q)/2 − ap². Simplifying this gives y = apq. We now use the fact that these tangents are perpendicular. The product of the slopes of perpendicular lines is −1, assuming that both of the slopes are finite. The slopes of our tangents are 2ap and 2aq, so (2ap)(2aq) = −1, so pq = −1/(4a²). Thus the y-coordinate of the intersection point of the tangents is given by y = apq = −1/(4a). This is also the equation of the directrix of this parabola, so the two perpendicular tangents intersect on the directrix.

Dimensions of parabolas with axes of symmetry parallel to the y-axis
These parabolas have equations of the form y = ax² + bx + c. By interchanging x and y, the parabolas' axes of symmetry become parallel to the x-axis.
Some features of a parabola

Coordinates of the vertex
The x-coordinate at the vertex is x = −b/(2a), which is found by differentiating the original equation y = ax² + bx + c, setting the resulting derivative dy/dx = 2ax + b equal to zero (a critical point), and solving for x. Substitute this x-coordinate into the original equation to yield:
y = a(−b/(2a))² + b(−b/(2a)) + c = c − b²/(4a)
Put terms over a common denominator:
y = −(b² − 4ac)/(4a) = −D/(4a)
where D = b² − 4ac is the discriminant. Thus, the vertex is at the point (−b/(2a), −D/(4a)).

Coordinates of the focus
Since the axis of symmetry of this parabola is parallel with the y-axis, the x-coordinates of the focus and the vertex are equal. The coordinates of the vertex are calculated in the preceding section. The x-coordinate of the focus is therefore also −b/(2a). To find the y-coordinate of the focus, consider the point, P, located on the parabola where the slope is 1, so the tangent to the parabola at P is inclined at 45 degrees to the axis of symmetry.
Using the reflective property of a parabola, we know that light which is initially travelling parallel to the axis of symmetry is reflected at P toward the focus. The 45-degree inclination causes the light to be turned 90 degrees by the reflection, so it travels from P to the focus along a line that is perpendicular to the axis of symmetry and to the y-axis. This means that the y-coordinate of P must equal that of the focus. By differentiating the equation of the parabola and setting the slope to 1, we find the x-coordinate of P:
2ax + b = 1, so x = (1 − b)/(2a)
Substituting this value of x in the equation of the parabola, we find the y-coordinate of P, and also of the focus:
y = (1 − D)/(4a)
where D is the discriminant, as is used in the "Coordinates of the vertex" section. The focus is therefore the point:
(−b/(2a), (1 − D)/(4a))

Axis of symmetry, focal length, and directrix
The above coordinates of the focus of a parabola of the form y = ax² + bx + c can be compared with the coordinates of its vertex, which are derived in the section "Coordinates of the vertex", above, and are:
(−b/(2a), −D/(4a))
The axis of symmetry is the line which passes through both the focus and the vertex. In this case, it is vertical, with equation:
x = −b/(2a)
The focal length of the parabola is the difference between the y-coordinates of the focus and the vertex:
f = (1 − D)/(4a) − (−D/(4a)) = 1/(4a)
It is sometimes useful to invert this equation and use it in the form a = 1/(4f). See the section "Conic section and quadratic form", above. Measured along the axis of symmetry, the vertex is the midpoint between the focus and the directrix. Therefore, the equation of the directrix is:
y = −(D + 1)/(4a)

Length of an arc of a parabola
If a point X is located on a parabola which has focal length f, and if p is the perpendicular distance from X to the axis of symmetry of the parabola, then the lengths of arcs of the parabola which terminate at X can be calculated from f and p as follows, assuming they are all expressed in the same units:
h = p/2, q = √(f² + h²), s = hq/f + f ln((h + q)/f)
This quantity, s, is the length of the arc between X and the vertex of the parabola. The length of the arc between X and the symmetrically opposite point on the other side of the parabola is 2s. The perpendicular distance, p, can be given a positive or negative sign to indicate on which side of the axis of symmetry X is situated. Reversing the sign of p reverses the signs of h and s without changing their absolute values. If these quantities are signed, the length of the arc between any two points on the parabola is always shown by the difference between their values of s. The calculation can be simplified by using the properties of logarithms:
s₁ − s₂ = (h₁q₁ − h₂q₂)/f + f ln((h₁ + q₁)/(h₂ + q₂))
This calculation can be used for a parabola in any orientation. It is not restricted to the situation where the axis of symmetry is parallel to the y-axis.
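The arc-length formula just quoted can be compared against direct numerical integration of the arc-length integrand. The sketch below assumes the parabola y = x²/(4f); the particular values of f and p are arbitrary choices for illustration.

```python
import math

# Closed-form arc length from the vertex to a point at perpendicular distance p
# from the axis, for y = x**2 / (4*f): h = p/2, q = sqrt(f**2 + h**2),
# s = h*q/f + f*ln((h + q)/f). Values of f and p are arbitrary assumptions.
f, p = 2.0, 5.0
h = p / 2.0
q = math.hypot(f, h)
closed_form = h * q / f + f * math.log((h + q) / f)

# Midpoint-rule integration of sqrt(1 + (dy/dx)**2) from x = 0 to x = p.
n = 100_000
dx = p / n
numeric = sum(math.sqrt(1.0 + ((i + 0.5) * dx / (2.0 * f)) ** 2) for i in range(n)) * dx

print(closed_form, numeric)   # the two values agree to better than 1e-8
```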
Parabolae in the physical world
In nature, approximations of parabolae and paraboloids (such as catenary curves) are found in many diverse situations. The best-known instance of the parabola in the history of physics is the trajectory of a particle or body in motion under the influence of a uniform gravitational field without air resistance (for instance, a baseball flying through the air, neglecting air friction). The parabolic trajectory of projectiles was discovered experimentally by Galileo in the early 17th century, who performed experiments with balls rolling on inclined planes. He also later proved this mathematically in his book Dialogue Concerning Two New Sciences. For objects extended in space, such as a diver jumping from a diving board, the object itself follows a complex motion as it rotates, but the center of mass of the object nevertheless forms a parabola. As in all cases in the physical world, the trajectory is always an approximation of a parabola. The presence of air resistance, for example, always distorts the shape, although at low speeds the shape is a good approximation of a parabola. At higher speeds, such as in ballistics, the shape is highly distorted and does not resemble a parabola. Another hypothetical situation in which parabolae might arise, according to the theories of physics described in the 17th and 18th centuries by Sir Isaac Newton, is in two-body orbits; for example, the path of a small planetoid or other object under the influence of the gravitation of the Sun. Parabolic orbits do not occur in nature; simple orbits most commonly resemble hyperbolas or ellipses. The parabolic orbit is the degenerate intermediate case between those two types of ideal orbit. An object following a parabolic orbit would travel at the exact escape velocity of the object it orbits; objects in elliptical or hyperbolic orbits travel at less or greater than escape velocity, respectively. Long-period comets travel close to the Sun's escape velocity while they are moving through the inner solar system, so their paths are close to being parabolic.

Approximations of parabolae are also found in the shape of the main cables on a simple suspension bridge. The curve of the chains of a suspension bridge is always an intermediate curve between a parabola and a catenary, but in practice the curve is generally nearer to a parabola, and in calculations the second-degree parabola is used. Under the influence of a uniform load (such as a horizontal suspended deck), the otherwise catenary-shaped cable is deformed toward a parabola. Unlike an inelastic chain, a freely hanging spring of zero unstressed length takes the shape of a parabola. Suspension-bridge cables are, ideally, purely in tension, without having to carry other, e.g. bending, forces. Similarly, the structures of parabolic arches are purely in compression.

Paraboloids arise in several physical situations as well. The best-known instance is the parabolic reflector, which is a mirror or similar reflective device that concentrates light or other forms of electromagnetic radiation to a common focal point, or conversely, collimates light from a point source at the focus into a parallel beam. The principle of the parabolic reflector may have been discovered in the 3rd century BC by the geometer Archimedes, who, according to a legend of debatable veracity, constructed parabolic mirrors to defend Syracuse against the Roman fleet, by concentrating the sun's rays to set fire to the decks of the Roman ships. The principle was applied to telescopes in the 17th century. Today, paraboloid reflectors can be commonly observed throughout much of the world in microwave and satellite-dish receiving and transmitting antennas. Paraboloids are also observed in the surface of a liquid confined to a container and rotated around the central axis. In this case, the centrifugal force causes the liquid to climb the walls of the container, forming a parabolic surface. This is the principle behind the liquid mirror telescope. Aircraft used to create a weightless state for purposes of experimentation, such as NASA's "Vomit Comet," follow a vertically parabolic trajectory for brief periods in order to trace the course of an object in free fall, which produces the same effect as zero gravity for most purposes.

- A bouncing ball captured with a stroboscopic flash at 25 images per second.
Note that the ball becomes significantly non-spherical after each bounce, especially after the first. That, along with spin and air resistance, causes the curve swept out to deviate slightly from the expected perfect parabola.
- The path (in red) of Comet Kohoutek as it passed through the inner solar system, showing its nearly parabolic shape. The blue orbit is the Earth's.
- Parabolic shape formed by a liquid surface under rotation. Two liquids of different densities completely fill a narrow space between two sheets of transparent plastic. The gap between the sheets is closed at the bottom, sides and top. The whole assembly is rotating around a vertical axis passing through the centre. (See Rotating furnace.)
- Parabolic microphone with optically transparent plastic reflector, used to overhear referee conversations at an American college football game.
- Edison's searchlight, mounted on a cart. The light had a parabolic reflector.
- Physicist Stephen Hawking in an aircraft flying a parabolic trajectory to produce zero gravity.

Generalizations
In algebraic geometry, the parabola is generalized by the rational normal curves, which have coordinates (x, x², x³, ..., xⁿ); the standard parabola is the case n = 2, and the case n = 3 is known as the twisted cubic. A further generalization is given by the Veronese variety, when there is more than one input variable. In the theory of quadratic forms, the parabola is the graph of the quadratic form x² (or other scalings), while the elliptic paraboloid is the graph of the positive-definite quadratic form x² + y² (or scalings) and the hyperbolic paraboloid is the graph of the indefinite quadratic form x² − y². Generalizations to more variables yield further such objects. The curves y = x^p for other values of p are traditionally referred to as the higher parabolas, and were originally treated implicitly, in the form x^p = ky^q for p and q both positive integers, in which form they are seen to be algebraic curves. These correspond to the explicit formula y = x^(p/q) for a positive fractional power of x. Negative fractional powers correspond to the implicit equation x^p y^q = k and are traditionally referred to as higher hyperbolas. Analytically, x can also be raised to an irrational power (for positive values of x); the analytic properties are analogous to when x is raised to rational powers, but the resulting curve is no longer algebraic, and cannot be analyzed via algebraic geometry.

See also
- Parabolic dome
- Parabolic reflector
- Parabolic partial differential equation
- Quadratic equation
- Quadratic function
- Rotating furnace, paraboloids produced by rotation
- Universal parabolic constant

Notes and references
- The only way to draw a straight line on the surface of a circular cone is to make it pass through the apex of the cone, where it will intersect the cone's axis. The line and the axis must therefore be coplanar.
- Apollonius' Derivation of the Parabola at Convergence
- Wilson, Ray N. (2004). Reflecting Telescope Optics: Basic design theory and its historical development (2nd ed.). Springer. p. 3. ISBN 3-540-40106-7. Extract of page 3.
- Stargazer, p. 115.
- Stargazer, pp. 123 and 132.
- Fitzpatrick, Richard (July 14, 2007). "Spherical Mirrors". Electromagnetism and Optics, lectures. University of Texas at Austin. Paraxial Optics. Retrieved October 5, 2011.
- Lawrence, J. Dennis (1972). A Catalog of Special Plane Curves. Dover Publ.
- In the diagram, the axis is not exactly vertical. This is the result of a technical problem that occurs when a 3-dimensional model is converted into a 2-dimensional image.
Readers should imagine the cone rotated slightly clockwise, so the axis, AV, is vertical. - Downs, J. W., Practical Conic Sections, Dover Publ., 2003. - Dialogue Concerning Two New Sciences (1638) (The Motion of Projectiles: Theorem 1); see - However, this parabolic shape, as Newton recognized, is only an approximation of the actual elliptical shape of the trajectory, and is obtained by assuming that the gravitational force is constant (not pointing toward the center of the earth) in the area of interest. Often, this difference is negligible, and leads to a simpler formula for tracking motion. - Troyano, Leonardo Fernández (2003). Bridge engineering: a global perspective. Thomas Telford. p. 536. ISBN 0-7277-3215-3., Chapter 8 page 536 - Drewry, Charles Stewart (1832). A memoir of suspension bridges. Oxford University. p. 159., Extract of page 159 - Middleton, W. E. Knowles (December 1961). "Archimedes, Kircher, Buffon, and the Burning-Mirrors". Isis (Published by: The University of Chicago Press on behalf of The History of Science Society) 52 (4): 533–543. doi:10.1086/349498. JSTOR 228646. - Lockwood, E. H. (1961): A Book of Curves, Cambridge University Press |Wikimedia Commons has media related to: Parabolas| |Wikisource has the text of the 1911 Encyclopædia Britannica article Parabola.| - Hazewinkel, Michiel, ed. (2001), "Parabola", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4 - Weisstein, Eric W., "Parabola", MathWorld. - Interactive parabola-drag focus, see axis of symmetry, directrix, standard and vertex forms - Archimedes Triangle and Squaring of Parabola at cut-the-knot - Two Tangents to Parabola at cut-the-knot - Parabola As Envelope of Straight Lines at cut-the-knot - Parabolic Mirror at cut-the-knot - Three Parabola Tangents at cut-the-knot - Module for the Tangent Parabola - Focal Properties of Parabola at cut-the-knot - Parabola As Envelope II at cut-the-knot - The similarity of parabola at Dynamic Geometry Sketches, interactive dynamic geometry sketch. - a method of drawing a parabola with string and tacks
http://en.wikipedia.org/wiki/Parabola
13
104
A practical superlens, super lens or perfect lens, could significantly advance the field of optics and optical engineering. The principles governing the behavior of the superlens reveals resolution capabilities that go substantially beyond ordinary microscopes. As Ernst Abbe reported in 1873, the lens of a camera or microscope is incapable of capturing some very fine details of any given image. The super lens, on the other hand, is intended to capture these fine details. Consequently, conventional lens limitation has inhibited progress in certain areas of the biological sciences. This is because a virus or DNA molecule is out of visual range with the highest powered microscopes. Also, this limitation inhibits seeing the minute processes of cellular proteins moving alongside microtubules of a living cell in their natural environments. Additionally, computer chips and the interrelated microelectronics are manufactured to smaller and smaller scales. This requires specialized optical equipment, which is also limited because these use the conventional lens. Hence, the principles governing a super lens show that it has potential for imaging a DNA molecule and cellular protein processes, or aiding in the manufacture of even smaller computer chips and microelectronics. Furthermore, conventional lenses capture only the propagating light waves. These are waves that travel from a light source or an object to a lens, or the human eye. This can alternatively be studied as the far field. In contrast, the superlens, or perfect lens, captures propagating light waves and waves that stay on top of the surface of an object, which, alternatively, can be studied as both the far field and the near field. In other words, a superlens, super lens or perfect lens is a lens which uses metamaterials to go beyond the diffraction limit. The diffraction limit is an inherent limitation in conventional optical devices or lenses. In 2000, a type of lens was proposed that consisted of a metamaterial that compensates for wave decay and reconstructs images in the near field. In addition, both propagating and evanescent waves contribute to the resolution of the image. Theory and simulations show that the superlens and hyperlens can work, but engineering obstacles need to be overcome. An image of an object can be defined as a tangible or visible representation of the features of that object. A requirement for image formation is interaction with fields of electromagnetic radiation. Furthermore, the level of feature detail, or image resolution, is limited to a length of a wave of radiation. For example, with optical microscopy, image production and resolution depends on the length of a wave of visible light. However, with a superlens, this limitation may be removed, and a new class of image generated. Electron beam lithography can overcome this resolution limit. Optical microscopy, on the other hand cannot, being limited to some value just above 200 nanometers. However, new technologies combined with optical microscopy are beginning to allow for increased feature resolution (see sections below). One definition of being constrained by the resolution barrier, is a resolution cut off at half the wavelength of light. The visible spectrum has a range that extends from 390 nanometers to 750 nanometers. Green light, half way in between, is around 500 nanometers. Microscopy takes into account parameters such as lens aperture, distance from the object to the lens, and the refractive index of the observed material. 
This combination defines the resolution cutoff, or Microscopy's optical limit, which tabulates to 200 nanometers. Therefore, conventional lenses, which literally construct an image of an object by using "ordinary" light waves, discard information that produce very fine, and minuscule details of the object that are contained in evanescent waves. These dimensions are less than 200 nanometers. For this reason, conventional optical systems, such as microscopes, have been unable to accurately image very small, nanometer-sized structures or nanometer-sized organisms in vivo, such as individual viruses, or DNA molecules. The limitations of standard optical microscopy (bright field microscopy) lie in three areas: - The technique can only image dark or strongly refracting objects effectively. - Diffraction limits the object, or cell's, resolution to approximately 200 nanometers. - Out of focus light from points outside the focal plane reduces image clarity. Live biological cells in particular generally lack sufficient contrast to be studied successfully, because the internal structures of the cell are colorless and transparent. The most common way to increase contrast is to stain the different structures with selective dyes, but this involves killing and fixing the sample. Staining may also introduce artifacts, apparent structural details that are caused by the processing of the specimen and are thus not a legitimate feature of the specimen. The conventional glass lens is pervasive throughout our society and in the sciences. It is one of the fundamental tools of optics. However, the wavelength of light can be analogous to the width of a pencil used to draw the ordinary images. The limit becomes noticeable, for example, when the laser used in a digital video system can only detect and deliver details from a DVD based on the wavelength of light. The image cannot be rendered any sharper beyond this limitation. When an object emits or reflects light there are two types of electromagnetic radiation associated with this phenomenon. These are the near field radiation and the far field radiation. As implied by its description, the far field escapes beyond the object. It is then easily captured and manipulated by a conventional glass lens. However, useful (nanometer-sized) resolution details are not observed, because they are hidden in the near field. They remain localized, staying much closer to the light emitting object, unable to travel, and unable to be captured by the conventional lens. Controlling the near field radiation, for high resolution, can be accomplished with a new class of materials not found in nature. These are unlike familiar solids, such as crystals, which derive their properties from atomic and molecular units. The new material class, termed metamaterials, obtains its properties from its artificially larger structure. This has resulted in novel properties, and novel responses, which allow for details of images that surpass the limitations imposed by the wavelength of light. This has led to the desire to view live biological cell interactions in a real time, natural environment, and the need for subwavelength imaging. Subwavelength imaging can be defined as optical microscopy with the ability to see details of an object or organism below the wavelength of visible light (see discussion in the above sections). In other words, to have capability to observe, in real time, below 200 nanometers. 
Optical microscopy is a non-invasive technique and technology because everyday light is the transmission medium. Imaging below the optical limit in optical microscopy (subwavelength) can be engineered for the cellular level, and nanometer level in principle. For example, in 2007 a technique was demonstrated where a metamaterials-based lens coupled with a conventional optical lens could manipulate visible light to see (nanoscale) patterns that were too small to be observed with an ordinary optical microscope. This has potential applications not only for observing a whole living cell, or for observing cellular processes, such as how proteins and fats move in and out of cells. In the technology domain, it could be used to improve the first steps of photolithography and nanolithography, essential for manufacturing ever smaller computer chips. Focusing at subwavelength has become a unique imaging technique which allows visualization of features on the viewed object which are smaller than the wavelength of the photons in use. A photon is the minimum unit of light (see article). While previously thought to be physically impossible, subwavelength imaging has been made possible through the development of metamaterials. This is generally accomplished using a layer of metal such as gold or silver a few atoms thick, which acts as a superlens, or by means of 1D and 2D photonic crystals. There is a subtle interplay between propagating waves, evanescent waves, near field imaging and far field imaging discussed in the sections below. Early subwavelength imaging Metamaterial lenses (Superlens) are able to compensate for the exponential evanescent wave decay via negative refractive index, and in essence reconstruct the image. Prior to metamaterials, proposals were advanced in the 1970s to avoid this evanescent decay. For example, in 1974 proposals for two-dimensional, fabrication techniques were presented. These proposals included contact imaging to create a pattern in relief, photolithography, electron lithography, X-ray lithography, or ion bombardment, on an appropriate planar substrate. The shared technological goals of the metamaterial lens and the variety of lithography aim to optically resolve features having dimensions much smaller than that of the vacuum wavelength of the exposing light. In 1981 two different techniques of contact imaging of planar (flat) submicroscopic metal patterns with blue light (400 nm) were demonstrated. One demonstration resulted in an image resolution of 100 nm and the other a resolution of 50 to 70 nm. Since at least 1998 near field optical lithography was designed to create nanometer-scale features. Research on this technology continued as the first experimentally demonstrated negative index metamaterial came into existence in 2000–2001. The effectiveness of electron-beam lithography was also being researched at the beginning of the new millennium for nanometer-scale applications. Imprint lithography was shown to have desirable advantages for nanometer-scaled research and technology. Advanced deep UV photolithography can now offer sub-100 nm resolution, yet the minimum feature size and spacing between patterns are determined by the diffraction limit of light. Its derivative technologies such as evanescent near-field lithography, near-field interference lithography, and phase-shifting mask lithography were developed to overcome the diffraction limit. 
The first superlens (2004) with a negative refractive index provided resolution three times better than the diffraction limit and was demonstrated at microwave frequencies. In 2005, the first near field superlens was demonstrated by N.Fang et al., but the lens did not rely on negative refraction. Instead, a thin silver film was used to enhance the evanescent modes through surface plasmon coupling. Almost at the same time Melville and Blaikie succeeded with a near field superlens. Other groups followed. Two developments in superlens research were reported in 2008. In the second case, a metamaterial was formed from silver nanowires which were electrochemically deposited in porous aluminium oxide. The material exhibited negative refraction. The superlens has not yet been demonstrated at visible or near-infrared frequencies (Nielsen, R. B.; 2010). Furthermore as dispersive materials, these are limited to functioning at a single wavelength. Proposed solutions are metal–dielectric composites (MDCs) and multilayer lens structures. The multi-layer superlens appears to have better subwavelength resolution than the single layer superlens. Losses are less of a concern with the multi-layer system, but so far it appears to be impractical because of impedance mis-match. When the world is observed through conventional lenses, the sharpness of the image is determined by and limited to the wavelength of light. Around the year 2000, a slab of negative index metamaterial was theorized to create a lens with capabilities beyond conventional (positive index) lenses. Sir John Pendry, a British physicist, proposed that a thin slab of negative refractive metamaterial might overcome known problems with common lenses to achieve a "perfect" lens that would focus the entire spectrum, both the propagating as well as the evanescent spectra. A slab of silver was proposed as the metamaterial. As light moves away (propagates) from the source, it acquires an arbitrary phase. Through a conventional lens the phase remains consistent, but the evanescent waves decay exponentially. In the flat metamaterial DNG slab, normally decaying evanescent waves are contrarily amplified. Furthermore, as the evanescent waves are now amplified, the phase is reversed. Therefore, a type of lens was proposed, consisting of a metal film metamaterial. When illuminated near its plasma frequency, the lens could be used for superresolution imaging that compensates for wave decay and reconstructs images in the near-field. In addition, both propagating and evanescent waves contribute to the resolution of the image. Pendry suggested that left-handed slabs allow "perfect imaging" if they are completely lossless, impedance matched, and their refractive index is −1 relative to the surrounding medium. Theoretically, this would be a breakthrough in that the optical version resolves objects as minuscule as nanometers across. Pendry predicted that Double negative metamaterials (DNG) with a refractive index of n = −1, can act, at least in principle, as a "perfect lens" allowing imaging resolution which is limited not by the wavelength, but rather by material quality. Other studies concerning the perfect lens Further research demonstrated that Pendry's theory behind the perfect lens was not exactly correct. The analysis of the focusing of the evanescent spectrum (equations 13–21 in reference ) was flawed. 
In addition, this applies to only one (theoretical) instance, and that is one particular medium that is lossless, nondispersive and the constituent parameters are defined as: - ε(ω) / ε0 = µ(ω) / µ0 = −1, which in turn results in a negative refraction of n = −1 However, the final intuitive result of this theory that both the propagating and evanescent waves are focused, resulting in a converging focal point within the slab and another convergence (focal point) beyond the slab turned out to be correct. If the DNG metamaterial medium has a large negative index or becomes lossy or dispersive, Pendry's perfect lens effect cannot be realized. As a result, the perfect lens effect does not exist in general. According to FDTD simulations at the time (2001), the DNG slab acts like a converter from a pulsed cylindrical wave to a pulsed beam. Furthermore, in reality (in practice), a DNG medium must be and is dispersive and lossy, which can have either desirable or undesirable effects, depending on the research or application. Consequently, Pendry's perfect lens effect is inaccessible with any metamaterial designed to be a DNG medium. Another analysis, in 2002, of the perfect lens concept showed it to be in error while using the lossless, dispersionless DNG as the subject. This analysis mathematically demonstrated that subtleties of evanescent waves, restriction to a finite slab and absorption had led to inconsistencies and divergencies that contradict the basic mathematical properties of scattered wave fields. For example, this analysis stated that absorption, which is linked to dispersion, is always present in practice, and absorption tends to transform amplified waves into decaying ones inside this medium (DNG). A third analysis of Pendry's perfect lens concept, published in 2003, used the recent demonstration of negative refraction at microwave frequencies as confirming the viability of the fundamental concept of the perfect lens. In addition, this demonstration was thought to be experimental evidence that a planar DNG metamaterial would refocus the far field radiation of a point source. However, the perfect lens would require significantly different values for permittivity, permeability, and spatial periodicity than the demonstrated negative refractive sample. This study agrees that any deviation from conditions where ε = µ = −1 results in the normal, conventional, imperfect image that degrades exponentially i.e., the diffraction limit. The perfect lens solution in the absence of losses is again, not practical, and can lead to paradoxical interpretations. It was determined that although resonant surface plasmons are undesirable for imaging, these turn out to be essential for recovery of decaying evanescent waves. This analysis discovered that metamaterial periodicity has a significant effect on the recovery of types of evanescent components. In addition, achieving subwavelength resolution is possible with current technologies. Negative refractive indices have been demonstrated in structured metamaterials. Such materials can be engineered to have tunable material parameters, and so achieve the optimal conditions. Losses can be minimized in structures utilizing superconducting elements. Furthermore, consideration of alternate structures may lead to configurations of left-handed materials that can achieve subwavelength focusing. Such structures were being studied at the time. 
Near-field imaging with magnetic wires Pendry's theoretical lens was designed to focus both propagating waves and the near-field evanescent waves. From permittivity "ε" and magnetic permeability "µ" an index of refraction "n" is derived. The index of refraction determines how light is bent on traversing from one material to another. In 2003, it was suggested that a metamaterial constructed with alternating, parallel, layers of n = −1 materials and n = +1 materials, would be a more effective design for a metamaterial lens. It is an effective medium made up of a multi-layer stack, which exhibits birefringence, n2 = ∞, nx = 0. The effective refractive indices are then perpendicular and parallel, respectively. Like a conventional lens, the z-direction is along the axis of the roll. The resonant frequency (w0) – close to 21.3 MHz – is determined by the construction of the roll. Damping is achieved by the inherent resistance of the layers and the lossy part of permittivity. The details of construction are found in ref. Simply put, as the field pattern is transferred from the input to the output face of a slab, so the image information is transported across each layer. This was experimentally demonstrated. To test the two-dimensional imaging performance of the material, an antenna was constructed from a pair of anti-parallel wires in the shape of the letter M. This generated a line of magnetic flux, so providing a characteristic field pattern for imaging. It was placed horizontally, and the material, consisting of 271 Swiss rolls tuned to 21.5 MHz, was positioned on top of it. The material does indeed act as an image transfer device for the magnetic field. The shape of the antenna is faithfully reproduced in the output plane, both in the distribution of the peak intensity, and in the “valleys” that bound the M. A consistent characteristic of the very near (evanescent) field is that the electric and magnetic fields are largely decoupled. This allows for nearly independent manipulation of the electric field with the permittivity and the magnetic field with the permeability. Furthermore, this is highly anisotropic system. Therefore, the transverse (perpendicular) components of the EM field which radiate the material, that is the wavevector components kx and ky, are decoupled from the longitudinal component kz. So, the field pattern should be transferred from the input to the output face of a slab of material without degradation of the image information. Optical super lens with silver metamaterial In 2003, a group of researchers showed that optical evanescent waves would be enhanced as they passed through a silver metamaterial lens. This was referred to as a diffraction-free lens. Although a coherent, high-resolution, image was not intended, nor achieved, regeneration of the evanescent field was experimentally demonstrated. By 2003 it was known for decades that evanescent waves could be enhanced by producing excited states at the interface surfaces. However, the use of surface plasmons to reconstruct evanescent components was not tried until Pendry's recent proposal (see "Perfect lens" above). By studying films of varying thickness it has been noted that a rapidly growing transmission coefficient occurs, under the appropriate conditions. This demonstration provided direct evidence that the foundation of superlensing is solid, and suggested the path that will enable the observation of superlensing at optical wavelengths. 
In 2005, a coherent, high-resolution, image was produced (based on the 2003 results). A thinner slab of silver (35 nm) was better for sub–diffraction-limited imaging, which results in one-sixth of the illumination wavelength. This type of lens was used to compensate for wave decay and reconstruct images in the near-field. Prior attempts to create a working superlens used a slab of silver that was too thick. Objects were imaged as small as 40 nm across. In 2005 the imaging resolution limit for optical microscopes was at about one tenth the diameter of a red blood cell. With the silver superlens this results in a resolution of one hundredth of the diameter of a red blood cell. Conventional lenses, whether man-made or natural, create images by capturing the propagating light waves all objects emit and then bending them. The angle of the bend is determined by the index of refraction and has always been positive until the fabrication of artificial negative index materials. Objects also emit evanescent waves that carry details of the object, but are unobtainable with conventional optics. Such evanescent waves decay exponentially and thus never become part of the image resolution, an optics threshold known as the diffraction limit. Breaking this diffraction limit, and capturing evanescent waves are critical to the creation of a 100-percent perfect representation of an object. In addition, conventional optical materials suffer a diffraction limit because only the propagating components are transmitted (by the optical material) from a light source. The non-propagating components, the evanescent waves, are not transmitted. Moreover, lenses that improve image resolution by increasing the index of refraction are limited by the availability of high-index materials, and point by point subwavelength imaging of electron microscopy also has limitations when compared to the potential of a working superlens. Scanning electron and atomic force microscopes are now used to capture detail down to a few nanometers. However, such microscopes create images by scanning objects point by point, which means they are typically limited to non-living samples, and image capture times can take up to several minutes. With current optical microscopes, scientists can only make out relatively large structures within a cell, such as its nucleus and mitochondria. With a superlens, optical microscopes could one day reveal the movements of individual proteins traveling along the microtubules that make up a cell's skeleton, the researchers said. Optical microscopes can capture an entire frame with a single snapshot in a fraction of a second With superlenses this opens up nanoscale imaging to living materials, which can help biologists better understand cell structure and function in real time. Advances of magnetic coupling in the THz and infrared regime provided the realization of a possible metamaterial superlens. However, in the near field, the electric and magnetic responses of materials are decoupled. Therefore, for transverse magnetic (TM) waves, only the permittivity needed to be considered. Noble metals, then become natural selections for superlensing because negative permittivity is easily achieved. By designing the thin metal slab so that the surface current oscillations (the surface plasmons) match the evanescent waves from the object, the superlens is able to substantially enhance the amplitude of the field. Superlensing results from the enhancement of evanescent waves by surface plasmons. 
The key to the superlens is its ability to significantly enhance and recover the evanescent waves that carry information at very small scales. This enables imaging well below the diffraction limit. No lens is yet able to completely reconstitute all the evanescent waves emitted by an object, so the goal of a 100-percent perfect image will persist. However, many scientists believe that a true perfect lens is not possible because there will always be some energy absorption loss as the waves pass through any known material. In comparison the superlens image is substantially better than the one created without the silver superlens. 50-nm flat silver layer In February 2004, an electromagnetic radiation focusing system, based on a negative index metamaterial plate, accomplished subwavelength imaging in the microwave domain. This showed that obtaining separated images at much less than the wavelength of light is possible. Also, in 2004, a silver layer was used for sub-micrometre near-field imaging. Super resolution was not achieved, but this was intended. The silver layer was too thick to allow significant enhancements of evanescent field components. In early 2005, feature resolution was achieved with a different silver layer. Though this was not an actual image, it was intended. Dense feature resolution down to 250 nm was produced in a 50 nm thick photoresist using illumination from a mercury lamp. Using simulations (FDTD), the study noted that resolution improvements could be expected for imaging through silver lenses, rather than another method of near field imaging. Building on this prior research, super resolution was achieved at optical frequencies using a 50 nm flat silver layer. The capability of resolving an image beyond the diffraction limit, for far-field imaging, is defined here as superresolution. The image fidelity is much improved over earlier results of the previous experimental lens stack. Imaging of sub-micrometre features has been greatly improved by using thinner silver and spacer layers, and by reducing the surface roughness of the lens stack. The ability of the silver lenses to image the gratings has been used as the ultimate resolution test, as there is a concrete limit for the ability of a conventional (far field) lens to image a periodic object – in this case the image is a diffraction grating. For normal-incidence illumination the minimum spatial period that can be resolved with wavelength λ through a medium with refractive index n is λ/n. Zero contrast would therefore be expected in any (conventional) far-field image below this limit, no matter how good the imaging resist might be. The (super) lens stack here results in a computational result of a diffraction-limited resolution of 243 nm. Gratings with periods from 500 nm down to 170 nm are imaged, with the depth of the modulation in the resist reducing as the grating period reduces. All of the gratings with periods above the diffraction limit (243 nm) are well resolved. The key results of this experiment are super-imaging of the sub-diffraction limit for 200 nm and 170 nm periods. In both cases the gratings are resolved, even though the contrast is diminished, but this gives experimental confirmation of Pendry's superlensing proposal. Negative index GRIN lenses Gradient Index (GRIN) – The larger range of material response available in metamaterials should lead to improved GRIN lens design. 
In particular, since the permittivity and permeability of a metamaterial can be adjusted independently, metamaterial GRIN lenses can presumably be better matched to free space. The GRIN lens is constructed by using a slab of NIM with a variable index of refraction in the y direction, perpendicular to the direction of propagation z. Transmission properties of an optical far-field superlens Also in 2005 a group proposed a theoretical way to overcome the near-field limitation using a new device termed a far-field superlens (FSL), which is a properly designed periodically corrugated metallic slab-based superlens. Metamaterial crystal lens An idea for a far-field scanless optical microscopy, with a resolution below diffraction limit, was investigated by exploiting the special dispersion characteristics of an anisotropic metamaterial crystal. Metamaterial lens goes from near field to far field Imaging is experimentally demonstrated in the far field, taking the next step after near-field experiments. The key element is termed as a far-field superlens (FSL) which consists of a conventional superlens and a nanoscale coupler. Focusing beyond the diffraction limit with far-field time reversal An approach is presented for subwavelength focusing of microwaves using both a time-reversal mirror placed in the far field and a random distribution of scatterers placed in the near field of the focusing point. Once capability for near-field imaging was demonstrated, the next step was to project a near-field image into the far-field. This concept, including technique and materials, is dubbed "hyperlens". The capability of a metamaterial-hyperlens for sub-diffraction-limited imaging is shown below. Sub-diffraction imaging in the far field With conventional optical lenses, the far field is a limit that is too distant for evanescent waves to arrive intact. When imaging an object, this limits the optical resolution of lenses to the order of the wavelength of light These non-propagating waves carry detailed information in the form of high spatial resolution, and overcome limitations. Therefore, projecting image details, normally limited by diffraction into the far field does require recovery of the evanescent waves. In essence steps leading up to this investigation and demonstration was the employment of an anisotropic metamaterial with a hyperbolic dispersion. The effect was such that ordinary evanescent waves propagate along the radial direction of the layered metamaterial. On a microscopic level the large spatial frequency waves propagate through coupled surface plasmon excitations between the metallic layers. In 2007, just such an anisotropic metamaterial was employed as a magnifying optical hyperlens. The hyperlens consisted of a curved periodic stack of thin silver and alumina (at 35 nanometers thick) deposited on a half-cylindrical cavity, and fabricated on a quartz substrate. The radial and tangential permittivities have different signs. Upon illumination, the scattered evanescent field from the object enters the anisotropic medium and propagates along the radial direction. Combined with another effect of the metamaterial, a magnified image at the outer diffraction limit-boundary of the hyperlens occurs. Once the magnified feature is larger than (beyond) the diffraction limit, it can then be imaged with a conventional optical microscope, thus demonstrating magnification and projection of a sub-diffraction-limited image into the far field. 
The hyperlens magnifies the object by transforming the scattered evanescent waves into propagating waves in the anisotropic medium, projecting a spatial resolution high-resolution image into the far field. This type of metamaterials-based lens, paired with a conventional optical lens is therefore able to reveal patterns too small to be discerned with an ordinary optical microscope. In one experiment, the lens was able to distinguish two 35-nanometer lines etched 150 nanometers apart. Without the metamaterials, the microscope showed only one thick line. (See diagram to the right). In a control experiment, the line pair object was imaged without the hyperlens. The line pair could not be resolved because of the diffraction limit of the (optical) aperture was limited to 260 nm. (See panels B and C of the figure). Because the hyperlens supports the propagation of a very broad spectrum of wave vectors, it can magnify arbitrary objects with sub-diffraction-limited resolution. The recorded image of the letters "ON" shows the fine features of the object. Although this work appears to be limited by being only a cylindrical hyperlens, the next step is to design a spherical lens. That lens will exhibit three-dimensional capability. Near-field optical microscopy uses a tip to scan an object. In contrast, this optical hyperlens magnifies an image that is sub-diffraction-limited. The magnified sub-diffraction image is then projected into the far field. The optical hyperlens shows a notable potential for applications, such as real-time biomolecular imaging and nanolithography. Such a lens could be used to watch cellular processes that have been impossible to see. Conversely, it could be used to project an image with extremely fine features onto a photoresist as a first step in photolithography, a process used to make computer chips. The hyperlens also has applications for DVD technology. In 2010, spherical hyperlens for two dimensional imaging at visible frequencies is demonstrated experimentally. The spherical hyperlens based on silver and titanium oxide alternating layers has strong anisotropic hyperbolic dispersion allowing super-resolution with visible spectrum. The resolution is 160 nm at visible spectrum. It will enable biological imaging such as cell and DNA with a strong benefit of magnifying sub-diffraction resolution into far-field. Plasmon assisted microscopy. (See Near-field scanning optical microscope). Super-imaging in the visible frequency range Continual improvements in optical microscopy are needed to keep up with the progress in nanotechnology and microbiology. Advancement in spatial resolution is key. Conventional optical microscopy is limited by a diffraction limit which is on the order of 200 nanometers (wavelength). This means that viruses, proteins, DNA molecules and many other samples are hard to observe with a regular (optical) microscope. The lens previously demonstrated with negative refractive index material, a thin planar superlens, does not provide magnification beyond the diffraction limit of conventional microscopes. Therefore, images smaller than the conventional diffraction limit will still be unavailable. Another approach achieving super-resolution at visible wavelength is recently developed spherical hyperlens based on silver and titanium oxide alternating layers. It has strong anisotropic hyperbolic dispersion allowing super-resolution with converting evanescent waves into propagating waves. 
This method is non-fluorescence based super-resolution imaging, which results in real-time imaging without any reconstruction of images and information. Super resolution far-field microscopy techniques By 2008 the diffraction limit has been surpassed and lateral imaging resolutions of 20 to 50 nm have been achieved by several "super-resolution" far-field microscopy techniques, including stimulated emission depletion (STED) and its related RESOLFT (reversible saturable optically linear fluorescent transitions) microscopy; saturated structured illumination microscopy (SSIM) ; stochastic optical reconstruction microscopy (STORM); photoactivated localization microscopy (PALM); and other methods using similar principles. Cylindrical superlens via coordinate transformation This began with a proposal by Sir John Pendry, in 2003. Magnifying the image required a new design concept in which the surface of the negatively refracting lens is curved. One cylinder touches another cylinder, resulting in a curved cylindrical lens which reproduced the contents of the smaller cylinder in magnified but undistorted form outside the larger cylinder. Coordinate transformations are required to curve the original perfect lens into the cylindrical, lens structure. In 2007, a superlens utilizing coordinate transformation was again the subject. However, in addition to image transfer other useful operations were discussed; translation, rotation, mirroring and inversion as well as the superlens effect. Furthermore, elements that perform magnification are described, which are free from geometric aberrations, on both the input and output sides while utilizing free space sourcing (rather than waveguide). These magnifying elements also operate in the near and far field, transferring the image from near field to far field. Nano-optics with metamaterials Nanohole array subwavelength imaging Nanohole array as a lens A recent prior work (2007) demonstrated that a quasi-periodic array of nanoholes, in a metal screen, were able to focus the optical energy of a plane wave to form subwavelength spots (hot spots). The distances for the spots was a few tens of wavelengths on the other side of the array, or, in other words, opposite the side of the incident plane wave. The quasi-periodic array of nanoholes functioned as a light concentrator. In June 2008, this was followed by the demonstrated capability of an array of quasi-crystal nanoholes in a metal screen. More than concentrating hot spots, an image of the point source is displayed a few tens of wavelengths from the array, on the other side of the array (the image plane). Also this type of array exhibited a 1 to 1 linear displacement, – from the location of the point source to its respective, parallel, location on the image plane. In other words from x to x + δx. For example, other point sources were similarly displaced from x' to x' + δx', from x^ to x^ + δx^, and from x^^ to x^^ + δx^^, and so on. Instead of functioning as a light concentrator, this performs the function of conventional lens imaging with a 1 to 1 correspondence, albeit with a point source. However, resolution of more complicated structures can be achieved as constructions of multiple point sources. The fine details, and brighter image, that are normally associated with the high numerical apertures of conventional lenses can be reliably produced. Notable applications for this technology arise when conventional optics is not suitable for the task at hand. 
For example, this technology is better suited for X-ray imaging, or nano-optical circuits, and so forth. The metamaterial nanolens was constructed of millions of nanowires at 20 nanometers in diameter. These were precisely aligned and a packaged configuration was applied. The lens is able to depict a clear, high-resolution image of nano-sized objects because it uses both normal propagating EM radiation, and evanescent waves to construct the image. Super-resolution imaging was demonstrated over a distance of 6 times the wavelength (λ), in the far-field, with a resolution of at least λ/4. This is a significant improvement over previous research and demonstration of other near field and far field imaging, including nanohole arrays discussed below. Light transmission properties of holey metal films 2009-12. The light transmission properties of holey metal films in the metamaterial limit, where the unit length of the periodic structures is much smaller than the operating wavelength, are analyzed theoretically. Transporting an Image through a subwavelength hole Theoretically it appears possible to transport a complex electromagnetic image through a tiny subwavelength hole with diameter considerably smaller than the diameter of the image, without losing the subwavelength details. Nanoparticle imaging – quantum dots When observing the complex processes in a living cell, significant processes (changes) or details are easy to overlook. This can more easily occur when watching changes that take a long time to unfold and require high-spatial-resolution imaging. However, recent research offers a solution to scrutinize activities that occur over hours or even days inside cells, potentially solving many of the mysteries associated with molecular-scale events occurring in these tiny organisms. A joint research team, working at the National Institute of Standards and Technology (NIST) and the National Institute of Allergy and Infectious Diseases (NIAID), has discovered a method of using nanoparticles to illuminate the cellular interior to reveal these slow processes. Nanoparticles, thousands of times smaller than a cell, have a variety of applications. One type of nanoparticle called a quantum dot glows when exposed to light. These semiconductor particles can be coated with organic materials, which are tailored to be attracted to specific proteins within the part of a cell a scientist wishes to examine. Notably, quantum dots last longer than many organic dyes and fluorescent proteins that were previously used to illuminate the interiors of cells. They also have the advantage of monitoring changes in cellular processes while most high-resolution techniques like electron microscopy only provide images of cellular processes frozen at one moment. Using quantum dots, cellular processes involving the dynamic motions of proteins, are observable (elucidated). The research focused primarily on characterizing quantum dot properties, contrasting them with other imaging techniques. In one example, quantum dots were designed to target a specific type of human red blood cell protein that forms part of a network structure in the cell's inner membrane. When these proteins cluster together in a healthy cell, the network provides mechanical flexibility to the cell so it can squeeze through narrow capillaries and other tight spaces. But when the cell gets infected with the malaria parasite, the structure of the network protein changes. 
Because the clustering mechanism is not well understood, it was decided to examine it with the quantum dots. If a technique could be developed to visualize the clustering, then the progress of a malaria infection could be understood, which has several distinct developmental stages. Research efforts revealed that as the membrane proteins bunch up, the quantum dots attached to them are induced to cluster themselves and glow more brightly, permitting real time observation as the clustering of proteins progresses. More broadly, the research discovered that when quantum dots attach themselves to other nanomaterials, the dots' optical properties change in unique ways in each case. Furthermore, evidence was discovered that quantum dot optical properties are altered as the nanoscale environment changes, offering greater possibility of using quantum dots to sense the local biochemical environment inside cells. Some concerns remain over toxicity and other properties. However, the overall findings indicate that quantum dots could be a valuable tool to investigate dynamic cellular processes. The abstract from the related published research paper states (in part): Results are presented regarding the dynamic fluorescence properties of bioconjugated nanocrystals or quantum dots (QDs) in different chemical and physical environments. A variety of QD samples was prepared and compared: isolated individual QDs, QD aggregates, and QDs conjugated to other nanoscale materials... A technical view of the original problem The original deficiency related to the perfect lens is elucidated: The general expansion of an EM field emanating from a source consists of both propagating waves and near-field or evanescent waves. An example of a 2-D line source with an electric field which has S-polarization will have plane waves consisting of propagating and evanescent components, which advance parallel to the interface. As both the propagating and the smaller evanescent waves advance in a direction parallel to the medium interface, evanescent waves decay in the direction of propagation. Ordinary (positive index) optical elements can refocus the propagating components, but the exponentially decaying inhomogeneous components are always lost, leading to the diffraction limit for focusing to an image. A superlens is a lens which is capable of subwavelength imaging, allowing for magnification of near field rays. Conventional lenses have a resolution on the order of one wavelength due to the so-called diffraction limit. This limit hinders imaging very small objects, such as individual atoms, which are much smaller than the wavelength of visible light. A superlens is able to beat the diffraction limit. A very well known superlens is the perfect lens described by John Pendry, which uses a slab of material with a negative index of refraction as a flat lens. In theory, Pendry's perfect lens is capable of perfect focusing — meaning that it can perfectly reproduce the electromagnetic field of the source plane at the image plane. The diffraction limit The performance limitation of conventional lenses is due to the diffraction limit. Following Pendry (Pendry, 2000), the diffraction limit can be understood as follows. Consider an object and a lens placed along the z-axis so the rays from the object are traveling in the +z direction. 
The field emanating from the object can be written in terms of its angular spectrum method, as a superposition of plane waves: where is a function of as: Only the positive square root is taken as the energy is going in the +z direction. All of the components of the angular spectrum of the image for which is real are transmitted and re-focused by an ordinary lens. However, if then becomes imaginary, and the wave is an evanescent wave whose amplitude decays as the wave propagates along the z-axis. This results in the loss of the high angular frequency components of the wave, which contain information about the high frequency (small scale) features of the object being imaged. The highest resolution that can be obtained can be expressed in terms of the wavelength: A superlens overcomes the limit. A Pendry-type superlens has an index of n = −1 (ε = −1, µ = −1), and in such a material, transport of energy in the +z direction requires the z-component of the wave vector to have opposite sign: For large angular frequencies, the evanescent wave now grows, so with proper lens thickness, all components of the angular spectrum can be transmitted through the lens undistorted. There are no problems with conservation of energy, as evanescent waves carry none in the direction of growth: the Poynting vector is oriented perpendicularly to the direction of growth. For traveling waves inside a perfect lens, the Poynting vector points in direction opposite to the phase velocity. Negative index of refraction and Pendry's perfect lens Normally when a wave passes through the interface of two materials, the wave appears on the opposite side of the normal. However, if the interface is between a material with a positive index of refraction and another material with a negative index of refraction, the wave will appear on the same side of the normal. John Pendry's perfect lens is a flat material where n = −1. Such a lens allows for near field rays—which normally decay due to the diffraction limit—to focus once within the lens and once outside the lens, allowing for subwavelength imaging. Superlens was believed impossible until John Pendry showed in 2000 that a simple slab of left-handed material would do the job. The experimental realization of such a lens took, however, some more time, because it is not that easy to fabricate metamaterials with both negative permittivity and permeability. Indeed, no such material exists naturally and construction of the required metamaterials is non-trivial. Furthermore, it was shown that the parameters of the material are extremely sensitive (the index must equal −1); small deviations make the subwavelength resolution unobservable. Due to the resonant nature of metamaterials, on which many (proposed) implementations of superlenses depend, metamaterials are highly dispersive. The sensitive nature of the superlens to the material parameters causes superlenses based on metamaterials to have a limited usable frequency range. However, Pendry also suggested that a lens having only one negative parameter would form an approximate superlens, provided that the distances involved are also very small and provided that the source polarization is appropriate. For visible light this is a useful substitute, since engineering metamaterials with a negative permeability at the frequency of visible light is difficult. Metals are then a good alternative as they have negative permittivity (but not negative permeability). 
Pendry suggested using silver due to its relatively low loss at the predicted wavelength of operation (356 nm). In 2005, Pendry's suggestion was finally verified experimentally by two independent groups, both using thin layers of silver illuminated with UV light to produce "photographs" of objects smaller than the wavelength. Negative refraction of visible light was also experimentally verified in an yttrium orthovanadate (YVO4) bicrystal in 2003.
http://en.wikipedia.org/wiki/Superlens
IDENTIFYING CHEMICAL REACTIONS: Matter can combine or break apart to produce new types of matter with very different properties. When this occurs, it is said that matter has undergone a chemical reaction.

EVIDENCE FOR CHEMICAL REACTIONS: Studying chemistry can be just like solving a mystery. To determine whether or not a chemical reaction has occurred, one needs to look for observable clues. If any one of these four things occurs (and more than one can occur at the same instant), a chemical reaction has taken place.

WRITING A CHEMICAL EQUATION: Many chemical reactions are important principally for the energy they release. The popular fuel propane is used for cooking food and heating homes. Propane is one of a group of compounds called hydrocarbons. A hydrocarbon contains carbon and hydrogen atoms (and sometimes oxygen is present within the molecule), and these compounds combine with oxygen in chemical reactions to form carbon dioxide and water. The reaction that occurs when propane is burned can be represented by a word equation.

A chemical equation shows what happens in a reaction. All chemical reactions are composed of reactants (chemicals that are present before the reaction) and products (chemicals that are produced after the reaction takes place). In the chemical equation Na+1 + Cl-1 → NaCl, the Na+1 and the Cl-1 are the reactants and the NaCl is the product. The symbol between the Cl-1 and the NaCl, the arrow (→), is the YIELDS sign. The arrow points toward the products and shows the direction in which the reaction takes place.

+ : Read "plus" or "and." Used between two formulas to indicate reactants combined or products formed.
→ : Read "yields" or "produces." Used to separate reactants (on the left) from products (on the right). The arrow points in the direction of change (we will always point the arrow toward the RIGHT).
(s) : Read "solid." Written after a symbol or formula to indicate that the physical state of the substance is solid.
(l) : Read "liquid." Written after a symbol or formula to indicate that the physical state of the substance is liquid.
(g) : Read "gas." Written after a symbol or formula to indicate that the physical state of the substance is gaseous.
(aq) : Read "aqueous." Written after a symbol or formula to indicate that the substance is dissolved in water.
⇌ : Indicates that the reaction is reversible.
NR : Read "No Reaction." Indicates that the given reactants do not react with each other.

SYMBOLS THAT INDICATE THE STATE: Equations can also be written to indicate the physical state of the reactants and products. In fact, sometimes an equation cannot be fully understood unless this information is shown. The symbols (g), (l), (s), and (aq) indicate whether the substance is a gas, a liquid, a solid, or one dissolved in water. The list above shows the other conventions used in writing chemical equations. While the arrow in an equation shows the direction of change, it seems to imply that reactions occur in only one direction. This is not always the case; under suitable conditions, many chemical reactions can be reversed.

H2 (g) + O2 (g) → H2O (l) + Energy

You have already learned that water can be separated into its elements. That reaction can be written as follows:

H2O (l) + Energy → H2 (g) + O2 (g)

Notice that the second equation is the reverse of the first. A chemical reaction that is reversible can be described by a single equation in which a double arrow (⇌) shows that the reaction is possible in both directions.
The equation says that hydrogen and oxygen combine to form water, releasing energy in the process. It also says that, with the addition of energy to water under suitable conditions, hydrogen and oxygen will be formed.

In a chemical reaction, matter cannot be created nor destroyed. For example, in the reaction H2 + Cl2 → HCl there are 2 atoms of hydrogen and 2 atoms of chlorine on the reactants side, but only one atom of hydrogen and one atom of chlorine on the products side. To fix this problem, and to follow the rule that matter cannot be created nor destroyed, we must balance the equation. To balance the equation, you must have equal numbers of each of the different atoms on both the reactants and the products sides.

To balance equations, there are a few rules that must be followed. First, locate the most complex compound and start balancing each of the different atoms (saving oxygen and hydrogen for last). For example:
1. Start with NaCl. (This is the most complex compound, because HCl has hydrogen in it and we save that for last.)
2. There is one atom of sodium on each side, so move on to the chlorine.
3. There is one atom of chlorine on each side, so move on to the HCl.
4. There is one atom of hydrogen in HCl and two atoms of hydrogen on the reactants side, so put a 1/2 coefficient in front of the H2 so that there is only one atom of hydrogen on each side.
5. Since there can NOT be any fractions in the final answer, multiply all the coefficients by 2 to eliminate the 1/2 coefficient.
6. Check for the lowest common coefficients.

Another way of working this problem is to take inventory of each atom on the reactants side and another inventory of each atom on the products side of the yields sign. Given the following (unbalanced) equation:

KClO3 → KCl + O2

First, take "inventory of each atom" by counting the number of atoms of each element on each side of the yields sign. Second, if you have an even number of one type of element on one side, you must have an even number of that element on the other side. Therefore, look for the even-odd combinations, which in this case is oxygen. For now, the number of potassium atoms and the number of chlorine atoms are equal, so we will not do anything yet to the molecules that contain potassium (K) or chlorine (Cl). However, there are three oxygen atoms in potassium chlorate (KClO3) and two oxygen atoms in oxygen gas (O2). In most cases, begin working with the "odd" number first to get your even number of atoms. So, place a two in front of the potassium chlorate molecule:

2 KClO3 → KCl + O2

and take "inventory" again. Remember that the coefficient in front of a molecule means that there are now that many molecules. Example: 2 KClO3 = KClO3 + KClO3 (two molecules of potassium chlorate). Therefore we must use the coefficient as a multiplier for each element; for example, 2 multiplied by O3 gives six oxygen atoms (2 x 3 oxygen = 6 oxygen). Placing the two in front of the potassium chlorate molecule gives us our even number of oxygen atoms (which we want), but it also changes the number of potassium atoms and chlorine atoms. So balance the potassium and chlorine atoms on each side of the yields sign by placing a two in front of the potassium chloride molecule (KCl):

2 KClO3 → 2 KCl + O2

and take "inventory" again. After taking "inventory" we notice that we need six oxygen atoms on the products side of the yields sign to balance the equation, so we place a three in front of the oxygen gas molecule:

2 KClO3 → 2 KCl + 3 O2

and take "inventory" again.
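The "inventory" bookkeeping above is easy to automate. Below is a minimal Python sketch (the helper names parse_formula and is_balanced are illustrative additions, not part of the lecture, and the parser only handles simple formulas without parentheses) that counts the atoms on each side and confirms that 2 KClO3 → 2 KCl + 3 O2 is balanced while the original, coefficient-free version is not.

```python
import re
from collections import Counter

def parse_formula(formula):
    """Count atoms in a simple formula such as 'KClO3' (no parentheses)."""
    counts = Counter()
    for element, number in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        if element:
            counts[element] += int(number) if number else 1
    return counts

def is_balanced(reactants, products):
    """reactants and products are lists of (coefficient, formula) pairs."""
    def total(side):
        atoms = Counter()
        for coeff, formula in side:
            for element, n in parse_formula(formula).items():
                atoms[element] += coeff * n
        return atoms
    return total(reactants) == total(products)

# Inventory check for 2 KClO3 -> 2 KCl + 3 O2
print(is_balanced([(2, "KClO3")], [(2, "KCl"), (3, "O2")]))  # True
print(is_balanced([(1, "KClO3")], [(1, "KCl"), (1, "O2")]))  # False (unbalanced)
```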
Now that all our atoms are equal on the reactants side as well as on the products side, we are finished. Our balanced equation now reads:

2 KClO3 → 2 KCl + 3 O2

Reading the above equation: two molecules of potassium chlorate yield two molecules of potassium chloride and three molecules of oxygen gas. Remember, not all equations are this easy to balance, and some may require changing the numbers several times, but a few short-cuts help: save oxygen and hydrogen for last, and save any "free-standing" elements for last as well. Free-standing means those elements which appear uncombined in the equation, either alone or as diatomic molecules (H2, N2, O2, F2, Cl2).

Write and balance the equation for the burning, or combustion, of ethylene, C2H4.
- Since ethylene is a hydrocarbon like propane, the equation for the combustion of propane can serve as a model. The unbalanced equation for the combustion of ethylene is:

C2H4 + O2 → CO2 + H2O

- Since both carbon and hydrogen occur only once on each side of the arrow, you can begin with either element. If you start with hydrogen, it can be balanced by placing a 2 in front of the water on the right. Now the number of hydrogen atoms is balanced, but the number of carbon and oxygen atoms is not. Balance the carbon next. Notice that if you change the coefficient of C2H4, you also change the number of hydrogens. Now both the carbon and hydrogen are balanced, but there are six oxygen atoms indicated on the right and only two on the left. Placing a three in front of the oxygen completes the process:

C2H4 + 3 O2 → 2 CO2 + 2 H2O

1. Write and balance the equation for the reaction of sodium and water to produce sodium hydroxide and hydrogen gas.
2. Write and balance the equation for the formation of magnesium nitride from its elements.

Write the balanced equations for each of the following reactions:
1a. Cu + H2O → CuO + H2
1b. Al(NO3)3 + NaOH → Al(OH)3 + NaNO3
1c. KNO3 → KNO2 + O2
1d. Fe + H2SO4 → Fe2(SO4)3 + H2
1e. O2 + CS2 → CO2 + SO2
1f. Mg + N2 → Mg3N2
1g. When copper(II) carbonate is heated, it forms copper(II) oxide and carbon dioxide gas.
1h. Sodium reacts with water to produce sodium hydroxide and hydrogen gas.
1i. Copper combines with sulfur to form copper(I) sulfide.
1j. Silver nitrate reacts with sulfuric acid to produce silver sulfate and nitric acid (HNO3).

BALANCING COMBUSTION REACTIONS: Combustion is an exothermic reaction in which a substance combines with oxygen, forming products in which all elements are combined with oxygen. It is a process we commonly call burning. Usually energy is released in the form of heat and light. The general form of combustion equations for hydrocarbons (CxHy) is:

CxHy + (x + y/4) O2 → x CO2 + (y/2) H2O

Most combustion reactions are the oxidation of a fuel material with oxygen gas. A complete combustion produces carbon dioxide from all the carbon in the fuel, water from the hydrogen in the fuel, and sulfur dioxide from any sulfur in the fuel. Methane burns in air to make carbon dioxide and water:

CH4 + O2 → CO2 + H2O

First, place a two in front of the water to take care of all the hydrogens, and then a two in front of the oxygen:

CH4 + 2 O2 → CO2 + 2 H2O

Anything you have to gather (any atom that comes from two or more sources in the reactants or gets distributed to two or more products) should be considered last.
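The general combustion pattern above can be turned into a small coefficient calculator. This is only a sketch under the assumption of complete combustion of a fuel containing carbon, hydrogen, and optionally oxygen; the function name combustion_coefficients and the optional oxygen argument are my own additions, not part of the lecture. It reproduces the methane result above and the butane and isopropyl-alcohol results worked out in the next paragraphs.

```python
from fractions import Fraction
from math import lcm

def combustion_coefficients(c, h, o=0):
    """Coefficients (fuel, O2, CO2, H2O) for complete combustion of CcHhOo."""
    fuel = Fraction(1)
    co2 = Fraction(c)           # all carbon ends up as CO2
    h2o = Fraction(h, 2)        # all hydrogen ends up as H2O
    o2 = co2 + h2o / 2 - Fraction(o, 2)   # oxygen atoms needed, divided by 2
    coeffs = [fuel, o2, co2, h2o]
    # Scale to the smallest whole-number coefficients.
    factor = lcm(*(x.denominator for x in coeffs))
    return [int(x * factor) for x in coeffs]

print(combustion_coefficients(1, 4))      # methane CH4:            [1, 2, 1, 1]
print(combustion_coefficients(4, 10))     # butane C4H10:           [2, 13, 8, 10]
print(combustion_coefficients(3, 8, 1))   # isopropyl alcohol C3H7OH: [2, 9, 6, 8]
```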
What if the oxygen does not come out right? Let's consider the equation for the burning of butane, C4H10. Insert the coefficients for carbon dioxide and water:

C4H10 + O2 → 4 CO2 + 5 H2O

We now have two oxygens on the left and thirteen oxygens on the right. The real problem is that we must write the oxygen as a diatomic gas. The chemical equation is not any different from an algebraic equation in that you can multiply both sides by the same thing and not change the equation. Multiply both sides by two to get the following:

2 C4H10 + O2 → 8 CO2 + 10 H2O

Now the oxygens are easy to balance. There are twenty-six oxygens on the right, so the coefficient for the oxygen gas on the left must be thirteen. Now it is correctly balanced:

2 C4H10 + 13 O2 → 8 CO2 + 10 H2O

What if you finally balanced the same equation with twice or three times these coefficients, for example 4 C4H10 + 26 O2 → 16 CO2 + 20 H2O or 6 C4H10 + 39 O2 → 24 CO2 + 30 H2O? Either equation is balanced, but not to the LOWEST integers. Algebraically you can divide these equations by two or three, respectively, to get the lowest integer coefficients in front of all of the materials in the equation.

Now that we are complete pyromaniacs, let's try burning isopropyl alcohol, C3H7OH. First take care of the carbon and hydrogen:

C3H7OH + O2 → 3 CO2 + 4 H2O

But again we come up with an oxygen problem. The same process works here. Multiply the whole equation (except the oxygen gas) by two:

2 C3H7OH + O2 → 6 CO2 + 8 H2O

Now the number nine fits in the oxygen coefficient. (Do you understand why? The right side shows twenty oxygen atoms, the two alcohol molecules supply two of them, and the remaining eighteen must come from nine O2 molecules.) The equation is balanced with six carbons, sixteen hydrogens, and twenty oxygens on each side:

2 C3H7OH + 9 O2 → 6 CO2 + 8 H2O

SYNTHESIS REACTIONS, ALSO CALLED COMBINATION, CONSTRUCTION, OR COMPOSITION REACTIONS: The title of this section contains four names for the same type of reaction. Your text may use any of these. I prefer the first of the names and will use "synthesis" where your text may use one of the other words. The hallmark of a synthesis reaction is a single product. A synthesis reaction might be symbolized by:

A + B → AB

Predicting synthesis reactions: What would you expect for a product if aluminum metal reacts with chlorine gas? Based on the observed regularity, the following reaction seems likely:

2 Al + 3 Cl2 → 2 AlCl3

Keep in mind that this is not a prediction that aluminum metal will react with chlorine. Whether a reaction occurs depends on many factors. At the moment, you have no basis for concluding that a reaction between aluminum and chlorine will occur. You can predict an equation for the reaction between aluminum and chlorine because of the observed regularity that elements react to form compounds. You can write the formula for the product because of regularities concerning chemical formulas. You learned that aluminum always has a +3 charge in compounds, and that chlorine always has a -1 charge when it forms a binary compound with a metal. Two materials, elements or compounds, come together to make a single product. Some examples of synthesis reactions: hydrogen gas and oxygen gas burn to produce water (2 H2 + O2 → 2 H2O); sulfur trioxide reacts with water to make sulfuric acid (SO3 + H2O → H2SO4). What would you see in a test tube if you were witness to a synthesis reaction? You would see two different materials combine. A single new material appears.

DECOMPOSITION REACTIONS, ALSO CALLED DESYNTHESIS, DECOMBINATION, OR DECONSTRUCTION: Of the names for this type of reaction, I prefer the name decomposition. Mozart composed until age 35. After that, he decomposed. Yes, a decomposition is a coming apart. A single reactant comes apart into two or more products, symbolized by:

AB → A + B

A decomposition reaction is the opposite of a synthesis reaction. In a decomposition reaction, a compound breaks down to form two or more simpler substances. What is the equation for the decomposition of water, H2O? Since water contains only two elements, the decomposition products can be predicted as the individual elements:

2 H2O → 2 H2 + O2
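Returning to the synthesis prediction above (aluminum with a +3 charge, chlorine with a -1 charge giving AlCl3), the charge-balancing rule can be written as a tiny routine. This is a minimal sketch assuming the ion charges are already known; the function name ionic_formula is illustrative only and is not from the lecture.

```python
from math import gcd

def ionic_formula(metal, metal_charge, nonmetal, nonmetal_charge):
    """Smallest whole-number ratio that makes the compound electrically neutral."""
    g = gcd(metal_charge, abs(nonmetal_charge))
    n_metal = abs(nonmetal_charge) // g      # subscript on the metal
    n_nonmetal = metal_charge // g           # subscript on the nonmetal

    def part(symbol, n):
        return symbol if n == 1 else f"{symbol}{n}"

    return part(metal, n_metal) + part(nonmetal, n_nonmetal)

print(ionic_formula("Al", 3, "Cl", -1))  # AlCl3
print(ionic_formula("Mg", 2, "N", -3))   # Mg3N2
print(ionic_formula("Na", 1, "O", -2))   # Na2O
```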
Decomposition versus Dissociation:
1. 2 NaCl(l) → 2 Na(s) + Cl2(g)  (electrolysis: a decomposition reaction)
2. NaCl(s) → Na+1(aq) + Cl-1(aq)  (dissolving in water: a dissociation reaction)

In reaction 1, a decomposition reaction, there is a CHEMICAL change: when an electric current is passed through it, the sodium chloride produces sodium metal and chlorine gas, substances with properties very different from those of salt (NaCl). Most decomposition reactions form ELEMENTAL SUBSTANCES. The change in reaction 2 produces NO NEW substances. The sodium ions and chloride ions are already present in sodium chloride, and the ions (sodium and chloride) have very different properties from those of the neutral elements shown in equation 1. Reactions similar to reaction 2 are considered a dissociation, not a decomposition. Other dissociations are described by similar equations.

Some examples of decomposition reactions: potassium chlorate, when heated, comes apart into oxygen gas and potassium chloride,

2 KClO3 → 2 KCl + 3 O2

and heating sodium bicarbonate releases water and carbon dioxide, leaving sodium carbonate,

2 NaHCO3 → Na2CO3 + H2O + CO2

In a "test tube" you would see a single material coming apart into more than one new material.

Problem: Write an equation for the decomposition of lithium chloride, LiCl, and an equation for its dissociation.

SINGLE REPLACEMENT REACTIONS, ALSO CALLED SINGLE DISPLACEMENT, SINGLE SUBSTITUTION, OR ACTIVITY REPLACEMENT: Synthesis reactions occur between two or more different elements. But elements may also react with compounds. A reaction in which one element takes the place of another element as part of a compound is called a single replacement reaction. In this type of reaction, a metal always replaces another metal and a nonmetal always replaces another nonmetal. The general equation for a single replacement reaction is:

A + BC → BA + C

Notice that element "A" replaces "C" in the compound "BC." Is the product "C" an element or a compound? Consider the following reaction: if chlorine gas is bubbled through a solution of potassium bromide, chlorine replaces bromine in the compound and elemental bromine is produced:

Cl2 + 2 KBr → 2 KCl + Br2

Notice that before reacting, chlorine is uncombined. It is a free (uncombined) element. Bromine, however, is combined with potassium in the compound potassium bromide. After reacting, the opposite is true. The chlorine is now combined with potassium in the compound potassium chloride, and bromine exists as a free element. The reaction is usually described with the words "chlorine has replaced bromine," and the reaction is called a single replacement reaction. Replacement reactions are not reversible. In other words, the following reaction will NOT take place:

Br2 + 2 KCl → 2 KBr + Cl2

Therefore, since the reaction does not happen, the equation is written as:

Br2 + KCl → N.R.

Predicting if a reaction will occur. Activity series, halogens: There is an interesting regularity observed in replacement reactions involving the halogens: fluorine, chlorine, bromine, and iodine. Each halogen will react to replace any of the halogens below it in the periodic table, but will not replace those above. For example, chlorine will replace bromine and iodine, but it will not replace fluorine.

Activity series, metals: Metals also undergo replacement reactions, and regularities similar to those described for the halogens are observed. Metals can be listed in a series in which each metal will replace all the metals below it on the list, but none of the metals above it. Such a list is commonly called an activity series. The activity series for metals is determined by experiments in which pairs of metals are compared for reactivity.
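To make the "higher on the list replaces lower" rule concrete, here is a minimal Python sketch. The list shown is only an illustrative slice of a common textbook activity series (your course's table may differ), and the names ACTIVITY_SERIES and will_replace are my own, not from the lecture.

```python
# Illustrative slice of a metal activity series, most active first.
ACTIVITY_SERIES = ["K", "Ca", "Na", "Mg", "Al", "Zn", "Fe", "Pb", "H", "Cu", "Ag", "Au"]

def will_replace(free_metal, metal_in_compound):
    """A free metal replaces a combined metal only if it sits higher in the series."""
    return ACTIVITY_SERIES.index(free_metal) < ACTIVITY_SERIES.index(metal_in_compound)

print(will_replace("Mg", "Cu"))  # True:  Mg + CuSO4 -> MgSO4 + Cu proceeds
print(will_replace("Cu", "Ag"))  # True:  Cu replaces Ag from silver nitrate
print(will_replace("Ag", "Cu"))  # False: no reaction
```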
The MORE active elements are closer to the TOP of the list, while the LESS active elements are closer to the BOTTOM of the list. Use the activity series to predict whether the following reaction can occur under normal conditions:

Mg + CuSO4 → MgSO4 + Cu

Since magnesium is above copper on the activity series, magnesium is more active and will replace copper in the compound CuSO4. This reaction will occur. Here is an example of a single replacement reaction: a piece of copper is placed into silver nitrate solution. The solution begins to turn blue and the copper seems to disappear; instead, a silvery-white material appears. How could you predict that this reaction would take place without stepping into the lab? Answer: Again, you need to look at the activity series and locate copper and silver. Since copper is higher on the list (more active) than silver (less active), it is safe to assume this reaction will occur. Confirmation, again, can only take place in the lab.

DOUBLE REPLACEMENT REACTIONS, ALSO CALLED DOUBLE DISPLACEMENT OR METATHESIS: Double replacement reactions are also known as metathesis reactions (metathesis is a Greek term meaning "changing partners," which accurately describes what happens). The general equation for a double replacement reaction is:

AB + CD → AD + CB

Predicting double replacement reactions: Deciding whether a double replacement reaction will occur is a matter of predicting whether an insoluble product can form. If sodium nitrate is substituted for lead nitrate, you will see no reaction when the solutions in the two test tubes are mixed. Why not? If you write out all possible combinations of metal and nonmetal ions and check a solubility table, you will see that none of them is insoluble. Such a table lists the combinations of different positive and negative ions that form precipitates and those that form soluble compounds. Some texts refer to single and double replacement reactions as solution reactions or ion reactions. That is understandable, considering these are mostly done in solutions in which the major materials we would be considering are in ion form. I think that there is some good reason to call double replacement reactions de-ionizing reactions, because a pair of ions are taken from the solution in these reactions.

Let's take an example:

AgNO3 (aq) + KCl (aq) → AgCl (s) + KNO3 (aq)

This is the way the reaction might be published in a book, but the equation does not tell the whole story. Dissolved silver nitrate becomes a solution of silver ions and nitrate ions. Potassium chloride ionizes the same way. When the two solutions are added together, the silver ions and chloride ions find each other and become a solid precipitate. (They rain, or drop, out of the solution, this time as a solid.) Since silver chloride is insoluble in water, these ions take each other out of the solution. Here is another way to take the ions out of solution: hydrochloric acid and sodium hydroxide (acid and base) neutralize each other to make water and a salt. Again, the solution of hydrochloric acid is a solution of hydrogen ions (hydronium ions, in the acid and base section) and chloride ions. The other solution added to it, sodium hydroxide, has sodium ions and hydroxide ions. The hydrogen and hydroxide ions take each other out of the solution by making a covalent compound (water). One more way for the ions to be taken out of the water is for some of the ions to escape as a gas. The carbonate and hydrogen ions become water and carbon dioxide; the carbon dioxide is lost as a gas from the ionic solution, so the equation cannot go back.
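The "look for an insoluble product" rule can also be sketched in a few lines of Python. The set below is only a tiny, illustrative subset of the solubility rules (a real table has many more entries), and the names INSOLUBLE_PAIRS and predict_double_replacement are assumptions of mine rather than anything from the lecture.

```python
# Tiny, illustrative subset of solubility rules: cation/anion pairs that
# form an insoluble solid (a precipitate) in water.
INSOLUBLE_PAIRS = {
    ("Ag+", "Cl-"), ("Ag+", "Br-"), ("Ag+", "I-"),
    ("Pb2+", "Cl-"), ("Pb2+", "I-"), ("Ba2+", "SO4^2-"),
}

def predict_double_replacement(cation1, anion1, cation2, anion2):
    """Swap partners and report any precipitate; otherwise report no reaction."""
    swapped = [(cation1, anion2), (cation2, anion1)]
    precipitates = [pair for pair in swapped if pair in INSOLUBLE_PAIRS]
    return precipitates or "no reaction (all ions stay in solution)"

# AgNO3 + KCl: the swapped pair Ag+/Cl- is insoluble, so AgCl precipitates.
print(predict_double_replacement("Ag+", "NO3-", "K+", "Cl-"))
# NaNO3 + KCl: every combination is soluble, so nothing happens.
print(predict_double_replacement("Na+", "NO3-", "K+", "Cl-"))
```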
One way to consider double replacement reactions is as follows: two solutions of ionic compounds are really just sets of dissolved ions, each solution with a positive and a negative ion material. The two are added together, forming a mixture of four ions. If two of the ions can form (1) an insoluble material, (2) a covalent material such as water, or (3) a gas that can escape, it qualifies as a reaction. Not all of the ions are really involved in the reaction. Those ions that remain in solution after the reaction has completed are called spectator ions; that is, they are not involved in the reaction. There is some question as to whether they can see the action of the other ions, but that is what they are called.

OXIDATION and REDUCTION: Many familiar chemical processes belong to a class of reactions called oxidation-reduction, or redox, reactions. Every minute, redox reactions are taking place in your body and all around you: reactions in batteries, burning of wood in a campfire, corrosion of metals, ripening of fruit, and combustion of gasoline, to name a few examples. You have already seen some examples of oxidation and reduction reactions when you classified reactions earlier in this chapter. Recall that in a combustion reaction, such as the combustion of methane, or in the rusting of iron, oxygen is a reactant. Hopefully, you learned that these reactions can be classified as synthesis or combustion. The term oxidation also seems reasonable for describing these reactions because, in both cases, a reactant combines with oxygen. The term reduction is used to describe the reverse process; an example of reduction is the decomposition of water. However, not all oxidation reactions involve oxygen. There are many similar reactions between metals and nonmetals that are classified as oxidation or reduction reactions.

When a piece of copper is placed in a colorless silver nitrate solution, you can tell that a chemical reaction occurs because the solution turns blue over a period of time. You also notice a silvery coating that forms on the piece of copper. You know that copper(II) ions in solution are blue, so copper(II) ions must be forming. The silver nitrate solution contains silver ions; these ions must be coming out of solution to form the solid silver-colored material. The unbalanced equation for the reaction between solid copper and silver ions is this:

Cu (s) + Ag+1 (aq) → Cu+2 (aq) + Ag (s)

The equation tells you that solid copper atoms are changing to copper ions at the same time that silver ions are changing to solid silver atoms. Consider what is happening to the two reactants (Cu and Ag+1). Each copper atom loses two electrons to form a copper(II) ion:

Cu → Cu+2 + 2 e-

Each neutral copper atom acquires a +2 charge by losing two electrons. Whenever an atom or ion becomes more positively charged (its positive charge increases) in a chemical reaction, the process is called OXIDATION. As you know, electrons do not exist alone. In this reaction, the electrons are transferred to the silver ions in solution:

Ag+1 + e- → Ag

When a silver ion acquires one electron, it loses its +1 charge and becomes a neutral atom. Whenever an atom or ion becomes less positively charged or more negative (its positive charge is decreased) in a chemical reaction, the process is called REDUCTION. Each of these equations describes only half of what takes place when solid copper reacts with silver ions. Reactions that show just half a process are called half-reactions.
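The charge-change definitions just given (an increase in positive charge is oxidation, a decrease is reduction) can be captured in a short sketch. The function name classify is an illustrative assumption, not part of the lecture.

```python
def classify(species, charge_before, charge_after):
    """An increase in positive charge is oxidation; a decrease is reduction."""
    if charge_after > charge_before:
        return f"{species}: oxidation (lost {charge_after - charge_before} electron(s))"
    if charge_after < charge_before:
        return f"{species}: reduction (gained {charge_before - charge_after} electron(s))"
    return f"{species}: no change"

print(classify("Cu", 0, +2))   # Cu -> Cu+2 + 2e-: oxidation, lost 2 electrons
print(classify("Ag", +1, 0))   # Ag+1 + e- -> Ag: reduction, gained 1 electron
```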
You always need two half-reactions, one for oxidation and one for reduction, to describe any redox reaction. It is impossible for oxidation to occur by itself. The electrons given up by oxidation cannot exist alone; they must be used by a reduction reaction. Thus, there will always be an oxidation half-reaction whenever there is a reduction half-reaction, and vice versa.

NET EQUATION FOR A REDOX REACTION: You might simply add the equations for the half-reactions, but doing this may not give you a balanced equation. Balancing the net equation for a redox reaction also requires consideration of the NUMBER OF ELECTRONS in each of the half-reaction equations. If electrons appear in the net equation for the redox reaction, you know you have made a mistake, because electrons cannot exist by themselves. The overall equation is balanced only when the number of electrons lost in one half-reaction equals the number of electrons gained in the other half-reaction.

Example: Balance the following equation:

Cu + Ag+1 → Cu+2 + Ag

The equations for the half-reactions can be written like this:

Cu → Cu+2 + 2 e-
Ag+1 + e- → Ag

Balance the number of electrons in the two half-reactions by multiplying the second equation by two:

2 Ag+1 + 2 e- → 2 Ag

The number of electrons in both equations is now equal. Add the equations for the two half-reactions to give the balanced redox equation:

Cu + 2 Ag+1 → Cu+2 + 2 Ag

Oxidation and reduction occur together. The solid copper (from the example) is called the REDUCING AGENT, because it brings about the reduction of the silver ions. If the silver ions were not present, the copper would not oxidize. The substance that contains the silver ion(s) is the OXIDIZING AGENT, because it causes the oxidation of the copper atoms. Notice that the substance that is reduced is the oxidizing agent, and the substance that is oxidized is the reducing agent.

And then there are some redox equations that need a POSITIVE charge added. Since we use the electron as our negative symbol, we want to use a proton as our positive symbol. The proton can be represented as p+1, but a better way is to use H+1; that way we can add water molecules to the problem to equal the number of "extra" hydrogens. Let's try the question below.

Balance the following equation for a redox reaction.
2a. Balance the following equation for a redox reaction.
2b. Write a balanced equation for the reduction of iron(III) ions to iron atoms by the oxidation of nickel atoms to nickel(II) ions in an aqueous solution.
2c. In the reaction Fe+2 + Mg → Fe + Mg+2, which reactant is the oxidizing agent, and which is the reducing agent?
2d. Balance the half-reaction: Br-1 + MnO4-1 → Br2 + Mn+2

To keep track of electron transfers in redox reactions more easily, oxidation numbers have been assigned to all atoms and ions. An oxidation number is the real or APPARENT CHARGE that an atom or ion has when all bonds are assumed to be ionic. What is the oxidation number for each element in the compound Na3PO4? Determine the oxidation number for each element in the compound:
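The "multiply the half-reactions until the electrons match" step above amounts to taking a least common multiple. A minimal sketch follows; the function name combine_half_reactions is my own label for this bookkeeping, not a term from the lecture.

```python
from math import lcm

def combine_half_reactions(electrons_lost, electrons_gained):
    """Multipliers that make electrons lost equal electrons gained."""
    n = lcm(electrons_lost, electrons_gained)
    return n // electrons_lost, n // electrons_gained

# Cu -> Cu+2 + 2e-  (loses 2)   and   Ag+1 + e- -> Ag  (gains 1)
ox_mult, red_mult = combine_half_reactions(2, 1)
print(ox_mult, red_mult)  # 1 2  ->  Cu + 2 Ag+1 -> Cu+2 + 2 Ag
```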
http://www.avon-chemistry.com/chem_intro_lecture.html
Concepts about Geometry

abscissa: in coordinate geometry, the x-coordinate of a point - that is, the horizontal distance of that point from the vertical or y-axis. For example, a point with the coordinates (4, 3) has an abscissa of 4. The y-coordinate of a point is known as the ordinate.

altitude: in geometry, the perpendicular distance from a vertex (corner) of a figure, such as a triangle, to the base (the side opposite the vertex).

analytical geometry: another name for coordinate geometry.

annulus ("ring" in Latin): in geometry, the plane area between two concentric circles, making a flat ring.

arc: in geometry, a section of a curved line or circle. A circle has three types of arc: a semicircle, which is exactly half of the circle; minor arcs, which are less than the semicircle; and major arcs, which are greater than the semicircle. An arc of a circle is measured in degrees, according to the angle formed by joining its two ends to the center of that circle. A semicircle is therefore 180°, whereas a minor arc will always be less than 180° (acute or obtuse) and a major arc will always be greater than 180° but less than 360° (reflex).

arc minute, arc second: units for measuring small angles, used in geometry, surveying, map-making, and astronomy. An arc minute (symbol ′) is one-sixtieth of a degree, and an arc second (symbol ″) is one-sixtieth of an arc minute. Small distances in the sky, as between two close stars or the apparent width of a planet's disk, are expressed in minutes and seconds of arc.

asymptote: in coordinate geometry, a straight line that a curve approaches more and more closely but never reaches. The x and y axes are asymptotes to the graph of xy = constant (a rectangular hyperbola). If a point on a curve approaches a straight line such that its distance from the straight line is d, then the line is an asymptote to the curve if d tends to zero as the point moves toward infinity. Among conic sections (curves obtained by the intersection of a plane and a double cone), a hyperbola has two asymptotes, which in the case of a rectangular hyperbola are at right angles to each other.

axis (plural axes): in geometry, one of the reference lines by which a point on a graph may be located. The horizontal axis is usually referred to as the x-axis, and the vertical axis as the y-axis. The term is also used to refer to the imaginary line about which an object may be said to be symmetrical (axis of symmetry) - for example, the diagonal of a square - or the line about which an object may revolve (axis of rotation).

base: in mathematics, the number of different single-digit symbols used in a particular number system. In our usual (decimal) counting system of numbers (with symbols 0, 1, 2, 3, 4, 5, 6, 7, 8, 9) the base is 10. In the binary number system, which has only the symbols 1 and 0, the base is two. A base is also a number that, when raised to a particular power (that is, when multiplied by itself a particular number of times, as in 10² = 10 × 10 = 100), has a logarithm equal to the power. For example, the logarithm of 100 to the base ten is 2. In geometry, the term is used to denote the line or area on which a polygon or solid stands.

Cartesian coordinates: in coordinate geometry, components used to define the position of a point by its perpendicular distance from a set of two or more axes, or reference lines. For a two-dimensional area defined by two axes at right angles (a horizontal x-axis and a vertical y-axis), the coordinates of a point are given by its perpendicular distances from the y-axis and x-axis, written in the form (x,y).
For example, a point P that lies three units from the y-axis and four units from the x-axis has Cartesian coordinates (3,4) (see abscissa and ordinate). In three-dimensional coordinate geometry, points are located with reference to a third, z-axis, mutually at right angles to the x and y axes. The Cartesian coordinate system can be extended to any finite number of dimensions (axes), and is used thus in theoretical mathematics. It is named for the French mathematician René Descartes. The system is useful in creating technical drawings of machines or buildings, and in computer-aided design (CAD).

chord: in geometry, a straight line joining any two points on a curve. The chord that passes through the center of a circle (its longest chord) is the diameter. The longest and shortest chords of an ellipse (a regular oval) are called the major and minor axes respectively.

circumference: in geometry, the curved line that encloses a curved plane figure, for example a circle or an ellipse. Its length varies according to the nature of the curve, and may be ascertained by the appropriate formula. The circumference of a circle is πd or 2πr, where d is the diameter of the circle, r is its radius, and π is the constant pi, approximately equal to 3.1416.

concave: of a surface, curving inward, or away from the eye. For example, a bowl appears concave when viewed from above. In geometry, a concave polygon is one that has an interior angle greater than 180°. Concave is the opposite of convex.

cone: in geometry, a solid or surface consisting of the set of all straight lines passing through a fixed point (the vertex) and the points of a circle or ellipse whose plane does not contain the vertex. A circular cone of perpendicular height, with its apex above the center of the circle, is known as a right circular cone; it is generated by rotating an isosceles triangle or framework about its line of symmetry. A right circular cone of perpendicular height h and base of radius r has a volume V = (1/3)πr²h. The distance from the edge of the base of a cone to the vertex is called the slant height. In a right circular cone of slant height l, the curved surface area is πrl, and the area of the base is πr². Therefore the total surface area A = πrl + πr² = πr(l + r).

congruent: in geometry, having the same shape and size, as applied to two-dimensional or solid figures. With plane congruent figures, one figure will fit on top of the other exactly, though this may first require rotation and/or reflection of one of the figures.

convex: of a surface, curving outward, or toward the eye. For example, the outer surface of a ball appears convex. In geometry, the term is used to describe any polygon possessing no interior angle greater than 180°. Convex is the opposite of concave.

coordinate: in geometry, a number that defines the position of a point relative to a point or axis (reference line). Cartesian coordinates define a point by its perpendicular distances from two or more axes drawn through a fixed point mutually at right angles to each other. Polar coordinates define a point in a plane by its distance from a fixed point and direction from a fixed line.

coordinate geometry or analytical geometry: system of geometry in which points, lines, shapes, and surfaces are represented by algebraic expressions. In plane (two-dimensional) coordinate geometry, the plane is usually defined by two axes at right angles to each other, the horizontal x-axis and the vertical y-axis, meeting at O, the origin.
A point on the plane can be represented by a pair of Cartesian coordinates, which define its position in terms of its distance along the x-axis and along the y-axis from O. These distances are respectively the x and y coordinates of the point. Lines are represented as equations; for example, y = 2x + 1 gives a straight line, and y = 3x² + 2x gives a parabola (a curve). The graphs of varying equations can be drawn by plotting the coordinates of points that satisfy their equations, and joining up the points. One of the advantages of coordinate geometry is that geometrical solutions can be obtained without drawing, by manipulating algebraic expressions. For example, the coordinates of the point of intersection of two straight lines can be determined by finding the unique values of x and y that satisfy both of the equations for the lines, that is, by solving them as a pair of simultaneous equations. The curves studied in simple coordinate geometry are the conic sections (circle, ellipse, parabola, and hyperbola), each of which has a characteristic equation.

coplanar: in geometry, describing lines or points that all lie in the same plane.

cube: in geometry, a regular solid figure whose faces are all squares. It has six equal-area faces and 12 equal-length edges. If the length of one edge is l, the volume V of the cube is given by V = l³ and its surface area A by A = 6l².

curve: in geometry, the locus of a point moving according to specified conditions. The circle is the locus of all points equidistant from a given point (the center). Other common geometrical curves are the ellipse, parabola, and hyperbola, which are also produced when a cone is cut by a plane at different angles. Many curves have been invented for the solution of special problems in geometry and mechanics - for example, the cissoid (the inverse of a parabola) and the cycloid.

cycloid: in geometry, a curve resembling a series of arches traced out by a point on the circumference of a circle that rolls along a straight line. Its applications include the study of the motion of wheeled vehicles along roads and tracks.

cylinder: in geometry, a tubular solid figure with a circular base. In everyday use, the term applies to a right cylinder, the curved surface of which is at right angles to the base. The volume V of a cylinder is given by the formula V = πr²h, where r is the radius of the base and h is the height of the cylinder. Its total surface area A has the formula A = 2πr(h + r), where 2πrh is the curved surface area and 2πr² is the area of both circular ends.

decagon: in geometry, a ten-sided polygon.

determinant: in mathematics, an array of elements written as a square, and denoted by two vertical lines enclosing the array. For a 2 × 2 matrix, the determinant is given by the difference between the products of the diagonal terms. Determinants are used to solve sets of simultaneous equations by matrix methods. When applied to transformational geometry, the determinant of a 2 × 2 matrix signifies the ratio of the area of the transformed shape to the original, and its sign (plus or minus) denotes whether the image is direct (the same way round) or indirect (a mirror image).
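As a small illustration of the coordinate geometry entry above (finding the point of intersection of two straight lines by solving their equations simultaneously), here is a minimal Python sketch; the function name line_intersection and the example lines are mine, not from the glossary.

```python
def line_intersection(m1, c1, m2, c2):
    """Intersection of y = m1*x + c1 and y = m2*x + c2, or None if parallel."""
    if m1 == m2:
        return None  # parallel (or identical) lines: no unique intersection
    x = (c2 - c1) / (m1 - m2)
    y = m1 * x + c1
    return x, y

# y = 2x + 1 meets y = -x + 7 at (2, 5)
print(line_intersection(2, 1, -1, 7))
```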
dimension: in science, any directly measurable physical quantity such as mass (M), length (L), and time (T), and the derived units obtainable by multiplication or division from such quantities. For example, acceleration (the rate of change of velocity) has dimensions (LT⁻²), and is expressed in units such as km s⁻². A quantity that is a ratio, such as relative density or humidity, is dimensionless. In geometry, the dimensions of a figure are the number of measures needed to specify its size. A point is considered to have zero dimension, a line to have one dimension, a plane figure to have two, and a solid body to have three.

eccentricity: in geometry, a property of a conic section (circle, ellipse, parabola, or hyperbola). It is the distance of any point on the curve from a fixed point (the focus) divided by the distance of that point from a fixed line (the directrix). A circle has an eccentricity of zero; for an ellipse it is less than one; for a parabola it is equal to one; and for a hyperbola it is greater than one.

equation: in mathematics, an expression that represents the equality of two expressions involving constants and/or variables, and thus usually includes an equals sign (=). For example, the equation A = πr² equates the area A of a circle of radius r to the product πr². The algebraic equation y = mx + c is the general one in coordinate geometry for a straight line. If a mathematical equation is true for all variables in a given domain, it is sometimes called an identity and denoted by ≡. Thus (x + y)² ≡ x² + 2xy + y² for all x, y ∈ R. An indeterminate equation is an equation for which there is an infinite set of solutions - for example, 2x = y. A Diophantine equation is an indeterminate equation in which the solution and terms must be whole numbers (after Diophantus of Alexandria, c. AD 250).

frustum (from Latin for "a piece cut off"): in geometry, a "slice" taken out of a solid figure by a pair of parallel planes. A conical frustum, for example, resembles a cone with the top cut off. The volume and area of a frustum are calculated by subtracting the volume or area of the "missing" piece from those of the whole figure.

hyperbola: in geometry, a curve formed by cutting a right circular cone with a plane so that the angle between the plane and the base is greater than the angle between the base and the side of the cone. All hyperbolae are bounded by two asymptotes (straight lines which the hyperbola moves closer and closer to but never reaches). A hyperbola is a member of the family of curves known as conic sections. A hyperbola can also be defined as a path traced by a point that moves such that the ratio of its distance from a fixed point (focus) to its distance from a fixed straight line (directrix) is a constant greater than 1; that is, it has an eccentricity greater than 1.

maximum and minimum: in coordinate geometry, points at which the slope of a curve representing a function changes from positive to negative (maximum), or from negative to positive (minimum). A tangent to the curve at a maximum or minimum has zero gradient. Maxima and minima can be found by differentiating the function for the curve and setting the differential to zero (the value of the slope at the turning point). For example, differentiating the function for the parabola y = 2x² - 8x gives dy/dx = 4x - 8. Setting this equal to zero gives x = 2, so that y = -8 (found by substituting x = 2 into the parabola equation). Thus the function has a minimum at the point (2, -8).

median: in mathematics and statistics, the middle number of an ordered group of numbers. If there is no middle number (because there is an even number of terms), the median is the mean (average) of the two middle numbers. For example, the median of the group 2, 3, 7, 11, 12 is 7; that of 3, 4, 7, 9, 11, 13 is 8 (the average of 7 and 9). In geometry, the term refers to a line from the vertex of a triangle to the midpoint of the opposite side.

oblate: in geometry, flattened at the poles.
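The maximum-and-minimum entry above works a specific example (y = 2x² - 8x has a minimum at (2, -8)). The short sketch below checks it using the same "set the derivative to zero" idea in closed form; the function name parabola_turning_point is an illustrative assumption.

```python
def parabola_turning_point(a, b, c=0):
    """Turning point of y = a*x^2 + b*x + c, found where dy/dx = 2*a*x + b = 0."""
    x = -b / (2 * a)
    y = a * x * x + b * x + c
    kind = "minimum" if a > 0 else "maximum"
    return x, y, kind

# y = 2x^2 - 8x: dy/dx = 4x - 8 = 0 gives x = 2, so y = -8
print(parabola_turning_point(2, -8))  # (2.0, -8.0, 'minimum')
```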
ordinate: in coordinate geometry, the y-coordinate of a point; that is, the vertical distance of the point from the horizontal or x-axis. For example, a point with the coordinates (3,4) has an ordinate of 4. See abscissa.

parallel lines and parallel planes: in mathematics, straight lines or planes that always remain a constant distance from one another no matter how far they are extended. This is a principle of Euclidean geometry. Some non-Euclidean geometries, such as elliptical and hyperbolic geometry, however, reject Euclid's parallel axiom.

point: in geometry, a basic element, whose position in the Cartesian system may be determined by its coordinates. Mathematicians have had great difficulty in defining the point, as it has no size, and is only the place where two lines meet. According to the Greek mathematician Euclid, (i) a point is that which has no part; (ii) the straight line is the shortest distance between two points.

polygon: in geometry, a plane (two-dimensional) figure with three or more straight-line sides. Common polygons have names which define the number of sides (for example, triangle, quadrilateral, pentagon). These are all convex polygons, having no interior angle greater than 180°. The sum of the internal angles of a polygon having n sides is given by the formula (2n - 4) × 90°; therefore, the more sides a polygon has, the larger the sum of its internal angles and, in the case of a convex polygon, the more closely it approximates to a circle.

Pythagorean theorem: in geometry, a theorem stating that in a right triangle, the square of the hypotenuse (the longest side) is equal to the sum of the squares of the other two sides (legs). If the hypotenuse is c units long and the lengths of the legs are a and b, then c² = a² + b². The theorem provides a way of calculating the length of any side of a right triangle if the lengths of the other two sides are known. It is also used to determine certain trigonometrical relationships such as sin²θ + cos²θ = 1.

QED: abbreviation for quod erat demonstrandum (Latin, "which was to be proved"), added at the end of a geometry proof.

quadratic equation: in mathematics, a polynomial equation of second degree (that is, an equation containing as its highest power the square of a variable, such as x²). The general form of such equations is ax² + bx + c = 0, in which a, b, and c are real numbers, and only the coefficient a cannot equal 0. In coordinate geometry, a quadratic function represents a parabola. Some quadratic equations can be solved by factorization, or the values of x can be found by using the formula for the general solution x = [-b ± √(b² - 4ac)]/2a. Depending on the value of the discriminant b² - 4ac, a quadratic equation has two real, two equal, or two complex roots (solutions). When b² - 4ac > 0, there are two distinct real roots. When b² - 4ac = 0, there are two equal real roots. When b² - 4ac < 0, there are two distinct complex roots.

rhombus: in geometry, an equilateral (all sides equal) parallelogram. Its diagonals bisect each other at right angles, and its area is half the product of the lengths of the two diagonals. A rhombus whose internal angles are 90° is called a square.
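The quadratic-equation entry above lists the three discriminant cases. Here is a minimal Python sketch that applies the general solution formula and branches on the discriminant exactly as described; the function name solve_quadratic and the sample coefficients are my own choices.

```python
import cmath
import math

def solve_quadratic(a, b, c):
    """Roots of a*x^2 + b*x + c = 0, branching on the discriminant b^2 - 4ac."""
    d = b * b - 4 * a * c
    if d > 0:
        r = math.sqrt(d)
        return "two distinct real roots", ((-b + r) / (2 * a), (-b - r) / (2 * a))
    if d == 0:
        return "two equal real roots", (-b / (2 * a),)
    r = cmath.sqrt(d)
    return "two distinct complex roots", ((-b + r) / (2 * a), (-b - r) / (2 * a))

print(solve_quadratic(1, -3, 2))   # roots 2 and 1
print(solve_quadratic(1, 2, 1))    # repeated root -1
print(solve_quadratic(1, 0, 1))    # roots +i and -i
```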
right triangle: a triangle in which one of the angles is a right angle (90°). It is the basic form of triangle for defining trigonometrical ratios (for example, sine, cosine, and tangent) and for which the Pythagorean theorem holds true. The longest side of a right triangle is called the hypotenuse. Its area is equal to half the product of the lengths of the two shorter sides. A triangle constructed with its hypotenuse as the diameter of a circle, with its opposite vertex on the circumference, is a right triangle. This is a fundamental theorem in geometry, first credited to the Greek mathematician Thales about 580 BC.

sector: in geometry, part of a circle enclosed by two radii and the arc that joins them.

segment: in geometry, part of a circle cut off by a straight line or chord, running from one point on the circumference to another. All angles in the same segment are equal.

square: in geometry, a quadrilateral (four-sided) plane figure with all sides equal and each angle a right angle. Its diagonals bisect each other at right angles. The area A of a square is the length l of one side multiplied by itself (A = l × l). Also, any quantity multiplied by itself is termed a square, represented by an exponent of power 2; for example, 4 × 4 = 4² = 16 and 6.8 × 6.8 = 6.8² = 46.24. An algebraic term is squared by doubling its exponent and squaring its coefficient if it has one; for example, (x²)² = x⁴ and (6y³)² = 36y⁶. A number that has a whole number as its square root is known as a perfect square; for example, 25, 144 and 54,756 are perfect squares (with roots of 5, 12 and 234, respectively).

tangent: in geometry, a straight line that touches a curve and gives the gradient of the curve at the point of contact. At a maximum, minimum, or point of inflection, the tangent to a curve has zero gradient. Also, in trigonometry, a function of an acute angle in a right triangle, defined as the ratio of the length of the side opposite the angle to the length of the side adjacent to it; a way of expressing the gradient of a line.

tetrahedron (plural tetrahedra): in geometry, a solid figure (polyhedron) with four triangular faces; that is, a pyramid on a triangular base. A regular tetrahedron has equilateral triangles as its faces. In chemistry and crystallography, tetrahedra describe the shapes of some molecules and crystals; for example, the carbon atoms in a crystal of diamond are arranged in space as a set of interconnected regular tetrahedra.

trapezium: in geometry, a four-sided plane figure (quadrilateral) with no two sides parallel.

trapezoid: in geometry, a four-sided plane figure (quadrilateral) with only two sides parallel. If the parallel sides have lengths a and b and the perpendicular distance between them is h (the height of the trapezoid), its area A = (1/2)h(a + b). An isosceles trapezoid has its sloping sides (legs) equal, is symmetrical about a line drawn through the midpoints of its parallel sides, and has equal base angles.

triangle: in geometry, a three-sided plane figure, the sum of whose interior angles is 180°. Triangles can be classified by the relative lengths of their sides: a scalene triangle has three sides of unequal length; an isosceles triangle has at least two equal sides; an equilateral triangle has three equal sides (and three equal angles of 60°). Triangles can also be classified by their angle measures: a right triangle has one right (90°) angle; an acute triangle has three acute (less than 90°) angles; an obtuse triangle has one obtuse (greater than 90°) angle; an equiangular triangle has three equal angles. (All equilateral triangles are equiangular, and vice versa.) If the length of one side of a triangle is l and the perpendicular distance from that side to the opposite corner is h (the height or altitude of the triangle), its area A = (1/2)(lh).
vertex (plural vertices) in geometry, a point shared by three or more sides of a solid figure; the point farthest from a figure's base; or the point of intersection of two sides of a plane figure or the two rays of an angle. in geometry, the space occupied by a three-dimensional solid object. A prism (such as a cube) or a cylinder has a volume equal to the area of the base multiplied by the height. For a pyramid or cone, the volume is equal to one-third of the area of the base multiplied by the perpendicular height. The volume of a sphere is equal to 4/3 × pr3, where r is the radius. Volumes of irregular solids may be calculated by the technique of integration.
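Two of the entries above, the quadratic equation and the polygon angle sum, are easy to check numerically. The short Python sketch below is only an illustration of those two formulas; the function names are my own and do not come from the glossary:

    import cmath

    def quadratic_roots(a, b, c):
        """Roots of ax^2 + bx + c = 0 (a != 0), classified by the discriminant."""
        if a == 0:
            raise ValueError("the coefficient a cannot equal 0 in a quadratic equation")
        disc = b * b - 4 * a * c
        # cmath.sqrt handles positive, zero and negative discriminants uniformly
        r1 = (-b + cmath.sqrt(disc)) / (2 * a)
        r2 = (-b - cmath.sqrt(disc)) / (2 * a)
        if disc > 0:
            kind = "two distinct real roots"
        elif disc == 0:
            kind = "two equal real roots"
        else:
            kind = "two distinct complex roots"
        return r1, r2, kind

    def polygon_angle_sum(n):
        """Sum of the interior angles of an n-sided polygon, in degrees: (2n - 4) * 90."""
        return (2 * n - 4) * 90

    print(quadratic_roots(1, -3, 2))   # discriminant 1 > 0: roots 2 and 1
    print(quadratic_roots(1, 2, 1))    # discriminant 0: double root -1
    print(quadratic_roots(1, 0, 1))    # discriminant -4 < 0: roots +i and -i
    print(polygon_angle_sum(3), polygon_angle_sum(4))   # 180 360, matching triangle and quadrilateral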
http://library.thinkquest.org/C007273/geomconcept.html
13
63
The meridian circle is an instrument for timing the passage of stars across the local meridian, an event known as a transit, while at the same time measuring their angular distance from the nadir. These are special purpose telescopes mounted so as to allow pointing only in the meridian, the great circle through the north point of the horizon, the zenith, the south point of the horizon, and the nadir. Meridian telescopes rely on the rotation of the Earth to bring objects into their field of view and are mounted on a fixed, horizontal, east-west axis.

The similar transit instrument, transit circle or transit telescope is likewise mounted on a horizontal axis, but the axis need not be fixed in the east-west direction. For instance, a surveyor's theodolite can function as a transit instrument if its telescope is capable of a full revolution about the horizontal axis. Meridian circles are often called by these names, although they are less specific.

For many years, transit timings were the most accurate method of measuring the positions of heavenly bodies, and meridian instruments were relied upon to perform this painstaking work. Before spectroscopy, photography, and the perfection of reflecting telescopes, the measuring of positions (and the deriving of orbits and astronomical constants) was the major work of observatories.

Fixing a telescope to move only in the meridian has advantages in the high-precision work for which these instruments are employed:
- The very simple mounting is easier to manufacture and maintain to a high precision.
- At most locations on the Earth, the meridian is the only plane in which celestial coordinates can be indexed directly with such a simple mounting; the equatorial coordinate system aligns naturally with the meridian at all times. Revolving the telescope about its axis moves it directly in declination, and objects move through its field of view in right ascension.
- All objects in the sky are subject to the distortion of atmospheric refraction, which tends to make objects appear slightly higher in the sky than they actually are. At the meridian, this distortion is in declination only, and is easily accounted for; elsewhere in the sky, refraction causes a complex distortion in coordinates which is more difficult to reduce. Such complex analysis is not conducive to high precision.

Basic instrument

The state of the art of meridian instruments of the late 19th and early 20th century is described here, giving some idea of the precise methods of construction, operation and adjustment employed.

The earliest transit telescope was not placed in the middle of the axis, but nearer to one end, to prevent the axis from bending under the weight of the telescope. Later, it was usually placed in the centre of the axis, which consisted of one piece of brass or gun metal with turned cylindrical steel pivots at each end. Several instruments were made entirely of steel, which was much more rigid than brass. The pivots rested on V-shaped bearings, either set into massive stone or brick piers which supported the instrument, or attached to metal frameworks on the tops of the piers. The temperature of the bearings was monitored by thermometers. The piers were usually separate from the foundation of the building, to prevent transmission of vibration from the building to the telescope.
To relieve the pivots from the weight of the instrument, which would have distorted their shape, each end of the axis was supported by a hook with friction rollers, suspended from a lever supported by the pier, counterbalanced so as to leave only about 10 pounds force (45 N) on each bearing. In some cases, the counterweight pushed up on the bearing from below. The bearings were set nearly in a true east-west line, but fine adjustment was possible by horizontal and vertical screws. A spirit level was used to monitor for any inclination of the axis to the horizon. Eccentricity (an off-center condition) of the telescope's axis was accounted for, in some cases, by providing another telescope through the axis itself. By observing the motion of an artificial star through this axis telescope as the main telescope was rotated, the shape of the pivots, and any wobble of the axis, could be determined. Near each end of the axis, attached to the axis and turning with it, was a circle or wheel for measuring the angle of the telescope to the horizon. Generally of 3 feet to 3.5 ft diameter, it was divided to 2 or 5 arcminutes, on a slip of silver set into the face of the circle near the circumference. These graduations were read by microscopes, generally four for each circle, mounted to the piers or a framework surrounding the axis, at 90° intervals around the circles. By averaging the four readings the eccentricity (from inaccurate centering of the circles) and the errors of graduation were greatly reduced. Each microscope was furnished with a micrometer screw, which moved crosshairs, with which the distance of the circle graduations from the centre of the field of view could be measured. The drum of the screw was divided to measure single seconds of arc (0.1" being estimated), while the number of revolutions were counted by a kind of comb in the field of view. The microscopes were placed at such a distance from the circle that one revolution of the screw corresponded to 1 arcminute (1') on the circle. The error was determined occasionally by measuring standard intervals of 2' or 5' on the circle. The periodic errors of the screw were accounted for. On some instruments, one of the circles was graduated and read more coarsely than the other, and was used only in finding the target stars. The telescope consisted of two tubes screwed to the central cube of the axis. The tubes were usually conical and as stiff as possible to help prevent flexure. The connection to the axis was also as firm as possible, as flexure of the tube would affect declinations deduced from observations. The flexure in the horizontal position of the tube was determined by two collimators - telescopes placed horizontally in the meridian, north and south of the transit circle, with their objective lenses towards it. These were pointed at one another (through holes in the tube of the telescope, or by removing the telescope from its mount) so that the crosshairs in their foci coincided. The collimators were often permanently mounted in these positions, with their objectives and eyepieces fixed to separate piers. The meridian telescope was pointed to one collimator and then the other, moving through exactly 180°, and by reading the circle the amount of flexure (the amount the readings differed from 180°) was found. Absolute flexure, that is, a fixed bend in the tube, was detected by arranging that eyepiece and objective lens could be interchanged, and the average of the two observations of the same star was free from this error. 
Parts of the apparatus were sometimes enclosed in glass cases to protect them from dust. These cases had openings for access. Other parts were closed against dust by removable silk covers. Certain instrumental errors could be averaged out by reversing the telescope on its mounting. A carriage was provided, which ran on rails between the piers, and on which the axis, circles and telescope could be raised by a screw-jack, wheeled out from between the piers, turned 180°, wheeled back, and lowered again. The observing building housing the meridian circle did not have a rotating dome, as is often seen at observatories. Since the telescope observed only in the meridian, a vertical slot in the north and south walls, and across the roof between these, was all that was necessary. The building was unheated and kept as much as possible at the temperature of the outside air, to avoid air currents which would disturb the telescopic view. The building also housed the clocks, recorders, and other equipment for making observations. At the focal plane, the eye end of the telescope had a number of vertical and one or two horizontal wires (crosshairs). In observing stars, the telescope was first directed downward at a basin of mercury forming a perfectly horizontal mirror and reflecting an image of the crosshairs back up the telescope tube. The crosshairs were adjusted until coincident with their reflection, and the line of sight was then perfectly vertical; in this position the circles were read for the nadir point. The telescope was next brought up to the approximate declination of the target star by watching the finder circle. The instrument was provided with a clamping apparatus, by which the observer, after having set the approximate declination, could clamp the axis so the telescope could not be moved in declination, except very slowly by a fine screw. By this slow motion, the telescope was adjusted until the star moved along the horizontal wire (or if there were two, in the middle between them), from the east side of the field of view to the west. Following this, the circles were read by the microscopes for a measurement of the apparent altitude of the star. The difference between this measurement and the nadir point was the nadir distance of the star. A movable horizontal wire or declination-micrometer was also used. Another method of observing the apparent altitude of a star was to take half of the angular distance between the star observed directly and its reflection observed in a basin of mercury. The average of these two readings was the reading when the line of sight was horizontal, the horizontal point of the circle. The small difference in latitude between the telescope and the basin of mercury was accounted for. The vertical wires were used for observing transits of stars, each wire furnishing a separate result. The time of transit over the middle wire was estimated, during subsequent analysis of the data, for each wire by adding or subtracting the known interval between the middle wire and the wire in question. These known intervals were predetermined by timing a star of known declination passing from one wire to the other, the pole star being best on account of its slow motion. Timings were originally made by an "eye and ear" method, estimating the interval between two beats of a clock. Later, timings were registered by pressing a key, the electrical signal making a mark on a strip recorder. 
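The reduction just described, referring each wire's timing back to the middle wire by its known interval and then averaging, can be sketched in a few lines of Python. The intervals and clock times below are invented purely for illustration; they do not come from any real instrument:

    # Known interval (in seconds of time) of each vertical wire from the middle wire:
    # negative means the wire is crossed before the middle wire, positive after.
    wire_offsets = [-20.0, -10.0, 0.0, 10.0, 20.0]

    # Observed clock times (seconds) at which the star crossed each wire.
    observed_times = [41.62, 51.59, 61.61, 71.58, 81.60]

    # Each wire furnishes a separate estimate of the middle-wire transit time.
    estimates = [t - dt for t, dt in zip(observed_times, wire_offsets)]
    adopted = sum(estimates) / len(estimates)

    print([round(e, 2) for e in estimates])   # [61.62, 61.59, 61.61, 61.58, 61.6]
    print(round(adopted, 3))                  # 61.6, the adopted transit time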
Later still, the eye end of the telescope was usually fitted with an impersonal micrometer, a device which allowed matching a vertical crosshair's motion to the star's motion. Set precisely on the moving star, the crosshair would trigger the electrical timing of the meridian crossing, removing the observer's personal equation from the measurement. The field of the wires could be illuminated; the lamps were placed at some distance from the piers in order not to heat the instrument, and the light passed through holes in the piers and through the hollow axis to the center, whence it was directed to the eye-end by a system of prisms. To determine absolute declinations or polar distances, it was necessary to determine the observatory's colatitude, or distance of the celestial pole from the zenith, by observing the upper and lower culmination of a number of circumpolar stars. The difference between the circle reading after observing a star and the reading corresponding to the zenith was the zenith distance of the star, and this plus the colatitude was the north polar distance. To determine the zenith point of the circle, the telescope was directed vertically downwards at a basin of mercury, the surface of which formed an absolutely horizontal mirror. The observer saw the horizontal wire and its reflected image, and moving the telescope to make these coincide, its optical axis was made perpendicular to the plane of the horizon, and the circle reading was 180° + zenith point. In observations of stars refraction was taken into account as well as the errors of graduation and flexure. If the bisection of the star on the horizontal wire was not made in the centre of the field, allowance was made for curvature, or the deviation of the star's path from a great circle, and for the inclination of the horizontal wire to the horizon. The amount of this inclination was found by taking repeated observations of the zenith distance of a star during the one transit, the pole star being the most suitable because of its slow motion. Attempts were made to record the transits of a star photographically. A photographic plate was placed in the focus of a transit instrument and a number of short exposures made, their length and the time being registered automatically by a clock. The exposing shutter was a thin strip of steel, fixed to the armature of an electromagnet. The plate thus recorded a series of dots or short lines, and the vertical wires were photographed on the plate by throwing light through the objective lens for one or two seconds. Meridian circles required precise adjustment to do accurate work. The rotation axis of the main telescope needed to be exactly horizontal. A sensitive spirit level, designed to rest on the pivots of the axis, performed this function. By adjusting one of the V-shaped bearings, the bubble was centered. The line of sight of the telescope needed to be exactly perpendicular to the axis of rotation. This could be done by sighting a distant, stationary object, lifting and reversing the telescope on its bearings, and again sighting the object. If the crosshairs did not intersect the object, the line of sight was halfway between the new position of the crosshairs and the distant object; the crosshairs were adjusted accordingly and the process repeated as necessary. Also, if the rotation axis was known to be perfectly horizontal, the telescope could be directed downward at a basin of mercury, and the crosshairs illuminated. 
The mercury acted as a perfectly horizontal mirror, reflecting an image of the crosshairs back up the telescope tube. The crosshairs could then be adjusted until coincident with their reflection, and the line of sight was then perpendicular to the axis.

The line of sight of the telescope needed to be exactly within the plane of the meridian. This was done approximately by building the piers and the bearings of the axis on an east-west line. The telescope was then brought into the meridian by repeatedly timing the (apparent, incorrect) upper and lower meridian transits of a circumpolar star and adjusting one of the bearings horizontally until the interval between the transits was equal. Another method used calculated meridian crossing times for particular stars as established by other observatories. This was an important adjustment and much effort was spent in perfecting it.

In practice, none of these adjustments were perfect. The small errors introduced by the imperfections were mathematically corrected during the analysis of the data.

Zenith telescopes

Some telescopes designed to measure star transits are zenith telescopes designed to point straight up at or near the zenith for extreme precision measurement of star positions. They use an altazimuth mount, instead of a meridian circle, fitted with leveling screws. Extremely sensitive levels are attached to the telescope mount to make angle measurements and the telescope has an eyepiece fitted with a micrometer.

The idea of having an instrument (quadrant) fixed in the plane of the meridian occurred even to the ancient astronomers and is mentioned by Ptolemy, but it was not carried into practice until Tycho Brahe constructed a large meridian quadrant.

Meridian circles have been used since the 18th century to accurately measure positions of stars in order to catalog them. This is done by measuring the instant when the star passes through the local meridian. Its altitude above the horizon is noted as well. Knowing one's geographic latitude and longitude, these measurements can be used to derive the star's right ascension and declination. Once good star catalogs were available a transit telescope could be used anywhere in the world to accurately measure local longitude and time by observing local meridian transit times of catalogue stars. Prior to the invention of the atomic clock this was the most reliable source of accurate time.

In the Almagest Ptolemy describes a meridian circle which consisted of a fixed graduated outer ring and a movable inner ring with tabs that used a shadow to set the Sun's position. It was mounted vertically and aligned with the meridian. The instrument was used to measure the altitude of the Sun at noon in order to determine the path of the ecliptic.

17th century (1600s)

A meridian circle enabled the observer to determine simultaneously right ascension and declination, but it does not appear to have been much used for right ascension during the 17th century, the method of equal altitudes by portable quadrants or measures of the angular distance between stars with an astronomical sextant being preferred. These methods were very inconvenient and in 1690 Ole Rømer invented the transit instrument.
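Before moving on to the instrument's later history, the remark above that latitude, circle readings and transit times yield a star's declination and right ascension can be made concrete. The sketch below is a simplified illustration in Python, ignoring refraction and all instrumental corrections; the numbers are made up for the example:

    def declination_from_transit(latitude_deg, zenith_distance_deg, south_of_zenith=True):
        """Declination of a star at upper transit from the observer's latitude
        and the measured zenith distance (refraction and instrument errors ignored)."""
        if south_of_zenith:
            return latitude_deg - zenith_distance_deg
        return latitude_deg + zenith_distance_deg

    def colatitude_from_circumpolar(z_upper_deg, z_lower_deg):
        """Colatitude (the pole's zenith distance) from the zenith distances of a
        circumpolar star at upper and lower culmination, both measured on the
        pole's side of the zenith."""
        return 0.5 * (z_upper_deg + z_lower_deg)

    latitude = 52.0                                        # assumed observer latitude, degrees
    print(declination_from_transit(latitude, 30.0))        # 22.0 degrees
    print(90.0 - colatitude_from_circumpolar(10.0, 66.0))  # 52.0: the latitude is recovered

    # The right ascension is simply the local sidereal time of the transit:
    # a star crossing the meridian at LST 5h 34m 12s has RA = 5h 34m 12s.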
18th century (1700s)

The transit instrument consists of a horizontal axis in the direction east and west resting on firmly fixed supports, and having a telescope fixed at right angles to it, revolving freely in the plane of the meridian. At the same time Rømer invented the altitude and azimuth instrument for measuring vertical and horizontal angles, and in 1704 he combined a vertical circle with his transit instrument, so as to determine both co-ordinates at the same time. This latter idea was, however, not adopted elsewhere although the transit instrument soon came into universal use (the first one at Greenwich was mounted in 1721), and the mural quadrant continued till the end of the century to be employed for determining declinations. The advantage of using a whole circle, as less liable to change its figure, and not requiring reversal in order to observe stars north of the zenith, was then again recognized by Jesse Ramsden, who also improved the method of reading off angles by means of a micrometer microscope as described above.

19th century (1800s)

The making of circles was shortly afterwards taken up by Edward Troughton, who in 1806 constructed the first modern transit circle for Groombridge's observatory at Blackheath, the Groombridge Transit Circle (a meridian transit circle). Troughton afterwards abandoned the idea, and designed the mural circle to take the place of the mural quadrant. In the United Kingdom the transit instrument and mural circle continued till the middle of the 19th century to be the principal instruments in observatories, the first transit circle constructed there being that at Greenwich (mounted in 1850), but on the continent the transit circle superseded them from the years 1818-1819, when two circles by Johann Georg Repsold and by Reichenbach were mounted at Göttingen, and one by Reichenbach at Königsberg. The firm of Repsold and Sons was for a number of years eclipsed by that of Pistor and Martins in Berlin, who furnished various observatories with first-class instruments, but following the death of Martins the Repsolds again took the lead, and made many transit circles. The observatories of Harvard College (United States), Cambridge and Edinburgh had large circles by Troughton and Simms, who also made the Greenwich circle from the design of Airy.

20th century and beyond (1900s and 2000s)

A modern-day example of this type of telescope is the 8 inch (~0.2 m) Flagstaff Astrometric Scanning Transit Telescope (FASTT) at the USNO Flagstaff Station Observatory. Modern meridian circles are usually automated. The observer is replaced with a CCD camera. As the sky drifts across the field of view, the image built up in the CCD is clocked across (and out of) the chip at the same rate. This allows some improvements:
- The CCD can collect light for as long as the image is crossing it, allowing a dimmer limiting magnitude to be reached.
- The data can be collected for as long as the telescope is in operation - an entire night is possible, allowing a strip of sky many degrees in length to be scanned.
- Data can be compared directly to any reference object which happens to be within the scan - usually a bright extragalactic object, like a quasar, with an accurately-known position. This eliminates the need for some of the painstaking adjustment of the meridian instrument, although monitoring of declination, azimuth, and level is still performed with CCD scanners and laser interferometers.
- Atmospheric refraction can be accounted for automatically, by monitoring temperature, pressure, and dew point of the air electronically.
- Data can be stored and analyzed at will.

Examples
- Groombridge Transit Circle (1806)
- Carlsberg Meridian Telescope (Carlsberg Automatic Meridian Circle) (1984)
- Tokyo Photoelectric Meridian Circle (1985)

References
- This article incorporates text from a publication now in the public domain: Chisholm, Hugh, ed. (1911). "Transit Circle". Encyclopædia Britannica (11th ed.). Cambridge University Press.
- Chauvenet, William (1868). A Manual of Spherical and Practical Astronomy, II. Trubner & Co., London. pp. 131, 282. At Google Books.
- Newcomb, Simon (1906). A Compendium of Spherical Astronomy. MacMillan Co., New York. pp. 317ff, 331ff. At Google Books.
- Norton, William A. (1867). A Treatise on Astronomy, Spherical and Physical. John Wiley & Son, New York. p. 24ff. At Google Books.
- Bond, William C.; Bond, George P.; Winlock, Joseph (1876). Annals of the Astronomical Observatory of Harvard College. Press of John Wilson and Son, Cambridge, Mass. p. 25. At Google Books.
- Ptolemy, Claudius; Toomer, G. J. (1998). Ptolemy's Almagest. Princeton University Press. p. 61. ISBN 0-691-00260-6.
- Stone, Ronald C.; Monet, David G. (1990). "The USNO (Flagstaff Station) CCD Transit Telescope and Star Positions Measured From Extragalactic Sources". Proceedings of IAU Symposium No. 141. pp. 369-370. At SAO/NASA ADS.
- The Carlsberg Meridian Telescope

Further reading
- The Practical Astronomer, Thomas Dick (1848)
- Elements of Astronomy, Robert Stawell Ball (1886)
- Meridian circle observations made at the Lick Observatory, University of California, 1901-1906, Richard H. Tucker (1907) - an example of the adjustments and observations of an early 20th century instrument
http://en.wikipedia.org/wiki/Meridian_circle
13
70
The formula for the volume of cylinders can also be applied to the volume of cones, which occupy only one third of the space of a corresponding cylinder with the same base and height. To find the volume of a cone, we simply use the formula for the area of a circle for the base and then multiply by one third of the height of the cone. This formula is very similar to the volume formulas for cylinders and prisms. When we're talking about the concept of volume, we're talking about the amount of space that's defined by a 3 dimensional figure. The volume of a cylinder can be calculated by finding the base area and since the base area is a circle, you can say that's pi r squared times its height. Now some text books will use a capital H some will use a lower case h as long as you remember that you're going to find your base area and then multiply by the height of your solid. But there's a special relationship between the volume of the cylinder and volume of a cone. Well if I drew 2 figures, that had the same radius and the same height then what I could do is I could say that the volume of this cone will be one third the volume of that cylinder. So we can say that the volume of any cone is equal to one third times the base area which is going to be pi r squared since the base of a cone is a circle times the height. So I could re-write this in terms of a radius we could say that this is one third pi r squared times h. So the only 2 things you need to know to calculate the volume of a cone is the radius and the height. Let's look at a quick example, in this problem we're being asked to find the volume I see that I have a radius of 5 centimeters and the height is 10 centimeters and I know it's the height because we have a right angle there. So we're going to start by writing our formula just like every problem in Geometry. We're going to say the volume is equal to one third base area times our height. Now we need to identify our known variables, we know that our base area is equal to pi r squared, we know that our height is equal to 10 centimeters and we know that our radius is equal to 5 centimeters. Now one thing to be careful of when you're taking a test or quiz they might give you a diameter in which case you have to divide by 2 to find your radius. So now we can just substitute in to our volume formula, so we're going to say that volume is equal to 1 third because it's a third of the volume of an identical cylinder that has, well it can't be identical because they're different shapes but a cylinder with the same radius and the same height. So we're going to say one third times pi times our radius now instead of writing radius I'm going to substitute in 5 centimeters so I'm going to erase that and I'm going to write 5 centimeters and we're going to square that and we need to multiply by our height and our height is 10 centimeters. So we're going to say times 10 centimeters, so if we do 5 squared that's going to be 25, so we're going to have one third 25 square centimeters times 10 centimeters. Now we're going to check our dimensions here we should have something to the third dimension, because it's asking us to find volume. And since we have centimeters squared times centimeters that's going to be centimeters to the third or cubic centimeters. So this is going to be one third, 25 times 10 is 250, so we're going to say times 250 cubic centimeters.
Now some teachers will say leave it as a fraction, some teachers will say put it in a decimal if they want a decimal just use your calculator so I'm going to say that the volume here is 250 cubic centimeters divided by 3. The key thing to these is remembering our volume formula and that the only 2 things that you need to know to calculate the volume of a cone is the radius and the height. So hopefully you caught Brian's mistake there, let's walk through it really quickly. When Brian set up the formula he set it up correctly with a pi in the formula for the volume of a cone. But then he made a common mistake which is he forgot to bring the pi down into his equation. So let's go ahead and put those there pi, pi and pi in the answer. So the actual answer should be 250 pi over 3 cubic centimeters. Remember to check your work.
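For anyone who wants to verify the arithmetic, here is a small Python check of the same computation, using the 5 cm radius and 10 cm height from the example above; the helper names are my own:

    import math

    def cone_volume(radius, height):
        """V = (1/3) * pi * r^2 * h, one third of the matching cylinder's volume."""
        return math.pi * radius ** 2 * height / 3

    def cylinder_volume(radius, height):
        """V = pi * r^2 * h (base area times height)."""
        return math.pi * radius ** 2 * height

    r, h = 5, 10                                       # centimeters, as in the worked example
    print(cone_volume(r, h))                           # 261.799... cubic cm, i.e. 250*pi/3
    print(cone_volume(r, h) / cylinder_volume(r, h))   # 0.333..., exactly one third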
http://brightstorm.com/math/geometry/volume/volume-of-cones/
13
50
Logic gates take binary values and perform functions on them, similar to the functions found in simple algebra. Binary algebra is the set of mathematical laws that are valid for binary values. A binary value can only be a 1 or a 0. 1 is a high value, representing true and high voltage. 0 is a low value, representing a false value and low voltage.

Logic gates are typically packaged in integrated circuits, although they can be constructed using analogue components. Integrated circuits allow multiple logic gates to be packaged in one chip and are usually quite reliable. Logic gates typically come in two flavors, TTL (transistor-transistor logic) and CMOS (complementary metal oxide semiconductor). One must be careful when mixing the two types, because their logic-low and logic-high levels are defined at different voltages, so a level that one family outputs may not be read correctly by the other; for example, a CMOS input may not register a TTL high as a valid high. Because of this they are generally incompatible, but there are a few CMOS devices that can accept TTL inputs and vice versa.

The buffer and NOT gates are the simplest of the logic gates. The buffer can be used as a digital signal booster: if a logic signal has to travel some distance, the voltage drop from wire resistance can lower a logic-high voltage so much that it is read as a logic low when it reaches its destination; putting a buffer in between solves that problem. The buffer's algebra function is B = A.

NOT gates simply change the input from a 1 to a 0 or vice versa. The NOT gate is also called an inverter and has many uses in logic circuits. For example, suppose you have 2 lights but only want 1 on at any one time: put a NOT gate in front of one light, so that when the input is a logic high one light is on and the other (connected through the NOT gate) is off, and when the input is a logic low the second light comes on because of the NOT gate. The circle on the end of the triangle indicates that it is an inverting gate, and you can recognize any inverting logic device by this circle. The equivalent binary algebra function is B = A', where B is the output and A is an input value.

The AND gate most commonly comes in IC packages of 2- and 3-input versions. The output only produces a logical 1 when all of the inputs are 1. An AND gate could be used in an alarm circuit, where input A would be a reed-switch input and input B would be an armed control, so the alarm would only be activated if the alarm was armed AND the reed-switch circuit was opened (an opened door, etc.). The equivalent binary algebra function is C = A * B * ... * N, where C is the output and A and B are two of N total inputs.

The OR gate has a minimum of two inputs and produces an output of 1 if at least one of the inputs has a value of 1. An OR gate could be used to expand the number of reed switches in the previous example. The equivalent binary algebra function is C = A + B + ... + N, where C is the output and A and B are two of N total inputs.

The NAND gate has a minimum of two inputs and is the equivalent of an AND gate with a NOT gate on the output. It produces a 0 only if all of the inputs are 1. A NAND gate could be used to switch a device off if it gets too hot or a cooling fan stops working: a temperature sensor would be connected to input A, and a tachometer output, filtered so it is a logical high when there is rotation and a logical low when there is no rotation, would be connected to input B. The equivalent binary algebra function is C = (A * B * ... * N)', where C is the output and A and B are two of N total inputs.
The NOR gate has a minimum of two inputs and is the same as an OR gate with a NOT gate on the output. The NOR gate produces a 1 only if all of the inputs are 0. The NOR gate can be used to shut down a device if either of 2 temperature sensors measures a temperature too high for the circuit. The equivalent binary algebra function is F = (A + B + ... + N)', where F is the output and A and B are two of N total inputs.

The XOR (aka EOR) gate has a minimum of two inputs. The XOR gate produces a 1 only if exactly one of its two inputs is a 1. The equivalent binary algebra function is C = AB' + A'B, where C is the output and A and B are two inputs.

The XNOR (aka ENOR) gate has a minimum of two inputs. The XNOR gate produces an output of 1 if inputs A and B match. The most obvious application for this logic gate is a comparator; this could be used for error checking in data transmission, or it can form the basis of a combination keypad. The equivalent binary algebra function is C = AB + A'B', where C is the output and A and B are two inputs.
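The binary algebra functions quoted for each gate are easy to tabulate. The short Python sketch below implements the two-input gates exactly as the formulas describe and prints their combined truth table; the function names are mine, chosen for readability:

    def NOT(a):      return 1 - a                          # B = A'
    def AND(a, b):   return a & b                          # C = A * B
    def OR(a, b):    return a | b                          # C = A + B
    def NAND(a, b):  return NOT(AND(a, b))                 # C = (A * B)'
    def NOR(a, b):   return NOT(OR(a, b))                  # F = (A + B)'
    def XOR(a, b):   return (a & NOT(b)) | (NOT(a) & b)    # C = AB' + A'B
    def XNOR(a, b):  return (a & b) | (NOT(a) & NOT(b))    # C = AB + A'B'

    print("A B | AND OR NAND NOR XOR XNOR")
    for a in (0, 1):
        for b in (0, 1):
            row = (AND(a, b), OR(a, b), NAND(a, b), NOR(a, b), XOR(a, b), XNOR(a, b))
            print(a, b, "|", " ".join(str(v) for v in row))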
http://www.freeinfosociety.com/article.php?id=3
13
59
This Lesson continues from Lesson 2. A PROPORTION IS A STATEMENT that two ratios are the same. 5 is to 15 as 8 is to 24. 5 is the third part of 15, just as 8 is the third part of 24. We will now introduce this symbol 5 : 15 to signify the ratio of 5 to 15. A proportion will then appear as follows: 5 : 15 = 8 : 24. "5 is to 15 as 8 is to 24." Problem 1. Read the following. Why is each one a proportion? a) 2 : 6 = 10 : 30 "2 is to 6 as 10 is to 30." Because 2 is the third part of 6, just as 10 is the third part of 30. b) 12 : 3 = 24 : 6 "12 is to 3 as 24 is to 6." Because 12 is four times 3, just as 24 is four times 6. c) 2 : 3 = 10 : 15 "2 is to 3 as 10 is to 15." Because 2 is two thirds of 3, just as 10 is two thirds of 15. Problem 2. Complete each proportion. AB, CD are straight lines, and AB is three fifths of CD. Express that ratio as a proportion. AB : CD = 3 : 5 Example 1. If, proportionally, a : b = 3 : 4, then, explicitly, what ratio has a to b? Answer. The proportion implies the ratio of a to b, but it does not state that ratio explicitly. What ratio has 3 to 4? 3 is three fourths of 4. Explicitly, then, that is the ratio of a to b. a is three fourths of b. Proportions imply ratios. Problem 4. Explicitly, what ratio has x to y? a) x : y = 1 : 5. x is the fifth part of y. b) x : y = 32 : 8. x is four times y. c) x : y = 7 : 10. x is seven tenths of y. The theorem of the alternate proportion The numbers in a proportion are called the terms: the 1st, the 2nd, the 3rd, and the 4th. 1st : 2nd = 3rd : 4th We say that the 1st and the 3rd are corresponding terms, as are the 2nd and the 4th. The following is the theorem of the alternate proportion: (Euclid, VII. 13.) For example, since 1 : 3 = 5 : 15, 1 : 5 = 3 : 15. Problem 5. State the alternate proportion. This leads to: The theorem of the same multiple Let us complete this proportion, 4 : 5 = 12 : ? 4 is four fifths of 5 (Lesson 2), but it is not obvious of what number 12 is four fifths. Alternately, however, 4 is the third part of 12 -- or we could say that 4 has been multiplied by 3. Therefore, 5 also must be multiplied by 3 -- 4 : 5 = 12 : 15 4 : 5 = 3 × 4 : 3 × 5. This is called the theorem of the same multiple. 4 is four fifths of 5. But each 4 has that same ratio to each 5. Two 4's, then, upon adding them, will have that same ratio to two 5's. Three 4's will have that same ratio to three 5's. And so on. Any number of 4's will have that same ratio, four fifths, to an equal number of 5's. Here is how we state the theorem: (Euclid, VII. 17.) Problem 6. Write five pairs of numbers that have the same ratio as 3 : 4. Create them by taking the same multiple of both 3 and 4. For example, 6 : 8, 9 : 12, 12 : 16, 15 : 20, 18 : 24 Problem 7. Complete each proportion. Problem 9. Complete this proportion, 2.45 : 7 = 245 : 700. Since 2.45 has been multiplied by 100, then 7 also must be multiplied by 100. PQ is two fifths of RS. If PQ is 12 miles, then how long is RS? Solution. Since PQ is two fifths of RS, then proportionally, PQ : RS = 2 : 5. If PQ is 12 miles, then PQ : RS = 2 : 5 = 12 miles : ? miles. That is, 12 miles corresponds to PQ and 2. And since 12 is 6 × 2, the missing term is 6 × 5: PQ : RS = 2 : 5 = 12 miles : 30 miles. RS is 30 miles. PQ : RS = 2 : 5, RS : PQ = 5 : 2. Now, what ratio has 5 to 2? 5 is two and a half times 2. RS therefore is two and a half times PQ. And if PQ is 12 miles, then RS is 24 + 6 = 30 miles. AB is three fourths of CD. Specifically, AB is 24 cm. How long is CD? AB : CD = 3 : 4 = 24 cm : ? 
Since 24 is 8 × 3, the missing term is 8 × 4 = 32 cm.

The theorem of the common divisor

Since we may multiply both terms by the same number, then, symmetrically, we may divide both terms by the same number. 25 : 40 = 5 : 8 upon dividing both 25 and 40 by 5. Explicitly, then, we see that 25 is five eighths of 40.

Problem 11. Explicitly, what ratio has 16 to 40? Express that ratio so that the terms have no common divisors (except 1). Upon dividing both terms by 8, 16 : 40 = 2 : 5.

When the terms of a ratio have no common divisors except 1, then we have expressed their ratio with the lowest terms. They are the smallest terms -- the smallest pair of numbers -- that have that ratio.

Problem 12. Explicitly, what ratio have the following? Express each ratio with the lowest terms.
a) 6 is three fourths of 8, upon dividing each term by 2.

The theorem of extremes and means

In any proportion, the product of the extremes (the 1st and 4th terms) is equal to the product of the means (the 2nd and 3rd terms): if a : b = c : d, then ad = bc. Working the argument backwards shows that, conversely, if ad = bc, then a : b = c : d. This theorem, or at any rate its algebraic version, seems to be the only one taught in the schools, and it has become the mechanical method for solving all ratio problems. The student should resist that temptation and should understand the facts of ratio and proportion. We include it here only for the purpose of explaining the following:

Example 3. If a and b are numbers such that four a's are equal to three b's, then what ratio has a to b? Answer. Three fourths. For since 4a = 3b, a is three fourths of b.

Problem 13. If eight m's are equal to five n's, then what ratio has m to n? m is five eighths of n.

The language of ratio

Example 4. Joan earns $1600 a month, and pays $400 for rent. Express that fact in the language of ratio. Answer. "A quarter of Joan's salary goes for rent." That sentence, or one like it, expresses the ratio of $400 to $1600, of the part that goes for rent to her whole income. We are not concerned with the numbers themselves, but only their ratio.

Example 5. In Erik's class there are 30 pupils, while in Ana's there are only 10. Express that fact in the language of ratio. Answer. "In Erik's class there are three times as many pupils as in Ana's." This expresses the ratio of 30 pupils to 10.

Example 6. In a class of 24 students there were 16 B's. Express that fact in the language of ratio. Answer. "Two thirds of the class got B." This expresses the ratio of the part that got B to the whole number of students; 16 out of 24. Their common divisor is 8. 8 goes into 16 two times and into 24 three times. 16 is two thirds of 24.

Problem 14. Express each of the following in the language of ratio. Use a complete sentence.
a) In a class of 30 pupils, there were 10 A's. A third of the class got A.
b) Out of 120 people surveyed, 20 responded No. A sixth of the people surveyed responded No.
c) The population of Eastville is 60,000, while the population of Westville is only 20,000. The population of Eastville is three times the population of Westville.
d) Over the summer, John saved $1000, while Bob saved only $100. Over the summer, John saved ten times as much as Bob.
e) At a party, there were 12 girls and 4 boys. At that party, there were three times as many girls as boys.
f) In a class of 28 students, there were 21 A's. Three fourths of the students got A.
g) In a survey of 60 people, 40 answered Yes. Two thirds of the people surveyed answered Yes.
h) In a class of 40 pupils, 25 got a B. Five eighths of the pupils got B.
i) Of the 2100 students who voted, 1400 voted for Harrison. Two thirds of the students voted for Harrison.
j) This month's bill is $50, while last month's was only $20. This month's bill is two and a half times last month's.
k) Sabina makes $24,000 a year, while Clara makes only $16,000. Sabina makes one and a half times what Clara makes.
l) In the past thirty years, the population grew from 20,000 to 70,000. In the past thirty years, the population grew three and a half times.
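Every worked example in this lesson comes down to multiplying or dividing both terms of a ratio by the same number, which is simple to sketch in Python. The snippet below reduces a ratio to lowest terms with the common divisor and completes a proportion by finding the multiplier, using the 2 : 5 = 12 : ? and 3 : 4 = 24 : ? examples from the lesson; the function names are my own:

    from math import gcd
    from fractions import Fraction

    def lowest_terms(a, b):
        """Divide both terms by their greatest common divisor (theorem of the common divisor)."""
        d = gcd(a, b)
        return a // d, b // d

    def complete_proportion(a, b, c):
        """Given a : b = c : x, find x: the multiplier is c/a, so x = b * (c/a)."""
        return Fraction(b) * Fraction(c, a)

    print(lowest_terms(16, 40))           # (2, 5): 16 is two fifths of 40
    print(lowest_terms(25, 40))           # (5, 8): 25 is five eighths of 40
    print(complete_proportion(2, 5, 12))  # 30: PQ : RS = 2 : 5 = 12 miles : 30 miles
    print(complete_proportion(3, 4, 24))  # 32: AB : CD = 3 : 4 = 24 cm : 32 cm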
http://www.themathpage.com/areal/ratio-proportion.htm
13
100
Elementary Statistics for AP Psychology

A large amount of data can be collected in research studies. Psychologists need to make sense of the data. Qualitative data are frequently changed to numerical data for ease of handling. Quantitative data are already numerical.

Numbers that are used simply to name something are said to be on a nominal scale and can be used to count the number of cases. For example, for a survey, girls can be designated as "1," whereas boys can be designated as "2." These numbers have no intrinsic meaning. Numbers that can be ranked are said to be on an ordinal scale, and can be put in order. For example, the highest scorer can be designated as "1," the second highest as "2," the third highest as "3," etc. These numbers cannot be averaged. Number 1 could have scored 50 points higher than 2. Number 2 may have scored 4 points higher than 3. If there is a meaningful difference between each of the numbers, the numbers are said to be on an interval scale. For example, the difference between 32° Fahrenheit (F) and 42°F is 10°F. The difference between 64°F and 74°F is also 10°F. However, 64°F is not twice as hot as 32°F. When a meaningful ratio can be made with two numbers, the numbers are said to be on a ratio scale. The key difference between an interval scale and a ratio scale is that the ratio scale has a real or absolute zero point. For quantities of weight, volume, and distance, zero is a meaningful concept, whereas the meaning of 0°F is arbitrary.

Statistics is a field that involves the analysis of numerical data about representative samples of populations. Numbers that summarize a set of research data obtained from a sample are called descriptive statistics. In general, descriptive statistics describe sets of interval or ratio data. After collecting data, psychologists organize the data to create a frequency distribution, an orderly arrangement of scores indicating the frequency of each score or group of scores. The data can be pictured as a histogram—a bar graph from the frequency distribution—or as a frequency polygon—a line graph that replaces the bars with single points and connects the points with a line. With a very large number of data points, the frequency polygon approaches a smooth curve. Frequency polygons are shown in Figure 6.1.

Measures of Central Tendency

Measures of central tendency describe the average or most typical scores for a set of research data or distribution. Measures of central tendency include the mode, median, and mean. The mode is the most frequently occurring score in a set of research data. If two scores appear most frequently, the distribution is bimodal; if three or more scores appear most frequently, the distribution is multimodal. The median is the middle score when the set of data is ordered by size. For an odd number of scores, the median is the middle one. For an even number of scores, the median lies halfway between the two middle scores. The mean is the arithmetic average of the set of scores. The mean is determined by adding up all of the scores, then dividing by the number of scores. For the set of quiz scores 5, 6, 7, 7, 7, 8, 8, 9, 9, 10; the mode is 7; the median is 7.5; the mean is 7.6. The mode is the least used measure of central tendency, but can be useful to provide a "quick and dirty" measure of central tendency especially when the set of data has not been ordered.
The mean is generally the preferred measure of central tendency because it takes into account the information in all of the data points; however, it is very sensitive to extremes. The mean is pulled in the direction of extreme data points. The advantage of the median is that it is less sensitive to extremes, but it doesn't take into account all of the information in the data points. The mean, mode, and median turn out to be the same score in symmetrical distributions. The two sides of the frequency polygon are mirror images as shown in Figure 6.1a. The normal distribution or normal curve is a symmetric, bell-shaped curve that represents data about how many human characteristics are dispersed in the population. Distributions where most of the scores are squeezed into one end are skewed. A few of the scores stretch out away from the group like a tail. The skew is named for the direction of the tail. Figure 6.1b pictures a negatively skewed distribution, and Figure 6.1c shows a positively skewed distribution. The mean is pulled in the direction of the tails, so the mean is lower than the median in a negatively skewed distribution, and higher than the median in a positively skewed distribution. In very skewed distributions, the median is a better measure of central tendency than the mean. Measures of Variability Variability describes the spread or dispersion of scores for a set of research data or distribution. Measures of variability include the range, variance, and standard deviation. The range is the largest score minus the smallest score. It is a rough measure of dispersion. For the same set of quiz scores (5, 6, 7, 7, 7, 8, 8, 9, 9, 10), the range is 5. Variance and standard deviation (SD) indicate the degree to which scores differ from each other and vary around the mean value for the set. Variance and standard deviation indicate both how much scores group together and how dispersed they are. Variance is determined by computing the difference between each value and the mean, squaring the difference between each value and the mean (to eliminate negative signs), summing the squared differences, then taking the average of the sum of squared differences. The standard deviation of the distribution is the square root of the variance. For a different set of quiz scores (6, 7, 8, 8, 8, 8, 8, 8, 9, 10), the variance is 1 and the SD is 1. Standard deviation must fall between 0 and half the value of the range. If the standard deviation approaches 0, scores are very similar to each other and very close to the mean. If the standard deviation approaches half the value of the range, scores vary greatly from the mean. Frequency polygons with the same mean and the same range, but a different standard deviation, that are plotted on the same axes show a difference in variability by their shapes. The taller and narrower frequency polygon shows less variability and has a lower standard deviation than the short and wider one. Since you don't bring a calculator to the exam, you won't be required to figure out variance or standard deviation. Scores can be reported in different ways. One example is the standard score or z score. Standard scores enable psychologists to compare scores that are initially on different scales. For example, a z score of 1 for an IQ test might equal 115, while a z score of 1 for the SAT I might equal 600.The mean score of a distribution has a standard score of zero. A score that is one standard deviation above the mean has a z score of 1. 
A standard score is computed by subtracting the mean raw score of the distribution from the raw score of interest, then dividing the difference by the standard deviation of the distribution of raw scores. Another type of score, the percentile score, indicates the percentage of scores at or below a particular score. Thus, if you score at the 90th percentile, 90% of the scores are the same or below yours. Percentile scores vary from 1 to 99.

A statistical measure of the degree of relatedness or association between two sets of data, X and Y, is called the correlation coefficient. The correlation coefficient (r) varies from –1 to +1. A coefficient of –1 or +1 indicates a perfect relationship between the two sets of data. If the correlation coefficient is –1, that perfect relationship is inverse; as one variable increases, the other variable decreases. If the correlation coefficient (r) is +1, that perfect relationship is direct; as one variable increases the other variable increases, and as one variable decreases, the other variable decreases. A correlation coefficient (r) of 0 indicates no relationship at all between the two variables. As the correlation coefficient approaches –1 or +1, the relationship between variables gets stronger. Correlation coefficients are useful because they enable psychologists to make predictions about Y when they know the value of X and the correlation coefficient. For example, if r = .9 for scores of students in an AP Biology class and for the same students in AP Psychology class, a student who earns an A in biology probably earns an A in psychology, whereas a student who earns a D in biology probably earns a D in psychology. If r = .1 for scores of students in an English class and scores of the same students in AP Calculus class, knowing the English grade doesn't help predict the AP Calculus grade. Correlation does not imply causation. Correlation indicates only that there is a relationship between variables, not how the relationship came about.

The strength and direction of correlations can be illustrated graphically in scattergrams or scatterplots in which paired X and Y scores for each subject are plotted as single points on a graph. The slope of a line that best fits the pattern of points suggests the degree and direction of the relationship between the two variables. The slope of the line for a perfect positive correlation is r = +1, as in Figure 6.2a. The slope of the line for a perfect negative correlation is r = –1, as in Figure 6.2b. Where dots are scattered all over the plot and no appropriate line can be drawn, r = 0 as in Figure 6.2c, which indicates no relationship between the two sets of data.
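To make the descriptive statistics concrete, here is a small self-contained Python sketch that reproduces the figures quoted in this section (mode 7, median 7.5 and mean 7.6 for the first quiz set; variance 1 and SD 1 for the second) and adds a z score and a correlation coefficient. The paired biology/psychology scores at the end are invented for illustration only:

    import statistics as st

    scores1 = [5, 6, 7, 7, 7, 8, 8, 9, 9, 10]
    scores2 = [6, 7, 8, 8, 8, 8, 8, 8, 9, 10]

    print(st.mode(scores1), st.median(scores1), st.mean(scores1))  # 7 7.5 7.6
    print(st.pvariance(scores2), st.pstdev(scores2))               # 1 1.0 (population formulas)

    def z_score(x, data):
        """Standard score: (raw score - mean) / standard deviation."""
        return (x - st.mean(data)) / st.pstdev(data)

    print(z_score(9, scores2))   # 1.0, one standard deviation above the mean

    def correlation(xs, ys):
        """Pearson correlation coefficient r, which always falls between -1 and +1."""
        mx, my = st.mean(xs), st.mean(ys)
        num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
        return num / den

    biology    = [95, 88, 76, 91, 67]      # invented paired scores
    psychology = [93, 90, 72, 89, 70]
    print(correlation(biology, psychology))   # about 0.97: a strong direct relationship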
http://www.education.com/study-help/article/elementary-statistics/?page=2
13
50
The Eddington luminosity (also referred to as the Eddington limit) in a star is defined as the point where the gravitational force inwards equals the continuum radiation force outwards, assuming hydrostatic equilibrium and spherical symmetry. When exceeding the Eddington luminosity, a star would initiate a very intense continuum-driven stellar wind from its outer layers. Since most massive stars have luminosities far below the Eddington luminosity, however, their winds are mostly driven by the less intense line absorption. Originally, Sir Arthur Stanley Eddington only took electron scattering into account when calculating this limit, something that is now called the classical Eddington limit. Nowadays, the modified Eddington limit also takes other continuum processes, such as bound-free and free-free interactions, into account.

The limit is obtained by setting the outward continuum radiation pressure equal to the inward gravitational force. Both forces decrease by inverse square laws, so once equality is reached, the hydrodynamic flow is the same throughout the star. The pressure support of a star is given by the equation of hydrostatic equilibrium:

    dP/dr = -G M(r) ρ(r) / r²

The outward force of radiation pressure is given by:

    dP_rad/dr = -(κ ρ(r) / c) · L(r) / (4π r²)

where κ is the opacity of the stellar material; for gas assumed to be purely made of ionized hydrogen, κ = σT/mp, with σT the Thomson scattering cross-section for the electron. Equating these two pressure gradients and solving for the luminosity gives the Eddington luminosity:

    L_Edd = 4π G M mp c / σT ≈ 3.2 × 10^4 (M/M☉) L☉

where M is the mass of the central object, M☉ the mass and L☉ the luminosity of the Sun, mp the mass of a proton and σT the Thomson cross-section for the electron.

The mass of the proton appears because, in the typical environment for the outer layers of a star, the radiation pressure acts on electrons, which are driven away from the center. Because protons are negligibly pressured by the analog of Thomson scattering, due to their larger mass, the result is to create a slight charge separation and therefore a radially directed electric field, acting to lift the positive charges, which are typically free protons under the conditions in stellar atmospheres. When the outward electric field is sufficient to levitate the protons against gravity, both electrons and protons are expelled together. Thus in certain circumstances the balance can be different than it is for hydrogen. For example, in an evolved star with a pure helium atmosphere, the electric field would have to lift a helium nucleus (an alpha particle), with nearly four times the mass of a proton, while the radiation pressure would act on two free electrons.
Thus twice the usual Eddington luminosity would be needed to drive off an atmosphere of pure He. On the other hand, at very high temperatures, as in the environment of a black hole or neutron star, high energy photon interactions with nuclei or even with other photons can create an electron-positron plasma. In that situation the mass of the neutralizing positive charge carriers is ~1836 times smaller (the proton:electron mass ratio), while the radiation pressure on the positrons doubles the effective upward force per unit mass, so the limiting luminosity needed is reduced by a factor of ~2 × 1836. Thus the exact value of the Eddington luminosity depends on the chemical composition of the gas layer and the spectral energy distribution of the emission. Gas with cosmological abundances of hydrogen and helium is much more transparent than gas with solar abundance ratios. Atomic line transitions can greatly increase the effects of radiation pressure, and line-driven winds exist in some bright stars.

The role of the Eddington limit in today's research lies in explaining the very high mass loss rates seen, for example, in the series of outbursts of η Carinae in 1840-1860. The regular, line-driven stellar winds can only account for a mass loss rate of around 10^-4 to 10^-3 solar masses per year, whereas mass loss rates of up to 0.5 solar masses per year are needed to understand the η Carinae outbursts. This can be done with the help of super-Eddington continuum-driven winds.

Gamma-ray bursts, novae and supernovae are examples of systems exceeding their Eddington luminosity by a large factor for very short times, resulting in short and highly intensive mass loss rates. Some X-ray binaries and active galaxies are able to maintain luminosities close to the Eddington limit for very long times. For accretion-powered sources such as accreting neutron stars or cataclysmic variables (accreting white dwarfs), the limit may act to reduce or cut off the accretion flow, imposing an Eddington limit on accretion corresponding to that on luminosity.
Super-Eddington accretion onto stellar-mass black holes is one possible model for ultraluminous X-ray sources (ULXs). For accreting black holes, not all of the energy released by accretion has to appear as outgoing luminosity, since energy can be lost through the event horizon, down the hole. Such sources effectively may not conserve energy. Then the accretion efficiency, or the fraction of the energy theoretically available from the gravitational energy release of the accreting material that is actually radiated, enters in an essential way.

It is, however, also important to note that the Eddington limit is not a strict limit on the luminosity of a stellar object. Several potentially important factors have been left out, and a couple of super-Eddington objects have been observed that do not seem to show the high mass-loss rate that we would expect. It is therefore of interest to also look at other possible factors that might affect the maximum luminosity of a star:

• Porosity. A problem with the idea of a steady, continuum-driven wind lies in the fact that both the radiative flux and the gravitational acceleration scale as r⁻². The ratio between the two would then be constant, and in a super-Eddington star the whole envelope would become gravitationally unbound at the same time. This is not what is seen, and a possible solution is to introduce an atmospheric porosity, in which the stellar atmosphere is imagined to consist of denser regions surrounded by lower-density gas. The coupling between radiation and matter would then be reduced, and the full force of the radiation field would only be seen in the more homogeneous, lower-density outer layers of the atmosphere.

• Turbulence. A possible destabilizing factor might be the turbulent pressure that arises when energy in the convection zones builds up a field of supersonic turbulent motions. The importance of turbulence is being debated, however.

• Photon bubbles. Another potentially important factor is the photon-bubble effect, which might explain some stable super-Eddington objects. These photon bubbles would develop spontaneously in radiation-dominated atmospheres when the magnetic pressure exceeds the gas pressure. We can imagine a region in the stellar atmosphere with a density lower than its surroundings, but with a higher radiation pressure. Such a region would rise through the atmosphere, with radiation diffusing in from the sides, leading to an even higher radiation pressure. This effect can transport radiation more efficiently than a homogeneous atmosphere, allowing a much higher total radiation rate.
In accretion discs, luminosities as high as 10-100 times the Eddington limit could then be reached without experiencing the previously mentioned instabilities.
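As a rough numerical illustration (my own sketch, not part of the original article), the classical electron-scattering limit above can be evaluated directly. The short Python snippet below uses CGS constants and reproduces the familiar value of roughly 1.3 × 10³⁸ erg/s per solar mass:

    import math

    G = 6.674e-8          # gravitational constant [cm^3 g^-1 s^-2]
    c = 2.998e10          # speed of light [cm/s]
    m_p = 1.673e-24       # proton mass [g]
    sigma_T = 6.652e-25   # Thomson cross-section [cm^2]
    M_sun = 1.989e33      # solar mass [g]
    L_sun = 3.828e33      # solar luminosity [erg/s]

    def eddington_luminosity(mass_in_solar_masses):
        """Classical (electron-scattering) Eddington luminosity in erg/s."""
        M = mass_in_solar_masses * M_sun
        return 4.0 * math.pi * G * M * m_p * c / sigma_T

    L_edd = eddington_luminosity(1.0)
    print("L_Edd(1 M_sun) = %.3e erg/s = %.3e L_sun" % (L_edd, L_edd / L_sun))
    # prints roughly 1.26e+38 erg/s, i.e. about 3.3e+04 L_sun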
http://www.citizendia.org/Eddington_luminosity
Reserving an area of memory to store a value is referred to as declaring a variable. To declare a variable, use the DECLARE keyword with the following formula:

    DECLARE Variable1 Options;

To declare more than one variable, you can use the following formula:

    DECLARE Variable1 Options; Variable2 Options; Variable_n Options;

The DECLARE keyword lets the interpreter know that you are making a declaration. Each variable must have a name that follows the language's naming rules. When declaring a variable, after setting its name, you must specify the amount of memory that the variable will need to store its value. Since there are various kinds of information a database can deal with, PL/SQL provides a set of data types. Therefore, a variable declaration uses the following formulas:

    DECLARE Variable1 DataType;

    DECLARE Variable1 DataType1; Variable2 DataType2; Variable_n DataType_n;

When declaring a variable, the interpreter reserves a space in the computer memory for it, but the space doesn't necessarily hold a recognizable value. This means that, at this time, the variable is null. One way you can change this is to give a value to the variable. This is referred to as initializing the variable. There are two ways you can initialize a variable: when declaring it, or after declaring it. To initialize a variable when declaring it, follow its name with the := operator and the desired value:

    DECLARE Variable1 DataType := Value;

If you are declaring more than one variable and you want to initialize one or more of them, follow each desired one with := and the necessary value:

    DECLARE Variable1 DataType1 := Value1; Variable2 DataType2 := Value2; Variable_n DataType_n := Value_n;

After declaring and initializing a variable, you can use it. For example, you can pass it in the parentheses of DBMS_OUTPUT.PUT_LINE().

A field of characters can consist of any kind of alphabetic symbol in any combination, readable or not. If you want a variable to hold a fixed number of characters, such as the book shelf numbers of a library, declare it with the CHAR data type. Here is an example:

    DECLARE Gender CHAR;

By default, the CHAR data type applies to a variable that holds one character at a time. After declaring the variable, when initializing it, include its value in single-quotes, and use only one character. Here is an example:

    SQL> DECLARE Gender CHAR := 'F';
      2  BEGIN
      3      DBMS_OUTPUT.PUT_LINE('Gender: ' || Gender);
      4  END;
      5  /
    Gender: F

    PL/SQL procedure successfully completed.

A string is a character or a combination of characters. If a variable will hold strings of different lengths, you can declare it using either the VARCHAR2 or the NVARCHAR2 data type. The maximum length of text that the variable can hold is 32767 bytes. In some circumstances, you will need to change or specify the number of characters used in a string variable. To specify the maximum number of characters that can be stored in a string variable, on the right side of CHAR, VARCHAR2, or NVARCHAR2, type an opening and a closing parenthesis, and inside the parentheses type the desired number. To initialize the variable, include its value between single-quotes. Here is an example:

    SQL> DECLARE FirstName NVARCHAR2(40) := 'Patricia';
      2  BEGIN
      3      DBMS_OUTPUT.PUT_LINE('First Name: ' || FirstName);
      4  END;
      5  /
    First Name: Patricia

    PL/SQL procedure successfully completed.
The NCHAR and NVARCHAR2 types follow the same rules as CHAR and VARCHAR2 respectively, except that they can be applied to variables that would hold international characters; that is, characters of languages other than US English. This is done following the rules of Unicode formats.

An integer, also called a whole number, is a number that can start with a + or a − sign and is made of digits. Between the digits, no character other than a digit is allowed. When the number starts with +, such as +44 or +8025, the number is referred to as positive, and you can omit the starting + sign; the number would then be written as 44 or 8025. Any number that starts with + or simply a digit is considered positive. A positive integer is also referred to as unsigned. On the other hand, a number that starts with a − symbol is referred to as negative.

To declare an integer variable, use the INT or the INTEGER data type. You can initialize the variable with a number between -2,147,483,648 and 2,147,483,647. Here is an example:

    SQL> DECLARE Distance INTEGER := 628635;
      2  BEGIN
      3      DBMS_OUTPUT.PUT_LINE('Distance: ' || Distance);
      4  END;
      5  /

If you want a variable that would hold small integers, declare it using the SMALLINT data type. When initializing the variable, assign it a small number between -32,768 and 32,767. Here is an example:

    SQL> DECLARE Age SMALLINT := 36;
      2  BEGIN
      3      DBMS_OUTPUT.PUT_LINE('Age: ' || Age);
      4  END;
      5  /

A decimal number is a number that can have a period (or the character used as the decimal separator as set in the Control Panel) between the digits. An example would be 12.625 or 44.80. Like an integer, a decimal number can start with a + or just a digit, which would make it a positive number. A decimal number can also start with a − symbol, which would make it a negative number. If the number represents a fraction, the period between the digits separates the whole part from the fractional part. A floating-point number is a fractional number. To declare a variable for decimal values that do not require too much precision, use the FLOAT or REAL data type. Here is an example:

    SQL> DECLARE Measure FLOAT := 36.12;
      2  BEGIN
      3      DBMS_OUTPUT.PUT_LINE('Measure: ' || Measure);
      4  END;
      5  /

To declare a variable for decimal values, use the NUMBER data type. The precision is the number of digits used to display a numeric value. For example, the number 42005 has a precision of 5, while 226 has a precision of 3. If the data type is specified as an integer (INT and its variants) or a floating-point number (FLOAT and REAL), the precision is fixed by the database and you can just accept the value set by the interpreter. The scale of a number is the number of digits on the right side of the period (or the character set as the separator for decimal numbers for your language, as specified in the Control Panel). The scale is used only for numbers that have a decimal part. To control the level of precision applied to a NUMBER variable, follow the NUMBER data type with parentheses. In the parentheses, use two values separated by a comma: the left value represents the precision and the right value represents the scale. The precision can be an integer from 1 to 38. Here is an example:

    SQL> DECLARE Measure NUMBER(8, 3) := 284636.48;
      2  BEGIN
      3      DBMS_OUTPUT.PUT_LINE('Measure: ' || Measure);
      4  END;
      5  /

A DATE data type is used for a variable whose values would consist of date and/or time values. The entries must be valid date or time values.
The date value of a DATE variable can range from January 1, 4712 BC to December 31, 9999 AD. To initialize a DATE variable, include its value between single-quotes. For a date, use a format such as 06-Feb-1996 (day, three-letter month, year): the first number represents the day, and if it is between 1 and 9 you can omit or include a leading 0. The second section contains the 3-letter name of the month, in any case of your choice (remember that SQL is not case-sensitive). The right section contains the value of the year. To be on the safe side, you should always express the year with 4 digits. Here is an example:

    SQL> DECLARE DateOfBirth DATE := '06-Feb-1996';
      2  BEGIN
      3      DBMS_OUTPUT.PUT_LINE('Date of Birth: ' || DateOfBirth);
      4  END;
      5  /
    Date of Birth: 06-FEB-96

    PL/SQL procedure successfully completed.

A Boolean value is a piece of information stated as being true or false. To declare a Boolean variable, use the BOOLEAN type. Here is an example:

    DECLARE IsOrganDonor BOOLEAN;

As stated previously, you can initialize the variable when declaring it. Here is an example:

    DECLARE IsOrganDonor BOOLEAN := TRUE;

To initialize the variable after declaring it, in the BEGIN...END section, access the variable and assign the desired value. Here is an example:

    DECLARE
        IsOrganDonor BOOLEAN;
    BEGIN
        IsOrganDonor := TRUE;
    END;
    /
http://www.functionx.com/oracle/Lesson04.htm
To graph secant and cosecant, find values of the reciprocal functions and plot them on the coordinate plane. Unlike the graphs of sine and cosine, secant and cosecant have vertical asymptotes wherever the cosine and sine equal zero, respectively. Graphing transformations is made easier by substituting theta for the quantity in parentheses and solving for x. Also, notice that neither graph has x-intercepts. I want to graph transformations of the secant and cosecant functions, but first let's take another look at the graph of y equals secant theta. Let's recall that secant is 1 over cosine theta, and so it inherits a lot of properties from cosine theta. For example, cosine theta is even, so secant is too, and it's easy to show that. Secant of negative theta is 1 over cosine of negative theta, and that's of course cosine theta. So 1 over cosine theta is secant theta, and that proves that secant is even. Now we'll use that right away. Let's plot some points for secant theta. First, let's start with secant of 0. What's cosine of 0? It's 1, right? So secant of 0 will be 1 over 1, which is 1. Let's try pi over 3. I like pi over 3 because cosine of pi over 3 is a half, so secant will be 2. And then something interesting happens at pi over 2: cosine is 0, so secant is going to be undefined. Now let's keep in mind that secant is an even function, so secant of negative pi over 3 is going to be the same as secant of pi over 3, which is 2. And at negative pi over 2 it'll be undefined, just as it is here. And so this gives us enough to plot half a period of the secant function, so let's do that right now. We have vertical asymptotes at negative pi over 2 and pi over 2, and then we've got 3 points to use: (0, 1), (pi over 3, 2) and (negative pi over 3, 2). So let's plot this. Pi over 3 is two thirds of the way from 0 to pi over 2; alright, that's pi over 6, pi over 3, and then negative pi over 6, negative pi over 3. And so half a period of secant looks like this: it's kind of a u-shaped graph with asymptotes on either side. Now what happens elsewhere? Let's make the observation that secant inherits another property from cosine, the add-pi property. What's secant of theta plus pi? It's 1 over cosine of theta plus pi, and when we add pi to the argument of cosine we get negative cosine. So this is 1 over minus cosine theta, and that means this will be minus secant theta. What this tells us is that if I add pi to an angle, I take the opposite value. So, for example, let me start here: if I add pi to negative pi over 3, I get 2 pi over 3, and I take the opposite value, negative 2. This makes it really easy to extend this table downward. So if I add pi to 0 I get pi, and I take the opposite value, negative 1. Add pi to pi over 3 and I get 4 pi over 3; take the opposite value, negative 2. Add pi to pi over 2 and I get 3 pi over 2, which is still going to be undefined. And so I have another half period: negative 2, negative 1, negative 2 at 2 pi over 3, pi, and 4 pi over 3. Add another vertical asymptote at 3 pi over 2. Let me plot that: this is the second half period, and notice all the values are negative. We have negative 1 at pi; at 2 pi over 3, which is a third of the way from pi over 2 to pi, we've got negative 2. And then here we have negative 2 again, and so we've got this upside-down u-shape. How are these 2 shapes related? They are exactly the same shape: take this one, flip it across the x-axis, and shift it pi units to the right, and you'll get this piece here.
And once you've got these 2 pieces, you've got a complete period of the secant function, and you can get more periods by taking this graph and shifting it to the right or left. So, for example, if you wanted to extend this to the right, take this piece: the value at (negative pi over 3, 2) would have a corresponding value here, the value at (0, 1) would have a corresponding value right here, and then another pi over 3 to the right we'd be back up at 2. So we'd have another upward u-shape here, and we could extend backwards too: we'd have another negative 1 here, another negative 2 here, so you can extend it as far as you need to. But just remember that first half period: to get from the first piece to the second, you flip across the x-axis and shift to the right by pi.
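As a quick cross-check of the hand-plotted points (my own sketch, not part of the lesson), the following Python snippet plots y = sec(theta) over one full period with matplotlib, masking values near the vertical asymptotes so the separate branches are not joined:

    import numpy as np
    import matplotlib.pyplot as plt

    theta = np.linspace(-np.pi / 2, 3 * np.pi / 2, 2000)
    sec = 1.0 / np.cos(theta)
    sec[np.abs(np.cos(theta)) < 0.05] = np.nan   # break the curve at the asymptotes

    plt.plot(theta, sec)
    # the points from the table: (0, 1), (pi/3, 2), (-pi/3, 2),
    # (pi, -1), (2pi/3, -2), (4pi/3, -2)
    plt.scatter([0, np.pi/3, -np.pi/3, np.pi, 2*np.pi/3, 4*np.pi/3],
                [1, 2, 2, -1, -2, -2])
    plt.ylim(-6, 6)
    plt.title("y = sec(theta): one full period")
    plt.show()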
http://www.brightstorm.com/math/precalculus/trigonometric-functions/transforming-secant-and-cosecant-problem-1/
- Iowa Core Mathematics (.pdf) - Iowa Core Mathematics (.doc) - Iowa Core Mathematics with DOK (.pdf) - Iowa Core Mathematics with DOK (.doc) Resources to support Iowa Core Mathematics Standards in this domain: Solve problems involving measurement and estimation of intervals of time, liquid volumes, and masses of objects. - 3.MD.1. Tell and write time to the nearest minute and measure time intervals in minutes. Solve word problems involving addition and subtraction of time intervals in minutes, e.g., by representing the problem on a number line diagram. - 3.MD.2. Measure and estimate liquid volumes and masses of objects using standard units of grams (g), kilograms (kg), and liters (l).1 Add, subtract, multiply, or divide to solve one-step word problems involving masses or volumes that are given in the same units, e.g., by using drawings (such as a beaker with a measurement scale) to represent the problem.2 Represent and interpret data. - 3.MD.3. Draw a scaled picture graph and a scaled bar graph to represent a data set with several categories. Solve one- and two-step “how many more” and “how many less” problems using information presented in scaled bar graphs. For example, draw a bar graph in which each square in the bar graph might represent 5 pets. - 3.MD.4. Generate measurement data by measuring lengths using rulers marked with halves and fourths of an inch. Show the data by making a line plot, where the horizontal scale is marked off in appropriate units— whole numbers, halves, or quarters. Geometric measurement: understand concepts of area and relate area to multiplication and to addition. - 3.MD.5.Recognize area as an attribute of plane figures and understand concepts of area measurement. - A square with side length 1 unit, called “a unit square,” is said to have “one square unit” of area, and can be used to measure area. - A plane figure which can be covered without gaps or overlaps by n unit squares is said to have an area of n square units. - 3.MD.6. Measure areas by counting unit squares (square cm, square m, square in, square ft, and improvised units). - 3.MD.7.Relate area to the operations of multiplication and addition. - Find the area of a rectangle with whole-number side lengths by tiling it, and show that the area is the same as would be found by multiplying the side lengths. - Multiply side lengths to find areas of rectangles with whole-number side lengths in the context of solving real world and mathematical problems, and represent whole-number products as rectangular areas in mathematical reasoning. - Use tiling to show in a concrete case that the area of a rectangle with whole-number side lengths a and b + c is the sum of a × b and a × c. Use area models to represent the distributive property in mathematical reasoning. - Recognize area as additive. Find areas of rectilinear figures by decomposing them into non-overlapping rectangles and adding the areas of the non-overlapping parts, applying this technique to solve real world problems. Geometric measurement: recognize perimeter as an attribute of plane figures and distinguish between linear and area measures. - 3.MD.8. Solve real world and mathematical problems involving perimeters of polygons, including finding the perimeter given the side lengths, finding an unknown side length, and exhibiting rectangles with the same perimeter and different areas or with the same area and different perimeters. 1 Excludes compound units such as cm3 and finding the geometric volume of a container. 
2 Excludes multiplicative comparison problems (problems involving notions of “times as much”; see Glossary, Table 2).
http://educateiowa.gov/index.php?option=com_content&view=article&id=2266&Itemid=4367
In the early days of computing, the data we worked with consisted of integers, real numbers, and characters. Later, we moved on to time and money data. Today, as we increasingly deal with environmental and other geographic information, we need new ways of looking at spatial data.

For millennia, cartographers have attempted to represent the round Earth on flat maps. The first four decades of geographic information systems (GIS) have attempted to automate this process, typically using a "flat Earth" paradigm of map sheets and two-dimensional coordinates. The result has been an unwieldy collection of complex math, preset views, and location-dependent precision. An alternative is to model the Earth using a "round Earth" paradigm. In this way, we can roam freely with our geographic applications, modeling surface features without restriction, and calculating spatial relationships with uniform high precision. In this article we'll demonstrate an approach to representing the location, storage, retrieval, and manipulation of data in terms of its spatial relationships. We'll use elementary trigonometry and three-dimensional vector algebra to develop programs that demonstrate the key ideas. Then we'll build on these concepts to show how you can develop a complete GIS that has unprecedented speed and precision, without the use of a conventional GIS solution.

To illustrate these concepts, let's build a simple geographical atlas that lets you roam anywhere on the globe, viewing surface features at varying scales. In the general case, we would model our geographic features of interest as points, lines, areas, or volumes. For simplicity, this application will deal only with line objects. The geographic location of a line object can be given by an ordered set of vertex coordinates. Figure 1 illustrates some sample application objects; a companion table provides their numeric specification in the familiar terms of latitude and longitude--the angles that give the location of geographic features relative to the equator and a prime meridian. The frame of reference is geocentric, meaning that the angles are measured from the center of the Earth; see Figure 2. Latitude is labeled phi and longitude is labeled lambda.

While early scientists thought of the planet as a perfect sphere, we now know it is somewhat flattened at the poles, an "ellipsoid of rotation." However, since the eccentricity of the Earth is not great (less than a third of one percent), we'll assume for the moment that the Earth is indeed a perfect sphere. Since latitudes and longitudes are angles, when we work with them we must be prepared to calculate sines, cosines, tangents, arc tangents, and the like. Even with today's math coprocessors, this can get messy. For instance, have you ever tried to find the tangent of 90 degrees? You will if your application deals with objects in the polar regions. Generally, such calculations lack a geographically uniform distribution of precision.

Luckily, a point's location on the Earth's surface can be represented in other ways. Consider a 3-D geocentric space having three orthogonal axes projecting through the equator and the poles. Call these axes X, Y, and Z. Now we can locate a point on the surface with the three coordinates x,y,z; see Figure 3. The X axis projects through the Atlantic ocean just off West Africa, the Y axis projects through the Indian ocean just west of Sumatra, and the Z axis projects through the North Pole. The pictured surface point P(x,y,z) might be somewhere in northern Afghanistan.
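As a minimal sketch of the conversion implied here (assuming a perfectly spherical Earth and treating the surface as a unit sphere; the function names are mine, not the article's), latitude and longitude can be turned into the geocentric (x, y, z) coordinates just described with a few lines of Python:

    import math

    def to_geocentric(lat_deg, lon_deg):
        """Unit-sphere (x, y, z) for a geocentric latitude/longitude in degrees."""
        lat, lon = math.radians(lat_deg), math.radians(lon_deg)
        return (math.cos(lat) * math.cos(lon),   # X axis: 0 N, 0 E (off West Africa)
                math.cos(lat) * math.sin(lon),   # Y axis: 0 N, 90 E (Indian Ocean)
                math.sin(lat))                   # Z axis: North Pole

    def to_lat_lon(x, y, z):
        """Inverse conversion back to degrees of latitude and longitude."""
        return math.degrees(math.asin(z)), math.degrees(math.atan2(y, x))

    print(to_geocentric(90.0, 0.0))   # North Pole -> approximately (0, 0, 1)
    print(to_geocentric(0.0, 90.0))   # Indian Ocean point -> approximately (0, 1, 0)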
Given the 3-D space just described, there's another way to describe the location of a surface point. Instead of referring to its coordinates directly, we could describe the vector perpendicular to the surface at that point. For a perfectly spherical Earth, this normal would pass through the center. It has unit length, and its direction is defined by the angles formed between it and the X, Y, and Z axes. These angles are called direction angles. We'll be working with the cosines of the direction angles--direction cosines--labeled di, dj, and dk, respectively. The point in Afghanistan can now be referred to as P(di,dj,dk); the point off West Africa has the coordinates (1, 0, 0); the point in the Indian ocean is at (0,1,0); the North Pole point is at (0,0,1); and the South Pole point is at (0,0,-1); see Figure 4. Recording direction cosines as double types in C typically provides sub-millimetric precision. This usually surpasses the precision of your very best field data. A companion listing shows some geometric vector-algebra functions and their supporting structures and constants.

Most developers are familiar with latitude and longitude. In addition, there are "flat-Earth" coordinate systems such as UTMs and State Plane that are used by surveyors and map makers. Few, however, are familiar with direction cosines. Consequently, if our new system is to be of any use, we'll need an efficient method of converting between direction cosines and these other coordinates. For simplicity, we'll restrict input to latitudes and longitudes. We begin by converting a file of geographic data to direction cosines.

At first glance, using direction cosines to locate a point on the Earth's surface seems inefficient, since we're trading two items (latitude and longitude) for three. But in modeling geographic objects, we often have multiple locations associated with specific objects. For example, a line object such as a coastline, river, or road is usually modeled as an ordered sequence of connected vertices. In such instances, we might select a single, "central" location and relate all the associated vertices to it. But will this "differential" position encoding be effective? In developing planar projections, map makers look for the recognizability of shapes (conformity) and the uniformity of scale in all directions (isometry). One of their best efforts is the stereographic projection which, over moderate distances, produces a view of the Earth that's both conformal and isometric. (Despite its name, this projection of 3-D onto 2-D provides no depth perception.) If an object is restricted in size, it can be represented in the plane of a stereographic projection without significant distortion. This means we can use a specific scheme of differential location recording in which each vertex of the line is encoded as a stereographic planar displacement from some central position. As such, this differential value will have just two components, say dx and dy. Using only short int types for dx and dy, resolution of better than a meter can be maintained for surface objects as large as ten kilometers in extent. For better resolution, we can use float or long int types; for poorer, we can use signed char. So, typically, we'll have traded in three doubles for two short ints, a significant reduction in storage requirements. We refer to these differentially encoded coordinates as local coordinates. The full, three-element direction cosine global coordinates can easily be reconstructed at any time, using only elementary vector algebra.
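The following Python sketch illustrates one way to realize the differential encoding just described, assuming a spherical Earth and a stereographic projection onto the tangent plane at the object's center. The names, the radius constant, and the exact formulas are mine, not the article's listings or the Hipparchus library's:

    import math

    R_EARTH = 6371000.0   # assumed mean Earth radius, meters (spherical model)

    def _east_north(c):
        """Local east and north unit vectors of the tangent plane at center c
        (c is a unit vector; a center exactly at a pole is not handled here)."""
        ci, cj, ck = c
        h = math.hypot(ci, cj)
        east = (-cj / h, ci / h, 0.0)
        north = (-ck * east[1], ck * east[0], ci * east[1] - cj * east[0])  # c x east
        return east, north

    def encode(p, c):
        """Stereographic tangent-plane displacement (dx, dy), in meters, of
        vertex p relative to center c; both are unit direction-cosine vectors."""
        east, north = _east_north(c)
        d = sum(pi * ci for pi, ci in zip(p, c))                     # p . c
        q = [2.0 * (pi - d * ci) / (1.0 + d) for pi, ci in zip(p, c)]
        return (R_EARTH * sum(qi * ei for qi, ei in zip(q, east)),
                R_EARTH * sum(qi * ni for qi, ni in zip(q, north)))

    def decode(dx, dy, c):
        """Rebuild the unit vector p from its local coordinates (dx, dy) and c."""
        east, north = _east_north(c)
        q = [dx / R_EARTH * e + dy / R_EARTH * n for e, n in zip(east, north)]
        q2 = sum(qi * qi for qi in q)
        return tuple(((4.0 - q2) * ci + 4.0 * qi) / (4.0 + q2)
                     for ci, qi in zip(c, q))

    # Quick round trip: a vertex 0.01 degrees east of a center on the equator.
    lon = math.radians(0.01)
    p = (math.cos(lon), math.sin(lon), 0.0)
    c = (1.0, 0.0, 0.0)
    dx, dy = encode(p, c)
    print(round(dx, 3), round(dy, 3))    # roughly 1112 m east, 0 m north
    print(decode(dx, dy, c))             # recovers p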
Since we're creating an application to select and display terrestrial "objects," it makes sense to store the data externally under some kind of object-oriented scheme. But how should the objects be indexed? Conventional wisdom suggests that we index our data on the basis of decomposition (or hashing) of the object's coordinates. For this application, however, let's try something different.

First, let's establish a file as the general repository for the local coordinates of all the vertices of all of the objects modeled. We'll provide access to the individual parts of this file using file pointers. Next, let's set up an index file of object headers. Each header will hold the object's identifier, the global coordinates of its center, a file pointer to the local coordinates of its vertices, and the vertices count. The object's identifier can serve as a link to its other attributes (if any). The header will also contain an estimate of the object's geographic extent, described shortly. When we load the database with an object, we can determine its "center" by calculating the "vector mean" of the vertices' direction cosines. We can then use this center to differentially encode coordinates for the vertices. Because this application will let you zoom in and out through a wide range of scales, we're providing two classes of line objects: those required for close-ups (dense) and those needed only for wide-area presentations (sparse).

Since the application is to be interactive, we'll want to reduce unnecessary data retrieval and processing time (especially if we don't have floating-point hardware). To determine if objects are "onscreen" or not, the application will need to know their geographic extents. Using vector algebra, we can calculate these as surface distances, for which we'll need arc (or great circle) distances. While we're loading objects into the database, it will prove useful to calculate and store the maximum great circle distance that any vertex is displaced from the object's center; see Figure 5. The accompanying listing and the functions it calls provide code to read and convert location data to direction cosines, differentially encode them, calculate distances, and build a location-dependent database.

Our application provides a window on the world, so to speak, by displaying objects that come within a field of view you select. A view is defined in terms of location and scale. The location of the display's center can be expressed as a latitude and longitude. Scale can be expressed as the ratio between a distance on the screen and a distance on the ground. Figure 6 illustrates such a window, while another listing and its called functions show the code needed to establish an initial view and scale.

Now that the field of view is defined, we can locate objects that might come into that field using distance calculations. If you think of the display as circular rather than rectangular, then you can calculate a maximum radius for the display. You can go to the database and select those objects that might be displayed. (The graphics-library clipping function will fine-tune the selection later.) The header for each object contains the maximum distance of any vertex from the object's center. This was calculated and stored when we loaded the database. So, to determine if the object might be in the field of view, simply:

1. find the distance between its center point and that of the display;
2. subtract the maximum radius for the object; and
3. subtract the maximum radius for the display.
If the result is negative, you'll want to retrieve the object from the database for further processing; otherwise, ignore it. Figure 7 illustrates both of these conditions, and the accompanying listings provide code to make the selection and bring the selected objects into memory.

Next we need to project each object's vertices into the plane of the display (the projection plane), which is generally not the same as the plane in which the object's local coordinates were encoded. For simplicity, we'll go back to the sphere and reproject the object's vertices using, for this example, the stereographic projection. Other projections--gnomonic, orthographic, Mercator, and the like--might also be used. Gnomonic is the easiest, but stereographic looks better and is worth the effort. Further listings give the code to perform the projections and draw the objects. (For more about map projections, see "Map Projections Used By The U.S. Geological Survey, Sec. Ed.," Geological Survey Bulletin 1532, Department of the Interior, U.S. Government Printing Office, Washington, DC, 1984.)

Suppose you want to change the scale or view of the display. Simply modify these items and repeat the previous operations. A simple outside loop that changes the scale or map center point will work. A short listing shows code to accept changes via the sign and arrow keys. That completes our simple atlas application. Even with the slowest PC, you can now inspect the world's coastlines without preselection of view or scale. To more fully exercise the system, raw world-coastline data (in ASCII form) is available for download, together with a prebuilt world-coastline database (in binary); an executable View program in DOS real mode, compiled for VGA with math coprocessor emulation; and ASCII source code for the programs.

Since the Earth is closer to an ellipsoid of rotation than a sphere, we'll need to extend our vector algebra. The required quadratic vector algebra has been fully implemented in the Hipparchus Geopositioning Model with significant improvements in speed and precision over conventional geodesy methods. (See Geodesy, by Henry D. Bomford, Oxford University Press, 1973.)

For this sample application, we calculated a local center point for each object and then used this to select objects from the database. We also used these center points to encode the large number of vertex coordinates associated with our objects. Suppose we could precalculate a set of center points that would serve the same purposes for all the objects in the database. Ideally, such a set of center points would provide both fast spatial indexing and a flexible association with objects. In such a spatial index, each indexed database "bucket" would hold some prescribed maximum number of object-defining coordinates. Then we could have geographically large cells for surface regions where we've little or no data and geographically small cells where we have a lot of data. The Hipparchus Geopositioning Model implements just such a scheme using a flexible partitioning system called a "Voronoi cell structure." Figure 8 shows one such tessellation of the Earth. The structure illustrated would be suitable for indexing population-related data objects. Voronoi cell structures are always global, even if the application is localized. A cell structure is defined by its cell center points. For each cell, the structure includes a unique cell identifier, the global coordinates of the cell's center point, the cell's maximum radius, and an ordered list of neighbor-cell identifiers. The boundaries between cells exist only mathematically.
The special property of the Voronoi cell structure is that any surface point can be classified unambiguously as belonging to one cell or another on the basis of surface distance. A point is always closer to the center point of its "owner" cell than to the center point of any other cell. For a discussion of the Voronoi tessellation of the plane, see Algorithms, Second Edition, by Robert Sedgewick (Addison-Wesley, 1988). For a description of its adaptation to the surface of the ellipsoid, see "Hipparchus Geopositioning Model: An Overview," by H. Lukatela in Proceedings of Auto Carto 8 (American Society for Photogrammetry and Remote Sensing, 1987), and the Hipparchus Tutorial by Ron V. Gilmore (Geodyssey, 1992).

In the context of a Voronoi cell structure, an object's vertices are associated with their closest-cell center points as well as an object header. Objects can then be defined without geographic size restriction of any kind. Objects can consist of sets of points, lines, or regions spanning any number of cells. Regions can be non-simply connected: an island group can be modeled as a single object, islands can have interior lakes with islands, and so on. Volumes can be modeled as regions having elevation or depth attributes. Figure 9 shows the intersection of two overlapping region objects in the Voronoi context. Cell center points rather than object centers are used for the differential encoding of coordinates. Lists are maintained for each case; for more about these data structures, refer again to "Hipparchus Geopositioning Model: An Overview" and the Hipparchus Tutorial.

When used as an index to objects stored externally, the Voronoi cell structure proves remarkably effective in reducing unnecessary disk accesses. Not only are all the cells containing object data known to the application program, but cells associated with open windows are known as well. As you pan and zoom the window, precise retrieval instructions can be fed to the database. References to random locations are traced to their owner cells by a geographically direct search route. Ownership of a point by a particular cell is confirmed when a comparison of distances with the cell's immediate neighbor cell center points shows them to be more distant. In this application, we had to search our entire index to determine which data was to be selected. This was because we knew of no way to map directly from the 3-D ordered domain of our real-world objects into the linearly ordered domain of the computer. But when we associate these objects with a Voronoi cell structure, the situation changes. The unambiguous classification of object vertices into a specific, linearly ordered structure of cells makes possible the use of hierarchical searches for the data, resulting in significant efficiencies. Since the order of cell identifiers in a cell structure is irrelevant to its algorithmic operation, cells can be arranged in any order. Therefore, data-access bias can be arbitrarily imposed without affecting the logic of the application.

The demand for efficient handling of crushing volumes of spatial data has arrived. Round-Earth vector algebra and the Voronoi tessellation can be combined to provide unrestricted modeling and efficient manipulation of terrestrial objects. Precise spatial indexing can be provided on the basis of distance calculations rather than coordinate decomposition. Monolithic geographic information systems may soon be history.
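To make the distance-based selection test described earlier and the Voronoi ownership rule just stated concrete, here is a small Python sketch. The data layout and function names are assumptions of mine, and a spherical model is used; a real implementation would work on the ellipsoid and walk the neighbor lists rather than scanning every cell:

    import math

    R_EARTH = 6371000.0   # assumed mean Earth radius, meters

    def arc_distance(u, v):
        """Great-circle (surface) distance in meters between two unit vectors."""
        dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(u, v))))
        return R_EARTH * math.acos(dot)

    def maybe_on_screen(obj_center, obj_radius, view_center, view_radius):
        """The selection test: the object may be visible only if the
        center-to-center distance minus both maximum radii is negative."""
        return arc_distance(obj_center, view_center) - obj_radius - view_radius < 0.0

    def owner_cell(point, cells):
        """Voronoi ownership: the cell whose center is nearest by surface distance.
        cells is an iterable of (cell_id, center_unit_vector) pairs."""
        return min(cells, key=lambda cell: arc_distance(point, cell[1]))[0]

    # Example: two cells centered on the poles; a northern-hemisphere point
    # belongs to the northern cell.
    cells = [("north", (0.0, 0.0, 1.0)), ("south", (0.0, 0.0, -1.0))]
    print(owner_cell((0.6, 0.0, 0.8), cells))    # -> north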
http://www.geodyssey.com/papers/dobbs92.html
The Physics Philes, lesson 12: ‘Round and ‘Round It Goes

In which cars are driven, friction is revisited, and sleds slide. Last week I tried to explain circular motion. I spent this week trying to understand how the circular motion equations are applied. Success in this regard has been…um…nonuniform. But I think I have a pretty good grasp of the basics. Let’s see if I’m right!

First, let’s do a little review. As we know from last week, when a particle moves in a circular path at a constant speed, the particle’s acceleration is directed toward the center of the circle. (Remember, there is always acceleration toward the center because the particle is constantly changing direction.) As we saw last week, this centripetal acceleration can be expressed in terms of speed and the radius of the circle: a_rad = v²/R. Centripetal acceleration can also be expressed in terms of period T, or the time it takes for a particle to make one trip around the circle: a_rad = 4π²R/T².

Uniform circular motion is governed by Newton’s second law of motion. Remember Newton’s second law? Force equals mass times acceleration. Or, in math: ∑F = ma. For a particle to move in a circle, the net force on the particle must be directed toward the center. We can find this net force by substituting a_rad = v²/R into Newton’s famous equation: ∑F = ma_rad = mv²/R. The particle doesn’t need to go in a full circle to find the net force. The equation is good for any path that can be regarded as part of a circular arc. Review over! Let’s try some questions!

Let’s say we’ve got a 25.0 kg sled on some frictionless ice. The sled is tied to a pole by a 5.00 m piece of rope. Once the sled is given a little push, the sled revolves uniformly around the pole and makes five revolutions a minute. What is the force exerted on the sled by the rope? Since the sled is moving uniformly, we are dealing with uniform circular motion, which means that the only acceleration we have to deal with is the radial (or centripetal) acceleration. To find our target variable (force F) we’ll use Newton’s second law of motion. Here is a diagram of a sled going in a circle: But it’s not very useful in solving this problem. Here is a (more helpful) free-body diagram: I included both the situational diagram and the free-body diagram because, if you’re like me, you might have a little trouble figuring out which way to orient your axes. But look, I’ve oriented the positive x-axis to point toward the middle of the circle, because that is the direction the radial acceleration must be pulling. The question doesn’t tell us what the acceleration is, so before we use Newton’s second law we have to find the radial acceleration. Which means we’ll have to use a_rad = 4π²R/T². The problem says that the sled makes five revolutions every minute. But to find period T we need to figure out how long it takes for the sled to make one revolution. T = (60.0 s)/(5 rev) = 12 s. Now that we know the value for period T, we can plug the values into the radial acceleration equation: a_rad = 4π²(5.00 m)/(12 s)² ≈ 1.37 m/s², which gives a rope force of F = ma_rad = (25.0 kg)(1.37 m/s²) ≈ 34 N.

Let’s say we have a car rounding a flat, unbanked curve with radius R. If the coefficient of static friction between the tires and the road is μ_s, what is the maximum speed at which the driver can take the curve without sliding? OK, since the car’s acceleration as it rounds the curve has a magnitude a = v²/R, the maximum speed will correspond to the maximum radial acceleration and the maximum horizontal force on the car toward the center of its circular path. The only horizontal force acting on the car is the friction force exerted by the road.
So the equations we’ll need are Newton’s second law in component form, applied to the free-body diagram, which includes the static friction force, the weight, and the normal force. The friction force must point toward the center of the circular path in order to cause the radial acceleration. Since the car doesn’t slide toward or away from the center of the circle, the friction force is static friction: f_max = μ_s n. Acceleration is toward the center of the circular path, and there is no vertical acceleration. That means we have:

    ∑F_x = f = mv²/R
    ∑F_y = n − mg = 0

That second equation basically says that the magnitude of the normal force is equal to the magnitude of the weight. The first equation says that the force needed to keep the car moving in a circular path increases as the car’s speed increases. But if we apply what we know about static friction, we know that the force available is f_max = μ_s n = μ_s mg. This will limit how fast the car can go around the track. If we substitute f_max for f and v_max for v in the net force x equation, we get μ_s mg = m v_max²/R. That means that the maximum speed of the car can be expressed with this equation:

    v_max = √(μ_s g R)

There are so many more different ways of using the uniform circular motion equations! Unfortunately, some of them I had a hard time wrapping my head around. Like conical pendulums. I have a feeling that my issues with that stem from my only very tenuous grasp of trigonometry. (I know! I’m working on it. I promise!) Does all of this make sense to you? Did I mess anything up? Do you like my new way of typing equations? Please let me know!

Featured image credit: Mr. Fujisawa
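For readers who want to check the numbers, here is a short Python sketch (my own, using the equations above) that evaluates both worked examples. The friction coefficient and curve radius for the car are assumed sample values, since the post leaves them symbolic:

    import math

    g = 9.80   # m/s^2

    # Sled: m = 25.0 kg, rope R = 5.00 m, 5 revolutions per minute
    m, R = 25.0, 5.00
    T = 60.0 / 5                                   # 12 s per revolution
    a_rad = 4 * math.pi ** 2 * R / T ** 2          # radial acceleration
    print("a_rad = %.3f m/s^2, rope force = %.1f N" % (a_rad, m * a_rad))

    # Car on a flat, unbanked curve: v_max = sqrt(mu_s * g * R).
    # mu_s and the curve radius are assumed sample values, not from the post.
    mu_s, R_curve = 0.80, 200.0
    print("v_max = %.1f m/s" % math.sqrt(mu_s * g * R_curve))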
http://teenskepchick.org/2012/09/03/the-physics-philes-round-and-round-it-goes/
Unit 16 Section 4 : Formulae for circumference and area

There are formulae to find the size of the area (A) and circumference (C) of a circle. To find either of these we need to use the size of the radius (r) or diameter (D).

Radius and Diameter
The length of the radius goes from the edge to the centre of the circle, and the diameter goes all the way from edge to edge through the centre, so the diameter is exactly twice the length of the radius. If we know the radius then we double it to get the diameter, and if we know the diameter then we halve it to get the radius.

Diameter and Circumference
The circumference of a circle is just over three times the diameter, so to calculate the circumference we need to multiply the diameter by a value which is a bit bigger than 3. In fact, the value we have to multiply by is called pi and is represented by the Greek letter π. The value of π is 3.14159265... and the decimal part of the number carries on for ever without recurring. Normally we use the π button on our calculator to solve problems involving pi, but if we do need to work by hand or we only have a basic calculator then we tend to use 3.14 as an approximation.

If we know the circle diameter D then we multiply it by π to get the circumference C. This is normally written as a formula:

C = πD

We can reverse the process too: if we know the circumference then we can divide it by π to find the diameter. Note that if we know the radius we would multiply it by 2 to get the diameter, and then multiply by π to get the circumference. This can be seen in the other formula for circumference:

C = 2πr

Radius and Area
The radius and area of a circle are also linked by this number π, which is roughly 3.14. The formula to find the area A using the radius r is:

A = πr²

It is very important to realise that the r² part of the calculation is done before you multiply by π. This is because BiDMAS tells us that indices (like squaring a number) are calculated before multiplications. So, if we know the circle radius r we can square it and then multiply by π to find the area A. Reversing this process is slightly trickier: to go from the area back to the radius we need to divide by π and then square-root the result. It is important to get these two operations the right way round. The diagram below summarises the operations needed to do calculations involving the measurements in a circle.

Work out the answer to each of these questions, then check whether you are correct.

Practice Question 1
A circle has radius 6 cm. (a) Calculate its area, accurate to 1 decimal place. (b) Calculate its circumference, accurate to 1 decimal place.

Practice Question 2
A circle has diameter 7 m. (a) Calculate its circumference, accurate to 1 decimal place. (b) Calculate its area, accurate to 1 decimal place.

Practice Question 3
The circumference of a circle is 18.2 cm. Calculate the length of the diameter of the circle, accurate to 1 decimal place.

Practice Question 4
The area of a circle is 22.8 m². Calculate the length of the radius of the circle, accurate to 1 decimal place.

Work out the answers to the questions below and fill in the boxes, then check whether you have answered correctly. If you are right, move on to the next question; if not, clear your answer and have another go.
Give your answers to 1 decimal place in all the questions below. Use the π button on your calculator, or 3.14 for π.

You have now completed Unit 16 Section 4.

Produced by A.J.Reynolds, August 2008
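The calculations in this section are easy to check with a few lines of Python using math.pi (a sketch of mine, not part of the original worksheet):

    import math

    def circumference(radius):
        return 2 * math.pi * radius        # C = 2*pi*r  (equivalently pi*D)

    def area(radius):
        return math.pi * radius ** 2       # A = pi*r^2

    # Question 1: radius 6 cm
    print(round(area(6), 1), round(circumference(6), 1))      # 113.1 and 37.7

    # Question 3: circumference 18.2 cm, so D = C / pi
    print(round(18.2 / math.pi, 1))                           # 5.8

    # Question 4: area 22.8 m^2, so r = sqrt(A / pi)
    print(round(math.sqrt(22.8 / math.pi), 1))                # 2.7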
http://www.cimt.plymouth.ac.uk/projects/mepres/book8/bk8i16/bk8_16i4.htm
Map of Earth (figure caption): Lines of longitude appear vertical with varying curvature in this projection, but are actually halves of great ellipses, with identical radii at a given latitude. Lines of latitude appear horizontal with varying curvature in this projection, but are actually circular with different radii; all locations with a given latitude are collectively referred to as a circle of latitude. The equator divides the planet into a Northern Hemisphere and a Southern Hemisphere, and has a latitude of 0°.

Longitude is a geographic coordinate that specifies the east-west position of a point on the Earth's surface. It is an angular measurement, usually expressed in degrees and denoted by the Greek letter lambda (λ). Points with the same longitude lie on lines running from the North Pole to the South Pole. By convention, one of these, the Prime Meridian, which passes through the Royal Observatory, Greenwich, England, establishes the position of zero degrees longitude. The longitude of other places is measured as an angle east or west from the Prime Meridian, ranging from 0° at the Prime Meridian to +180° eastward and −180° westward. Specifically, it is the angle between a plane containing the Prime Meridian and a plane containing the North Pole, South Pole and the location in question. This forms a right-handed coordinate system with the z axis (right hand thumb) pointing from the Earth's center toward the North Pole and the x axis (right hand index finger) extending from Earth's center through the equator at the Prime Meridian.

A location's north-south position along a meridian is given by its latitude, which is (not quite exactly) the angle between the local vertical and the plane of the Equator. If the Earth were perfectly spherical and homogeneous, then longitude at a point would just be the angle between a vertical north-south plane through that point and the plane of the Greenwich meridian. Everywhere on Earth the vertical north-south plane would contain the Earth's axis. But the Earth is not homogeneous, and has mountains, which have gravity and so can shift the vertical plane away from the Earth's axis. The vertical north-south plane still intersects the plane of the Greenwich meridian at some angle; that angle is astronomical longitude, the longitude you calculate from star observations. The longitude shown on maps and GPS devices is the angle between the Greenwich plane and a not-quite-vertical plane through the point; the not-quite-vertical plane is perpendicular to the surface of the spheroid chosen to approximate the Earth's sea-level surface, rather than perpendicular to the sea-level surface itself.

The measurement of longitude is important both to cartography and to provide safe ocean navigation. Mariners and explorers for most of history struggled to determine precise longitude. Finding a method of determining exact longitude took centuries, resulting in the history of longitude recording the effort of some of the greatest scientific minds. The explorer Amerigo Vespucci described the difficulty:

As to longitude, I declare that I found so much difficulty in determining it that I was put to great pains to ascertain the east-west distance I had covered.
The final result of my labours was that I found nothing better to do than to watch for and take observations at night of the conjunction of one planet with another, and especially of the conjunction of the moon with the other planets, because the moon is swifter in her course than any other planet. I compared my observations with an almanac. After I had made experiments many nights, one night, the twenty-third of August 1499, there was a conjunction of the moon with Mars, which according to the almanac was to occur at midnight or a half hour before. I found that...at midnight Mars's position was three and a half degrees to the east.

By comparing the relative positions of the moon and Mars with their anticipated positions, Vespucci was able to crudely deduce his longitude. But this method had several limitations: First, it required the occurrence of a specific astronomical event (in this case, Mars passing through the same right ascension as the moon), and the observer needed to anticipate this event via an astronomical almanac. One needed also to know the precise time, which was difficult to ascertain in foreign lands. Finally, it required a stable viewing platform, rendering the technique useless on the rolling deck of a ship at sea. See Lunar distance (navigation).

In 1612, Galileo Galilei proposed that with sufficiently accurate knowledge of the orbits of the moons of Jupiter one could use their positions as a universal clock, and this would make possible the determination of longitude; but the method he devised was impracticable and it was never used at sea. In the early 18th century there were several maritime disasters attributable to serious errors in reckoning position at sea, such as the loss of four ships of the fleet of Sir Cloudesley Shovell in the Scilly naval disaster of 1707. Motivated by these disasters, in 1714 the British government established the Board of Longitude: prizes were to be awarded to the first person to demonstrate a practical method for determining the longitude of a ship at sea. These prizes motivated many to search for a solution. John Harrison, a self-educated English clockmaker, then invented the marine chronometer, a key piece in solving the problem of accurately establishing longitude at sea, thus revolutionising and extending the possibility of safe long distance sea travel. Though the British rewarded John Harrison for his marine chronometer in 1773, chronometers remained very expensive and the lunar distance method continued to be used for decades. Finally, the combination of the availability of marine chronometers and wireless telegraph time signals put an end to the use of lunars in the 20th century.

Unlike latitude, which has the equator as a natural starting position, there is no natural starting position for longitude. Therefore, a reference meridian had to be chosen. It was a popular practice to use a nation's capital as the starting point, but other significant locations were also used. While British cartographers had long used the Greenwich meridian in London, other references were used elsewhere, including: El Hierro, Rome, Copenhagen, Jerusalem, Saint Petersburg, Pisa, Paris, Philadelphia, Pennsylvania, and Washington D.C. In 1884, the International Meridian Conference adopted the Greenwich meridian as the universal Prime Meridian or zero point of longitude.

Noting and calculating longitude

Longitude is given as an angular measurement ranging from 0° at the Prime Meridian to +180° eastward and −180° westward.
The Greek letter λ (lambda) is used to denote the location of a place on Earth east or west of the Prime Meridian. Each degree of longitude is sub-divided into 60 minutes, each of which is divided into 60 seconds. A longitude is thus specified in sexagesimal notation as 23° 27′ 30″ E. For higher precision, the seconds are specified with a decimal fraction. An alternative representation uses degrees and minutes, where parts of a minute are expressed in decimal notation with a fraction, thus: 23° 27.500′ E. Degrees may also be expressed as a decimal fraction: 23.45833° E. For calculations, the angular measure may be converted to radians, so longitude may also be expressed in this manner as a signed fraction of π (pi), or an unsigned fraction of 2π.

For calculations, the West/East suffix is replaced by a negative sign in the western hemisphere. Confusingly, the convention of negative for East is also sometimes seen. The preferred convention, that East be positive, is consistent with a right-handed Cartesian coordinate system, with the North Pole up. A specific longitude may then be combined with a specific latitude (usually positive in the northern hemisphere) to give a precise position on the Earth's surface.

Longitude at a point may be determined by calculating the time difference between that at its location and Coordinated Universal Time (UTC). Since there are 24 hours in a day and 360 degrees in a circle, the sun moves across the sky at a rate of 15 degrees per hour (360°/24 hours = 15° per hour). So if the time zone a person is in is three hours ahead of UTC then that person is near 45° longitude (3 hours × 15° per hour = 45°). The word near was used because the point might not be at the center of the time zone; also the time zones are defined politically, so their centers and boundaries often do not lie on meridians at multiples of 15°. In order to perform this calculation, however, a person needs to have a chronometer (watch) set to UTC and needs to determine local time by solar or astronomical observation. The details are more complex than described here: see the articles on Universal Time and on the equation of time for more details.

Singularity and discontinuity of longitude

Note that the longitude is singular at the Poles, and calculations that are sufficiently accurate for other positions may be inaccurate at or near the Poles. Also the discontinuity at the ±180° meridian must be handled with care in calculations. An example is a calculation of east displacement by subtracting two longitudes, which gives the wrong answer if the two positions are on either side of this meridian. To avoid these complexities, consider replacing latitude and longitude with another horizontal position representation in calculation.

Plate movement and longitude

The Earth's tectonic plates move relative to one another in different directions at speeds on the order of 50 to 100 mm per year. So points on the Earth's surface on different plates are always in motion relative to one another; for example, the longitudinal difference between a point on the Equator in Uganda, on the African Plate, and a point on the Equator in Ecuador, on the South American Plate, is increasing by about 0.0014 arcseconds per year. These tectonic movements likewise affect latitude. If a global reference frame such as WGS84 is used, the longitude of a place on the surface will change from year to year.
Plate movement and longitude

The Earth's tectonic plates move relative to one another in different directions at speeds on the order of 50 to 100 mm per year. So points on the Earth's surface on different plates are always in motion relative to one another. For example, the longitudinal difference between a point on the Equator in Uganda, on the African Plate, and a point on the Equator in Ecuador, on the South American Plate, is increasing by about 0.0014 arcseconds per year. These tectonic movements likewise affect latitude. If a global reference frame such as WGS84 is used, the longitude of a place on the surface will change from year to year. To minimize this change, when dealing just with points on a single plate, a different reference frame can be used, whose coordinates are fixed to a particular plate, such as NAD83 for North America or ETRS89 for Europe.

Length of a degree of longitude

The length of a degree of longitude depends only on the radius of a circle of latitude. For a sphere of radius a, that radius at latitude φ is a cos φ, and the length of a one-degree (or π/180 radians) arc along a circle of latitude is

(π/180) a cos φ

When the Earth is modelled by an ellipsoid, this arc length becomes

(π a cos φ) / (180 √(1 − e² sin² φ))

where e, the eccentricity of the ellipsoid, is related to the major and minor axes (the equatorial and polar radii respectively) by

e² = (a² − b²) / a²

An alternative formula is

(π/180) a cos β, where tan β = (b/a) tan φ

Cos φ decreases from 1 at the equator to zero at the poles, so the length of a degree of longitude decreases likewise. This contrasts with the small (1%) increase in the length of a degree of latitude, equator to pole. The table shows both for the WGS84 ellipsoid with a = 6,378,137.0 m and b = 6,356,752.3142 m.

|Latitude||Length of 1° of latitude||Length of 1° of longitude|
|0°||110.574 km||111.320 km|
|15°||110.649 km||107.551 km|
|30°||110.852 km||96.486 km|
|45°||111.132 km||78.847 km|
|60°||111.412 km||55.800 km|
|75°||111.618 km||28.902 km|
|90°||111.694 km||0.000 km|

Note that the distance between two points 1 degree apart on the same circle of latitude, measured along that circle of latitude, is slightly more than the shortest (geodesic) distance between those points; the difference is less than 0.6 m.
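As a check on the tabulated values, the following Python sketch evaluates the ellipsoidal formula above with the WGS84 semi-axes quoted in the text; it is illustrative only and reproduces the right-hand column of the table to within rounding.

```python
import math

A = 6378137.0       # WGS84 equatorial radius a, metres (quoted above)
B = 6356752.3142    # WGS84 polar radius b, metres (quoted above)
E2 = (A**2 - B**2) / A**2   # squared eccentricity, e^2 = (a^2 - b^2) / a^2

def degree_of_longitude_km(lat_deg):
    """Length of one degree of longitude along the circle of latitude lat_deg, in km."""
    phi = math.radians(lat_deg)
    arc_m = math.pi * A * math.cos(phi) / (180.0 * math.sqrt(1.0 - E2 * math.sin(phi) ** 2))
    return arc_m / 1000.0

for lat in (0, 15, 30, 45, 60, 75, 90):
    print(f"{lat:2d}°: {degree_of_longitude_km(lat):8.3f} km")
# 0°: 111.319 km ... 45°: 78.847 km ... 90°: 0.000 km
```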
Ecliptic latitude and longitude

Ecliptic latitude and longitude are defined for the planets, stars, and other celestial bodies in a broadly similar way to that in which terrestrial latitude and longitude are defined, but there is a special difference. The plane of zero latitude for celestial objects is the plane of the ecliptic. This plane is not parallel to the plane of the celestial equator, but rather is inclined to it by the obliquity of the ecliptic, which currently has a value of about 23° 26′. The closest celestial counterpart to terrestrial latitude is declination, and the closest celestial counterpart to terrestrial longitude is right ascension. These celestial coordinates bear the same relationship to the celestial equator as terrestrial latitude and longitude do to the terrestrial equator, and they are also more frequently used in astronomy than celestial longitude and latitude. The polar axis (relative to the celestial equator) is perpendicular to the plane of the Equator, and parallel to the terrestrial polar axis. But the (north) pole of the ecliptic, relevant to the definition of ecliptic latitude, is the normal to the ecliptic plane nearest to the direction of the celestial north pole of the Equator, i.e. 23° 26′ away from it.

Ecliptic latitude is measured from 0° to 90° north (+) or south (−) of the ecliptic. Ecliptic longitude is measured from 0° to 360° eastward (the direction that the Sun appears to move relative to the stars), along the ecliptic from the vernal equinox. The equinox at a specific date and time is a fixed equinox, such as that in the J2000 reference frame. However, the equinox moves because it is the intersection of two planes, both of which move. The ecliptic is relatively stationary, wobbling within a 4° diameter circle relative to the fixed stars over millions of years under the gravitational influence of the other planets. The greatest movement is a relatively rapid gyration of Earth's equatorial plane whose pole traces a 47° diameter circle caused by the Moon. This causes the equinox to precess westward along the ecliptic about 50″ per year.

This moving equinox is called the equinox of date. Ecliptic longitude relative to a moving equinox is used whenever the positions of the Sun, Moon, planets, or stars at dates other than that of a fixed equinox are important, as in calendars, astrology, or celestial mechanics. The 'error' of the Julian or Gregorian calendar is always relative to a moving equinox. The years, months, and days of the Chinese calendar all depend on the ecliptic longitudes of date of the Sun and Moon. The 30° zodiacal segments used in astrology are also relative to a moving equinox. Celestial mechanics (here restricted to the motion of solar system bodies) uses both a fixed and moving equinox. Sometimes in the study of Milankovitch cycles, the invariable plane of the solar system is substituted for the moving ecliptic. Longitude may be denominated from 0 to 2π radians in either case.

Longitude on bodies other than Earth

Planetary co-ordinate systems are defined relative to their mean axis of rotation, with various definitions of longitude depending on the body. The longitude systems of most of those bodies with observable rigid surfaces have been defined by references to a surface feature such as a crater. The north pole is that pole of rotation that lies on the north side of the invariable plane of the solar system (near the ecliptic). The location of the Prime Meridian as well as the position of the body's north pole on the celestial sphere may vary with time due to precession of the axis of rotation of the planet (or satellite). If the position angle of the body's Prime Meridian increases with time, the body has a direct (or prograde) rotation; otherwise the rotation is said to be retrograde.

In the absence of other information, the axis of rotation is assumed to be normal to the mean orbital plane; Mercury and most of the satellites are in this category. For many of the satellites, it is assumed that the rotation rate is equal to the mean orbital period. In the case of the giant planets, since their surface features are constantly changing and moving at various rates, the rotation of their magnetic fields is used as a reference instead. In the case of the Sun, even this criterion fails (because its magnetosphere is very complex and does not really rotate in a steady fashion), and an agreed-upon value for the rotation of its equator is used instead.

For planetographic longitude, west longitudes (i.e., longitudes measured positively to the west) are used when the rotation is prograde, and east longitudes (i.e., longitudes measured positively to the east) when the rotation is retrograde. In simpler terms, imagine a distant, non-orbiting observer viewing a planet as it rotates. Also suppose that this observer is within the plane of the planet's equator. A point on the Equator that passes directly in front of this observer later in time has a higher planetographic longitude than a point that did so earlier in time. However, planetocentric longitude is always measured positively to the east, regardless of which way the planet rotates. East is defined as the counter-clockwise direction around the planet, as seen from above its north pole, and the north pole is whichever pole more closely aligns with the Earth's north pole. Longitudes traditionally have been written using "E" or "W" instead of "+" or "−" to indicate this polarity.
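Because the only difference between the two longitude conventions for a given meridian is the direction in which the angle is counted, converting a west-positive (planetographic, prograde case) longitude to the east-positive (planetocentric) longitude of the same meridian is just sign bookkeeping. The sketch below is a minimal illustration of that bookkeeping only, under that assumption, and ignores every other subtlety of planetary coordinate systems.

```python
def west_to_east(longitude_west_deg):
    """Same meridian expressed as east-positive longitude, given a west-positive value."""
    return (360.0 - longitude_west_deg) % 360.0

def east_to_west(longitude_east_deg):
    """Same meridian expressed as west-positive longitude, given an east-positive value."""
    return (360.0 - longitude_east_deg) % 360.0

print(west_to_east(91.0))   # 269.0 -> 91° W and 269° E label the same meridian
print(east_to_west(269.0))  # 91.0
```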
For example, −91°, 91° W, +269°, and 269° E all refer to the same meridian.

The reference surfaces for some planets (such as Earth and Mars) are ellipsoids of revolution for which the equatorial radius is larger than the polar radius; in other words, they are oblate spheroids. Smaller bodies (Io, Mimas, etc.) tend to be better approximated by triaxial ellipsoids; however, triaxial ellipsoids would render many computations more complicated, especially those related to map projections. Many projections would lose their elegant and popular properties. For this reason spherical reference surfaces are frequently used in mapping programs.

Tidally-locked bodies have a natural reference longitude passing through the point nearest to their parent body: 0° the center of the primary-facing hemisphere, 90° the center of the leading hemisphere, 180° the center of the anti-primary hemisphere, and 270° the center of the trailing hemisphere. However, libration due to non-circular orbits or axial tilts causes this point to move around any fixed point on the celestial body like an analemma.

See also
- American Practical Navigator
- Cardinal direction
- Geodetic system
- Geographic coordinate system
- Geographical distance
- Great-circle distance
- History of longitude
- Horse latitudes
- List of cities by latitude
- List of cities by longitude
- Meridian arc
- Natural Area Code
- Orders of magnitude
- World Geodetic System

References
- Oxford English Dictionary
- Vespucci, Amerigo. "Letter from Seville to Lorenzo di Pier Francesco de' Medici, 1500." In Pohl, Frederick J. Amerigo Vespucci: Pilot Major. New York: Columbia University Press, 1945. 76–90. Page 80.
- "Longitude clock comes alive". BBC. March 11, 2002.
- Coordinate Conversion
- "λ = Longitude east of Greenwich (for longitude west of Greenwich, use a minus sign)." John P. Snyder, Map Projections, A Working Manual, USGS Professional Paper 1395, page ix.
- Read, H. H., and Janet Watson (1975). Introduction to Geology. New York: Halsted. pp. 13–15.
- Osborne, P. (2008). The Mercator Projections. (Chapter 5)
- Rapp, Richard H. (1991). Geometric Geodesy, Part I. Dept. of Geodetic Science and Surveying, Ohio State Univ., Columbus, Ohio. (Chapter 3)
- "Where is zero degrees longitude on Mars?" European Space Agency.
- "First map of extraterrestrial planet." Center of Astrophysics.

External links
- Resources for determining your latitude and longitude
- IAU/IAG Working Group On Cartographic Coordinates and Rotational Elements of the Planets and Satellites
- "Longitude forged": an essay exposing a hoax solution to the problem of calculating longitude, undetected in Dava Sobel's Longitude, from TLS, November 12, 2008.
- Longitude And Latitude Of Points of Interest
- Length Of A Degree Of Latitude And Longitude Calculator
Granites constitute a major portion of the continental crust. They outcrop over many areas of the earth's surface as discrete bodies called plutons, ranging in size from 10 km² to thousands of km². The granite magmas are believed to be sourced from great depths in the lower to mid levels of the continental crust, but the plutons crystallize in the upper crust, typically at depths of 1–5 km. Furthermore, granite plutons are often part of batholiths, which are regional areas comprised of hundreds of plutons. As far as can be ascertained, granite magmatism primarily occurs in the continental crust and involves four separate but potentially quantifiable stages—generation, segregation, ascent, and emplacement—that operate over length scales ranging from 10⁻⁵ to 10⁶ meters (Petford et al. 2000). Once in place, the final stage is cooling.

Explanations for the formation of even a single granite pluton have become somewhat controversial, even in the conventional geologic community, as once-held conventions are being challenged. No longer is research focusing on just the mineralogy, geochemistry, and isotopes of granites as clues to their formation (which has hitherto supposedly required long timescales of millions of years), but on the physical processes as well. The results of such research have drastically shortened the intrusion timescales of many plutons to just centuries and even months (Petford et al. 2000). Various evidences are now being cited that change the long-held, extended time frames for granite formation (Coleman, Gray, and Glazner 2004). Conventional thinking has been that plutons form from the slow rising of diapirs, large molten masses that intrude into the host rocks and then cool. However, the problem of how the host rocks provide the space for these intruding diapirs has been increasingly recognised. In contrast, there are field data that indicate persuasively that plutons have formed from small batches of magma that accumulated in succession by dike intrusions. This new thinking drastically reduces the timescales for magma intrusion to form granite plutons, but most geologists are still convinced, based on their unwavering commitment to radioisotope dating, that plutons require long uniformitarian time frames to form and then to cool primarily by conduction. Thus, the formation and the cooling of granite plutons are still regarded as prima facie evidences against the year-long, catastrophic global Flood on a young earth (Young and Stearley 2008).

Granites are composed of several major minerals (quartz, K-feldspar, plagioclase, biotite, and hornblende), with minor constituents such as zircon (zirconium silicate). Tiny zircon grains (1–5 microns in diameter) are often found encased within large flakes (1–5 mm in diameter) of ubiquitous biotite. The zircon grains usually carry trace amounts of 238U, whose radioactive decay has provided a means by which it is claimed the ages of granites can be measured. Nevertheless, as the 238U in the zircon grains decays to 206Pb, it leaves physical evidence of that decay in the form of radiohalos, spherical zones of discoloration around these zircon grains (the radiocenters). Radiohalos are, in fact, the damage left by the emission of alpha (a) particles during the 238U decay process. The crystal structure of the surrounding biotite is damaged by the a-particles being "fired" in all directions like "bullets," producing various concentric shells of darkening or discoloration (which are dark rings when viewed in cross-section). The radii of these concentric rings are related to the energies of the a-decay daughters in the 238U decay series (fig. 1b).
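For a rough feel for that relationship between ring radius and a-particle energy, the sketch below applies Geiger's empirical range–energy rule for a-particles in air (range ≈ 0.318 × E^1.5 cm, with E in MeV). The rule and the two approximate energies used (about 5.3 MeV for 210Po and 7.7 MeV for 214Po) are standard textbook values supplied by me for illustration, not figures taken from this paper.

```python
def alpha_range_in_air_cm(energy_mev):
    """Geiger's empirical rule for alpha-particle range in air (roughly valid for 4-8 MeV)."""
    return 0.318 * energy_mev ** 1.5

# Approximate alpha energies (MeV), assumed here rather than quoted from the paper.
for label, e_mev in [("210Po (~5.3 MeV)", 5.3), ("214Po (~7.7 MeV)", 7.7)]:
    print(f"{label}: range in air ≈ {alpha_range_in_air_cm(e_mev):.1f} cm")
# The more energetic 214Po alpha travels farthest, which is why its ring has the
# largest radius in fig. 1; in biotite the ranges shrink to tens of microns.
```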
Of the different radiohalo types distinct from 238U (and 232Th), presently the only ones to be identified with known a-radioactivity are the Po (polonium) radiohalos (fig. 1a, c, d). There are three Po isotopes in the 238U-decay chain. In sequence they are 218Po (half-life of 3.1 minutes), 214Po (half-life of 164 microseconds), and 210Po (half-life of 138 days). Found also in fluorite and cordierite, these radiohalos could only have been produced by either the respective Po radioisotopes that then parented the subsequent a-decays, or by non-a-emitting parents (Gentry 1973, 1974). Because the radiohalos are located along cleavages and cracks in fluorite grains and biotite flakes, secondary fluid transport processes are thought to have been responsible for supplying the required Po radioisotopes to the radiocenters.

The reason for the attempts to account for the Po radiohalos by some secondary process is simple—the half-lifes of the respective Po isotopes are so short that the only alternative is that the Po was primary, that is, the Po was independent of 238U originally in the granitic magmas which are supposed to have slowly cooled to form the granite plutons. However, there are obstacles to any secondary process. First, there is the problem of isotopic separation of the Po radioisotopes from their parent 238U having occurred naturally. Second, the concentration of Po necessary to produce a radiohalo is as high as 5 x 10⁹ atoms (approximately 50% Po), and yet the host minerals contain only ppm abundances of U, which apparently means only a negligible supply of Po daughter atoms is available for capture in a radiocenter at any given time.

Fig. 1. Composite schematic drawing of (a) a 218Po halo, (b) a 238U halo, (c) a 214Po halo, and (d) a 210Po halo, with radii proportional to the ranges of the a-particles in air. The nuclides responsible for the a-particles are listed for the different halo rings (after Gentry 1973).

Therefore, there are strict time limits for the formation of the Po radiohalos by primary or secondary processes in granites. It was for this reason that Gentry (1974, 1986, 1988) proposed that the three different types of Po radiohalos in biotites resulted from the decay of primordial Po (original Po not derived by 238U decay), and thus claimed that the host granites also had to be primordial, that is, produced by fiat creation. He thus perceived all granites to be Precambrian, and part of the earth's crust created during the Creation Week. However, Wise (1989) documented that six of the 22 locations reported in the literature where Po radiohalos had been found were hosted by Phanerozoic granites which had been formed during the Flood. Additionally, many of the occurrences of Po radiohalos were in proximity to higher than normal U concentrations in nearby rocks and/or minerals, suggesting ideal sources for fluid separation and transport of the Po. Indeed, Snelling (2000) documented reports of 210Po as a detectable species in volcanic gases, in volcanic/hydrothermal fluids associated with subaerial volcanoes and fumaroles, and associated with mid-ocean ridge hydrothermal vent fluids and chimney deposits, as well as in groundwaters. The distances travelled by the Po in these fluids were up to several kilometres.
Such evidence supports a viable secondary transport model for Po in hydrothermal fluids in granite plutons after their emplacement and during the waning stages of the crystallization and cooling of granite magmas (Snelling 2005a; Snelling and Armitage 2003; Snelling, Baumgardner, and Vardiman 2003). Po radiohalos thus appear to indicate that very rapid geological processes were responsible for their production, due to their very short half-lifes. This places severe time constraints on the processes by which granites can form, that is, granites had to form rapidly in much shorter time periods than is conventionally interpreted from the longer half-life of 238U decay (4.46 x 10⁹ years). Potential problems thus arise with the conventional interpretation of isotopic systems and/or plutonic processes. Glazner et al. (2004) have stated:

The prevailing view that plutons cool in less than a million years requires such conflicting ages to reflect the problems in isotopic systematics. However, it may be that many such age differences are real and that the problem lies instead with assumptions about plutonic processes.

However, the existence of Po radiohalos in granite plutons supports the contention that plutonic processes occurred in very short time frames, so the problem has to be with the interpretation of the isotopic systematics, a concern echoed by Paterson and Tobisch (1992). Snelling (2005a), Snelling and Armitage (2003), and Snelling, Baumgardner, and Vardiman (2003) have proposed a model for the secondary transport of Po in hydrothermal fluids to form Po radiohalos during pluton cooling, so it needs to be recognized just how rapidly plutonic processes must have occurred.

A classic location for the widespread outcropping of granites is in the Sierra Nevada of eastern Central California (fig. 2). Known as the Sierra Nevada Batholith, hundreds of granite plutons outcrop over an area of 40,000–45,000 km². The batholith is conventionally of Mesozoic age, and lies along the western edge of the Paleozoic North American craton (Bateman 1992). It was emplaced in strongly deformed but weakly metamorphosed strata ranging in conventional age from Proterozoic to Cretaceous. To the east of the batholith in the White and Inyo Mountains, sedimentary rocks of Proterozoic and Paleozoic age crop out, and metamorphosed sedimentary and volcanic rocks of Paleozoic and Mesozoic age crop out west of the batholith in the western metamorphic belt. The plutonic rocks of the batholith range in composition from gabbro to leucogranite, but tonalite, granodiorite, and granite are the most common rock types. Most of the plutons have been assigned to intrusive suites that appear to be spatially related to one another by being intruded sequentially and thus show regular age patterns.

In the central Sierra Nevada are the spectacular granite outcrops of the Yosemite National Park (fig. 3), so these have been the focus of many previous studies on granites and their formation (Bateman 1992). This present study investigates the occurrence and distribution of U and Po radiohalos in the biotites of selected granite plutons in various easily accessed outcrops in Yosemite National Park and shows how they provide evidence and support for the rapid formation of these granites. A particular focus was the set of nested plutons of the Tuolumne Intrusive Suite, which in contrast to the other granite plutons appear to have been intruded in a progressive sequence to form a very large zoned pluton.
The generation of granite magmas and emplacement of granite plutons are still topics of debate among geologists today (Pitcher 1993). In recent years, a consensus has emerged that granite magmatism is a rapid, dynamic process operating at timescales of ≤100,000 years (Paterson and Tobisch 1992; Petford et al. 2000). Petford et al. (2000) state that research into the origin of granite has shifted away from geochemistry and isotopic studies towards understanding the physical processes involved. Heat advected into the lower crust from underlying hot mantle-derived basaltic magmas would rapidly and efficiently cause partial melting of crustal rocks, producing enough melt in 200 years or less to form a granite magma that can then be transported to the area of emplacement in the upper crust (Bergantz 1989; Huppert and Sparks 1988; Jackson, Cheadle, and Atherton 2003).

Transport of the melt and magma (melt plus suspended solids) operates over two length scales (Miller, Watson, and Harrison 1988). First, segregation is small-scale movement of melt (centimeters to decimeters) within the source region. Second, once the melt is segregated, long-range (kilometer-scale) ascent of the magma through the continental crust to the site of final emplacement occurs. Crucial physical properties of a granite melt that facilitate its segregation are viscosity and density. Traditionally, granite melt has been thought to have a viscosity close to that of solid rock, but experimental studies have now demonstrated that the viscosity is a function of the composition, temperature and water content of the melt (Clemens and Petford 1999; Pitcher 1993).

Fig. 2. Generalized geology of the Sierra Nevada and adjacent areas of eastern California, showing the location of the Yosemite National Park (after Bateman 1992).

Deformation has been shown from field evidence to be the dominant mechanism that segregates and focuses granite melt flow in the lower crust, and this has been confirmed by rock deformation experiments (Brown and Rushmer 1997; Rutter and Neumann 1995). However, the proven efficiency of melt segregation by deformation makes it unlikely that large, granite magma chambers will form in the region of partial melting (Petford 1995; Petford and Koenders 1998). Instead, the most viable driving force for subsequent large-scale vertical transport of the melt through the continental crust is gravity. However, the traditional idea of buoyant granite magma ascending through the continental crust as slow-rising, hot diapirs or by stoping has been largely replaced. New models involving the ascent of granite magmas in narrow conduits, either as self-propagating dikes (Clemens and Mawer 1992; Clemens, Petford, and Mawer 1997; Petford 1995), along pre-existing faults (Petford, Kerr, and Lister 1993; Yoshinobu, Okaya, and Paterson 1998), or as an interconnected network of active shear zones and dilational structures (Collins and Sawyer 1996; D'Lemos, Brown, and Strachan 1993), are overcoming the severe thermal and mechanical problems associated with transporting very large volumes of magma through the upper brittle continental crust (Marsh 1982). A striking aspect of the ascent of granite melt in dikes compared to diapiric rise is the extreme difference in magma ascent rate between the two processes, with the former up to a factor of 10⁶ faster depending upon the viscosity of the material and conduit width (Clemens, Petford, and Mawer 1997; Petford, Kerr, and Lister 1993).
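To make the contrast in ascent rates concrete, here is a back-of-the-envelope sketch. The transit distance and the two velocities are illustrative order-of-magnitude assumptions of my own (dike ascent of roughly a centimetre per second versus diapiric rise of roughly a metre per year), not values given in the papers cited.

```python
SECONDS_PER_YEAR = 3.156e7
ASCENT_DISTANCE_M = 20_000.0   # assumed ascent path through the crust, metres

dike_velocity_m_s = 1e-2                       # assumed ~1 cm/s in a narrow dike
diapir_velocity_m_s = 1.0 / SECONDS_PER_YEAR   # assumed ~1 m/yr for a rising diapir

dike_time_days = ASCENT_DISTANCE_M / dike_velocity_m_s / 86_400
diapir_time_years = ASCENT_DISTANCE_M / diapir_velocity_m_s / SECONDS_PER_YEAR

print(f"Dike ascent:   ~{dike_time_days:,.0f} days")      # ~23 days
print(f"Diapir ascent: ~{diapir_time_years:,.0f} years")  # ~20,000 years
print(f"Velocity ratio: ~{dike_velocity_m_s / diapir_velocity_m_s:,.0f}")  # ~315,600
```

On these assumed figures the dike is several hundred thousand times faster, consistent with the "up to a factor of 10⁶" contrast cited above.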
In fact, field and experimental studies support the narrow dike widths (~1–50 m) and rapid ascent velocities predicted by fluid dynamical models (Brandon, Chacko, and Creaser 1996; Scalliet et al. 1994). Thus these dike ascent models have brought the timescale for granite magmatism more in line with that for large-scale and large-volume catastrophic volcanism.

Fig. 3. Scenic views in the Yosemite National Park. The total landscape is composed of many outcropping granite plutons that have intruded one another.

The final stage, emplacement, in the granite forming process has challenged geologists for most of the twentieth century with the so-called space problem (Glazner et al. 2003; Pitcher 1993), the way in which the host rocks make room for the newly incoming magma. This problem becomes all the more acute where batholithic volumes (>1 x 10⁵ km³) of granite magma are considered to have been emplaced in a single episode. However, the recognition of the important role played by tectonic activity in making space in the crust for incoming magmas during their ascent has helped to potentially solve this problem (Hutton 1988). Also contributing to the solution have been the more realistic determinations of the geometry of granite plutons at depth, and the recognition that emplacement is an episodic process involving discrete pulses of magma. The majority of plutons so far investigated appear as flat-lying to open funnel-shaped structures with central or marginal feeder zones, consistent with field studies that have demonstrated plutons to be sheeted on a decimeter to kilometer scale (Ameglio and Vigneresse 1999; Bouchez, Hutton, and Stephens 1997; Hutton 1992; Petford 1992). Thus the best comprehensive model of pluton emplacement, combining all empirical studies, envisages pluton growth commencing with a birth stage, characterized by lateral spreading, followed by an inflation stage, marked by vertical thickening. This would result in plutons, at the fastest magma delivery rates, being emplaced in less than 1,000 years (Harris and Ayres 2000). Petford et al. (2000) concluded with the comment:

The rate-limiting step in granite magmatism is the timescale of partial melting (Harris, Vance, and Ayres 2000; Petford, Clemens, and Vigneresse 1997); the follow-on stages of segregation, ascent and emplacement can be geologically extremely rapid—perhaps even catastrophic.

Once emplaced, the magma must crystallize completely and cool to form the final granite pluton. Traditionally this process has been considered to have been primarily by conduction, thus taking millions of years. However, more recently, cooling models have increasingly incorporated convective cooling as the major component (Norton and Knight 1977; Parmentier 1981; Spera 1982; Torrance and Sheu 1978), and empirical studies (Brown 1987; Hardee 1982) have proved that thick igneous bodies do in fact cool primarily by circulating water (Snelling and Woodmorappe 1998). Thus, computer programs have been used to generate the most recent models for cooling plutons (Hayba and Ingebritsen 1997; Ingebritsen and Hayba 1994). The initial source of the water required for this convective cooling is within the magma itself, the same water dissolved in the magma that lowered its viscosity and thus aided its emplacement. As the granite crystallizes the dissolved water becomes concentrated in the residual magma, which increases its cooling rate.
When the residual magma eventually becomes saturated with water as the temperature continues to fall, the water is released as steam under pressure, which is thus forced through the cooling granite pluton, fracturing its outer contact zone, and out into the host rocks through those fractures, carrying heat with it (Burnham 1997; Candela 1991; Zhao and Brown 1992). This in turn allows any cooler water present in the host rocks to flow through those same fractures back into the granite pluton, thus setting up a convective cell circulation between the host rocks and the pluton (Cathles 1977). This greatly accelerates the cooling process, as more and more heat is carried by these hydrothermal fluids from the magma out into the host rocks where it dissipates. Spera (1982) concluded:

Hydrothermal fluid circulation within a permeable or fractured country rock accounts for most heat loss when magma is emplaced into water-bearing country rock . . . Large hydrothermal systems tend to occur in the upper parts of the crust where meteoric water is more plentiful.

Thus Snelling and Woodmorappe (1998) concluded that millions of years are not necessary for the cooling of large igneous bodies such as granite plutons.

When radiohalos were first reported between 1880 and 1890, they remained a mystery until the discovery of radioactivity (Gentry 1973). Now they are recognized as any type of discolored radiation-damaged region within a mineral, resulting from the a-emissions from a central radioactive inclusion or radiocenter. Usually the radiohalos when viewed in rock thin sections appear as concentric rings that were initiated by a-decay in the 238U or 232Th series (Gentry 1973, 1974). Radiohalos are usually found in igneous rocks, most commonly in granitic rocks in which biotite is a major mineral. In fact, biotite is the major mineral in which the radiohalos occur. While observed mainly in Precambrian rocks (Gentry 1968, 1970, 1971; Henderson and Bateson 1934; Henderson, Mushkat, and Crawford 1934; Iimori and Yoshimura 1926; Joly 1917a, b, 1923, 1924; Kerr-Lawson 1927, 1928; Owen 1988; Wiman 1930), radiohalos have been shown to exist in rocks stretching from the Precambrian to the Tertiary (Holmes 1931; Snelling 2005a; Stark 1936; Wise 1989).

Within the 238U decay series, the three Po isotopes have been the only a-emitters observed to form radiohalos other than 238U itself (fig. 1). These isotopes and their respective half-lifes are 218Po (3.1 minutes), 214Po (164 microseconds), and 210Po (138 days). Their very short half-lifes constrain the formation of the granites that they have been found in to a short time frame (Gentry 1986, 1988; Snelling 2000, 2005a), because the Po radiohalos can only form after the granites have crystallized and cooled, yet the Po isotopes are already in the granite magma when it is emplaced. Thus, if granite magma emplacement and pluton cooling are not extremely rapid, then these Po isotopes would not have survived to form the Po radiohalos (Snelling 2008a). This is consistent with, and in support of, a young earth model. As the rings for the Po precursors are usually missing (Snelling, Baumgardner, and Vardiman 2003), the source of the Po for the radiohalos has been an area of contention (Snelling 2000). Was it primary, or did a secondary process transport it?
Gentry (1986) proposed that the Po radiohalos had been produced by primordial Po, having an origin independent of any U, so therefore, all granites and granitic rocks were formed by fiat creation during the Creation Week. In contrast, based on all the available evidence, Snelling (2000) suggested a possible model for transporting the Po via hydrothermal fluids during the latter stages of cooling of granite plutons to sites where the Po isotopes would have been precipitated and concentrated in radiocenters that then formed the respective Po radiohalos in the granites. Subsequently, Snelling and Armitage (2003) investigated the radiohalos in the biotite flakes within three granite plutons, demonstrating first that these granite plutons had been intruded and cooled during the Flood. They found that the biotite grains contained both fully-formed 238U and 232Th radiohalos around zircon and monazite inclusions (radiocenters) respectively, thus providing a physical, integral, historical record of at least 100 million years' worth (at today's rates) of accelerated radioactive decay during the recent year-long Flood. However, Po radiohalos were also often found in the same biotite flakes as the U radiohalos, usually less than 1 mm away. They thus argued that the source of the Po isotopes must have been the U in the zircon grains within the biotite flakes, the same zircon inclusions that are the radiocenters to the U radiohalos.

Snelling and Armitage (2003) then went on to progressively reason through the evidence that confirms the tentative model suggested by Snelling (2000). Because the precursor to 218Po is the inert gas 222Rn, after it is produced by 238U decay in the zircon grains it is capable of diffusing out of the zircon crystal lattice. Concurrently, as previously described, as the emplaced granite magma crystallizes and cools, the water dissolved in it is released below 400ºC, so that hydrothermal fluids begin flowing around the constituent minerals and through the granite pluton, including along the cleavage planes within the biotite flakes. As argued by Snelling and Armitage (2003), and elaborated by Snelling (2005a), these hydrothermal fluids were then capable of transporting 222Rn (and its daughter Po isotopes) from the zircon inclusions to sites where new radiocenters were formed by Po isotopes precipitating in lattice imperfections containing rare ions of S, Se, Pb, halides or other species with a geochemical affinity for Po. Continued hydrothermal fluid transport of Po would have also replaced the Po in the radiocenters as it a-decayed to produce the Po radiohalos, thus progressively supplying the 5 x 10⁹ Po atoms needed to form fully registered Po radiohalos.

Significantly, none of the radiohalos (Po or U) could form or be preserved until the biotite crystals had formed and cooled below the thermal annealing temperature for a-tracks of 150ºC (Laney and Laughlin 1981), yet the hydrothermal fluids probably started transporting Rn and the Po isotopes as soon as they were expelled from the magma just below 400ºC. This implies that cooling of the Po-radiohalo-containing granite plutons had to be extremely rapid, in only 6–10 days (Snelling 2008a). Snelling, Baumgardner, and Vardiman (2003) and Snelling (2005a) have summarized this model for hydrothermal fluid transport of U-decay products (Rn, Po) in a six-step diagram.
The final step concludes with the comment: With further passing of time and more a-decays both the 238U and 210Po radiohalos are fully formed, the granite cools completely and hydrothermal fluid flow ceases. Note that both radiohalos have to form concurrently below 150ºC. The rate at which these processes occur must therefore be governed by the 138 day half-life of 210Po. To get 218Po and 214Po radiohalos these processes would have to have occurred even faster. Of course, if the U and Po radiohalos have both formed in a few days during the 6–10 days while the granite plutons cooled during the Flood, then this implies 100 million years worth of accelerated 238U decay occurred over a time frame of a few days. Thus the U-Pb isotopic systematics within the zircons in these granite plutons are definitely not providing absolute “ages” as conventionally interpreted. No other large batholith has offered conditions as favorable for geologic study as the Sierra Nevada Batholith, where exposures are good almost everywhere due to the glaciation in the high elevations (fig. 3) and the arid climate in the eastern escarpment (Bateman 1992). Before 1956, the batholith was considered a barrier to relating the stratified rocks in the western metamorphic belt to remnants within the batholith and to the strata east of the batholith in the Basin and Range Province, until plate tectonics provided a “solution,” due to the Sierra Nevada being considered to lie within the zone affected by Mesozoic and Cenozoic convergence of the North American plate with plates of the Pacific Ocean. The batholith is a segment of the Mesozoic batholiths that encircle the Pacific Basin. Systematic mapping of the region (fig. 2) by geologists of the U.S. Geological Survey grew out of independent studies of discrete areas or topics by individuals or small groups of individuals, and was completed at the 1:62,500 scale in 1982. The relevant quadrangle areas have also been combined into two comprehensive maps, one by Huber, Bateman, and Wahrhaftig (1989) covering the Yosemite National Park area and its surroundings (summarized in fig. 4), and the other by Bateman (1992) covering the belt across the batholith between latitudes 37º and 38ºN. Fig. 4. Geologic map of the southern and central Yosemite National Park (after Huber, Bateman, and Wahrhaftig 1989). The locations of the samples used in this study are indicated. The legend listing the chronologic sequence of mapped units is below. Those units sampled are indicated in bold. Studies of the individual plutons within the batholith have provided crucial contributions to the development of the model for rapid granite pluton formation discussed above. For example, Reid, Evans, and Fates (1983) studied magma mixing in granite rocks of the central Sierra Nevada and concluded that the intrusion of mafic magmas in the lower crust were important in the generation of the Sierra Nevada Batholith, having caused partial melting that generated granite magmas. These mafic and granite magmas then mixed during their emplacement, the evidence for this relationship being found in the El Capitan Granite and the Half Dome Granodiorite in the Yosemite National Park. Subsequently, Ratajeski, Glazner, and Miller (2001) concluded that the intrusion of the Yosemite Valley Suite involved two probably closely timed pulses of mafic-felsic magmatism. The first stage yielded the El Capitan Granite, mafic enclaves and the Rockslides diorite, while the second stage yielded the Taft Granite and a mafic dike swarm. 
A previous study by Kistler et al. (1986) attributed the compositional zoning of the Tuolumne Intrusive Suite to the initial generation of a basalt magma in the lower continental crust. This basaltic magma then interacted with and melted some of the more siliceous and isotopically more radiogenic lower crust rocks to produce mixtures that were emplaced into the upper crust as the equigranular outer units of the suite. These studies thus confirm the role of mafic magmas in the rapid heating of the lower crust to rapidly form granite magmas. Various other studies have been done on the emplacement of the Sierra Nevada granite plutons. Paterson and Vernon (1995) applied the popular model for the emplacement of spherical granite plutons, that is, “ballooning” or in situ inflation of the magma chamber, to their studies of the Papoose Flat Pluton and the plutons of the Tuolumne Intrusive Suite. They concluded that many such plutons are better viewed as syntectonic nested diapirs, which implies that magma ascent may have occurred by the rise of large magma batches. Furthermore, normally zoned plutons may have thus formed by intrusion of several pulses of magma, rather than by in situ crystal fractionation from a single parent melt. Glazner et al. (2003) addressed the problem of making space for large batholiths such as the Sierra Nevada by suggesting that isostatic sinking of the growing magmatic pile into its substrate would have displaced the sub-batholithic crust toward the backarc region via large-scale intracrustal flow. They thus concluded that this may be the solution for the space problem, the crust beneath batholiths having been involved in lateral, large-scale, two- or three-dimensional intracrustal flow accompanied by thrust faulting. Mahan et al. (2003) studied the McDoogle Pluton near the eastern margin of the Sierra Nevada Batholith, and found field, microstructural, and geochronological evidence that indicated the pluton had been emplaced as a subvertically sheeted complex, not by the diapiric rise of multiple batches of magma. Thus, such sheeted dike emplacement of the pluton would have reduced the time frame for the pluton’s formation, which was confirmed by zircon U-Pb isotope data. McNulty, Tong, and Tobisch (1996) had similarly studied the Jackass Lakes Pluton in the central Sierra Nevada, and concluded that this pluton had also formed via sheet-like assembly of a dike-fed magma chamber. Furthermore, the bulk emplacement was facilitated by multiple processes, including lateral expansion of sheets and ductile wall-rock shortening at the final emplacement site, stoping and caldera formation, and possibly roof uplift or doming. McNulty, Tong, and Tobisch (1996) concluded with the comment: Hybrid viscoelastic models also offer realistic alternatives to end-member models (that is, dike versus diapir). More accurate models of pluton emplacement will allow better understanding of the construction of magmatic arcs, and ultimately how tectonics are manifested at plate boundaries. Studies of the evidence for how and when the plutons in the Sierra Nevada Batholith were emplaced are thus challenging previous conventional models and their associated timescales. For example, Coleman and Glazner (1997) considered the Sierra Nevada Batholith as a whole and concluded that during what they informally called the Sierra Crest magmatic event extremely voluminous magmatism resulted in rapid crustal growth. 
From approximately 98 to 86 Ma (a veritable instant in conventional geologic time) greater than 4000 km2 of exposed granodioritic to granitic crust was emplaced in eastern California to form about 50% of the Sierra Nevada Batholith, including the largest composite intrusive suites (such as the Tuolumne). Furthermore, although they comprise an insignificant volume of exposed rocks (less than 100 km2), mafic magmas were intruded contemporaneously with each episode of magmatism during this event, often mingling with the granite magmas during emplacement. Thus, the heat from these mantle-derived mafic magmas appears to have triggered this large-scale granite magmatic event. The Tuolumne Intrusive Suite (consisting of the Kuna Crest Granodiorite, Glen Aulin Tonalite, Half Dome Granodiorite, Cathedral Peak Granodiorite, and the Johnson Granite Porphyry) in the Yosemite National Park area has recently been used prominently as a prime example of the evidence that large and broadly homogenous plutons have accumulated incrementally supposedly over millions of years (Glazner et al., 2004). As previously discussed, a growing body of data suggests that many granite plutons were rapidly assembled as a series of sheet-like intrusions, while others preserve evidence that they were rapidly injected as a series of steep dikes. The Tuolumne Intrusive Suite of Yosemite National Park has long been thought to have crystallized from several large batches of magma that were emplaced in rapid succession (Bateman and Chappell 1979). Thus, Glazner et al. (2004) cited the abundant, unequivocal field evidence for the incremental dike emplacement of the plutons of the Tuolumne Intrusive Suite. For example, in many places the outer margin of the Tuolumne is clearly composed of granodiorite dikes that invaded its wall-rocks. Furthermore, near its contact with the Glen Aulin Tonalite, the Half Dome Granodiorite contains sheets of varying composition and tabular swarms of mafic enclaves, but then grades inward to a more homogeneous rock. There in the Half Dome Granodiorite the occurrence of dikes or sheets is less certain, but they concluded that the textural homogeneity of large plutons like the Half Dome Granodiorite could also reflect the post-emplacement annealing of amalgamated dikes or sheets, and such a pluton thus might contain any number of cryptic contacts. This field evidence thus supports the model of Petford et al. (2000) for the incremental emplacement of very large zoned plutons such as the Tuolumne Intrusive Suite in less than 100,000 years. However, Glazner et al. (2004) state that the geochronologic data contradicts that timescale. In particular, U-Pb zircon data from the Tuolumne Intrusive Suite indicate that the plutons were assembled over a period of 10 m.y. between 95 and 85 Ma, with the oldest intrusions at the margins and the youngest at the center (Coleman and Glazner 1997; Coleman et al. 2004). The Half Dome Granodiorite evidently was intruded over a period of about 4 m.y., with older ages near the outer contact and younger ages near the inner contact, even though it has been mapped as a single continuous pluton. Nevertheless, Coleman, Gray, and Glazner (2004) admit that the apparent lateral age variations in the Tuolumne Intrusive Suite are not consistent with the field evidence for its emplacement as a series of small intrusions assembled incrementally as sheets or dikes through the entire suite, and not just its outer units where such evidence is so obvious. 
Furthermore, they state that when large plutons are dated multiple times by the zircon U-Pb method, it is fairly common for the resulting dates to disagree by more than the analytical errors. Bateman and Chappell (1979) reported on their comprehensive study of the concentric texturally and compositionally zoned plutonic sequence of the Tuolumne Intrusive Suite, in which they sought to develop and test a model for the origin of comagmatic plutonic sequences in the Sierra Nevada Batholith. Their study involved detailed petrologic and geochemical investigations of a large suite of samples in two traverses across the Tuolumne plutons. Modal, major oxide and trace element analyses, as well as some mineral analyses, were undertaken in order to characterize each of the plutons and quantify the mineralogical and compositional variations within and between them. Structural and textural variations were also described. They concluded that the compositional zoning within the suite indicated that with decreasing temperature the sequence solidified from the margins inward, with solidification being interrupted repeatedly by surges of fluid core magma. Bateman (1992) provided a comprehensive compilation of all the data from the studies of the Sierra Nevada Batholith up until that date, and this includes descriptions of all the plutons investigated in this current study. The Granodiorite of Kuna Crest (Kkc) (fig. 4) is dark gray and equigranular, with the average composition changing from quartz diorite to granodiorite. Its modal composition is quartz (15–22%), K-feldspar (9–15%), plagioclase (44–50%), biotite (10–12%), hornblende (7–13%), and sphene (0.5–1.0%) (Bateman and Chappell 1979). The Half Dome Granodiorite (Khd) is coarser grained than the Granodiorite of Kuna Crest (Kkc), and includes an outer equigranular facies and an inner megacrystic facies. Hornblende (5 mm x 1.5 cm) and biotite (1 cm wide) crystals decrease in abundance inward, while plagioclase abundance remains constant, but both quartz and alkali (K)-feldspar abundances increase inward. Its modal composition is quartz (20–27%), K-feldspar (16–26%), plagioclase (39–51%), biotite (4–11%), hornblende (1–8%), and sphene (0.2–1.2%) (Bateman and Chappell 1979). The Cathedral Peak Granodiorite (Kcp) contains blocky alkali-feldspar megacrysts (commonly 3 cm x 5 cm), with the size and abundance of the megacrysts decreasing inward. Except for the inward decrease in abundance of these megacrysts and an absence of hornblende in the innermost parts, the modal composition of the Cathedral Peak Granodiorite is fairly constant, with quartz (24–30%), K-feldspar (20–28%), plagioclase (40–52%), biotite (1.3–4.5%), hornblende (0–2%), and sphene (0.1–0.7%) (Bateman and Chappell 1979). The Sentinel Granodiorite (Kse) (fig. 4) is equigranular and contains well-formed crystals of hornblende and biotite, and abundant wedgeshaped crystals of sphene. The Yosemite Creek Granodiorite (Kyc) is a dark-gray medium- to coarse-grained granitic rock of highly variable composition containing plagioclase phenocrysts. Both these plutons have sometimes been regarded as members of the Tuolumne Intrusive Suite, but are usually placed in the Intrusive Suite of Jack Main Canyon, or the Intrusive Suite of Sonora Pass by some workers. The Intrusive Suite of Yosemite Valley includes the El Capitan Granite and the Taft Granite, which have yielded U-Pb isotopic ages of about 102–103 Ma (Stern et al. 1981). The El Capitan Granite (Kec) (fig. 
4) is a weakly to moderately megacrystic, leucocratic biotite granite. It is light gray, medium to coarse grained, and contains K-feldspar megacrysts (1–2 cm long) and small "books" of biotite. The Taft Granite (Kt) is medium-grained, very light gray, and on the Q-A-P diagram plots in the granite field. The Intrusive Suite of Washburn Lake, or of Buena Vista Crest according to some workers, includes the Granodiorite of Illilouette Creek (Kic) (fig. 4), which has yielded a discordant U-Pb age of 100 Ma (Stern et al. 1981). The Granodiorite of Illilouette Creek is the oldest, most mafic, and largest intrusion in this suite of plutons. It is dark, medium-grained, equigranular hornblende-biotite granodiorite and hornblende tonalite, with the combined hornblende and biotite content varying from 15% to 50%. Sparse euhedral crystals of hornblende are as long as 10 cm. The intrusive relationships between these plutons can be seen in Fig. 4.

Bateman (1992) also reported that many of the biotite flakes found in the Sierra Nevada granite plutons contain tiny zircon crystals around which are "pleochroic halos," another name for radiohalos. This observation had been previously made by Snetsinger (1967), who identified which mineral formed the nuclei (radiocenters) of the pleochroic halos in biotites in some of the Sierra Nevada granites. Because it was zircons that formed the radiocenters to those observed halos, they were undoubtedly 238U radiohalos.

Each of the chosen granite plutons was sampled at the locations shown in Fig. 4. Access to the outcrops was available by road and by walking trails. The samples were collected where the outcrops were freshest, with the approval of the Yosemite National Park via the granting of a sampling and research permit. Some of the sampled outcrops are shown in Fig. 5. Fist-sized (1–2 kg) pieces of granite were collected at each location, the details of which were recorded using a Garmin GPS II Plus hand-held unit.

Fig. 5. Outcrops of granites, many in road cuts, in the Yosemite National Park that were sampled for this study (for location details see fig. 4).

A standard petrographic thin section was obtained for each sample. Photo-micrographs representative of some of these samples of the Yosemite granites as seen under the microscope are provided in Fig. 6. In the laboratory, portions of the samples were crushed to liberate the biotite grains. Biotite flakes were then handpicked with tweezers from each crushed sample and placed on the adhesive surface of a piece of Scotch tape™ fixed to the flat surface of a laminated board on a laboratory table with its adhesive side up. Once numerous biotite flakes had been mounted on the adhesive side of this piece of tape, a fresh piece of Scotch tape™ was placed over them and firmly pressed along its length so as to ensure the two pieces were stuck together with the biotite flakes firmly wedged between them. The upper piece of tape was then peeled back in order to pull apart the sheets composing the biotite flakes, and this piece of tape with thin biotite sheets adhering to it was then placed over a standard glass microscope slide so that the adhesive side, with the thin mica flakes, adhered to the slide. This procedure was repeated with another piece of Scotch tape™ placed over the original tape and biotite flakes affixed to the board, the adhering biotite flakes being progressively pulled apart and transferred to microscope slides. As necessary, further handpicked biotite flakes were added to replace those fully pulled apart.
In this way tens of microscope slides were prepared for each sample, each with many (at least 20) thin biotite flakes mounted on it. This is similar to the method pioneered by Gentry (1988). A minimum of 50 microscope slides was prepared for each sample (at least 1,000 biotite flakes) to ensure good representative sampling statistics.

Fig. 6. Photo-micrographs of some of the Yosemite granites used in this study, the locations of which are plotted on Fig. 4. All photo-micrographs are at the same scale (20× or 1 mm = 40µm) and the granites are as viewed under crossed polars.

Each slide for each sample was then carefully examined under a petrological microscope in plane polarized light and all radiohalos present were identified, noting any relationships between the different radiohalo types and any unusual features. The numbers of each type of radiohalo in each slide were counted by progressively moving the slide backwards and forwards across the field of view, and the numbers for each slide were then tallied and tabulated for each sample.

All results are listed in Table 1. Of the thirteen rock units sampled, four units had some samples yielding no radiohalos: the Granodiorite of Kuna Crest (two samples), the Sentinel Granodiorite (one sample), the Yosemite Creek Granodiorite (one sample), and the Tonalite of the Gateway (one sample). Nevertheless, all the granitic rock units sampled contained at least some radiohalos. In Table 1, the number of radiohalos per slide was calculated by adding up the total number of all radiohalos found in all samples of that particular rock unit, divided by the number of slides made and viewed for counting of radiohalos. The number of polonium radiohalos per slide was calculated in a similar way, except it was the total number of polonium radiohalos divided by the number of slides examined for that rock unit. And finally, the ratio in the last column was calculated by taking the number of 210Po radiohalos and dividing by the number of 238U halos.

Photo-micrographs of some representative radiohalos are shown in Fig. 7. The 238U radiohalos in Fig. 7a and b are "over-exposed," meaning there has been so much rapid 238U decay that the resultant heavy discoloration of the biotite has blurred all the inner rings (compare with fig. 1). Often only holes remain in the centers of the 238U radiohalos where the tiny zircon radiocenters have been lost during the peeling apart of the biotite flakes to tape them to the microscope slides. In Fig. 7a (especially), other incomplete radiohalo stains can be seen (lower right). These are due to this biotite sheet not cutting through the radiocenters of these (spherical) radiohalos. These stains likely represent four 210Po radiohalos and another 238U radiohalo, but only the visible complete radiohalos were recorded in Table 1. The outer ring of the 214Po radiohalo is very faint and difficult to see in these photo-micrographs. In Fig. 7d the single 210Po radiohalo is easily identified by its single outer ring about 39 µm (microns) in diameter. Note that its radiocenter is a hollow "bubble" where hydrothermal fluids deposited the 210Po atoms which then a-decayed to discolor the biotite and form the radiohalo. This feature is not so clearly seen in Fig. 7c, where the 210Po radiocenter is only about 100 µm from the nearby 238U radiocenter in the same biotite flake.
The hydrothermal fluids thus did not have far to transport 222Rn and Po from the 238U radiocenter to form and supply the 210Po radiocenter within weeks, so that the 238U and 210Po radiohalos formed concurrently. In Fig. 7e are three "over-exposed" 210Po radiohalos. This is indicative of there having been a lot of 210Po atoms in the radiocenters that then decayed. The diffuseness of the radiation damage is due to the large sizes of the radiocenters, which appear to now be empty "holes" that may originally have been fluid-filled "bubbles." There are also remnants of much larger fluid inclusions in the same biotite flake. And finally, in Fig. 7f are three more diffuse 210Po radiohalos, as well as 210Po radiation staining around an elongated radiocenter that appears to have been a fluid inclusion. The other radiation stains in the same biotite flake represent radiohalos whose radiocenters do not lie in this plane of observation on this cleavage plane.

The data in Table 1 indicate that all these granitic rock units contain more 210Po radiohalos than 238U radiohalos (except the Sentinel Granodiorite which has equal numbers). There is also a wide range in the radiohalo abundances, from the Tonalite of the Gateway with only one 210Po radiohalo in two samples, and the Granodiorite of Kuna Crest with only five 210Po radiohalos and three 238U radiohalos (0.05 radiohalos per slide), to the Cathedral Peak Granodiorite with 325 210Po radiohalos and six 238U radiohalos (3.31 radiohalos per slide). Thus the ratio of the number of 210Po radiohalos to the number of 238U radiohalos varies from 1:1 in the Sentinel Granodiorite and 1.7:1 in the Granodiorite of Kuna Crest to 54:1 in the Cathedral Peak Granodiorite. The only rock units that contain other halos apart from 210Po and 238U halos are the Granite of Lee Vining Canyon and the Granodiorite of Arch Rock, which both contain some 218Po halos, and the Half Dome Granodiorite with a single 214Po halo.

Table 1. Radiohalo counts for the granitic rock units sampled in this study.
|Intrusive Suite||Rock unit (Pluton)||Samples (slides)||210Po||214Po||218Po||238U||232Th||Number of radiohalos per slide||Number of Po radiohalos per slide||Ratio 210Po:238U|
|Tuolumne||Granodiorite of Kuna Crest||3 (150)||5||0||0||3||0||0.05||0.03||1.7:1|
| ||Half Dome Granodiorite||2 (100)||55||1||0||30||0||0.82||0.53||1.8:1|
| ||Cathedral Peak Granodiorite||2 (100)||325||0||0||6||0||3.31||3.25||54:1|
| ||Johnson Granite Porphyry||1 (50)||157||0||0||6||0||3.26||3.14||26:1|
|Jack Main Canyon (Sonora Pass) (?)||Yosemite Creek Granodiorite||2 (100)||8||0||0||0||0||0.08||0.08||—|
| ||Sentinel Granodiorite||3 (150)||29||0||0||29||0||0.39||0.19||1:1|
|Washburn Lake (Buena Vista Crest) (?)||Granodiorite of Illilouette Creek||2 (100)||24||0||0||8||0||0.32||0.24||3:1|
|Yosemite Valley||El Capitan Granite||3 (150)||111||0||0||17||0||0.85||0.74||6.5:1|
| ||El Capitan Granite enclave||1 (50)||68||0||0||0||0||1.36||1.36||—|
| ||Taft Granite||1 (50)||58||0||0||6||0||1.28||1.16||9.7:1|
|Fine Gold||Tonalite of the Gateway||2 (100)||1||0||0||0||0||0.01||0.01||—|
| ||Granodiorite of Arch Rock||2 (100)||106||0||7||10||0||1.23||1.13||10.6:1|
|Scheelite||Granite of Lee Vining Canyon||1 (50)||108||0||2||13||0||2.46||2.2||8.3:1|

The results obtained for these Yosemite granites confirm the model for the formation of polonium radiohalos proposed by Snelling (2005a). Both 238U and 210Po radiohalos were found present together in the same biotite flakes in fourteen of the 25 samples studied.
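The derived columns in Table 1 are simple quotients of the counts; the short sketch below recomputes them for one rock unit as a check (the counts are taken from the table, while the function itself is mine, not the authors').

```python
def derived_columns(slides, po210, po214, po218, u238, th232):
    """Recompute the per-slide figures and the 210Po:238U ratio for one rock unit."""
    total_per_slide = (po210 + po214 + po218 + u238 + th232) / slides
    po_per_slide = (po210 + po214 + po218) / slides
    ratio = (po210 / u238) if u238 else float("nan")
    return total_per_slide, po_per_slide, ratio

# Cathedral Peak Granodiorite: 2 samples (100 slides), 325 210Po and 6 238U halos.
print(derived_columns(100, 325, 0, 0, 6, 0))
# -> (3.31, 3.25, 54.16...), i.e. 3.31 halos/slide, 3.25 Po halos/slide, ~54:1
```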
Because the thermal annealing temperature of radiohalos in biotite is 150°C (Armitage and Back 1994; Laney and Laughlin 1981), the 238U and 210Po radiohalos in these biotite flakes had to have formed concurrently below that temperature. However, the short half-life of 210Po places a tight time constraint on this sequence: the biotite flakes had to form within the crystallizing and cooling granites, and the radiohalos then had to form within them, in only 6–10 days, or several weeks at most. The almost complete absence of 214Po and 218Po radiohalos implies both an insufficient supply of hydrothermal fluids and a slow rate of hydrothermal fluid transport, which restricted the formation of those radiohalos because of their very short half-lives. It also implies that 222Rn was likely absent in the hydrothermal fluids. Therefore, Po was most likely transported as 210Po in the fluids to the nucleation sites where the 210Po radiohalos formed.

Fig. 8 is a schematic, conceptual temperature-versus-time cooling curve diagram which visualizes the timescale constraints on granite magma crystallization and cooling, hydrothermal fluid transport, and the formation of polonium radiohalos (Snelling 2008a). Granite magmas when intruded are at temperatures of 650–750°C, and the hydrothermal fluids are released at temperatures of 370–410°C, after most of the granite and its constituent minerals have crystallized. However, the accessory zircon grains with their contained 238U crystallize very early at higher temperatures, and may even have already formed in the magma when it was intruded. Thus the 238U decay producing Po isotopes had already begun well before the granite had fully crystallized, before the hydrothermal fluids had begun flowing, and before the crystallized granite had cooled to 150°C. Furthermore, by the time the temperature of the granite and the hydrothermal fluids had cooled to 150°C, the heat energy driving the hydrothermal fluid convection would have begun to wane and the vigor of the hydrothermal flow would also have begun to diminish (fig. 8). The obvious conclusion has to be that if the processes of magma intrusion, crystallization, and cooling required 100,000–1 million years, then so much Po would have already decayed, and thus been lost from the hydrothermal fluids, by the time the granite and fluids had cooled to 150°C that there simply would not have been enough Po isotopes left to generate the Po radiohalos (Snelling 2008a).

Fig. 7. Photo-micrographs of representative radiohalos in Tuolumne Intrusive Suite granites. All the biotite grains are as viewed in plane polarized light, and the scale bars are all 50 µm (microns) long.

Both catastrophic granite formation (Snelling 2008a) and accelerated radioisotope decay (Vardiman, Snelling, and Chaffin 2005) are relevant to the hydrothermal fluid transport model for Po radiohalo formation. However, halo formation itself provides constraints on the rates of both those processes (Snelling 2005a). If 238U in the zircon radiocenters supplied the concentrations of Po isotopes required to generate the Po radiohalos, then the 238U and Po radiohalos must have formed over the same timescale of hours to days, as required by the Po isotopes' short half-lives. This requires 238U production of Po to have been grossly accelerated.
The 500 million–1 billion α-decays needed to generate each 238U radiohalo, equivalent to at least 100 million years' worth of 238U decay at today's decay rates, had to have taken place in hours to days to supply the required concentration of Po for producing an adjacent Po radiohalo. However, because accelerated 238U decay in the zircons would have been occurring as soon as the zircons crystallized in the magma at 650–750°C, unless the granite magma fully crystallized and cooled to below 150°C very rapidly, all the 238U in the zircons would have rapidly decayed away, as would the daughter Po isotopes, before the biotite flakes were cool enough for the 238U and Po radiohalos to form and survive without annealing. Furthermore, the hydrothermal fluid flows needed to transport the Po isotopes along the biotite cleavage planes from the zircons to the Po radiocenters are not long sustained, even in the conventional framework, but decrease rapidly due to cooling of the granite (Snelling 2008a). Thus Snelling (2005a) concluded from all these considerations that the granite intrusion, crystallization, and cooling processes occurred together over a timescale of only about 6–10 days.

However, someone might ask what requires the interval of hydrothermal fluid flow to be so brief. After all, because the zircon radiocenters with their 238U radiohalos typically lie within only 1 mm or so of the Po radiocenters in the same biotite flakes, could not the hydrothermal flow have carried each Po atom from the 238U radiocenters to the Po radiocenters within minutes, while the interval of hydrothermal flow persisted over many thousands of years, during which the billion or so Po atoms needed for each Po radiohalo were transported that short distance? In that case the 238U decay and the generation of Po atoms could be stretched over that longer interval. However, as already noted above, by the time a granite body and its hydrothermal fluids cool to below 150°C, most of the energy to drive the hydrothermal convection system and fluid flow has already dissipated (Snelling 2008a). The hydrothermal fluids are expelled from the crystallizing granite and start flowing at between 410 and 370°C (fig. 8). So unless the granite cooled rapidly from 400°C to below 150°C, most of the Po transported by the hydrothermal fluids would have been flushed out of the granite by the vigorous hydrothermal convective flows before they diminished. Simultaneously, much of the energy to drive these flows dissipates rapidly as the granite temperature drops. Thus, below 150°C (when the Po radiohalos start forming) the hydrothermal fluids have slowed down to such an extent that they cannot sustain protracted flow. Moreover, the capacity of the hydrothermal fluids to carry dissolved Po decreases dramatically as the temperature falls.

Fig. 8. Schematic, conceptual, temperature versus time cooling curve diagram to show the timescale for granite crystallization and cooling, hydrothermal fluid transport, and the formation of polonium radiohalos (after Snelling 2008a).

Thus sufficient Po had to be transported quickly to the Po radiocenters to form the Po radiohalos while there was still enough energy at and below 150°C to drive the hydrothermal fluid flow rapidly enough to get the Po isotopes to the deposition sites before they decayed. This is the time and temperature "window" depicted schematically in Fig. 8.
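To make the half-life constraint concrete, the rough sketch below uses the standard published half-lives of the three polonium isotopes (approximately 164 microseconds for 214Po, 3.1 minutes for 218Po, and 138.4 days for 210Po; these values are not taken from the text) to compute what fraction of a batch of Po atoms would survive various transport intervals. It is only an illustration of the decay arithmetic, not a model of the fluid transport itself.

```python
import math

# Approximate published half-lives of the polonium isotopes, in seconds.
HALF_LIVES_S = {
    "214Po": 164e-6,          # ~164 microseconds
    "218Po": 3.1 * 60,        # ~3.1 minutes
    "210Po": 138.4 * 86400,   # ~138.4 days
}

def surviving_fraction(half_life_s: float, elapsed_s: float) -> float:
    """Fraction of atoms remaining after elapsed_s of radioactive decay."""
    return math.exp(-math.log(2) * elapsed_s / half_life_s)

transport_intervals = {
    "1 hour": 3600,
    "10 days": 10 * 86400,
    "1 year": 365.25 * 86400,
    "1,000 years": 1000 * 365.25 * 86400,
}

for label, seconds in transport_intervals.items():
    survivors = ", ".join(
        f"{iso}: {surviving_fraction(hl, seconds):.2e}"
        for iso, hl in HALF_LIVES_S.items()
    )
    print(f"after {label:>11}: {survivors}")
```

For 210Po, roughly 95% of the atoms survive a 10-day interval, about 16% survive a year, and essentially none survive 1,000 years, while 214Po and 218Po are gone within minutes to hours. This is the quantitative core of the argument that any Po reaching the radiocenters had to be delivered within days to weeks of its production.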
It would thus simply be impossible for the Po radiohalos to form slowly over many thousands of years at today's groundwater temperatures in cold granites. Hot hydrothermal fluids are needed to dissolve and carry the polonium atoms, and heat is needed to drive rapid hydrothermal convection to move the Po-transporting fluids fast enough to supply the Po radiocenters and generate the Po radiohalos. Furthermore, the required heat cannot be sustained for the 100 million years or more needed for sufficient 238U to decay at today's rates to produce the 500 million–1 billion Po atoms needed for each Po radiohalo. In summary, for there to be sufficient Po to produce a radiohalo after the granite has cooled to 150°C, the timescales of both the decay process and the cooling must be of the same order as the lifetimes of the Po isotopes. Thus the hydrothermal fluid flow had to be rapid, because the convection system was short-lived while the granite crystallized and cooled rapidly within 6–10 days, and because it had to transport sufficient Po atoms to generate the Po radiohalos within hours to a few days. Formation of these granites, from emplacement to cooling, therefore had to have been on a timescale that has previously been considered impossible. Various studies have shown that emplacement of a melt is rapid via dikes and fractures assisted by tectonics (Clemens and Mawer 1992; Coleman and Glazner 2004). Other studies have shown that cooling of the melt is aided by hydrothermal fluids and groundwater flow (Brown 1987; Burnham 1997; Cathles 1977; Hardee 1982; Hayba and Ingebritsen 1997). Conventional thinking, that the formation of granite intrusions is a slow process taking hundreds of thousands to millions of years, is in need of drastic revision. The granites of Yosemite show that their formation had to be rapid in order for the radiohalos present in them to exist.

The four rock units of the Tuolumne Intrusive Suite of the Sierra Nevada Batholith display a pattern of Po radiohalos in which their numbers increase inwards within the suite according to the time sequence in which these units were progressively intruded. The first granite pluton intruded, the Granodiorite of Kuna Crest, contains only a few 210Po radiohalos, while there are progressively more 210Po radiohalos in the Half Dome Granodiorite, intruded next, and in the Cathedral Peak Granodiorite and the Johnson Granite Porphyry, intruded last (table 1). This matches the pattern of Po radiohalos Snelling and Armitage (2003) observed in the zoned La Posta Pluton in the Peninsular Ranges Batholith east of San Diego, where there was also sequential intrusion of the granitic phases now making up that zoned pluton. Fig. 9 shows the sequence of intrusion within the Tuolumne Intrusive Suite. The last phase of these multiple intrusions, the Johnson Granite Porphyry, was also probably related and connected to volcanism at the surface (Huber 1989; Titus, Clark, and Rikoff 2005) (fig. 10).

The implication of the Po radiohalo numbers is that the greater the volume of hydrothermal fluids, the more polonium would have been transported and the more Po radiohalos would have formed, as has been confirmed by Snelling (2005a, b, 2006, 2008b, c, d). First, in granites where hydrothermal ore deposits have formed in veins due to large, sustained hydrothermal fluid flows, there are huge numbers of Po radiohalos, for example, in the Land's End Granite, Cornwall, England (Snelling 2005a), and in the Mole Granite, New South Wales, Australia (Snelling 2009).
Second, where hydrothermal fluids were produced by mineral reactions at a specific pressure-temperature boundary during regional metamorphism of sandstone, four to five times more Po radiohalos were generated, precisely at that metamorphic boundary (Snelling 2005b, 2008b). Third, where hydrothermal fluids flowing in narrow shear zones had rapidly metamorphosed the wall rocks, Po radiohalos were present in the resultant metamorphic rock, a type of metamorphic rock that otherwise does not host Po radiohalos (Snelling 2006). Fourth, where the hydrothermal fluids generated in the central granite at the highest grade within a regional metamorphic complex flowed and decreased outwards into that complex, the Po radiohalo numbers also progressively decreased outwards in the complex (Snelling 2008c). Fifth, in a granite pluton which has an atypically wide contact metamorphic and metasomatic aureole around it, due to the high volume of hydrothermal fluids it released during its crystallization and cooling, Po radiohalo numbers are higher than in other granite plutons (Snelling 2008d).

Thus, the significance of the Po radiohalo numbers increasing progressively inwards within this nested suite of plutons in the Tuolumne Intrusive Suite (table 1), according to the order in which they were intruded, is the implication that there were progressively more hydrothermal fluids within each successive pluton. It is particularly evident from the much higher Po radiohalo numbers in the last two intrusive phases, the Cathedral Peak Granodiorite and the Johnson Granite Porphyry, that they sustained greater hydrothermal fluid volumes and flows. This increase in the hydrothermal fluids in the later stages of any intrusive sequence is due to the water released as the earlier intrusive phases crystallized and cooled building up in the later residual intrusive phases, particularly if the hydrothermal fluids cannot readily escape into the surrounding host rocks. Whereas many other granite plutons intruded into sedimentary rocks containing connate and ground waters that assisted rapid granite cooling by convection cells being established outwards from the plutons (Snelling and Woodmorappe 1998), these Tuolumne plutons intruded into existing granite plutons, and then successively into one another (fig. 9). Consequently, since granites have poorly connected porosities and therefore poor permeabilities, the successively generated hydrothermal fluids would have been essentially "bottled up" in the later intrusive phases. Another "tell-tale" sign of the high volume of hydrothermal fluids in the Cathedral Peak Granodiorite is the large K-feldspar megacrysts which dominate its porphyritic texture. Magmatic hydrothermal fluids are known to have played a major role in their formation (Cox et al. 1996; Lee and Parsons 1997; Lee, Waldron, and Parsons 1995). It was this continued build-up in the confined volume of hydrothermal fluids that also consequently explains why the last phase in this intrusive suite, the Johnson Granite Porphyry, was likely connected to explosive volcanism at the land surface above these cooling plutons (fig. 10). That large volumes of hydrothermal fluids were increasingly being confined to the inwards-migrating, hotter, crystallizing core of the Cathedral Peak Granodiorite, into which the Johnson Granite Porphyry intruded, is evident from the observed inward decrease in the size and abundance of the K-feldspar megacrysts in the Cathedral Peak Granodiorite (Bateman and Chappell 1979).
Indeed, the explosive volcanism would have released the confining pressure on the increasing volume of bottled-up hydrothermal fluids, the Johnson Granite Porphyry cooling from the residual pulse of magma that supplied the explosive volcanism (Huber 1989; Titus et al. 2005).

Fig. 9. A map view of the sequential emplacement of the Tuolumne Intrusive Suite to form a set of nested plutons: (a) Granodiorite of Kuna Crest, (b) and (c) Half Dome Granodiorite, and (d) Cathedral Peak Granodiorite and Johnson Granite Porphyry (after Huber 1989).

Since the Tuolumne Intrusive Suite is a nested set of plutons, there were severe constraints, due to the 150°C thermal annealing temperature of the radiohalos, on the lapse of time between the intrusion of each phase of the suite. Each phase had to have been rapidly emplaced, crystallized, and cooled sufficiently before the next phase was intruded, so that the entire suite of nested plutons was in place before the radiohalos began forming below 150°C. Otherwise, the heat given off by each successively emplaced phase, which intruded its predecessors, would have annealed all radiohalos in them. That each phase had crystallized and cooled before the next phase was intruded has been confirmed by a recent study of the internal contacts within the suite (Zak and Paterson 2005). These are highly variable, from relatively sharp, with no contact metamorphic effects from any major temperature differences between the earlier crystallized pluton and the subsequent intruding pluton, to gradational, the latter indicative of large-scale mixing where the earlier (host) and subsequent (intruding) magmas must both have been crystallizing together. Coleman, Gray, and Glazner (2004) concluded that the suite developed by the relatively rapid emplacement of a series of small intrusions, possibly as sheets or dikes, that incrementally assembled each pluton (or phase). However, so that annealing of the radiohalos would not occur above 150°C, all the phases of the entire suite had to have been intruded so rapidly that the entire suite cooled below 150°C more or less at the same time. Furthermore, because of the short half-life of 210Po and the need for the hydrothermal fluids within the cooling granite masses to rapidly transport sufficient 210Po to supply the radiocenters and form the 210Po radiohalos before the 210Po decayed, the successive emplacement and cooling of the entire suite of nested plutons must have taken only several weeks. This survival of the Po radiohalos as a result of the rapid sequential emplacement of these nested plutons also implies that there could not have been a "heat problem" (Snelling 2005a). Whatever mechanisms dissipated the heat from these crystallizing and cooling magmas (Snelling 2008a; Snelling and Woodmorappe 1998) did so rapidly and efficiently without annealing the Po radiohalos in the surrounding earlier intruded phases of this nested suite of plutons. Thus this entire intrusive event, consisting of successive pulses of granite magma emplacement and cooling, must have taken only several weeks, culminating in a violent volcanic eruption. That eruption would have finally dissipated the remaining heat by rapidly moving it to the earth's atmosphere in steam and to the earth's surface in the rhyolitic tuffs and lavas it released.

Fig. 10. Final stages in the development of the nested plutons of the Tuolumne Intrusive Suite (after Huber 1989).
The Johnson Granite Porphyry represents the final phase of the suite that intruded the Cathedral Peak Granodiorite and erupted through a volcanic caldera at the earth's surface above, spewing volcanic ash and debris across it. The volcanic deposit and much of the underlying rock were subsequently removed by erosion to create today's land surface.

Finally, the formation of the hundreds of granitic plutons of the Sierra Nevada batholith, some of which outcrop on a grand and massive scale in the Yosemite area, can thus be adequately explained within the biblical framework for earth history. The regional geologic context suggests that late in the Flood year, after deposition of thick sequences of fossiliferous sedimentary strata, a subduction zone developed just to the west at the western edge of the North American plate (Huber 1989). Because plate movements were catastrophic during the Flood year (Austin et al. 1994), as the cool Pacific plate was catastrophically subducted under the overriding North American plate, the western edge of the latter was deformed, resulting in buckling of its sedimentary strata and metamorphism at depth (fig. 11). The Pacific plate was also progressively heated as it was subducted, so that its upper side began to partially melt and thus produce large volumes of basalt magma. Rising into the lower continental crust of the deformed western edge of the North American plate, these basalt magmas in turn supplied the heat that caused voluminous partial melting of this lower continental crust, generating buoyant granite magmas. These rapidly ascended via dikes into the upper crust, where they were emplaced rapidly and progressively as the hundreds of coalescing granite plutons that now form the Sierra Nevada batholith. The presence of polonium radiohalos in many of the Yosemite area granite plutons is confirmation of their rapid crystallization and cooling late in the closing phases of the Flood year. Conventional radioisotope dating, which assigns ages of 80–120 million years to these granites (Bateman 1992), is grossly in error because it does not take into account the acceleration of nuclear decay (Vardiman et al. 2005). Subsequent rapid erosion at the close of the Flood, as the waters drained rapidly off the continents, followed by further erosion early in the post-Flood era and during the post-Flood Ice Age, has exposed and shaped the outcrops of these granite plutons in the Yosemite area as seen today.

Conventional thinking has been that granites in the continental crust formed slowly over 10^5 to 10^6 years. In the last two decades, though, evidence has accumulated to convince many geologists that granite pluton emplacement was a relatively rapid process over timescales of only years to tens of years. Dilation pressures in the deep crustal sources forced magma through fractures as dikes to rapidly feed shallow crustal magma chambers. Rapid cooling was aided by hydrothermal convection.

Fig. 11. Subduction of an oceanic plate (Pacific plate) during convergence with a continental plate (North American plate). Magma, formed by partial melting of the overriding continental plate, rises into the upper continental plate to form granite plutons and volcanoes along a mountain chain (after Huber 1989).

The evidence left by radiohalos found in Yosemite granites, however, further challenges the timescale of even this recent school of thought.
The short half-lives of the polonium isotopes place severe time constraints on the formation and cooling of the biotite flakes containing the radiohalos produced by these polonium isotopes. The hydrothermal fluids which were critical to the rapid cooling process also transported the 222Rn and polonium isotopes from U decay in zircon inclusions to generate the nearby polonium radiohalos within hours to days, once the granite's temperature had fallen below 150°C, the radiohalo annealing temperature. For the supply of 222Rn and Po isotopes to be maintained during the whole pluton formation process, so as to still generate the Po radiohalos, the U decay rate had to have been grossly accelerated, and the Yosemite granite plutons must have formed and cooled below 150°C within six to ten days. This timescale, of course, is consistent with granite pluton formation within the young-earth model. Furthermore, the acceleration of radioisotope decay means that absolute dates for rocks, calculated on the assumption that decay has been constant, are grossly in error.

The nested plutons of the Tuolumne Intrusive Suite provide a test of the hydrothermal fluid transport model for the generation of Po radiohalos. The volume of hydrothermal fluids increased in each pluton as it was successively emplaced, so that the final magma at the center of the suite contained enough volatiles to feed a violent volcanic eruption at the earth's surface. The model predicted that the progressively greater volume of hydrothermal fluids would have generated more Po radiohalos in each successive pluton, and more Po radiohalos were indeed found.

We would like to thank Dr. Larry Vardiman for his original suggestion to do this radiohalo study in Yosemite National Park, for help in collecting the rock samples, and for his support of this research generally. Thanks to Mark Armitage for his help with some of the photomicrographs, and for processing some of the samples to obtain the radiohalo counts. Thanks to the Institute for Creation Research (ICR) for funding much of this project and for providing the equipment and facilities to do some of the research. Thanks to the National Park Service for permission to collect the samples. And thanks also to Dallel's parents and friends for their support and encouragement in her part of this study, which resulted in an M.S. dissertation in the ICR Graduate School.

Ameglio, L. and J.-L. Vigneresse. 1999. Geophysical imaging of the shape of granitic intrusions at depth: A review. In Understanding granites: Integrating new and classical techniques, eds. A. Castro, C. Fernandez, and J.-L. Vigneresse (special publication 168), pp. 39–54. London: Geological Society. Armitage, M. H. and E. Back. 1994. The thermal erasure of radiohalos in biotite. Creation Ex Nihilo Technical Journal 8(2):212–222. Austin, S. A., J. R. Baumgardner, D. R. Humphreys, A. A. Snelling, L. Vardiman, and K. P. Wise. 1994. Catastrophic plate tectonics: A global Flood model of earth history. In Proceedings of the third international conference on creationism, ed. R. E. Walsh, pp. 609–621. Pittsburgh, Pennsylvania: Creation Science Fellowship. Bateman, P. C. and B. W. Chappell. 1979. Crystallization, fractionation, and solidification of the Tuolumne Intrusive Series, Yosemite National Park, California. Geological Society of America Bulletin 90:465–482. Bateman, P. C. 1992. Plutonism in the central part of the Sierra Nevada Batholith, California. U.S. Geological Survey Professional Paper 1483, 185p. Bergantz, G. W. 1989.
Underplating and partial melting: Implications for melt generation and extraction. Science 254:1039–1095. Bouchez, J. L., D. H. W. Hutton, and W. E. Stephens, eds. 1997. Granite: From segregation of melt to emplacement fabrics. Dordrecht, The Netherlands: Kluwer Academic Publishers. Brandon, A. D., T. Chacko, and R. A. Creaser. 1996. Constraints on rates of granitic magma transport from epidote dissolution kinetics. Science 271:1845–1848. Brown, M. and T. Rushmer. 1997. In Deformation-enhanced fluid transport in the earth’s crust and mantle, ed. M. Holness, pp.111–144. London: Chapman and Hall. Brown, S. R. 1987. Fluid flow through rock joints: The effect of surface roughness. Journal of Geophysical Research 92:1337–1347. Burnham, C. W. 1997. Magmas and hydrothermal fluids. In Geochemistry of hydrothermal ore deposits, 3rd ed., ed. H. L. Barnes, pp. 63–123. New York: Wiley. Candela, P. A. 1991. Physics of aqueous phase evolution in plutonic environments. American Mineralogist 76:1081–1091. Cathles, L. M. 1977. An analysis of the cooling of intrusives by ground-water convection which includes boiling. Economic Geology 72:804–826. Clemens, J. D. and C. K. Mawer. 1992. Granitic magma transport by fracture propagation. Tectonophysics 204:339–360. Clemens, J. D., N. Petford, and C. K. Mawer. 1997. In Deformation-enhanced fluid transport in the earth’s crust and mantle, ed. M. Holness, pp. 145–172. London: Chapman and Hall. Clemens, J. D. and N. Petford. 1999. Granitic melt viscosity and silicic magma dynamics in contrasting tectonic settings. Journal of the Geological Society of London 156:1057–1060. Coleman, D. S. and A. F. Glazner. 1997. The Sierra Crest magmatic event: Rapid formation of juvenile crust during the Late Cretaceous in California. International Geology Review 39:768–787. Coleman, D. S., W. Gray, and A. F. Glazner. 2004. Rethinking the emplacement and evolution of zoned plutons: Geochronologic evidence for incremental assembly of the Tuolumne Intrusive Suite, California. Geology 32(5):433–436. Collins, W. J., and E. W. Sawyer. 1996. Pervasive granitoid magma transport through the lower-middle crust during non-coaxial compressional deformation. Journal of Metamorphic Geology 14:565–579. Cox, R. A., T. J. Dempster, B. R. Bell, and G. Rogers. 1996. Crystallization of the Shap Granite: Evidence from zoned K-feldspar megacrysts. Journal of the Geological Society of London 153:625-635. D’Lemos, R. S., M. Brown, and R. A. Strachan. 1993. Granite magma generation, ascent and emplacement within a transpressional orogen. Journal of the Geological Society of London 149:487–490. Gentry, R. V. 1968. Fossil alpha-recoil analysis of certain variant radioactive halos. Science 160:1228–1230. Gentry, R. V. 1970. Giant radioactive halos: Indicators of unknown radioactivity. Science 169: 670–673. Gentry, R. V. 1971. Radiohalos: Some unique lead isotopic ratios and unknown alpha activity. Science 173:727–731. Gentry, R. V. 1973. Radioactive halos. Annual Review of Nuclear Science 23:347–362. Gentry, R. V. 1974. Radiohalos in a radiochronological and cosmological perspective. Science 184:62–66. Gentry, R. V. 1986. Radioactive halos: Implications for creation. In Proceedings of the first international conference on creationism, ed. R. E. Walsh, C. L. Brooks, and R. S. Crowell, vol. 2, pp. 89–100. Pittsburgh, Pennsylvania: Creation Science Fellowship. Gentry, R. V. 1988. Creation’s tiny mystery, 347 p. Knoxville, Tennessee: Earth Science Associates. Glazner, A. F., J. M. Bartley, W. B. Hamilton, and B. 
S. Carl. 2003. Making space for batholiths by extrusion of subbatholithic crust. International Geology Review 45:959–967. Glazner, A. F., J. M. Bartley, D. S. Coleman, W. Gray, and R. Z. Taylor. 2004. Are plutons assembled over millions of years by amalgamation from small magma chambers? GSA Today 14(4/5):4–11. Hardee, H. C. 1982. Permeable convection above magma bodies. Tectonophysics 84:179–195. Harris, N., D. Vance, and M. Ayres. 2000. From sediment to granite: Timescales of anatexis in the upper crust. Chemical Geology 162:155–167. Hayba, D. O., and S. E. Ingebritsen. 1997. Multiphase groundwater flow near cooling plutons. Journal of Geophysical Research 102:12,235–12,252. Henderson, G. H., and S. Bateson. 1934. A quantitative study of pleochroic haloes—I. Proceedings of the Royal Society of London, Series A 145:563–581. Henderson, G. H., G. M. Mushkat, and D. P. Crawford. 1934. A quantitative study of pleochroic haloes—III Thorium. Proceedings of the Royal Society of London, Series A 158:199–211. Holmes, A. 1931. Radioactivity and geological time. In Physics of the earth—IV. The age of the earth. Bulletin of the National Research Council 80:124–460. Huber, H. K. 1989. The geologic story of Yosemite National Park. Yosemite National Park, California: The Yosemite Association. Huber, N. K., P. C. Bateman, and C. Wahrhaftig. 1989. Geologic map of Yosemite National Park and vicinity, California. U.S. Geological Survey Miscellaneous Investigations Series Map I-1874, 1 sheet, scale 1:125,000. Huppert, H. E., and R. S. J. Sparks. 1988. The generation of granitic magmas by intrusion of basalt into continental crust. Journal of Petrology 29:599–642. Hutton D. H. W. 1988. Granite emplacement mechanisms and tectonic controls: Inferences from deformation studies. Transactions of the Royal Society of Edinburgh. Earth Sciences 79:245–255. Hutton, D. H. W. 1992. Granite sheeted complexes: Evidence for the dyking ascent mechanism. Transactions of the Royal Society of Edinburgh. Earth Sciences 83:377–382. Iimori, S., and J. Yoshimura. 1926. Pleochroic halos in biotite: Probable existence of the independent origin of the actinium series. Scientific Papers of the Institute of Physical and Chemical Research 5(66):11–24. Ingebritsen, S. E., and D. O. Hayba. 1994. Fluid flow and heat transport near the critical point of H2O. Geophysical Research Letters 21:2199–2202. Jackson, M. D., M. J. Cheadle, and M. P. Atherton. 2003. Quantitative modeling of granitic melt generation and segregation in the continental crust. Journal of Geophysical Research 108(B7, ECV 3):1–21. Joly, J. 1917a. Radio-active halos. Philosophical Transactions of the Royal Society of London, Series A 217:51–79. Joly, J. 1917b. Radio-active halos. Nature 99:456–458, 476–478. Joly, J. 1923. Radio-active halos. Proceedings of the Royal Society of London, Series A 102: 682–705. Joly, J. 1924. The radioactivity of the rocks. Nature 114:160–164. Kerr-Lawson, D. E. 1927. Pleochroic haloes in biotite from near Murray Bay. University of Toronto Studies in Geology Series 24:54–71. Kerr-Lawson, D. E. 1928. Pleochroic haloes in biotite. University of Toronto Studies in Geology Series 27:15–27. Kistler, R. W., B. W. Chappell, D. L. Peck, and P. C. Bateman. 1986. Isotopic variation in the Tuolumne Intrusive Suite, central Sierra Nevada, California. Contributions to Mineralogy and Petrology 94:205–220. Laney, R., and A. W. Laughlin. 1981. Natural annealing of pleochroic haloes in biotite samples from deep drill holes, Fenton Hill, New Mexico. 
Geophysical Research Letters 8(5):501–504. Lee, M. R., and I. Parsons. 1997. Dislocation formation and albitization in alkali feldspar from the Shap Granite. American Mineralogist 82:557–570. Lee, M. R., K. A. Waldron, and I. Parsons. 1995. Exsolution and alteration microtextures in alkali feldspar phenocrysts from the Shap Granite. Mineralogical Magazine 59:63–78. Mahan, K. H., J. M. Bartley, D. S. Coleman, A. F. Glazner, and B. S. Carl. 2003. Sheeted intrusion of the synkinematic McDoogle pluton, Sierra Nevada, California. Geological Society of America Bulletin 115(12):1570–1582. Marsh, B. D. 1982. On the mechanics of igneous diapirism, stoping and zone melting. American Journal of Science 282:808–855. McNulty, B. A., W. Tong, and O. T. Tobisch. 1996. Assembly of a dike-fed magma chamber: The Jackass Lakes pluton, central Sierra Nevada, California. Geological Society of America Bulletin 108(8):926–940. Miller, C. F., E. B. Watson, and T. M. Harrison. 1988. Perspectives on the source, segregation and transport of granitoid magmas. Transactions of the Royal Society of Edinburgh. Earth Sciences 79:135–156. Norton, D., and J. Knight. 1977. Transport phenomena in hydrothermal systems: Cooling plutons. American Journal of Science 277:937–981. Owen, M. R. 1988. Radiation-damaged halos in quartz. Geology 16:529–532. Parmentier, E. M. 1981. Numerical experiments on O18 depletion in igneous intrusions cooling by groundwater convection. Journal of Geophysical Research 86: 7131–7144. Paterson, S. R., and O. T. Tobisch. 1992. Rates of processes in magmatic arcs: Implications for the timing and nature of pluton emplacement and wall rock deformation. Journal of Structural Geology 14(3):291–300. Paterson, S. R., and R. H. Vernon. 1995. Bursting the bubble of ballooning plutons: A return to nested diapirs emplaced by multiple processes. Geological Society of America Bulletin 107(11):1356–1380. Petford, N. 1995. Segregation of tonalitic-trondhjemitic melts in the continental crust: The mantle connection. Journal of Geophysical Research 100:15,735–15,743. Petford, N., J. D. Clemens, and J.-L. Vigneresse. 1997. In Granite from segregation of melt to emplacement fabrics, eds. J.-L. Bouchez, D. H. W. Hutton, and W. E. Stephens, pp. 3–10. Dordretch: Kluwer. Petford, N., A. R. Cruden, K. J. W. McCaffrey, and J.-L. Vigneresse. 2000. Granite magma formation, transport and emplacement in the earth’s crust. Nature 408:669–673. Petford, N., R. C. Kerr, and J. R. Lister. 1993. Dike transport of granitoid magmas. Geology 21:845–848. Petford, N., and M. A. Koenders. 1998. Self-organisation and fracture connectivity in rapidly heated continental crust. Journal of Structural Geology 20:1425–1434. Pitcher, W. S. 1993. The nature and origin of granite, 321p. London: Blackie Academic and Professional. Ratajeski, K., A. F. Glazner, and B. Miller. 2001. Geology and geochemistry of mafic to felsic plutonic rocks in the Cretaceous Intrusive Suite of Yosemite Valley, California. Geological Society of America Bulletin 113(11):1486–1502. Reid, J. B. Jr., O. C. Evans, and D. G. Fates. 1983. Magma mixing in granitic rocks of the central Sierra Nevada, California. Earth and Planetary Science Letters 66:243–261. Rutter, E. H., and D. H. K. Neumann. 1995. Experimental deformation of partially molten Westerly Granite under fluid-absent conditions, with implications for the extraction of granitic magmas. Journal of Geophysical Research 100:15,697–15,715. Scalliet, B., A. Pecher, P. Rochette, and M. Champenois. 1994. 
The Gangotri Granite (Garhwal Himalaya): Laccolith emplacement in an extending collisional belt. Journal of Geophysical Research 100:585–607. Snelling, A. A. 2000. Radiohalos. In Radioisotopes and the age of the earth: A young-earth creationist research initiative, eds. L. Vardiman, A. A. Snelling, and E. F. Chaffin, pp. 381–468. El Cajon, California: Institute for Creation Research; St. Joseph, Missouri: Creation Research Society. Snelling, A. A. 2005a. Radiohalos in granites: Evidence for accelerated nuclear decay. In Radioisotopes and the age of the earth: Results of a young-earth creationist research initiative, eds. L. Vardiman, A. A. Snelling, and E. F. Chaffin, pp. 101–207. El Cajon, California: Institute for Creation Research; Chino Valley, Arizona: Creation Research Society. Snelling, A. A. 2005b. Polonium radiohalos: The model for their formation tested and verified. Impact #386. El Cajon, California: Institute for Creation Research. Snelling, A. A. 2006. Confirmation of rapid metamorphism of rocks. Impact #392. El Cajon, California: Institute for Creation Research. Snelling, A. A. 2008a. Catastrophic granite formation: Rapid melting of source rocks, and rapid magma intrusion and cooling. Answers Research Journal 1:11–25 Snelling, A. A. 2008b. Testing the hydrothermal fluid transport model for polonium radiohalo formation: The Thunderhead Sandstone, Great Smoky Mountains, Tennessee–North Carolina. Answers Research Journal 1:53–54. Snelling, A. A. 2008c. Radiohalos in the Cooma Metamorphic Complex, New South Wales, Australia: The mode and rate of regional metamorphism. In Proceedings of the sixth international conference on creationism, ed. A. A. Snelling, pp. 371–387. Pittsburgh, Pennsylvania: Creation Science Fellowship; Dallas, Texas: Institute for Creation Research. Snelling, A. A. 2008d. Radiohalos in the Shap Granite, Lake District, England: Evidence that removes objections to Flood geology. In Proceedings of the sixth international conference on creationism, ed. A. A. Snelling, pp. 389–405. Pittsburgh, Pennsylvania; Creation Science Fellowship, and Dallas, Texas: Institute for Creation Research. Snelling, A. A. 2009. Radiohalos in the Mole Granite, New South Wales, Australia, in contrast to other granites in the New England Batholith: A potential correlation with hydrothermal ore veins. In preparation. Snelling, A. A., and M. H. Armitage. 2003. Radiohalos—A tale of three granitic plutons. In Proceedings of the fifth international conference on creationism, ed. R. L. Ivey, Jr., pp. 243–267. Pittsburgh, Pennsylvania: Creation Science Fellowship. Snelling, A. A., J. R. Baumgardner, and L. Vardiman. 2003. Abundant Po radiohalos in Phanerozoic granites and timescale implications for their formation. EOS, Transactions of the American Geophysical Union 84(46), Fall Meeting Supplement, Abstract V32C–1046. Snelling, A. A. and J. Woodmorappe. 1998. The cooling of thick igneous bodies on a young earth. In Proceedings of the fourth international conference on creationism, ed. R. E. Walsh, pp. 527–545. Pittsburgh, Pennsylvania: Creation Science Fellowship. Snetsinger, K. G. 1967. Nuclei of pleochroic halos in biotites of some Sierra Nevada granitic rocks. American Mineralogist 52:1901–1903. Spera, F. J. 1982. Thermal evolution of plutons: A parameterized approach. Science 207:299–301. Stark, M. 1936. Pleochroitische (Radioaktive) Höfe ihre Verbreitung in den Gesteinen und Veränderlickheit. Chemie der Erde 10:566–630. Stern, T. W., P. C. Bateman, B. A. Morgan, M. F. Newell, and D. L. 
Peck. 1981. Isotopic U-Pb ages of zircons from the granitoids of the central Sierra Nevada. U.S. Geological Survey Professional Paper 1185, 17p. Titus, S. J., R. Clark, and B. Rikoff. 2005. Geologic and geophysical investigation of two fine-grained granites, Sierra Nevada Batholith, California: Evidence for structural control on emplacement and volcanism. Geological Society of America Bulletin 117:1256–1271. Torrance, K. E., and J. P. Sheu. 1978. Heat transfer from plutons undergoing hydrothermal cooling and thermal cracking. Numerical Heat Transfer 1:147–161. Vardiman, L., A. A. Snelling, and E. F. Chaffin, eds. 2005. Radioisotopes and the age of the earth: Results of a young-earth creationist research initiative. El Cajon, California: Institute for Creation Research; Chino Valley, Arizona: Creation Research Society. Wiman, E. 1930. Studies of some Archaean rocks in the neighbourhood of Uppsala, Sweden, and their geological position. Bulletin of the Geological Institute, University of Uppsala 23:1–170. Wise, K. P. 1989. Radioactive halos: Geological concerns. Creation Research Society Quarterly 25:171–176. Yoshinobu, A. S., D. A. Okaya, and S. R. Paterson. 1998. Modeling the thermal evolution of fault-controlled magma emplacement models: Implications for the solidification of granitoid plutons. Journal of Structural Geology 20(9–10):1205–1218. Young, D. A., and R. F. Stearley. 2008. The Bible, rocks and time: Geological evidence for the age of the earth. Downers Grove, Illinois: InterVarsity Press. Zak, J., and S. R. Paterson. 2005. Characteristics of internal contacts in the Tuolumne Batholith, central Sierra Nevada, California (USA): Implications for episodic emplacement and physical processes in a continental arc magma chamber. Geological Society of America Bulletin 117:1242–1255. Zhao, J., and E. T. Brown. 1992. Thermal cracking induced by water flow through joints in heated granite. International Journal of Rock Mechanics 17:77–82.
The formation and evolution of the Solar System is estimated to have begun 4.6 billion years ago with the gravitational collapse of a small part of a giant molecular cloud. Most of the collapsing mass collected in the centre, forming the Sun, while the rest flattened into a protoplanetary disc out of which the planets, moons, asteroids, and other small Solar System bodies formed. This widely accepted model, known as the nebular hypothesis, was first developed in the 18th century by Emanuel Swedenborg, Immanuel Kant, and Pierre-Simon Laplace. Its subsequent development has interwoven a variety of scientific disciplines including astronomy, physics, geology, and planetary science. Since the dawn of the space age in the 1950s and the discovery of extrasolar planets in the 1990s, the models have been both challenged and refined to account for new observations. The Solar System has evolved considerably since its initial formation. Many moons have formed from circling discs of gas and dust around their parent planets, while other moons are believed to have been bodies captured by their planets or, as in the case of the Earth's Moon, to have resulted from giant collisions. Collisions between bodies have occurred continually up to the present day and have been central to the evolution of the solar system. The positions of the planets often shifted, and planets have switched places. This planetary migration now is believed to have been responsible for much of the Solar System's early evolution. In roughly 5 billion years, the Sun will cool and expand outward to many times its current diameter (becoming a red giant), before casting off its outer layers as a planetary nebula, and leaving behind a stellar corpse known as a white dwarf. In the far distant future, the gravity of passing stars gradually will whittle away at the Sun's retinue of planets. Some planets will be destroyed, others ejected into interstellar space. Ultimately, over the course of trillions of years, it is likely that the Sun will be left alone with no bodies in orbit around it. Ideas concerning the origin and fate of the world date from the earliest known writings; however, for almost all of that time, there was no attempt to link such theories to the existence of a "Solar System", simply because it was not generally believed that the Solar System, in the sense we now understand it, existed. The first step toward a theory of Solar System formation and evolution was the general acceptance of heliocentrism, the model which placed the Sun at the centre of the system and the Earth in orbit around it. This conception had been gestating for millennia, but was widely accepted only by the end of the 17th century. The first recorded use of the term "Solar System" dates from 1704. The current standard theory for Solar System formation, the nebular hypothesis, has fallen into and out of favour since its formulation by Emanuel Swedenborg, Immanuel Kant, and Pierre-Simon Laplace in the 18th century. The most significant criticism of the hypothesis was its apparent inability to explain the Sun's relative lack of angular momentum when compared to the planets. However, since the early 1980s studies of young stars have shown them to be surrounded by cool discs of dust and gas, exactly as the nebular hypothesis predicts, which has led to its re-acceptance. Understanding of how the Sun will continue to evolve required an understanding of the source of its power. 
Arthur Stanley Eddington's confirmation of Albert Einstein's theory of relativity led to his realisation that the Sun's energy comes from nuclear fusion reactions in its core. In 1935, Eddington went further and suggested that other elements also might form within stars. Fred Hoyle elaborated on this premise by arguing that evolved stars called red giants created many elements heavier than hydrogen and helium in their cores. When a red giant finally casts off its outer layers, these elements would then be recycled to form other star systems. One of these regions of collapsing gas (known as the pre-solar nebula) would form what became the Solar System. This region had a diameter of between 7000 and 20,000 astronomical units (AU) and a mass just over that of the Sun. Its composition was about the same as that of the Sun today. Hydrogen, along with helium and trace amounts of lithium produced by Big Bang nucleosynthesis, formed about 98% of the mass of the collapsing cloud. The remaining 2% of the mass consisted of heavier elements that were created by nucleosynthesis in earlier generations of stars. Late in the life of these stars, they ejected heavier elements into the interstellar medium. Because of the conservation of angular momentum, the nebula spun faster as it collapsed. As the material within the nebula condensed, the atoms within it began to collide with increasing frequency, converting their kinetic energy into heat. The centre, where most of the mass collected, became increasingly hotter than the surrounding disc. Over about 100,000 years, the competing forces of gravity, gas pressure, magnetic fields, and rotation caused the contracting nebula to flatten into a spinning protoplanetary disc with a diameter of ~200 AU and form a hot, dense protostar (a star in which hydrogen fusion has not yet begun) at the centre. At this point in its evolution, the Sun is believed to have been a T Tauri star. Studies of T Tauri stars show that they are often accompanied by discs of pre-planetary matter with masses of 0.001–0.1 solar masses. These discs extend to several hundred AU—the Hubble Space Telescope has observed protoplanetary discs of up to 1000 AU in diameter in star-forming regions such as the Orion Nebula—and are rather cool, reaching only a thousand Kelvin at their hottest. Within 50 million years, the temperature and pressure at the core of the Sun became so great that its hydrogen began to fuse, creating an internal source of energy which countered the force of gravitational contraction until hydrostatic equilibrium was achieved. This marked the Sun's entry into the prime phase of its life, known as the main sequence. Main sequence stars are those which derive their energy from the fusion of hydrogen into helium in their cores. The Sun remains a main sequence star today. The various planets are thought to have formed from the solar nebula, the disc-shaped cloud of gas and dust left over from the Sun's formation. The currently accepted method by which the planets formed is known as accretion, in which the planets began as dust grains in orbit around the central protostar. Through direct contact, these grains formed into clumps between one and ten kilometres (km) in diameter, which in turn collided to form larger bodies (planetesimals) of ~5 km in size. These gradually increased through further collisions, growing at the rate of centimetres per year over the course of the next few million years. 
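The spin-up of the collapsing nebula described above follows from conservation of angular momentum: for a cloud of fixed mass, L is proportional to M R² ω, so the rotation rate scales roughly as the inverse square of its size. The sketch below is only an order-of-magnitude illustration, using the ~10,000 AU cloud and ~200 AU disc sizes quoted above and ignoring the change in mass distribution from sphere to disc.

```python
# Conservation of angular momentum for a contracting cloud of fixed mass:
# L = I * omega with I proportional to M * R^2, so omega scales as (R_i / R_f)^2.

def spun_up_rate(omega_initial: float, size_initial: float, size_final: float) -> float:
    """Rotation rate after contraction; only the size ratio matters."""
    return omega_initial * (size_initial / size_final) ** 2

# Illustrative numbers: a ~10,000 AU pre-solar nebula contracting to a ~200 AU disc.
print(spun_up_rate(omega_initial=1.0, size_initial=10_000, size_final=200))  # -> 2500.0
```

In other words, contraction by a factor of fifty in size implies a rotation rate some 2,500 times faster, which is why the contracting nebula flattened into a rapidly spinning disc.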
The inner Solar System, the region of the Solar System inside 4 AU, was too warm for volatile molecules like water and methane to condense, so the planetesimals which formed there could only form from compounds with high melting points, such as metals (like iron, nickel, and aluminium) and rocky silicates. These rocky bodies would become the terrestrial planets (Mercury, Venus, Earth, and Mars). These compounds are quite rare in the universe, comprising only 0.6% of the mass of the nebula, so the terrestrial planets could not grow very large. The terrestrial embryos grew to about 0.05 Earth masses and ceased accumulating matter about 100,000 years after the formation of the Sun; subsequent collisions and mergers between these planet-sized bodies allowed terrestrial planets to grow to their present sizes (see Terrestrial planets below). The gas giant planets (Jupiter, Saturn, Uranus, and Neptune) formed further out, beyond the frost line, the point between the orbits of Mars and Jupiter where the material is cool enough for volatile icy compounds to remain solid. The ices which formed the Jovian planets were more abundant than the metals and silicates which formed the terrestrial planets, allowing the Jovian planets to grow massive enough to capture hydrogen and helium, the lightest and most abundant elements. Planetesimals beyond the frost line accumulated up to four Earth masses within about 3 million years. Today, the four gas giants comprise just under 99% of all the mass orbiting the Sun. Theorists believe it is no accident that Jupiter lies just beyond the frost line. Because the frost line accumulated large amounts of water via evaporation from infalling icy material, it created a region of lower pressure that increased the speed of orbiting dust particles and halted their motion toward the Sun. In effect, the frost line acted as a barrier that caused material to accumulate rapidly at ~5 AU from the Sun. This excess material coalesced into a large embryo of about 10 Earth masses, which then began to grow rapidly by swallowing hydrogen from the surrounding disc, reaching 150 Earth masses in only another 1000 years and finally topping out at 318 Earth masses. Saturn may owe its substantially lower mass simply to having formed a few million years after Jupiter, when there was less gas available to consume. T Tauri stars like the young Sun have far stronger stellar winds than more stable, older stars. Uranus and Neptune are believed to have formed after Jupiter and Saturn did, when the strong solar wind had blown away much of the disc material. As a result, the planets accumulated little hydrogen and helium—not more than 1 Earth mass each. Uranus and Neptune are sometimes referred to as failed cores. The main problem with formation theories for these planets is the timescale of their formation. At the current locations it would have taken a hundred million years for their cores to accrete. This means that Uranus and Neptune probably formed closer to the Sun—near or even between Jupiter and Saturn—and later migrated outward (see Planetary migration below). Motion in the planetesimal era was not all inward toward the Sun; the Stardust sample return from Comet Wild 2 has suggested that materials from the early formation of the Solar System migrated from the warmer inner Solar System to the region of the Kuiper belt. 
After between three and ten million years, the young Sun's solar wind would have cleared away all the gas and dust in the protoplanetary disc, blowing it into interstellar space, thus ending the growth of the planets. At the end of the planetary formation epoch the inner Solar System was populated by 50–100 Moon- to Mars-sized planetary embryos. Further growth was possible only because these bodies collided and merged, a process which took up to 100 million years. These objects would have gravitationally interacted with one another, tugging at each other's orbits until they collided, growing larger until the four terrestrial planets we know today took shape. One such giant collision is believed to have formed the Moon (see Moons below), while another removed the outer envelope of the young Mercury. One unresolved issue with this model is that it cannot explain how the initial orbits of the proto-terrestrial planets, which would have needed to be highly eccentric in order to collide, produced the remarkably stable and near-circular orbits the terrestrial planets possess today. One hypothesis for this "eccentricity damping" is that the terrestrials formed in a disc of gas still not expelled by the Sun. The "gravitational drag" of this residual gas would have eventually lowered the planets' energy, smoothing out their orbits. However, such gas, if it existed, would have prevented the terrestrials' orbits from becoming so eccentric in the first place. Another hypothesis is that gravitational drag occurred not between the planets and residual gas but between the planets and the remaining small bodies. As the large bodies moved through the crowd of smaller objects, the smaller objects, attracted by the larger planets' gravity, formed a region of higher density, a "gravitational wake", in the larger objects' path. As they did so, the increased gravity of the wake slowed the larger objects down into more regular orbits.

As Jupiter migrated inward following its formation (see Planetary migration below), resonances would have swept across the asteroid belt, dynamically exciting the region's population and increasing their velocities relative to each other. The cumulative action of the resonances and the embryos either scattered the planetesimals away from the asteroid belt or excited their orbital inclinations and eccentricities. Some of those massive embryos, too, were ejected by Jupiter, while others may have migrated to the inner Solar System and played a role in the final accretion of the terrestrial planets. During this primary depletion period, the effects of the giant planets and planetary embryos left the asteroid belt with a total mass equivalent to less than 1% that of the Earth, composed mainly of small planetesimals. This is still 10–20 times more than the current mass in the main belt, which is about 1/2,000 the Earth's mass. A secondary depletion period that brought the asteroid belt down close to its present mass is believed to have followed when Jupiter and Saturn entered a temporary 2:1 orbital resonance (see below). The inner Solar System's period of giant impacts probably played a role in the Earth acquiring its current water content (~6 × 10^21 kg) from the early asteroid belt. Water is too volatile to have been present at Earth's formation and must have been subsequently delivered from outer, colder parts of the Solar System. The water was probably delivered by planetary embryos and small planetesimals thrown out of the asteroid belt by Jupiter.
A population of main-belt comets discovered in 2006 has also been suggested as a possible source for Earth's water. In contrast, comets from the Kuiper belt or farther regions delivered not more than about 6% of Earth's water. The panspermia hypothesis holds that life itself may have been deposited on Earth in this way, although this idea is not widely accepted.

The migration of the outer planets is also necessary to account for the existence and properties of the Solar System's outermost regions. Beyond Neptune, the Solar System continues into the Kuiper belt, the scattered disc, and the Oort cloud, three sparse populations of small icy bodies thought to be the points of origin for most observed comets. At their distance from the Sun, accretion was too slow to allow planets to form before the solar nebula dispersed, and thus the initial disc lacked enough mass density to consolidate into a planet. The Kuiper belt lies between 30 and 55 AU from the Sun, while the farther scattered disc extends to over 100 AU, and the distant Oort cloud begins at about 50,000 AU. Originally, however, the Kuiper belt was much denser and closer to the Sun, with an outer edge at approximately 30 AU. Its inner edge would have been just beyond the orbits of Uranus and Neptune, which were in turn far closer to the Sun when they formed (most likely in the range of 15–20 AU) and in the opposite order, with Uranus farther from the Sun than Neptune.

After the formation of the Solar System, the orbits of all the giant planets continued to change slowly, influenced by their interaction with the large number of remaining planetesimals. After 500–600 million years (about 4 billion years ago) Jupiter and Saturn fell into a 2:1 resonance; Saturn orbited the Sun once for every two Jupiter orbits. This resonance created a gravitational push against the outer planets, causing Neptune to surge past Uranus and plough into the ancient Kuiper belt. The planets scattered the majority of the small icy bodies inwards, while themselves moving outwards. These planetesimals then scattered off the next planet they encountered in a similar manner, moving the planets' orbits outwards while they moved inwards. This process continued until the planetesimals interacted with Jupiter, whose immense gravity sent them into highly elliptical orbits or even ejected them outright from the Solar System. This caused Jupiter to move slightly inward. Those objects scattered by Jupiter into highly elliptical orbits formed the Oort cloud; those objects scattered to a lesser degree by the migrating Neptune formed the current Kuiper belt and scattered disc. This scenario explains the Kuiper belt's and scattered disc's present low mass. Some of the scattered objects, including Pluto, became gravitationally tied to Neptune's orbit, forcing them into mean-motion resonances. Eventually, friction within the planetesimal disc made the orbits of Uranus and Neptune circular again.

In contrast to the outer planets, the inner planets are not believed to have migrated significantly over the age of the Solar System, because their orbits have remained stable following the period of giant impacts.

Impacts are believed to be a regular (if currently infrequent) part of the evolution of the Solar System. That they continue to happen is evidenced by the collision of Comet Shoemaker-Levy 9 with Jupiter in 1994, and by the impact feature Meteor Crater in Arizona. The process of accretion, therefore, is not complete, and may still pose a threat to life on Earth.
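The 2:1 Jupiter–Saturn resonance mentioned above can be tied to orbital distances through Kepler's third law, under which the square of the period scales as the cube of the semi-major axis, so a 2:1 period ratio corresponds to a semi-major-axis ratio of 2^(2/3), or about 1.59. The snippet below is a small illustration of that relation; the function name and the sample 5.2 AU value for Jupiter are illustrative choices, not taken from the text.

```python
# Kepler's third law for orbits around the same central body: P^2 scales as a^3,
# so a period ratio P2/P1 implies a semi-major-axis ratio of (P2/P1)^(2/3).

def axis_ratio_from_period_ratio(period_ratio: float) -> float:
    """Semi-major-axis ratio corresponding to a given orbital period ratio."""
    return period_ratio ** (2.0 / 3.0)

# A 2:1 mean-motion resonance (Saturn orbiting once for every two Jupiter orbits):
print(axis_ratio_from_period_ratio(2.0))        # ~1.587

# With Jupiter near 5.2 AU, the 2:1 resonance would lie near:
print(5.2 * axis_ratio_from_period_ratio(2.0))  # ~8.3 AU
```

Since Saturn today orbits near 9.5 AU, with a period ratio of roughly 2.5:1 relative to Jupiter, the temporary 2:1 resonance implies a more compact giant-planet configuration at the time, consistent with the migration scenario described above.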
The evolution of the outer Solar System appears to have been influenced by nearby supernovae and possibly also passage through interstellar clouds. The surfaces of bodies in the outer Solar System would experience space weathering from the solar wind, micrometeorites, and the neutral components of the interstellar medium. The evolution of the asteroid belt after Late Heavy Bombardment was mainly governed by collisions. Objects with large mass have enough gravity to retain any material ejected by a violent collision. In the asteroid belt this usually is not the case. As a result, many larger objects have been broken apart, and sometimes newer objects have been forged from the remnants in less violent collisions. Moons around some asteroids currently can only be explained as consolidations of material flung away from the parent object without enough energy to entirely escape its gravity. Jupiter and Saturn have a number of large moons, such as Io, Europa, Ganymede and Titan, which may have originated from discs around each giant planet in much the same way that the planets formed from the disc around the Sun. This origin is indicated by the large sizes of the moons and their proximity to the planet. These attributes are impossible to achieve via capture, while the gaseous nature of the primaries make formation from collision debris another impossibility. The outer moons of the gas giants tend to be small and have eccentric orbits with arbitrary inclinations. These are the characteristics expected of captured bodies. Most such moons orbit in the direction opposite the rotation of their primary. The largest irregular moon is Neptune's moon Triton, which is believed to be a captured Kuiper belt object. Moons of solid Solar System bodies have been created by both collisions and capture. Mars's two small moons, Deimos and Phobos, are believed to be captured asteroids. The Earth's Moon is believed to have formed as a result of a single, large oblique collision. The impacting object likely had a mass comparable to that of Mars, and the impact probably occurred near the end of the period of giant impacts. The collision kicked into orbit some of the impactor's mantle, which then coalesced into the Moon. The impact was probably the last in series of mergers that formed Earth. It has been further hypothesized that the Mars-sized object may have formed at one of the stable Earth-Sun Lagrangian points (either L4 or L5) and drifted from its position. Pluto's moon Charon may also have formed by means of a large collision; the Pluto-Charon and Earth-Moon systems are the only two in the Solar System in which the satellite's mass is at least 1% that of the larger body. The planets' orbits are chaotic over longer timescales, such that the whole Solar System possesses a Lyapunov time in the range of 2–230 million years. In all cases this means that the position of a planet along its orbit ultimately becomes impossible to predict with any certainty (so, for example, the timing of winter and summer become uncertain), but in some cases the orbits themselves may change dramatically. Such chaos manifests most strongly as changes in eccentricity, with some planets' orbits becoming significantly more—or less—elliptical. Ultimately, the Solar System is stable in that none of the planets will collide with each other or be ejected from the system in the next few billion years. 
Beyond this, within five billion years or so Mars's eccentricity may grow to around 0.2, such that it lies on an Earth-crossing orbit, leading to a potential collision. In the same timescale, Mercury's eccentricity may grow even further, and a close encounter with Venus could theoretically eject it from the Solar System altogether or send it on a collision course with Venus or Earth. The Earth and its Moon are one example of this configuration. Today, the Moon is tidally locked to the Earth; one of its revolutions around the Earth is equal to one of its rotations about its axis, which means that it always shows one face to the Earth. The Moon will continue to recede from Earth, and Earth's spin will continue to slow gradually. In about 50 billion years, if the two worlds survive the Sun's expansion, they will become tidally locked to each other; each will be visible from only one hemisphere of the other. Other examples are the Galilean moons of Jupiter (as well as many of Jupiter's smaller moons) and most of the larger moons of Saturn. A different scenario occurs when the moon is either revolving around the primary faster than the primary rotates, or is revolving in the direction opposite the planet's rotation. In these cases, the tidal bulge lags behind the moon in its orbit. In the former case, the direction of angular momentum transfer is reversed, so the rotation of the primary speeds up while the satellite's orbit shrinks. In the latter case, the angular momentum of the rotation and revolution have opposite signs, so transfer leads to decreases in the magnitude of each (that cancel each other out). In both cases, tidal deceleration causes the moon to spiral in towards the primary until it either is torn apart by tidal stresses, potentially creating a planetary ring system, or crashes into the planet's surface or atmosphere. Such a fate awaits the moons Phobos of Mars (within 30 to 50 million years), Triton of Neptune (in 3.6 billion years), Metis and Adrastea of Jupiter, and at least 16 small satellites of Uranus and Neptune. Uranus' Desdemona may even collide with one of its neighboring moons. A third possibility is where the primary and moon are tidally locked to each other. In that case, the tidal bulge stays directly under the moon, there is no transfer of angular momentum, and the orbital period will not change. Pluto and Charon are an example of this type of configuration. Prior to the 2004 arrival of the Cassini–Huygens spacecraft, the rings of Saturn were widely thought to be much younger than the Solar System and were not expected to survive beyond another 300 million years. Gravitational interactions with Saturn's moons were expected to gradually sweep the rings' outer edge toward the planet, with abrasion by meteorites and Saturn's gravity eventually taking the rest, leaving Saturn unadorned. However, data from the Cassini mission led scientists to revise that early view. Observations revealed 10 km-wide icy clumps of material that repeatedly break apart and reform, keeping the rings fresh. Saturn's rings are far more massive than the rings of the other gas giants. This large mass is believed to have preserved Saturn's rings since the planet first formed 4.5 billion years ago, and is likely to preserve them for billions of years to come. Around 5.4 billion years from now, all of the hydrogen in the core of the Sun will have fused into helium. 
The core will no longer be supported against gravitational collapse and will begin to contract, heating a shell around the core until hydrogen begins to fuse within it. This will cause the outer layers of the star to expand greatly, and the star will enter a phase of its life in which it is called a red giant. Within 7.5 billion years, the Sun will have expanded to a radius of 1.2 AU—256 times its current size. At the tip of the red giant branch, as a result of the vastly increased surface area, the Sun's surface will be much cooler (about 2600 K) than now and its luminosity much higher—up to 2,700 current solar luminosities. For part of its red giant life, the Sun will have a strong stellar wind which will carry away around 33% of its mass. During these times, it is possible that Saturn's moon Titan could achieve surface temperatures necessary to support life. As the Sun expands, it will swallow the planets Mercury and, most likely, Venus. Earth's fate is less clear; although the Sun will envelop Earth's current orbit, the star's loss of mass (and thus weaker gravity) will cause the planets' orbits to move farther out. If it were only for this, Venus and Earth would probably escape incineration, but a 2008 study suggests that Earth will likely be swallowed up as a result of tidal interactions with the Sun's weakly bound outer envelope. Gradually, the hydrogen burning in the shell around the solar core will increase the mass of the core until it reaches about 45% of the present solar mass. At this point the density and temperature will become so high that the fusion of helium into carbon will begin, leading to a helium flash; the Sun will shrink from around 250 to 11 times its present (main sequence) radius. Consequently, its luminosity will decrease from around 3,000 to 54 times its current level, and its surface temperature will increase to about 4770 K. The Sun will become a horizontal branch star, burning helium in its core in a stable fashion much like it burns hydrogen today. The helium-fusing stage will last only 100 million years. Eventually, it will have to again resort to the reserves of hydrogen and helium in its outer layers and will expand a second time, turning into what is known as an asymptotic giant branch star. Here the luminosity of the Sun will increase again, reaching about 2,090 present luminosities, and it will cool to about 3500 K. This phase lasts about 30 million years, after which, over the course of a further 100,000 years, the Sun's remaining outer layers will fall away, ejecting a vast stream of matter into space and forming a halo known (misleadingly) as a planetary nebula. The ejected material will contain the helium and carbon produced by the Sun's nuclear reactions, continuing the enrichment of the interstellar medium with heavy elements for future generations of stars. This is a relatively peaceful event, nothing akin to a supernova, which our Sun is too small to undergo as part of its evolution. Any observer present to witness this occurrence would see a massive increase in the speed of the solar wind, but not enough to destroy a planet completely. However, the star's loss of mass could send the orbits of the surviving planets into chaos, causing some to collide, others to be ejected from the Solar System, and still others to be torn apart by tidal interactions. Afterwards, all that will remain of the Sun is a white dwarf, an extraordinarily dense object, 54% its original mass but only the size of the Earth. 
Initially, this white dwarf may be 100 times as luminous as the Sun is now. It will consist entirely of degenerate carbon and oxygen, but will never reach temperatures hot enough to fuse these elements. Thus the white dwarf Sun will gradually cool, growing dimmer and dimmer.

As the Sun dies, its gravitational pull on the orbiting bodies such as planets, comets and asteroids will weaken due to its mass loss. All remaining planets' orbits will expand; if Venus, Earth, and Mars still exist, their orbits will lie farther from the Sun than they do today. They and the other remaining planets will become dark, frigid hulks, completely devoid of any form of life. They will continue to orbit their star, their speed slowed due to their increased distance from the Sun and the Sun's reduced gravity. Two billion years later, when the Sun has cooled to the 6000–8000 K range, the carbon and oxygen in the Sun's core will freeze, with over 90% of its remaining mass assuming a crystalline structure. Eventually, after trillions more years, the Sun will finally cease to shine altogether, becoming a black dwarf.

The Solar System travels alone through the Milky Way galaxy in a circular orbit approximately 30,000 light years from the galactic centre. Its speed is about 220 km/s. The period required for the Solar System to complete one revolution around the galactic centre, the galactic year, is in the range of 220–250 million years. Since its formation, the Solar System has completed at least 20 such revolutions. A number of scientists have speculated that the Solar System's path through the galaxy is a factor in the periodicity of mass extinctions observed in the Earth's fossil record. One hypothesis supposes that vertical oscillations made by the Sun as it orbits the galactic centre cause it to regularly pass through the galactic plane. When the Sun's orbit takes it outside the galactic disc, the influence of the galactic tide is weaker; as it re-enters the galactic disc, as it does every 20–25 million years, it comes under the influence of the far stronger "disc tides", which, according to mathematical models, increase the flux of Oort cloud comets into the Solar System by a factor of 4, leading to a massive increase in the likelihood of a devastating impact. However, others argue that the Sun is currently close to the galactic plane, and yet the last great extinction event was 15 million years ago; therefore, they contend, the Sun's vertical position cannot alone explain such periodic extinctions, and extinctions instead occur when the Sun passes through the galaxy's spiral arms. Spiral arms are home not only to larger numbers of molecular clouds, whose gravity may distort the Oort cloud, but also to higher concentrations of bright blue giant stars, which live for relatively short periods and then explode violently as supernovae.

Although the vast majority of galaxies in the Universe are moving away from the Milky Way, the Andromeda Galaxy, the largest member of our Local Group of galaxies, is heading towards it at about 120 km/s. In 2 billion years, Andromeda and the Milky Way will collide, causing both to deform as tidal forces distort their outer arms into vast tidal tails. When this initial disruption occurs, astronomers calculate a 12% chance that the Solar System will be pulled outward into the Milky Way's tidal tail and a 3% chance that it will become gravitationally bound to Andromeda and thus a part of that galaxy. 
After a further series of glancing blows, during which the likelihood of the Solar System's ejection rises to 30%, the galaxies' supermassive black holes will merge. Eventually, in roughly 7 billion years, the Milky Way and Andromeda will complete their merger into a giant elliptical galaxy. During the merger, if there is enough gas, the increased gravity will force the gas to the centre of the forming elliptical galaxy. This may lead to a short period of intensive star formation called a starburst. In addition, the infalling gas will feed the newly formed black hole, transforming it into an active galactic nucleus. The force of these interactions will likely push the Solar System into the new galaxy's outer halo, leaving it relatively unscathed by the radiation from these collisions.

It is a common misconception that this collision will disrupt the orbits of the planets in the Solar System. While it is true that the gravity of passing stars can detach planets into interstellar space, distances between stars are so great that the likelihood of the Milky Way-Andromeda collision causing such disruption to any individual star system is negligible. While the Solar System as a whole could be affected by these events, the Sun and planets are not expected to be disturbed. However, over time, the cumulative probability of a chance encounter with a star increases, and disruption of the planets becomes all but inevitable. Assuming that the Big Crunch or Big Rip scenarios for the end of the universe do not occur, calculations suggest that the gravity of passing stars will have completely stripped the dead Sun of its remaining planets within 1 quadrillion (10^15) years. This point marks the end of the Solar System. While the Sun and planets may survive, the Solar System, in any meaningful sense, will cease to exist.

Studies of discs around other stars have also done much to establish a time frame for Solar System formation. Stars between one and three million years old possess discs rich in gas, whereas discs around stars more than 10 million years old have little to no gas, suggesting that gas giant planets within them have ceased forming.

Timeline of events (time since formation of the Sun):
- Billions of years before the formation of the Solar System: Previous generations of stars live and die, injecting heavy elements into the interstellar medium out of which the Solar System formed.
- ~50 million years before the formation of the Solar System: If the Solar System formed in an Orion nebula-like star-forming region, the most massive stars are formed, live their lives, die, and explode in supernovae. One supernova possibly triggers the formation of the Solar System.
- 0–100,000 years: Pre-solar nebula forms and begins to collapse. Sun begins to form.
- 100,000–50 million years: Sun is a T Tauri protostar.
- 100,000–10 million years: Outer planets form. By 10 million years, gas in the protoplanetary disc has been blown away, and outer planet formation is likely complete.
- 10–100 million years: Terrestrial planets and the Moon form. Giant impacts occur. Water delivered to Earth.
- 50 million years: Sun becomes a main sequence star.
- 200 million years: Oldest known rocks on the Earth formed.
- 500–600 million years: Resonance in Jupiter and Saturn's orbits moves Neptune out into the Kuiper belt. Late Heavy Bombardment occurs in the inner Solar System.
- 800 million years: Oldest known life on Earth.
- 4.6 billion years: Today. Sun remains a main sequence star, continually growing warmer and brighter by ~10% every billion years.
- 6 billion years: Sun's habitable zone moves outside of the Earth's orbit, possibly shifting onto Mars' orbit.
- 7 billion years: The Milky Way and Andromeda Galaxy begin to collide. Slight chance the Solar System could be captured by Andromeda before the two galaxies fuse completely.
- 10–12 billion years: Sun exhausts the hydrogen in its core, ending its main sequence life. Sun begins to ascend the red giant branch of the Hertzsprung-Russell diagram, growing dramatically more luminous (by a factor of up to 2,700), larger (by a factor of up to 250 in radius), and cooler (down to 2600 K): Sun is now a red giant. Mercury and possibly Venus and Earth are swallowed.
- ~12 billion years: Sun passes through helium-burning horizontal branch and asymptotic giant branch phases, losing a total of ~30% of its mass in all post-main sequence phases. The asymptotic giant branch phase ends with the ejection of a planetary nebula, leaving the core of the Sun behind as a white dwarf.
- >12 billion years: The white dwarf Sun, no longer producing energy, begins to cool and dim continuously, eventually reaching a black dwarf state.
- ~10^15 years: Sun cools to 5 K. Gravity of passing stars detaches planets from orbits. Solar System ceases to exist.
The Babylonian astronomers kept detailed records on the rising and setting of stars, the motion of the planets, and the solar and lunar eclipses, all of which required familiarity with angular distances measured on the celestial sphere. Based on one interpretation of the Plimpton 322 cuneiform tablet (circa 1900 BC), some have even asserted that the ancient Babylonians had a table of secants. There is, however, much debate as to whether it is a table of Pythagorean triples, a solution of quadratic equations, or a trigonometric table. The Egyptians, on the other hand, used a primitive form of trigonometry for building pyramids in the 2nd millennium BC. The Rhind Mathematical Papyrus, written by the Egyptian scribe Ahmes (circa 1680-1620 BC), contains a problem related to trigonometry that asks for the seked (the slope) of a pyramid whose base and height are given. Ahmes' solution to the problem is the ratio of half the side of the base of the pyramid to its height, or the run-to-rise ratio of its face. In other words, the quantity he found for the seked is the cotangent of the angle between the base of the pyramid and its face.

Ancient Greek and Hellenistic mathematicians made use of the chord. Given a circle and an arc on the circle, the chord is the line that subtends the arc. A chord's perpendicular bisector passes through the center of the circle and bisects the angle. One half of the bisected chord is the sine of half the subtended angle; in modern notation, crd(2θ) = 2r sin(θ) for a circle of radius r, so half of crd(2θ) equals r sin(θ), and consequently the sine function is also known as the "half chord". Due to this relationship, a number of trigonometric identities and theorems that are known today were also known to Hellenistic mathematicians, but in their equivalent chord form. Although there is no trigonometry in the works of Euclid and Archimedes, in the strict sense of the word, there are theorems presented in a geometric way (rather than a trigonometric way) that are equivalent to specific trigonometric laws or formulas. For instance, propositions twelve and thirteen of book two of the Elements are the laws of cosines for obtuse and acute angles, respectively. Theorems on the lengths of chords are applications of the law of sines. And Archimedes' theorem on broken chords is equivalent to formulas for sines of sums and differences of angles. To compensate for the lack of a table of chords, mathematicians of Aristarchus' time would sometimes use the well-known theorem that, in modern notation, sin α / sin β < α/β < tan α / tan β whenever 0° < β < α < 90°, among other theorems.

The first trigonometric table was apparently compiled by Hipparchus of Nicaea (180-125 BC), who is now consequently known as "the father of trigonometry." Hipparchus was the first to tabulate the corresponding values of arc and chord for a series of angles. Although it is not known when the systematic use of the 360° circle came into mathematics, it is known that the systematic introduction of the 360° circle came a little after Aristarchus of Samos composed On the Sizes and Distances of the Sun and Moon (ca. 260 BC), since he measured an angle in terms of a fraction of a quadrant. It seems that the systematic use of the 360° circle is largely due to Hipparchus and his table of chords. Hipparchus may have taken the idea of this division from Hypsicles, who had earlier divided the day into 360 parts, a division of the day that may have been suggested by Babylonian astronomy. In ancient astronomy, the zodiac had been divided into twelve "signs" or thirty-six "decans". 
A seasonal cycle of roughly 360 days could have corresponded to the signs and decans of the zodiac by dividing each sign into thirty parts and each decan into ten parts. It is due to the Babylonian sexagesimal number system that each degree is divided into sixty minutes and each minute is divided into sixty seconds.

Menelaus of Alexandria (ca. 100 AD) wrote his Sphaerica in three books. In Book I, he established a basis for spherical triangles analogous to the Euclidean basis for plane triangles. He established a theorem that is without Euclidean analogue, that two spherical triangles are congruent if corresponding angles are equal, but he did not distinguish between congruent and symmetric spherical triangles. Another theorem that he established is that the sum of the angles of a spherical triangle is greater than 180°. Book II of Sphaerica applies spherical geometry to astronomy. And Book III contains the "theorem of Menelaus". He further gave his famous "rule of six quantities".

Later, Claudius Ptolemy (ca. 90 - ca. 168 AD) expanded upon Hipparchus' Chords in a Circle in his Almagest, or the Mathematical Syntaxis. The thirteen books of the Almagest are the most influential and significant trigonometric work of all antiquity. A theorem that was central to Ptolemy's calculation of chords was what is still known today as Ptolemy's theorem, that the sum of the products of the opposite sides of a cyclic quadrilateral is equal to the product of the diagonals. A special case of Ptolemy's theorem appeared as proposition 93 in Euclid's Data. Ptolemy's theorem leads to the equivalent of the four sum-and-difference formulas for sine and cosine that are today known as Ptolemy's formulas, although Ptolemy himself used chords instead of sine and cosine. Ptolemy further derived the equivalent of the half-angle formula sin^2(x/2) = (1 - cos x)/2. Ptolemy used these results to create his trigonometric tables, but whether these tables were derived from Hipparchus' work cannot be determined. Neither the tables of Hipparchus nor those of Ptolemy have survived to the present day, although descriptions by other ancient authors leave little doubt that they once existed.

The next significant developments of trigonometry were in India. The Indian mathematician and astronomer Aryabhata (476–550 AD), in his work Aryabhata-Siddhanta, first defined the sine as the modern relationship between half an angle and half a chord, while also defining the cosine, versine, and inverse sine. His works also contain the earliest surviving tables of sine values and versine (1 − cosine) values, in 3.75° intervals from 0° to 90°, to an accuracy of 4 decimal places. He used the words jya for sine, kojya for cosine, ukramajya for versine, and otkram jya for inverse sine. The words jya and kojya eventually became sine and cosine respectively after a mistranslation. Other Indian mathematicians later expanded Aryabhata's works on trigonometry. In the 6th century, Varahamihira used identities equivalent to sin^2(x) + cos^2(x) = 1 and sin(x) = cos(90° − x). In the 7th century, Bhaskara I produced a formula for calculating the sine of an acute angle without the use of a table; his approximation, which has a relative error of less than 1.9%, can be written in modern notation as sin(x) ≈ 16x(π − x) / (5π^2 − 4x(π − x)) for x in radians. Later in the 7th century, Brahmagupta developed the identity 1 − sin^2(x) = cos^2(x) = sin^2(90° − x), as well as the Brahmagupta interpolation formula for computing sine values. Another later Indian author on trigonometry was Bhaskara II in the 12th century. Madhava (c. 1400) made early strides in the analysis of trigonometric functions and their infinite series expansions. 
He developed the concepts of the power series and Taylor series, and produced the trigonometric series expansions of sine, cosine, tangent and arctangent. Using the Taylor series approximations of sine and cosine, he produced a sine table to 12 decimal places of accuracy and a cosine table to 9 decimal places of accuracy. He also gave power series for π and for the angle, radius, diameter, and circumference of a circle in terms of trigonometric functions. His works were expanded by his followers at the Kerala School up to the 16th century.

The Indian works were later translated and expanded in the medieval Islamic world by Muslim mathematicians of mostly Arab and Persian descent. They enunciated a large number of theorems which freed the subject of trigonometry from dependence upon the complete quadrilateral, as was the case in Hellenistic mathematics due to the application of Menelaus' theorem. According to E. S. Kennedy, it was after this development in Islamic mathematics that "the first real trigonometry emerged, in the sense that only then did the object of study become the spherical or plane triangle, its sides and angles."

In the 9th century, Muhammad ibn Mūsā al-Khwārizmī produced accurate sine and cosine tables, and the first table of tangents. He was also a pioneer in spherical trigonometry. By the 10th century, in the work of Abū al-Wafā' al-Būzjānī, Muslim mathematicians were using all six trigonometric functions, after discovering the secant, cotangent and cosecant functions. Abu al-Wafa had sine tables in 0.25° increments, to 8 decimal places of accuracy, and accurate tables of tangent values. He also developed the double-angle formula sin(2x) = 2 sin(x) cos(x). Abū al-Wafā also established the angle addition identities, e.g. sin(a + b), and discovered the sine formula for spherical trigonometry, sin(A)/sin(a) = sin(B)/sin(b) = sin(C)/sin(c) (relating the angles A, B, C of a spherical triangle to its opposite sides a, b, c). Another 10th century mathematician, Muhammad ibn Jābir al-Harrānī al-Battānī (Albatenius), was responsible for establishing a number of important trigonometrical relationships, such as tan(a) = sin(a)/cos(a).

Al-Jayyani (989–1079) of al-Andalus wrote The book of unknown arcs of a sphere, which is considered "the first treatise on spherical trigonometry" in its modern form, although spherical trigonometry in its ancient Hellenistic form was dealt with by earlier mathematicians such as Menelaus of Alexandria, who developed Menelaus' theorem to deal with spherical problems. However, E. S. Kennedy points out that while it was possible in pre-Islamic mathematics to compute the magnitudes of a spherical figure, in principle, by use of the table of chords and Menelaus' theorem, the application of the theorem to spherical problems was very difficult in practice. Al-Jayyani's work on spherical trigonometry "contains formulae for right-handed triangles, the general law of sines, and the solution of a spherical triangle by means of the polar triangle." This treatise later had a "strong influence on European mathematics", and his "definition of ratios as numbers" and "method of solving a spherical triangle when all sides are unknown" are likely to have influenced Regiomontanus. The method of triangulation was first developed by Muslim mathematicians, who applied it to practical uses such as surveying and Islamic geography, as described by Abū Rayhān al-Bīrūnī in the early 11th century. In the late 11th century, Omar Khayyám (1048–1131) solved cubic equations using approximate numerical solutions found by interpolation in trigonometric tables. 
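Bhaskara I's rational approximation quoted a little earlier is easy to check numerically. The following Python sketch is my own illustration rather than anything from the historical sources (the function name is arbitrary); it evaluates the approximation against the modern sine function and confirms that the worst relative error on (0°, 180°) stays below the 1.9% figure cited above.

```python
import math

def bhaskara_sin(x):
    """Bhaskara I's 7th-century rational approximation to sin(x), for x in radians, 0 <= x <= pi."""
    return 16 * x * (math.pi - x) / (5 * math.pi ** 2 - 4 * x * (math.pi - x))

# Compare against the true sine on a fine grid and report the worst relative error.
worst = 0.0
for i in range(1, 1800):                 # skip the endpoints, where sin(x) = 0
    x = math.pi * i / 1800
    rel_err = abs(bhaskara_sin(x) - math.sin(x)) / math.sin(x)
    worst = max(worst, rel_err)

print(f"worst relative error: {worst:.4%}")   # about 1.8%, consistent with the < 1.9% claim
```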
In the 13th century, Nasīr al-Dīn al-Tūsī was the first to treat trigonometry as a mathematical discipline independent from astronomy, and he developed spherical trigonometry into its present form. He listed the six distinct cases of a right-angled triangle in spherical trigonometry, and he also stated the law of sines and provided a proof for it. In the 14th century, Jamshīd al-Kāshī gave trigonometric tables of values of the sine function to four sexagesimal digits (equivalent to 8 decimal places) for each 1° of argument, with differences to be added for each 1/60 of 1°. Ulugh Beg also gave accurate tables of sines and tangents correct to 8 decimal places around the same time. In the 16th century, Taqi al-Din contributed to trigonometry in his Sidrat al-Muntaha, in which he was the first mathematician to extract the precise value of Sin 1°. He discusses the values given by his predecessors, explaining how Ptolemy used an approximate method to obtain his value of Sin 1° and how Abū al-Wafā, Ibn Yunus, al-Kashi, Qāḍī Zāda al-Rūmī, Ulugh Beg and Mirim Chelebi improved on the value, before solving the problem himself to obtain a precise value of Sin 1°.

In China, Aryabhata's table of sines was translated into the Chinese mathematical book of the Kaiyuan Zhanjing, compiled in 718 AD during the Tang Dynasty. Although the Chinese excelled in other fields of mathematics such as solid geometry, the binomial theorem, and complex algebraic formulas, early forms of trigonometry were not as widely appreciated as in the earlier Greek and then Indian and Islamic worlds. Instead, the early Chinese used an empirical substitute known as chong cha, although practical use of plane trigonometry employing the sine, the tangent, and the secant was known. However, this embryonic state of trigonometry in China slowly began to change and advance during the Song Dynasty (960–1279), when Chinese mathematicians began to place greater emphasis on the need for spherical trigonometry in calendrical science and astronomical calculations. The polymath Chinese scientist, mathematician and official Shen Kuo (1031–1095) used trigonometric functions to solve mathematical problems of chords and arcs. Victor J. Katz writes that in Shen's "technique of intersecting circles", he created an approximation of the arc length s of a circle given the diameter d, the sagitta v, and the length c of the chord subtending the arc, which he approximated as s = c + 2v^2/d. Sal Restivo writes that Shen's work on the lengths of arcs of circles provided the basis for spherical trigonometry developed in the 13th century by the mathematician and astronomer Guo Shoujing (1231–1316). As the historians L. Gauchet and Joseph Needham state, Guo Shoujing used spherical trigonometry in his calculations to improve the calendar system and Chinese astronomy. Along with a later 17th century Chinese illustration of Guo's mathematical proofs, Needham states that: Guo used a quadrangular spherical pyramid, the basal quadrilateral of which consisted of one equatorial and one ecliptic arc, together with two meridian arcs, one of which passed through the summer solstice point...By such methods he was able to obtain the du lü (degrees of equator corresponding to degrees of ecliptic), the ji cha (values of chords for given ecliptic arcs), and the cha lü (difference between chords of arcs differing by 1 degree). 
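Shen Kuo's arc approximation s = c + 2v^2/d mentioned above can likewise be illustrated with a short sketch. The Python below is my own illustration (the names and test values are arbitrary, not from the sources); it compares the approximation with the exact arc length of a circular segment of diameter d and sagitta v.

```python
import math

def shen_kuo_arc(d, v):
    """Shen Kuo's approximation for the arc length of a circular segment:
    s ~= c + 2*v**2/d, where c is the chord, v the sagitta, d the diameter."""
    c = 2 * math.sqrt(v * (d - v))          # chord length from diameter and sagitta
    return c + 2 * v ** 2 / d

def exact_arc(d, v):
    """Exact arc length of the same segment."""
    r = d / 2
    half_angle = math.acos((r - v) / r)     # half the central angle subtending the arc
    return 2 * r * half_angle

d = 10.0
for v in (0.5, 1.0, 2.0, 5.0):              # sagitta values, up to a semicircle (v = r)
    print(f"v = {v:4.1f}: approx = {shen_kuo_arc(d, v):7.4f}, exact = {exact_arc(d, v):7.4f}")
```

Running it shows the approximation is serviceable for shallow arcs and degrades toward the semicircle, which is consistent with its role as an empirical working rule rather than an exact formula.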
Despite the achievements of Shen and Guo's work in trigonometry, another substantial work in Chinese trigonometry would not be published again until 1607, with the dual publication of Euclid's Elements by Chinese official and astronomer Xu Guangqi (1562–1633) and the Italian Jesuit Matteo Ricci (1552–1610).

Georg Joachim Rheticus, a student of Copernicus, was probably the first to define trigonometric functions directly in terms of right triangles instead of circles; his Opus palatinum de triangulis gives tables for all six trigonometric functions, and the work was finished by Rheticus' student Valentin Otho in 1596. In the 18th century, Leonhard Euler's Introductio in analysin infinitorum (1748) was mostly responsible for establishing the analytic treatment of trigonometric functions in Europe, defining them as infinite series and presenting "Euler's formula" e^(ix) = cos(x) + i sin(x). Euler used the near-modern abbreviations sin., cos., tang., cot., sec., and cosec. Also in the 18th century, Brook Taylor defined the general Taylor series and gave the series expansions and approximations for all six trigonometric functions. The works of James Gregory in the 17th century and Colin Maclaurin in the 18th century were also very influential in the development of trigonometric series.
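To illustrate the series expansions credited to Madhava and later systematized in Europe as Taylor and Maclaurin series, here is a minimal Python sketch (illustrative only; the helper name is mine) that sums the first few terms of the sine series and compares them with the built-in sine.

```python
import math

def sine_series(x, terms):
    """Partial sum of the sine series sin(x) = x - x^3/3! + x^5/5! - ..."""
    total, sign = 0.0, 1.0
    for k in range(terms):
        total += sign * x ** (2 * k + 1) / math.factorial(2 * k + 1)
        sign = -sign
    return total

x = math.radians(45)
for n in (1, 2, 3, 5, 8):
    print(f"{n} term(s): {sine_series(x, n):.12f}   (math.sin: {math.sin(x):.12f})")
```

Even a handful of terms already matches the modern value to many decimal places for moderate angles, which is why such series made 8- to 12-digit tables practical.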
Tephra fall is a major volcanic hazard, and deposit characteristics are critical data used to quantify eruptive material. The homemade ashmeter is a device used to precisely measure the thickness, area density, and bulk density of small ash deposits (< 20 mm). This instrument provides both direct measurements in the field and sample collection for laboratory analysis. The primary purpose of this device is to collect fallout from small-volume and distal eruption clouds. The homemade ashmeter is composed of an outer container, a funnel, an inner gauge, and a filter cap, and permits sampling without major weathering effects. It is constructed using mostly recycled materials and is therefore very cost effective. To test this system, seven instruments were installed during the January 14 – March 16, 2012 eruption of Tungurahua volcano, Ecuador. The ashmeter allows the measurement and sampling of small tephra falls that can be used to improve fallout hazard assessments.

Keywords: Ashmeter; Volcanology; Tephra sampling; Thickness; Area density; Bulk density; Homemade; Tungurahua; Resolution calculations

Introduction and background

Recent volcanic eruptions such as 2008 Chaitén, Chile (Martin et al. 2009) and 2010 Eyjafjallajökull, Iceland (Ulfarsson and Unger 2011) have reminded us that ash fall is one of the most hazardous volcanic products due to its widespread distribution by winds, and that it can pose far-field impacts to society. In order to assess this hazard, several characteristics of the tephra fallout must be measured in the field. The traditional method for fallout data collection during a volcanic eruption follows the recommendations of the International Volcanic Health Hazard Network (IVHHN 2012). The thickness is the most frequently used parameter and is measured directly with a tape ruler on a flat area, preferably not affected by rain or wind (Walker 1973). This measurement is repeated several times at the same location to calculate an average thickness (generally in cm or mm) and then repeated in different locations to study the spatial distribution of the tephra fallout. Isopach maps created with these data are used to identify the origin of an eruption (vent), when not observed, and to calculate the total volume of tephra using various empirical laws (Pyle 1989; Fierstein and Nathenson 1992; Bonadonna et al. 1998; Bonadonna and Costa 2012). The total volume of tephra is the main parameter used to estimate the Volcanic Explosivity Index (VEI) of an eruption (Newhall and Self 1982). The area density (mass-per-unit area, also called load) of tephra fallout is calculated principally for modern deposits just after the eruption. The area density is obtained by sampling a representative area (generally a square with sides > 20 cm), drying the sample in the laboratory, and weighing the dry sample (Bonadonna et al. 1998; Scollo et al. 2007). The area density (generally given in kg/m² or g/m²) is then calculated by dividing the weight of the sample by the sampling area. Isomass maps created with these data are used to calculate the total mass of tephra with the same methods as for isopach maps. The total mass of the eruptive products, including lava and pyroclastic flows, is used to calculate the Dense-Rock Equivalent (DRE) volume and the magnitude of the eruption (Tsuya 1955). The bulk density (generally given in kg/m³ or g/cm³) is calculated by dividing the area density by the thickness. 
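Because the area density and bulk density defined above are simple ratios, they can be computed directly from a field measurement. The sketch below is a minimal Python illustration, not code from the paper; the function names are mine, and the example numbers are merely of the order reported later for the Tungurahua samples.

```python
def area_density(sample_mass_kg, sampling_area_m2):
    """Mass-per-unit-area (load) of a tephra deposit, in kg/m^2."""
    return sample_mass_kg / sampling_area_m2

def bulk_density(area_density_kg_m2, thickness_m):
    """Bulk density of the deposit, in kg/m^3."""
    return area_density_kg_m2 / thickness_m

# Example: 20 g of dry ash collected over 214 cm^2, forming a 1.1 mm-thick layer.
rho_a = area_density(0.020, 214e-4)      # ~0.93 kg/m^2
rho_b = bulk_density(rho_a, 1.1e-3)      # ~850 kg/m^3
print(f"area density = {rho_a:.2f} kg/m^2, bulk density = {rho_b:.0f} kg/m^3")
```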
This parameter, rarely used to its full potential, is critical to understanding the impact of tephra fallout on infrastructures and agriculture (Baxter 2000). Although the methodology for field collection of tephra fall from ancient eruptions is not likely to change, new methods for sampling ash fall from modern eruptions is allowing for more precise collections (Bonadonna et al. 1998). According to the IVHHN, the most cost-effective technique for ash collection is using plastic trays (or buckets) installed around a volcano prior to the eruption. For example, since 2007, a group of volcanologists from the Instituto Geofísico de la Escuela Politécnica Nacional de Quito ( 2012) has installed about 30 ash-collection receptacles around Tungurahua volcano to improve volume estimations of frequent, small-scale tephra emissions (Bustillos and Mothes 2010). More recently, the use of a portable electronic scale with a 0.1 g resolution combined with the receptacles has permitted in situ measurement of area densities of less than 100 g/m2 (Bernard et al. in preparation; Figure 1). Nevertheless, the methods used to measure or calculate the tephra deposit characteristics described above show several flaws: 1) heavy rains and strong winds can drastically rework the deposit; 2) for deposits less than 1 cm-thick, thickness measurements tend to be spurious and sampling is more difficult (Eychenne et al. 2012); and 3) when using trays, accumulation of pre-eruptive rain will impede some analyses, like leachates. This leads to small sample numbers and low-quality data, especially for small explosive eruptions and thin distal fallout from large-scale events, compromising tephra fall volume calculations (Bonadonna et al. 1998) and the understanding of volcanic ash transport (Riley et al. 2003). Precise fallout data are needed to feed and validate tephra dispersal and fallout models in order to produce realistic ash fall hazard maps (Carey and Sparks 1986; Bonadonna et al. 2005; Folch et al. 2008). Figure 1. Isopach map from the December 2011 eruption of Tungurahua volcano. The dots correspond to the ash collectors’ network. This paper presents a homemade device constructed using mostly recycled materials that collects thin ash falls while avoiding most impacts from weathering processes. Very precise layer thickness, area density, and bulk density are measured or calculated using the ashmeter. This work complies with a new guideline that encourages harmonization of tephra field-data collection so that tephra dispersal and fallout models can be improved and comparable among models (Bonadonna et al. 2011). How to construct an ashmeter The 10-minute video shown in Additional file 1 is a visual illustration of the step-by-step instructions presented in the following sections on how to build an ashmeter. The following list contains the common materials required to construct an ashmeter (Figure 2): 1) A large plastic bottle with a cylindrical lower part and a conical upper part (slope ≥ 35°), used to construct the outer container and the funnel; 2) a small plastic bottle and its cap, used to construct the inner gauge and the filter cap; the diameter of the lower part of the small bottle must be larger than the large bottle neck; 3) large aperture (> 2 mm) flexible plastic screen, used to protect the inner gauge from insects but allowing the volcanic ash to pass through; 4) small aperture (< 2 mm) rigid plastic screen, used to construct the filter cap; 5) paper filter (e.g. 
coffee filter or resistant kitchen paper), used to let the water pass through but not the fine-grained ash; 6) flexible wire, used to fix the funnel to the outer container; 7) hot silicone (e.g hot glue stick for craft making), used to glue the large aperture flexible plastic screen to the funnel; 8) permanent marker, used to calibrate the inner gauge; 9) transparent acetate paper or sticker for printer, used to print the inner gauge scale; 10) transparent adhesive tape, used to fix the inner gauge scale; 11) masking tape, used to mark the bottles for straight cuts. Figure 2. Material requirements to construct an ashmeter. 1) Large plastic bottle; 2) small plastic bottle; 3) large aperture (> 2 mm) flexible plastic screen; 4) small aperture (< 2 mm) rigid plastic screen; 5) paper filter (resistant kitchen paper); 6) flexible wire; 7) hot silicone glue gun; 8) permanent marker; 9) inner gauge scale on transparent acetate sticker; 10) transparent adhesive tape; 11) masking tape; 12) scissors; 13) small handheld rotary tool with 14) accessories (circular saw and drill); 15) electronic scale. Most of the material comes from recycling. The rest can be easily found in any hardware store. In addition, the construction of the ashmeter requires some tools like a hot glue gun, scissors, a small handheld rotary tool with accessories (circular saw and drill), and an electronic scale. The design and production of the inner gauge scale requires a computer with drawing software, a scanner, and a printer. Steps to build the ashmeter 1) Remove the neck of the large bottle (Figure 3). Figure 3. Steps to build the ashmeter. The arrows indicate where to achieve the actions described in the paragraph 1.2. 1) Remove the neck of the large bottle; 2) Remove the upper part of the large bottle; 3) Drill or cut holes in the lower part; 4) Drill a circular hole in the center of the lower part’s base; 5) Glue a large aperture flexible screen to the smaller aperture of the large bottle’s upper part; 6) Fix the funnel to the outer container; 7) Remove the bottom of the small bottle; 8) Create a thickness scale; 9) Drill a large hole in the cap of the small bottle; 10) Cut two plastic screens and a paper filter to fit in the cap; 11) Enclose the paper filter between the two plastic screens; 12) Assemble the different parts. 2) Remove the upper part of the large bottle about 5-10 mm below the break in slope. For a straight cut, mark the bottle with masking tape. 3) Drill or cut small holes in the lower part (top and bottom) to release humidity. 4) Drill a circular hole in the center of the lower of the large bottle, large enough so that the small bottle cap can pass through but small enough so that the whole neck of the small bottle won’t be able to. The lower portion of the large bottle becomes the outer container (Figure 4). Figure 4. A) Photograph and B) Sketch of the ashmeter. 1) Outer container; 2) funnel; 3) inner gauge; 4) cap filter. Note the slope of the funnel (> 35°) and the relationship between the size of the small bottle and the funnel lower aperture. 5) Glue a large aperture (> 2 mm) flexible screen to the smaller aperture in the upper part of the large bottle. The upper portion of the large bottle becomes the funnel (Figure 4). 6) Turn the funnel upside-down and fix it to the outer container by threading small pieces of flexible wire in several places around the perimeter of the container. 
7) Remove the bottom of the small bottle so the upper part, placed upside-down, will fit in the outer container and so the funnel will exert some pressure to keep it vertical. This becomes the sample collector. 8) Create a thickness scale on the sample collector through the following: a) calculate the area (A) of accumulation based on the radius (r) of the outer container (A = πr2); b) create a chart calculating the volumes (V) of water (density ~1 g/cm3) corresponding to different theoretical thicknesses (T) (V = AT). As the sample collector does not have a regular shape, graduation must be adapted to the bottle models; c) for each thickness, pour a volume equivalent of water in the sample collector and label the water level; d) test the graduation and calculate the real error on the measurement, given by the difference between the reading of the water thickness and the water weight; e) if the error is lower than the resolutions of the graduation (half graduation), it can be transferred to masking tape, scanned, and transformed into a computer-designed ruler sticker that will be printed on a transparent acetate sticker; f) fix the ruler sticker on the sample collector; g) test the graduation; h) protect the ruler sticker from humidity with transparent adhesive tape. This becomes the inner gauge (Figure 4). It is better to create two (or more) inner gauges for one ashmeter to ease the field sampling. Once the ruler sticker is created the steps a) to e) are not needed anymore. The resolution of the inner gauge, presented in details in the Additional file 2, depends on the number of graduations obtained during this process. The distance between two graduations should be at least 2 mm to insure a good reading. 9) Drill a large hole in the cap of the small bottle. 10) Cut two plastic screens and a paper filter to fit into the cap. 11) Place the paper filter between the two plastic screens and fix them to the interior of the cap. One plastic screen protects the paper filter from the ash fall and rain. The other is used to prevent the paper filter from falling from the cap. This becomes the filter cap (Figure 4). 12) Assemble the different parts. The ashmeter is ready to be installed. Additional file 2. Absolute and relative resolution calculation (DOC). Step by step calculation of the absolute and relative resolution of the ashmeter with full equations and comparison with the traditional tape ruler technique. Format: DOC Size: 41KB Download file This file can be viewed with: Microsoft Word Viewer Time, cost, and installation With adequate tools, an ashmeter can be built in less than one hour. The most time-consuming task is calibrating the inner gauge. This step is greatly shortened once the ruler sticker is created. The cost of an ashmeter can be as little as one US dollar based on the use of recycled construction materials. The ashmeter must be installed vertically and fixed to a fence or a post in an open area without trees to avoid shadow effects or secondary accumulation (Figure 5A). Figure 5. Installation of an ashmeter and sampling. A) Example of an ashmeter deployed in the backyard of Rodrigo Ruiz on January 14, 2012 (Pillate); B) An example of a thin (1.1 mm-thick) deposit collected March 14, 2012 in Pillate, from the February 4, 2012 eruption. Functionality of the ashmeter The ashmeter allows measurement or calculation of at least five parameters (see Table 1 and Additional file 2 for equations). Tephra thickness (T) is read directly off the inner gauge (Figure 5B). 
Four measurements made with the inner gauge scale and a mobile scale ensure a more precise and representative value of the thickness. In addition to dry area density (ρA(D)), commonly used to create isomass maps (Scollo et al. 2007), the ashmeter allows calculation of in situ area density (ρA(is)) that can be particularly useful to assess fallout impact. Both parameters are calculated by dividing the mass of the tephra sample (measured in the field or dry in the laboratory) by the accumulation area (A). In situ (ρB(is)) and dry (ρB(D)) bulk densities of the deposit, also useful for tephra impact assessment, are obtained by dividing the area densities by the tephra thickness. Table 1. First data collected with the ashmeter network at Tungurahua volcano (from January 14 to March 16, 2012) The three measurements used to calculate the different parameters of the fallout are thickness, accumulation area, and tephra mass. The errors in these three measurements have an impact on the results and must be carefully estimated. Use of the ashmeter, if constructed and calibrated as suggested here, will give high quality results (Figure 6A). Figure 6. Measurements resolution and comparison with real error. A) Relative resolutions for thickness, area density, and bulk density measurements using the ashmeter as a function of the thickness compared to a conventional tape ruler. The relative resolutions have been calculated for an ashmeter model 1.5 (6-liter large bottle and 1.5-liter small bottle); B) Difference between theoretical resolutions and real error for 64 thickness measurements using 4 inner gauges (#24A, #80B, #83B, and #88B) as a function of the thickness. The dashed black arrows show the 9 measurements where the real error is higher than the theoretical resolution. The range of thickness readings for the ashmeter presented in this paper (6-liter large bottle model and a 1.5-liter small bottle model) is between 0.3 and 20 mm. The resolution of the thickness readings is not constant because of the shape of the inner gauge. For thin deposits, between 0.3 and 1 mm, the ashmeter has a theoretical absolute resolution of 0.025 mm (half graduation), while for thicker deposits, between 7 and 20 mm, this value is 0.25 mm. If compared to a tape ruler with a constant theoretical absolute resolution of 0.5 mm, the ashmeter allows thickness readings 2 to 20 times more precise than the traditional method (Additional file 2). To ensure the quality of the thickness reading a series of test with water has been carried out on four random inner gauges to look at the difference between the theoretical resolution and the real error (Figure 6B). From 64 thickness readings, only 9 measurements have a real error exceeding the theoretical resolution of the instrument. With the exception of one measurement, none of the real errors exceeded the theoretical resolution by more than 1 percent. The accumulation area for the ashmeter presented in this paper is about 214 cm2 (r = 8.25 cm). The resolution calculation gives a maximum error value of about 1 percent for this parameter. The resolution of the final parameter, tephra mass, depends on the electronic scale used for the measurements. In the resolution calculation we used a portable electronic scale (ideal for field work) with a 0.1 g resolution. The error on the mass measurement rapidly decreases with an increase of the sample size. 
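As a rough illustration of how the individual resolutions (gauge graduation, accumulation area, and scale) propagate into the derived quantities, here is a simple worst-case sketch in Python. It is my own simplification, not the calculation in Additional file 2, and the assumed uncertainty on the container radius is illustrative only.

```python
import math

def relative_resolutions(thickness_mm, thickness_res_mm, radius_cm, radius_err_cm,
                         sample_mass_g, scale_res_g):
    """Worst-case relative resolutions of area density and bulk density,
    propagated as simple sums of the relative errors of the inputs (a simplification)."""
    area_cm2 = math.pi * radius_cm ** 2
    rel_mass = scale_res_g / sample_mass_g
    rel_area = 2 * radius_err_cm / radius_cm          # the area scales as r^2
    rel_thick = thickness_res_mm / thickness_mm
    rel_area_density = rel_mass + rel_area
    rel_bulk_density = rel_area_density + rel_thick
    return area_cm2, rel_area_density, rel_bulk_density

# Example: 1 mm-thick deposit, 0.025 mm gauge resolution, r = 8.25 cm (assumed +/- 0.04 cm),
# ~20 g of ash weighed on a 0.1 g field scale.
area, rel_a, rel_b = relative_resolutions(1.0, 0.025, 8.25, 0.04, 20.0, 0.1)
print(f"accumulation area ~ {area:.0f} cm^2")
print(f"area density resolution ~ {rel_a:.1%}, bulk density resolution ~ {rel_b:.1%}")
```

With inputs of this order the sketch returns an accumulation area of about 214 cm^2 and relative resolutions of a few percent, comparable to the figures quoted in the text.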
Consequently, resolution errors of the area density and the bulk density for a 1 mm-thick tephra deposit are respectively 1 and 5 percent, a value rarely obtained with the traditional method. The ashmeter system magnifies the ash thickness in the inner gauge in order to have better readings of thin tephra deposits. As most of the error in bulk density comes from errors in thickness readings, this error decreases when the size-ratio (diameter) between the outer container and the inner gauge increases so the magnification is greater. Nevertheless, compaction due to loading may contribute to a variable error in thickness readings and cannot be avoided. Therefore, thickness and bulk density data obtained with the ashmeter should be put in perspective and future experiments must be designed to assess this matter. Sampling tephra deposits One of the major advantages of the ashmeter is that it is designed to collect small, pristine (undisturbed) tephra samples that can be used to characterize very small eruptions through grain size, composition, shape, and textural analysis (Riley et al. 2003). The ashmeter design (shape of the inner gauge, the funnel, and the filter cap) prevents pre-eruptive rain accumulation in the inner gauge and post-eruptive wind deflation process. Nevertheless to conserve pristine characteristics of the tephra deposit (bulk density, fine stratifications) it is important to collect the sample shortly after the ash fallout episode to avoid post-eruptive rain and fine-grained ash remobilization effects. The samples collected with the ashmeter are big enough for leachates analysis, laser diffraction grain-size analysis and SEM textural studies even for tephra fallout as thin as 1 mm (or about 20 g with the 8.25 cm-radius outer container). Larger samples (e.g. for sieving) can be obtained by constructing the ashmeter using a larger outer container. First results from the 2012 Tungurahua eruption On January 14 – 15, 2012, seven ashmeters were installed around Tungurahua volcano (Figure 7). The inner gauges of the ashmeters were collected on March 14 – 16, 2012. During this period the volcano produced three small ash emissions on February 4, February 22 – 25, and March 3 – 7. According to the Tungurahua Volcano Observatory reports ( http://www.igepn.edu.ec/index.php/informes/volcanicos.html webcite), the February 4 emission drifted westward in the direction of the Pillate station while the February 22 – 25, and March 3 – 7 emissions drifted southwestward in the direction of the Choglontus and Cahuají stations. Nevertheless, all seven ashmeters collected some ash during this period. Characteristics of the samples are presented in Table 1. Thickness readings were possible in 3 of the 7 ashmeters because the deposits were ≥ 0.3 mm thick (the threshold of the ashmeter); all others were too thin to record so only area density could be calculated for those stations. These results show a good relative resolution (< 10 percent) for the thickness readings and the bulk densities. The relative resolution for area densities vary between 1 and 28 percent for the seven samples, but only between 1 and 1.2 percent for the measurable samples (with thickness readings), highlighting the effect of the sample size on the resolution calculation. Figure 7. Map of Tungurahua volcano showing the ashmeter network. The dry area densities correspond to the ash collected within the first two months of installation (From January 14 to March 16, 2012). 
The arrows indicate the direction of the volcanic plume during the three small eruptions of February 4, February 22 – 25, and March 3 – 7, 2012. The size of the arrows indicates the relative size of the ash emissions. There is a clear difference between the dry bulk densities in stations Pillate (1450 kg/m3), Choglontus (1296 kg/m3), and Cahuají (1187 kg/m3). According to the Tungurahua Volcano Observatory, Pillate station was the most affected by the February 4 tephra fallout, whereas Choglontus and Cahuají stations accumulated principally during the February 22 – 25 and March 3 – 7 eruptions. The difference of bulk density can be therefore associated to a difference in compaction time and rain effects but it could also be associated with the tephra characteristics (grain-size, composition). This highlights the importance of a rapid sampling after the fallout in order to avoid compaction associated with multiple overlapping deposits or rain effects. Since March 14, 40 more ashmeters have been deployed around Tungurahua volcano that will allow a better quantification of Tungurahua ash emissions (Bernard et al., in preparation). The homemade ashmeter is a low-cost device used to collect small-volume tephra falls and allows high-accuracy measurement or calculation of thickness, area density, and bulk density of tephra deposits without the influence of most weathering effects. The ashmeter permits improved tephra field-data collection at both a local and regional scale. A dense local network can be particularly useful in characterizing small, repetitive explosive eruptions such as the 2012 Tungurahua or Etna eruptions. A regional network could greatly help to quantify and study the impact of Plinian eruptions in distal areas, such as the 2011 Cordón Caulle eruption in Chile that produced small amounts of ash fallout in the coasts of Argentina, Uruguay, and Brazil. The first results from the 2012 Tungurahua eruption prove that ashmeters can provide high-quality measurements, even for extremely small volume or thin deposits. Since the ashmeters are easy to use, more rapid reporting of eruption parameters is possible and can be used in hazard assessments in near real time. The author declares that he has no competing interests. This work is financially supported by the Universidad San Francisco de Quito. The author would like to thank the IGEPN, in particular S. Hidalgo and D. Andrade, J. Bustillos, A. Robles, J. Yuquilema, and D. Narváez for assistance with the development of the instrument, the installation of the network, and the collection of field-data. The author also would like to thank the personnel of OVT and the volunteers on Tungurahua volcano, whose observations during the eruption permit a better understanding of the data. The author acknowledges J. Quijozaca and B. Wade for their help in the construction of many ashmeters, installation, and collection of field-data. Stimulating discussions with P. Samaniego and JL Le Pennec (Institut de Recherche pour le Developpement, France) were much appreciated. B. Bartel and J. Lyons greatly helped to improve the understanding and the writing of the text. Two anonymous reviewers are acknowledged for their constructive and thorough reviews. Bonadonna C, Costa A (2012) Estimating the volume of tephra deposits: A new simple strategy. Geology 40-5:415-418 Publisher Full Text Bonadonna C, Ernst GGJ, Sparks RSJ (1998) Thickness variations and volume estimates of tephra fall deposits: the importance of particle Reynolds number. 
1. Ancient and Classical Geometries

As an essential part of their daily lives, ancient cultures knew a considerable amount of geometry as practical measurement and as rules for dividing and combining shapes of different kinds for building temples, palaces and for civil engineering. For their everyday practical purposes, people lived on a 'flat' Earth. A 'straight line' was a tightly stretched rope, and a circle could be drawn by tracing round a fixed point.

Much of the knowledge of these peoples was well known around the Mediterranean, and when the Greek civilisation began to assert itself in the 4th century BCE, philosophers like Aristotle (384-322 BCE) developed a particular way of thinking, and promoted a mode of discussion which required the participants to state as clearly as possible the basis of their argument. In this atmosphere, Greek Logic was born.

Euclid of Alexandria

During this period, Alexandria became one of the important centres of Greek learning, and this is where Euclid's Elements of Mathematics was written, in about 300 BCE. Following Aristotle's principles, Euclid based his mathematics on a series of definitions of basic objects like points, straight lines, surfaces, angles, circles and triangles, and axioms (or postulates). These were the agreed starting points for his development of mathematics. The first three postulates are about what can be done, the next one is about the equality of right angles, and the final statement uses the sum of two right angles to define whether two lines meet:
- Draw a straight line from any point to any other point.
- Produce (extend) a finite straight line continuously in a straight line.
- Describe a circle with any centre and radius.
- All right angles are equal to each other.
- If a straight line falling on two straight lines makes the interior angles on the same side less than two right angles, then, if the two lines are produced indefinitely, they will meet on that side on which the angles are less than the two right angles.

Almost as soon as Euclid put his pen down, mathematicians and philosophers were having difficulty with the fifth postulate. In contrast to the short statements of the first four, the fifth looked as though it ought to be a theorem, not an axiom, meaning that it ought to be deducible from the other axioms. We know this from various logical analyses written by other mathematicians. In the fifth century CE, Proclus (411-485 CE) gave a simpler version of the fifth postulate:
- Given a line and a point not on the line, it is possible to draw exactly one line through the given point parallel to the line.

Today, this is known as Playfair's axiom, after the Scottish mathematician John Playfair who wrote an important work on Euclid in 1795, even though this axiom had been known for over 1,200 years.

Arab mathematicians studied the Greek works, logically analysed the relatively complex statement of the fifth postulate, and produced their own versions.

Abul Wafa al-Buzjani (940-998)

Abul Wafa developed some important ideas in trigonometry and is said to have devised a wall quadrant [See Note 1 below] for the accurate measurement of the declination of stars. He also introduced the tangent, secant and cosecant functions and improved methods for calculating trigonometrical tables to 15' intervals, accurate to 8 decimal places. All this was done as part of an investigation into the Moon's orbit in his Theories of the Moon. The Abul Wafa crater is named after him.
As a result of his trigonometric investigations, he developed ways of solving some problems of spherical triangles. Greek astronomers had long since introduced a geometrical model of the universe. Abul Wafa was the first Arab astronomer to use the idea of a spherical triangle to develop ways of measuring the distance between stars on the inside of a sphere. In the accompanying diagram, the blue triangle with sides a, b, and c represents the distances between stars on the inside of a sphere. The apex where the three angles are marked is the position of the observer.

Omar Khayyam (1048-1131)

Famous for his poetry, Omar Khayyam was also an outstanding astronomer and mathematician who wrote Commentaries on the difficult postulates of Euclid's book. He tried to prove the fifth postulate and found that he had discovered some non-Euclidean properties of quadrilaterals.

Omar Khayyam constructed the quadrilateral shown in the figure in an effort to prove that Euclid's fifth postulate could be deduced from the other four. He began by constructing equal line segments AD and BC perpendicular to AB. He recognized that if, by connecting C and D, he could prove that the internal angles at the top of the quadrilateral are right angles, then he would have shown that DC is parallel to AB. Although he showed that the internal angles at the top are equal (try it yourself), he could not prove that they were right angles.

Nasir al-Din al-Tusi (1201-1274)

Al-Tusi wrote commentaries on many Greek texts, and his work on Euclid's fifth postulate was translated into Latin and can be found in John Wallis' work of 1693. He criticised Euclid's proposition I, 28: "If a straight line falling on two straight lines makes the exterior angle equal to the interior and opposite angle on the same side, or the sum of the interior angles on the same side equal to two right angles, then the straight lines are parallel to one another."

Al-Tusi's original diagram

Al-Tusi's argument looked at the second part of the statement. Given two lines AB and CD in the plane, a series of perpendiculars to CD is drawn, from PQ to XY, so that they meet AB. On each side of these perpendiculars, one angle is acute (towards A) and the other obtuse (towards B). Clearly the perpendicular PQ is longer than each of the others, and finally longer than XY. The opposite is also true; perpendicular XY is shorter than all those up to and including EF. So, if any pair of these perpendiculars is chosen to make a rectangle, the rectangle will contain an acute angle (on the A side) and an obtuse angle (on the B side). So how can we ensure that the perpendiculars are the same length, or show that both angles are right angles?

One of al-Tusi's most important mathematical contributions was to show that the whole system of plane and spherical trigonometry was an independent branch of mathematics. In setting up the system, he discussed the comparison of curved lines and straight lines. The 'sine formula' for plane triangles had been known for some time, and al-Tusi established an analogous formula for spherical triangles, where the triangle is formed by arcs of great circles (both rules are written out below).

The important idea here is that Abul Wafa and al-Tusi were dealing with the real problems of astronomy, and between them they produced the first real-world non-Euclidean geometry, one which required calculation for its justification as well as logical argument. It was the 'Geometry of the Inside of a Sphere'.
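For reference, the two sine rules mentioned above can be written out explicitly. The notation is the standard one (sides a, b, c opposite angles A, B, C; in the spherical case the sides are themselves angles, the arcs subtended at the centre of the sphere); it is supplied here rather than taken from the original figures:

$$\frac{a}{\sin A}=\frac{b}{\sin B}=\frac{c}{\sin C}\ \ \text{(plane triangle)},\qquad \frac{\sin a}{\sin A}=\frac{\sin b}{\sin B}=\frac{\sin c}{\sin C}\ \ \text{(spherical triangle)}$$

For small triangles the spherical rule reduces to the plane rule, since $\sin a \approx a$ when the arcs are short.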
2. Renaissance and Early Modern Developments

The Painters' Perspective

In the Middle Ages the function of Christian Art was largely hierarchical. Important people were made larger than others in the picture, and sometimes, to give the impression of depth, groups of saints or angels were lined up in rows one behind the other, like on a football terrace. Euclid's Optics provided a theoretical geometry of vision, but when the optical work of Al-Haytham (965-1039) became known, artists began to develop new techniques. Pictures in correct perspective appear in the fourteenth century, and methods of constructing the 'pavement' were no doubt handed down from master to apprentice. Leone Battista Alberti (1404-1472) published the first description of the method in 1435, and dedicated his book to Filippo Brunelleschi (1377-1446), who gave the first correct method for constructing linear perspective and was clearly using this method by 1413.

Leone Battista Alberti

Alberti's method here is called distance point construction. In the centre of the picture plane, mark a line H (the horizon) and on it mark V (the vanishing point). Draw a series of equally spaced lines from V to the bottom of the picture. Then mark any point Z on the horizon line and draw a line from Z to the corner of the frame underneath H. This line will intersect all the lines from V. The points of intersection give the correct spaces for drawing the horizontal lines of the 'pavement' on which the painting will be based.

Piero della Francesca (1412-1492) was a highly competent mathematician who wrote treatises on arithmetic and algebra and a classic work on perspective, in which he demonstrates the important converse of Euclid's proposition VI, 21, that figures which are similar to the same rectilinear figure are also similar to one another. Euclid uses this proposition to establish that similarity is a transitive relation. Piero's converse showed that if a pair of unequal parallel segments are divided into equal parts, the lines joining corresponding points converge to the vanishing point.

Piero's Euclid VI, 21 diagram

Piero's argument was based on the fact that each of the pairs of triangles $ABD$ and $AHK$, $ADE$ and $AKL$, etc. are similar, because $HK$ is parallel to $BC$, and that the ratio $AB$ to $BC$ is the same as $AH$ to $HI$. This implies that all the converging lines meet at A, the vanishing point (at infinity).

Other famous artists improved on these methods, and in 1525 Albrecht Durer (1471-1528) produced a book demonstrating a number of mechanical aids for perspective drawing.

Durer's "Reclining woman" perspective picture

Desargues and Projective Geometry

In 1639, Girard Desargues (1591-1661) wrote his ground-breaking treatise on projective geometry. He had earlier produced a manual of practical perspective for Architects and another on stone cutting for Masons, but his approach was theoretical and difficult to understand. In his 1639 treatise he introduced many new fundamental concepts. The term 'point at infinity' (the vanishing point) appears for the first time. He also uses the idea of a 'cone of vision' and talks about 'pencils of lines', like the lines emanating from the vanishing point (and if you can have a point at infinity, why not more, to make a line at infinity?). This was a completely new kind of geometry. The fundamental relationships were based on ideas of 'projection and section', which means that any rigid Euclidean shape can be transformed into another 'similar' shape by a perspective transformation.
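The convergence that Alberti's pavement construction and Piero's converging lines rely on can be checked numerically. The short Python sketch below uses a simple pinhole (central projection) model of a perspective transformation; the eye height, viewing distance and pavement coordinates are made-up values for the illustration, not anything prescribed by Alberti or Piero.

```python
# Pinhole model of the painters' construction: the eye is at the origin, the
# picture plane stands a distance D in front of it, and the ground (the
# 'pavement') lies a height H below eye level.  A ground point at depth z and
# lateral offset x projects to (x*D/z, -H*D/z) on the picture plane.
D = 1.0   # eye-to-picture-plane distance (arbitrary)
H = 1.6   # eye height above the pavement (arbitrary)

def project(x, z):
    """Central projection of the ground point (x, z) onto the picture plane."""
    return (x * D / z, -H * D / z)

# Equally spaced transversal lines of the pavement (z = 2, 3, 4, ...) bunch
# together on the picture as they approach the horizon line y = 0 ...
for z in range(2, 8):
    print(f"depth {z}: transversal drawn at height {project(0, z)[1]:+.3f}")

# ... and the two edges of a pavement strip (x = -1 and x = +1), parallel in
# the world, converge towards the same vanishing point (0, 0) on the horizon.
for z in (2, 8, 32, 128):
    left, right = project(-1, z)[0], project(+1, z)[0]
    print(f"depth {z:3d}: edges at x = {left:+.4f} and {right:+.4f}")
```

The shrinking spacings are exactly the intersection points that Alberti's distance-point line picks out, and the two edge columns illustrate Piero's result that lines joining corresponding points head for a single vanishing point.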
A square can be transformed into a parallelogram (think of shadow play), and while the number and order of the sides remain the same, their lengths vary.

Durer's cone picture

The new geometry was not recognized at the time, because Desargues' technical language was difficult, and also because Rene Descartes' coordinate geometry, published three years earlier, was so popular. In the late 18th century Desargues' work was rediscovered, and developed both theoretically and practically into a coherent system, with the central concepts of invariance and duality. In projective geometry, lengths, ratios of lengths, angles and the shapes of figures can all change under projection. Parallel lines do not exist, because any pair of distinct lines intersect in a point. Properties that are invariant under projection are the order of three or more points on a line and the cross-ratio of four points $A, B, C, D$, which can be written as $(A,B;C,D) = \frac{AC\cdot BD}{BC\cdot AD}$. Another important concept in projective geometry is duality. In the plane, the terms 'point' and 'line' are dual and can be interchanged in any valid statement to yield another valid statement. See Leo's articles on Invariants and on Projection and Section, and his article on the Four Colour Theorem.

3. Modern Geometries

In spite of the practical inventions of Spherical Trigonometry by Arab Astronomers, of Perspective Geometry by Renaissance Painters, and Projective Geometry by Desargues and later 18th century mathematicians, Euclidean Geometry was still held to be the true geometry of the real world. Nevertheless, mathematicians still worried about the validity of the parallel postulate. In 1663 the English mathematician John Wallis had translated the work of al-Tusi and followed his line of reasoning. To prove the fifth postulate he assumed that for every figure there is a similar one of arbitrary size. However, Wallis realized that his proof was based on an assumption equivalent to the parallel postulate.

Saccheri's title page

Girolamo Saccheri (1667-1733) entered the Jesuit Order in 1685. He went to Milan, and studied philosophy, theology and mathematics. He became a priest and taught at various Jesuit Colleges, finally teaching philosophy and theology at Pavia, and holding the chair of mathematics there until his death. Saccheri knew about the work of the Arab mathematicians and followed the reasoning of al-Tusi in his investigation of the parallel postulate, and in 1733 he published his famous book, Euclid Freed from Every Flaw. In the first proposition of his book, Saccheri constructed a quadrilateral in a similar manner to that of Omar Khayyam (above) and proved that the angles $ADB$ and $BCA$ are equal. He then considered the length of the upper side of the quadrilateral, $CD$, and in Proposition III set up the three possibilities, depending on whether $CD$ is equal to, less than, or greater than the base $AB$. These possibilities are equivalent to:
- Hypothesis I: There is exactly one parallel (the right angle case, $CD = AB$)
- Hypothesis II: There are no parallels (the obtuse angle case, $CD < AB$)
- Hypothesis III: There is more than one parallel (the acute angle case, $CD > AB$)

Saccheri Hypotheses Diagram

Saccheri assumes that (i) a straight line divides the plane into two separate regions and (ii) that a straight line can be infinite in extent. These assumptions are incompatible with the obtuse angle case, and so this case is rejected. However, they are compatible with the acute angle case.
We can see from Saccheri's diagram (fig. 33) and from Proposition XXXII below that he is treating the intersection at infinity as a finite point, and this is where his contradiction lies. "Now I say there is (in the hypothesis of acute angle) a certain determinate acute angle $BAX$ drawn under which $AX$ (fig. 33) only at an infinite distance meets $BX$, and thus is a limit in part from within, in part from without; on the one hand of all those which under lesser acute angles meet the aforesaid $BX$ at a finite distance; on the other hand also of the others which under greater acute angles, even to a right angle inclusive, have a common perpendicular in two distinct points with $BX$."

To us now, the curved line AX looks like an asymptote, but he says that $AX$ meets $BX$ "at an infinite distance", so that in the next Proposition XXXIII he states: "The hypothesis of acute angle is absolutely false; because it is repugnant to the nature of the straight line." The irony is that in the next twenty or so pages, in order to show that the acute angle case is impossible, he demonstrates a number of elegant theorems of non-Euclidean geometry! It was clear that Saccheri could not cope with a perfectly logical conclusion that appeared to him to be against common sense. Saccheri's work was virtually unknown until 1899, when it was discovered and republished by the Italian mathematician Eugenio Beltrami (1835-1900). As far as we know it had no influence on Lambert, Legendre or Gauss.

Johann Heinrich Lambert

Johann Heinrich Lambert (1728-1777) followed a similar plan to Saccheri. He investigated the hypothesis of the acute angle without obtaining a contradiction. Lambert noticed the curious fact that, in this new geometry, the angle sum of a triangle increased as the area of the triangle decreased.

Adrien-Marie Legendre (1752-1833) spent many years working on the parallel postulate, and his efforts appear in different editions of his Elements de Geometrie. Legendre proved that the fifth postulate is equivalent to the statement that the sum of the angles of a triangle is equal to two right angles. Legendre also obtained a number of consistent but counter-intuitive results in his investigations, but was unable to bring these ideas together into a consistent system.

Many of the consequences of the Parallel Postulate, taken with the other four axioms for plane geometry, can be shown logically to imply the Parallel Postulate, so the following statements can also be regarded as equivalent to it:
- In any triangle, the three angles sum to two right angles.
- In any triangle, each exterior angle equals the sum of the two internally opposite angles.
- If two parallel lines are cut by a transversal, the alternate interior angles are equal, and the corresponding angles are equal.

Carl Friedrich Gauss (1777-1855)

Gauss was the first person to truly understand the problem of parallels. He began work on the fifth postulate by attempting to prove it from the other four. But by 1817 he was convinced that the fifth postulate was independent of the other four, and he then began to work on a geometry where more than one line can be drawn through a given point parallel to a given line.
He told one or two close friends about his work, though he never published it, and in a private letter of 1824 he wrote: "The assumption that (in a triangle) the sum of the three angles is less than 180° leads to a curious geometry, quite different from ours, but thoroughly consistent, which I have developed to my entire satisfaction."

The final breakthrough was made quite independently by two men, and it is clear that both Bolyai and Lobachevski were completely unaware of each other's work.

Nikolai Ivanovich Lobachevski

Lobachevski (1792-1856) did not try to prove the fifth postulate but worked on a geometry where the fifth postulate does not necessarily hold. Lobachevski thought of Euclidean geometry as a special case of this more general geometry, and so was more open to strange and unusual possibilities. In 1829 he published the first account of his investigations, in Russian, in a journal of the University of Kazan, but it was not noticed. His original work, Geometriya, had already been completed in 1823, but it was not published until 1909. Lobachevski explained how his geometry works: "All straight lines which in a plane go out from a point can, with reference to a given straight line in the same plane, be divided into two classes - into cutting and non-cutting. The boundary lines of the one and the other class of those lines will be called parallel to the given line." The red line is the boundary, the 'parallel' to the line BC. Lobachevski tried to get his work Geometrical investigations on the theory of parallels recognized, and an account in French in 1837 brought his work on non-Euclidean geometry to a wide audience, but the mathematical community was not yet ready to accept these ideas.

Janos Bolyai (1802-1860)

Janos Bolyai was the son of the mathematician Farkas Bolyai, a friend of Gauss. Farkas had worked on the problem of the fifth postulate, but had not been able to make any headway. In 1823 young Janos wrote to his father saying, "I have discovered things so wonderful that I was astounded ... out of nothing I have created a strange new world." However, it took Janos two more years to complete the work, and it was published as an appendix to his father's text-book. Janos had shown that a consistent geometry using the acute angle hypothesis was possible. He set out to investigate the three basic hypotheses of the right, obtuse, and acute angles by separating the case where the fifth postulate was true (the right angle case) from the cases where it was not true. On this basis he set up two systems of geometry, and searched for theorems that could be valid in both. Janos Bolyai's work was read by Gauss, who recognized and gave credit to the young genius. However, when Gauss later explained to Janos that he himself had made these discoveries some years before, Janos was devastated. Later, Janos learned that Lobachevski had anticipated his work, which disappointed him even more. He continued to work in mathematics, presenting some original ideas, but his enthusiasm and health deteriorated and he never published again.

Lobachevski and Bolyai had discovered what we now call Hyperbolic Geometry. This is the geometry of the acute angle hypothesis, where a 'line' is no longer a straight line and there are many possible lines through a given point which do not intersect another line. This is very difficult to visualize, and for people brought up to believe Euclidean geometry was 'true' this was counter-intuitive and unacceptable.
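Although hyperbolic geometry is hard to visualize, it is easy to compute with. The sketch below uses the Poincaré disc model described in the next section, together with its standard textbook distance formula (the formula is supplied here, not taken from this article), to show that equal Euclidean steps towards the boundary circle represent ever larger hyperbolic distances, which is why the boundary plays the role of 'infinity'.

```python
import math

def poincare_distance(u, v):
    """Hyperbolic distance between two points inside the unit (Poincare) disc.

    Points are given as complex numbers (or reals) with |u|, |v| < 1, using the
    standard formula d = arcosh(1 + 2|u-v|^2 / ((1-|u|^2)(1-|v|^2))).
    """
    du2 = abs(u - v) ** 2
    denom = (1 - abs(u) ** 2) * (1 - abs(v) ** 2)
    return math.acosh(1 + 2 * du2 / denom)

# Equal Euclidean steps of 0.3 along a diameter take ever larger hyperbolic
# strides as they approach the boundary circle of the disc.
for a, b in [(0.0, 0.3), (0.3, 0.6), (0.6, 0.9)]:
    print(f"{a:.1f} -> {b:.1f}: hyperbolic distance {poincare_distance(a, b):.3f}")
```

Running this gives roughly 0.62, 0.77 and 1.56 for the three equal Euclidean steps, and the distances grow without bound as the points approach the rim.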
Eugenio Beltrami (1835-1900)

It was not until Beltrami produced the first model for hyperbolic geometry, on the surface of a pseudo-sphere, in 1868 that many mathematicians began to accept this strange new geometry. Imagine a circular polar grid (like a dart board) pulled up from the origin. It forms a trumpet-like surface. Any triangle drawn on this grid will become even more distorted when an apex is near the origin. All the lines going up the surface are asymptotes to a single central line rising vertically from the origin. These lines are all 'parallel' lines passing through a single limit point at infinity. If the tractrix is rotated about its vertical axis, the surface formed will be a complete pseudo-sphere.

In the Poincare model, all 'lines' are arcs of circles, except for the diameters (arcs of circles with infinite radius). 'Parallel' lines are thought of as asymptotes whose limit point is on the circumference. With this model, many 'parallels' can pass through the same point. This disc has a basic four-fold symmetry. The yellow Poincare disc has symmetry of order seven. Maurits Escher used a six-fold symmetry for his "Circle Limit IV" engraving - the picture with the interlocking angels and devils. For more on Escher, see the links at the end of this article.

Gradually, other models helped to make the new ideas more secure, and in 1872 the famous German mathematician Felix Klein (1849-1925) produced his general view of geometry by unifying the different Spherical, Perspective, Projective and Hyperbolic geometries, with others, as sets of axioms and properties invariant under the action of certain transformations. In this way, mathematicians at last became free to think of geometry in the abstract as a set of axioms, operations and logical rules that were not tied to the physical world.

Note 1. Wall quadrants were invented and used for many years by astronomers for measuring the altitude of heavenly bodies. They were specially built as part of ancient observatories, and as they became larger they had to be supported by solid walls to keep them steady. It was believed that the larger the instrument was, the more accurate were the results obtained. It is true that the larger the instrument is, the easier it is to divide the scale of the quadrant into degrees, minutes and seconds. However, the accuracy can also depend on other things, like the sighting instrument. For example, telescopes were not developed well enough to be reliable until the early 18th century, and because the mounting was fixed, the instrument had limited use. In spite of the problems, Arab astronomers were able to achieve an accuracy of about 20 seconds of arc.

These are the most reliable and accurate links. A quick search in Wikipedia often gives basic information, but be careful. It is always best to cross-check details with other sites.
For all biographical details, and for special pages on non-Euclidean geometries and on Mathematics and Art, see the MacTutor site at St Andrews University.
For more detail on mathematical techniques in astronomy, see the 'Starry Messenger' site of the History of Science at Cambridge.
The Cut-the-Knot site has a good set of pages on non-Euclidean geometry.
For excellent exposition and explanations of Euclid with Java applets, see David Clark's site.
'The Origins of Perspective' is section 11 of a more extensive course on Art and Architecture based at Dartmouth College.
And, if you have on-line access to Encyclopaedia Britannica or the Dictionary of Scientific Biography, then of course these give you much greater detail if you need it.

Some books that open us to the range and fascination of cultural links are:
- Michele Emmer (1993), The Visual Mind: Art and Mathematics. MIT Press.
- J.L. Heilbron (1998), Geometry Civilised: History, Culture and Technique. Clarendon Press.
And a book to look out for:
- Eleanor Robson and Jackie Stedall (editors) (December 2008), The Oxford Handbook of the History of Mathematics. Oxford University Press.
Comparison of analog and digital recording

This article compares the two ways in which sound is recorded and stored. Actual sound waves consist of continuous variations in air pressure. Representations of these signals can be recorded using either digital or analog techniques.

An analog recording is one where a property or characteristic of a physical recording medium is made to vary in a manner analogous to the variations in air pressure of the original sound. Generally, the air pressure variations are first converted (by a transducer such as a microphone) into an electrical analog signal in which either the instantaneous voltage or current is directly proportional to the instantaneous air pressure (or is a function of the pressure). The variations of the electrical signal in turn are converted to variations in the recording medium by a recording machine such as a tape recorder or record cutter—the variable property of the medium is modulated by the signal. Examples of properties that are modified are the magnetization of magnetic tape or the deviation (or displacement) of the groove of a gramophone disc from a smooth, flat spiral track.

A digital recording is produced by converting the physical properties of the original sound into a sequence of numbers, which can then be stored and read back for reproduction. Normally, the sound is transduced (as by a microphone) to an analog signal in the same way as for analog recording, and then the analog signal is digitized, or converted to a digital signal, through an analog-to-digital converter and then recorded onto a digital storage medium such as a compact disc or hard disk.

Two prominent differences in functionality are the bandwidth and the signal-to-noise ratio (S/N); however, both digital and analog systems have inherent strengths and weaknesses. The bandwidth of a digital system is determined, according to the Nyquist frequency, by the sample rate used. The bandwidth of an analog system depends on the physical capabilities of the analog circuits. The S/N of a digital system is first limited by the bit depth of the digitization process, but the electronic implementation of the digital audio circuit introduces additional noise. In an analog system, other natural analog noise sources exist, such as flicker noise and imperfections in the recording medium. Some functions of the two systems are also naturally exclusive to either one or the other, such as the ability for more transparent filtering algorithms in digital systems and the harmonic saturation of analog systems.

Overview of differences

It is a subject of debate whether analog audio is superior to digital audio or vice versa. The question is highly dependent on the quality of the systems (analog or digital) under review, and on other factors which are not necessarily related to sound quality. Arguments for analog systems include the absence of fundamental error mechanisms which are present in digital audio systems, including aliasing, quantization noise, and the absolute limitation of dynamic range. Advocates of digital point to the high levels of performance possible with digital audio, including excellent linearity in the audible band and low levels of noise and distortion. Accurate, high quality sound reproduction is possible with both analog and digital systems.
Excellent, expensive analog systems may outperform digital systems, and vice versa; in theory any system of either type may be surpassed by a better, more elaborate and costly system of the other type, but in general it tends to be less expensive to achieve any given standard of technical signal quality with a digital system, except when the standard is very low. One of the most limiting aspects of analog technology is the sensitivity of analog media to minor physical degradation; however, when the degradation is more pronounced, analog systems usually perform better, often still producing recognizable sound, while digital systems will usually fail completely, unable to play back anything from the medium. (See digital cliff.) The principal advantages that digital systems have are very uniform source fidelity, inexpensive media duplication, and direct use of the digital 'signal' in today's popular portable storage and playback devices. Analog recordings by comparison require comparatively bulky, high-quality playback equipment to capture the signal from the media as accurately as digital. Early in the development of the Compact Disc, engineers realized that the perfection of the spiral of bits was critical to playback fidelity. A scratch the width of a human hair (100 micrometres) could corrupt several dozen bits, resulting in at best a pop, and far worse, a loss of synchronization of the clock and data, giving a long segment of noise until resynchronized. This was addressed by encoding the digital stream with a multi-tiered error-correction coding scheme which reduces CD capacity by about 20%, but makes it tolerant to hundreds of surface imperfections across the disk without loss of signal. In essence, "error correction" can be thought of as "using the mathematically encoded backup copies of the data that was corrupted." Not only does the CD use redundant data, but it also mixes up the bits in a predetermined way (see CIRC) so that a small flaw on the disc will affect fewer consecutive bits of the decoded signal and allow for more effective error correction using the available backup information. Error correction allows digital formats to tolerate quite a bit more media deterioration than analog formats. That is not to say poorly produced digital media are immune to data loss. Laser rot was most troublesome to the Laserdisc format, but also occurs to some pressed commercial CDs, and was caused in both cases by inadequate disc manufacture.[note 1] There can occasionally be difficulties related to the use of consumer recordable/rewritable compact discs. This may be due to poor-quality CD recorder drives, low-quality discs, or incorrect storage, as the information-bearing dye layer of most CD-recordable discs is at least slightly sensitive to UV light and will be slowly bleached out if exposed to any amount of it. Most digital recordings rely at least to some extent on computational encoding and decoding and so may become completely unplayable if not enough consecutive good data is available for the decoder to synchronize to the digital data stream, whereas any intact fragment of any size of an analog recording is playable. Unlike analog duplication, digital copies are exact replicas, which can be duplicated indefinitely[note 2] without degradation. 
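The "mixing up" of bits mentioned above (interleaving) can be illustrated with a toy example. The Python sketch below is not the CIRC scheme used on real CDs, just a minimal block interleaver showing the principle: a burst of consecutive damaged symbols on the disc turns into isolated single-symbol errors after de-interleaving, which a simple error-correcting code could then repair.

```python
DEPTH = 8     # interleaving depth (rows); real CD interleaving is deeper
BLOCK = 6     # symbols per codeword (columns)

def interleave(symbols):
    """Write row by row, read column by column (a simple block interleaver)."""
    rows = [symbols[i * BLOCK:(i + 1) * BLOCK] for i in range(DEPTH)]
    return [rows[r][c] for c in range(BLOCK) for r in range(DEPTH)]

def deinterleave(symbols):
    """Inverse permutation: write column by column, read row by row."""
    cols = [symbols[c * DEPTH:(c + 1) * DEPTH] for c in range(BLOCK)]
    return [cols[c][r] for r in range(DEPTH) for c in range(BLOCK)]

data = list(range(DEPTH * BLOCK))      # 48 symbols = 8 codewords of 6
on_disc = interleave(data)

# Simulate a scratch: a burst of 7 consecutive symbols is lost on the disc.
damaged = on_disc[:]
for i in range(20, 27):
    damaged[i] = None

recovered = deinterleave(damaged)
codewords = [recovered[i * BLOCK:(i + 1) * BLOCK] for i in range(DEPTH)]
for word in codewords:
    note = "<- at most one erasure per codeword" if word.count(None) <= 1 else ""
    print(word, note)
```

On a real CD the same idea is applied with Reed-Solomon codewords and a cross-interleave pattern rather than this toy permutation, which is how a scratch many bits long is reduced to a handful of correctable errors.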
Digital systems often have the ability for the same medium to be used with arbitrarily high or low quality encoding methods and numbers of channels or other content, unlike practically all analog systems, which have mechanically pre-fixed speeds and channels. Most higher-end analog recording systems offer a few selectable recording speeds, but digital systems tend to offer much finer variation in the rate of media usage. There are also several practical, non-sound-related advantages of digital systems. Digital systems that are computer-based make editing much easier through rapid random access, seeking, and scanning for non-linear editing. Most digital systems also allow non-audio data to be encoded into the digital stream, such as information about the artist, track titles, etc., which is often convenient.[note 3]

Noise and distortion

In the process of recording, storing and playing back the original analog sound wave (in the form of an electronic signal), it is unavoidable that some signal degradation will occur. This degradation takes the form of distortion and noise. Noise is unrelated in time to the original signal content, while distortion is in some way related in time to the original signal content.

Noise performance

For electronic audio signals, sources of noise include mechanical, electrical and thermal noise in the recording and playback cycle. The actual process of digital conversion will always add some noise, however small in intensity; the bulk of this in a high-quality system is quantization noise, which cannot theoretically be avoided, but some will also be electrical, thermal, and similar noise from the analog-to-digital converter device. The amount of noise that a piece of audio equipment adds to the original signal can be quantified. Mathematically, this can be expressed by means of the signal-to-noise ratio (SNR or S/N). Sometimes the maximum possible dynamic range of the system is quoted instead.

In a digital system, the number of quantization levels, in binary systems determined by and typically stated in terms of the number of bits, will have a bearing on the level of noise and distortion added to the signal. The 16-bit digital system of Red Book audio CD has 2^16 = 65,536 possible signal amplitudes, theoretically allowing for an SNR of 98 dB. Each additional quantization bit adds about 6 dB of possible SNR, e.g. 24 x 6 = 144 dB for 24-bit quantization, 126 dB for 21-bit, and 120 dB for 20-bit. With digital systems, the quality of reproduction depends on the analog-to-digital and digital-to-analog conversion steps, and does not depend on the quality of the recording medium, provided it is adequate to retain the digital values without error.

Analog systems

Consumer analog cassette tapes may have a dynamic range of 60 to 70 dB. Analog FM broadcasts rarely have a dynamic range exceeding 50 dB, though under excellent reception conditions the basic FM transmission system can achieve just over 80 dB. The dynamic range of a direct-cut vinyl record may surpass 70 dB. Analog studio master tapes using Dolby-A noise reduction can have a dynamic range of around 80 dB.

"Rumble" is a form of noise caused by imperfections in the bearings of turntables: the platter tends to have a slight amount of motion besides the desired rotation—the turntable surface also moves up-and-down and side-to-side slightly. This additional motion is added to the desired signal as noise, usually of very low frequencies, creating a "rumbling" sound during quiet passages.
Very inexpensive turntables sometimes used ball bearings, which are very likely to generate audible amounts of rumble. More expensive turntables tend to use massive sleeve bearings, which are much less likely to generate offensive amounts of rumble. Increased turntable mass also tends to lead to reduced rumble. A good turntable should have rumble at least 60 dB below the specified output level from the pick-up.

Wow and flutter

Wow and flutter are changes in frequency of an analog device and are the result of mechanical imperfections, with wow being a slower-rate form of flutter. Wow and flutter are most noticeable on signals which contain pure tones. For LP records, the quality of the turntable will have a large effect on the level of wow and flutter. A good turntable will have wow and flutter values of less than 0.05%, which is the speed variation from the mean value. Wow and flutter can also be present in the recording, as a result of the imperfect operation of the recorder.

Frequency response

Digital mechanisms

The frequency response of the standard for audio CDs is sufficiently wide to cover the entire normal audible range, which roughly extends from 20 Hz to 20 kHz. Commercial and industrial digital recorders record higher frequencies, while consumer systems inferior to the CD record a more restricted frequency range. The frequency response of analog audio is typically less flat than that of digital, and it varies with the electronics involved. For digital systems, the upper limit of the frequency response is determined by the sampling frequency. The choice of sample rate used in a digital system is based on the Nyquist-Shannon sampling theorem. This states that a sampled signal can be reproduced exactly as long as it is sampled at a frequency greater than twice the bandwidth of the signal. Therefore, a sampling rate of 40 kHz would be theoretically enough to capture all the information contained in a signal having a frequency bandwidth up to 20 kHz.

Analog mechanisms

High-quality open-reel machines can extend from 10 Hz to above 20 kHz. The linearity of the response may be indicated by providing information on the level of the response relative to a reference frequency. For example, a system component may have a response given as 20 Hz to 20 kHz +/- 3 dB relative to 1 kHz. Some analog tape manufacturers specify frequency responses up to 20 kHz, but these measurements may have been made at lower signal levels. Compact cassettes may have a response extending up to 15 kHz at full (0 dB) recording level (Stark 1989). At lower levels (usually -10 dB), cassettes typically roll off at around 20 kHz on most machines, due to self-erasure in the tape medium (which worsens the linearity of the response). The frequency response for a conventional LP player might be 20 Hz - 20 kHz +/- 3 dB. Unlike the audio CD, vinyl records and cassettes do not require a cut-off in response above 20 kHz. The low-frequency response of vinyl records is restricted by rumble noise (described above). The high-frequency response of vinyl depends on the cartridge. CD4 records contained frequencies up to 50 kHz, and some high-end turntable cartridges have frequency responses extending to 120 kHz while remaining flat over the audible band (e.g. 20 Hz to 15 kHz +/- 0.3 dB). In addition, frequencies of up to 122 kHz have been experimentally cut on LP records. In comparison, the CD system offers a frequency response of 20 Hz–20 kHz ±0.5 dB, with a superior dynamic range over the entire audible frequency spectrum.
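The sampling-theorem limit described above can be illustrated with a small helper that computes where a tone ends up after sampling: content below half the sample rate is preserved, while content above it folds back into the audible band. The CD sample rate comes from the text; the example tone frequencies are arbitrary choices for the illustration.

```python
def alias_frequency(f_signal_hz, f_sample_hz):
    """Frequency at which a tone appears after sampling: content above
    fs/2 folds back into the 0..fs/2 band (aliasing), per the
    Nyquist-Shannon sampling theorem."""
    f = f_signal_hz % f_sample_hz
    return f if f <= f_sample_hz / 2 else f_sample_hz - f

FS = 44_100.0  # CD sampling rate
for tone in (15_000.0, 22_050.0, 25_000.0):
    out = alias_frequency(tone, FS)
    print(f"{tone / 1000:6.2f} kHz tone -> appears at {out / 1000:6.3f} kHz")
```

A 15 kHz tone is below fs/2 and comes back at 15 kHz, whereas a 25 kHz tone folds back to 19.1 kHz; that fold-over is exactly what the anti-aliasing filtering discussed in the next passage has to prevent.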
With vinyl records, there will be some loss in fidelity on each playing of the disc. This is due to the wear of the stylus in contact with the record surface. A good quality stylus, matched with a correctly set up pick-up arm, should cause minimal surface wear. Magnetic tapes, both analog and digital, wear from friction between the tape and the heads, guides, and other parts of the tape transport as the tape slides over them. The brown residue deposited on swabs during cleaning of a tape machine's tape path is actually particles of magnetic coating shed from tapes. Tapes can also suffer creasing, stretching, and frilling of the edges of the plastic tape base, particularly from low-quality or out-of-alignment tape decks. When a CD is played, there is no physical contact involved, and the data is read optically using a laser beam. Therefore no such media deterioration takes place, and the CD will, with proper care, sound exactly the same every time it is played (discounting aging of the player and CD itself); however, this is a benefit of the optical system, not of digital recording, and the Laserdisc format enjoys the same non-contact benefit with analog optical signals. Recordable CDs slowly degrade with time (a problem known as disc rot), even if they are not played and are stored properly.

A technical difficulty arises with digital sampling in that all high-frequency signal content above the Nyquist frequency must be removed prior to sampling; if it is not, these ultrasonic frequencies "fold over" into frequencies which are in the audible range, producing a kind of distortion called aliasing. The difficulty is that designing a brick-wall anti-aliasing filter, a filter which would pass all frequency content below a certain cutoff frequency and completely remove everything above it, is impractical. Instead, a sample rate is usually chosen which is above the theoretical requirement. This solution is called oversampling, and allows a less aggressive and lower-cost anti-aliasing filter to be used. Unlike digital audio systems, analog systems do not require filters for bandlimiting. These filters act to prevent aliasing distortions in digital equipment. Early digital systems may have suffered from a number of signal degradations related to the use of analog anti-aliasing filters, e.g., time dispersion, nonlinear distortion, temperature dependence of filters, etc. (Hawksford 1991:8). Even with sophisticated anti-aliasing filters used in the recorder, it is still demanding for the player not to introduce more distortion. Hawksford (1991:18) highlighted the advantages of digital converters that oversample. Using an oversampling design and a modulation scheme called sigma-delta modulation (SDM), analog anti-aliasing filters can effectively be replaced by a digital filter. This approach has several advantages. The digital filter can be made to have a near-ideal transfer function, with low in-band ripple, and no aging or thermal drift.

Higher sampling rates

CD quality audio is sampled at 44.1 kHz (Nyquist frequency = 22.05 kHz) and at 16 bits. Sampling the waveform at higher frequencies and allowing for a greater number of bits per sample allows noise and distortion to be reduced further. DAT can store audio at up to 48 kHz, while DVD-Audio can be 96 or 192 kHz and up to 24-bit resolution. With any of these sampling rates, signal information is captured above what is generally considered to be the human hearing range. Work done in 1980 by Muraoka et al. (J.Audio Eng.
Soc., Vol 29, pp2–9) showed that music signals with frequency components above 20 kHz were only distinguished from those without by a few of the 176 test subjects (Kaoru & Shogo 2001). Later papers, however, by a number of different authors, have led to a greater discussion of the value of recording frequencies above 20 kHz. Such research led some to the belief that capturing these ultrasonic sounds could have some audible benefit. Audible differences were reported between recordings with and without ultrasonic responses. Dunn (1998) examined the performance of digital converters to see if these differences in performance could be explained. He did this by examining the band-limiting filters used in converters and looking for the artifacts they introduce. A perceptual study by Nishiguchi et al. (2004) concluded that "no significant difference was found between sounds with and without very high frequency components among the sound stimuli and the subjects... however, [Nishiguchi et al] can still neither confirm nor deny the possibility that some subjects could discriminate between musical sounds with and without very high frequency components." Additionally, in blind tests conducted by Bob Katz, recounted in his book Mastering Audio: The Art and the Science, he found that listening subjects could not discern any audible difference between sample rates with optimum A/D conversion and filter performance. He posits that the primary reason for any aural variation between sample rates is due largely to poor performance of low-pass filtering prior to conversion, and not variance in ultrasonic bandwidth. These results suggest that the main benefit to using higher sample rates is that it pushes consequential phase distortion out of the audible range and that, under ideal conditions, higher sample rates may not be necessary. Digital errors A signal is recorded digitally by an analog-to-digital converter, which measures the amplitude of an analog signal at regular intervals, which are specified by the sample rate, and then stores these sampled numbers in computer hardware. The fundamental problem with numbers on computers is that the range of values that can be represented is finite, which means that during sampling, the amplitude of the audio signal must be rounded. This process is called quantization, and these small errors in the measurements are manifested aurally as a form of low level distortion. Analog systems do not have discrete digital levels in which the signal is encoded. Consequently, the original signal can be preserved to an accuracy limited only by the intrinsic noise-floor and maximum signal level of the media and the playback equipment, i.e., the dynamic range of the system. With digital systems, noise added due to quantization into discrete levels is more audibly disturbing than the noise-floor in analog systems. This form of distortion, sometimes called granular or quantization distortion, has been pointed to as a fault of some digital systems and recordings (Knee & Hawksford 1995, Stuart n.d.:6). Knee & Hawksford (1995:3) drew attention to the deficiencies in some early digital recordings, where the digital release was said to be inferior to the analog version. The range of possible values that can be represented numerically by a sample is defined by the number of binary digits used. This is called the resolution, and is usually referred to as the bit depth in the context of PCM audio. 
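As a rough numerical illustration of how bit depth sets the quantization noise level, the sketch below quantizes a full-scale sine wave at several resolutions and measures the resulting signal-to-error ratio. The measured values track the standard 6.02N + 1.76 dB rule of thumb (about 6 dB per extra bit, and roughly 98 dB at the CD's 16 bits, as quoted earlier in this article); the test frequency and duration are arbitrary choices, not parameters from the text.

```python
import math

def measure_snr_db(bits, n_samples=48_000, freq=997, fs=48_000):
    """Quantize a full-scale sine to 'bits' resolution and return the ratio
    of signal power to quantization-error power, in dB."""
    step = 2.0 / 2 ** bits                 # level spacing over the -1..+1 range
    sig_power = err_power = 0.0
    for n in range(n_samples):
        x = math.sin(2 * math.pi * freq * n / fs)
        q = round(x / step) * step         # ideal uniform (mid-tread) quantizer
        sig_power += x * x
        err_power += (q - x) ** 2
    return 10 * math.log10(sig_power / err_power)

for bits in (8, 16, 20, 24):
    theory = 6.02 * bits + 1.76
    print(f"{bits:2d} bits: measured {measure_snr_db(bits):6.1f} dB, "
          f"rule of thumb {theory:6.1f} dB")
```

The close agreement between the measured and predicted figures is the quantitative content of the statement, made just below, that the quantization noise level is directly determined by the number of bits.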
The quantization noise level is directly determined by this number, decreasing exponentially as the resolution increases (or linearly in dB units), and with an adequate number of true bits of quantization, random noise from other sources will dominate and completely mask the quantization noise. Dither as a solution It is possible to make quantization noise more audibly benign by applying dither. To do this, a noise-like signal is added to the original signal before quantization. Dither makes the digital system behave as if it has an analog noise-floor. Optimal use of dither (triangular probability density function dither in PCM systems) has the effect of making the rms quantization error independent of signal level (Dunn 2003:143), and allows signal information to be retained below the least significant bit of the digital system (Stuart n.d.:3). Dither algorithms also commonly have an option to employ some kind of noise shaping, which pushes the frequency response of the dither noise to areas that are less audible to human ears. This has no statistical benefit, but rather it raises the S/N of the audio that is apparent to the listener. One aspect that may degrade the performance of a digital system is jitter. This is the phenomenon of variations in time from what should be the correct spacing of discrete samples according to the sample rate. This can be due to timing inaccuracies of the digital clock. Ideally a digital clock should produce a timing pulse at exactly regular intervals. Other sources of jitter within digital electronic circuits are data-induced jitter, where one part of the digital stream affects a subsequent part as it flows through the system, and power supply induced jitter, where DC ripple on the power supply output rails causes irregularities in the timing of signals in circuits powered from those rails. The accuracy of a digital system is dependent on the sampled amplitude values, but it is also dependent on the temporal regularity of these values. This temporal dependency is inherent to digital recording and playback and has no analog equivalent, though analog systems have their own temporal distortion effects (pitch error and wow-and-flutter). Periodic jitter produces modulation noise and can be thought of as being the equivalent of analog flutter (Rumsey & Watkinson 1995). Random jitter alters the noise floor of the digital system. The sensitivity of the converter to jitter depends on the design of the converter. It has been shown that a random jitter of 5 ns (nanoseconds) may be significant for 16 bit digital systems (Rumsey & Watkinson 1995). For a more detailed description of jitter theory, refer to Dunn (2003). Jitter can degrade sound quality in digital audio systems. In 1998, Benjamin and Gannon researched the audibility of jitter using listening tests (Dunn 2003:34). They found that the lowest level of jitter to be audible was around 10 ns (rms). This was on a 17 kHz sine wave test signal. With music, no listeners found jitter audible at levels lower than 20 ns. A paper by Ashihara et al. (2005) attempted to determine the detection thresholds for random jitter in music signals. Their method involved ABX listening tests. When discussing their results, the authors of the paper commented that: 'So far, actual jitter in consumer products seems to be too small to be detected at least for reproduction of music signals. 
It is not clear, however, if detection thresholds obtained in the present study would really represent the limit of auditory resolution or it would be limited by resolution of equipment. Distortions due to very small jitter may be smaller than distortions due to non-linear characteristics of loudspeakers. Ashihara and Kiryu evaluated linearity of loudspeaker and headphones. According to their observation, headphones seem to be more preferable to produce sufficient sound pressure at the ear drums with smaller distortions than loudspeakers.' On the Internet-based hi-fi website TNT Audio, Pozzoli (2005) describes some audible effects of jitter. His assessment appears to run contrary to the earlier papers mentioned: 'In my personal experience, and I would dare say in common understanding, there is a huge difference between the sound of low and high jitter systems. When the jitter amount is very high, as in very low cost CD players (2ns), the result is somewhat similar to wow and flutter, the well known problem that affected typically compact cassettes (and in a far less evident way turntables) and was caused by the non perfectly constant speed of the tape: the effect is similar, but here the variations have a far higher frequency and for this reasons are less easy to perceive but equally annoying. Very often in these cases the rhythmic message, the pace of the most complicated musical plots is partially or completely lost, music is dull, scarcely involving and apparently meaningless, it does not make any sense. Apart for harshness, the typical "digital" sound, in a word... In lower amounts, the effect above is difficult to perceive, but jitter is still able to cause problems: reduction of the soundstage width and/or depth, lack of focus, sometimes a veil on the music. These effects are however far more difficult to trace back to jitter, as can be caused by many other factors.' Dynamic range The dynamic range of an audio system is a measure of the difference between the smallest and largest amplitude values that can be represented in a medium. Digital and analog differ in both the methods of transfer and storage, as well as the behavior exhibited by the systems due to these methods. Overload conditions There are some differences in the behaviour of analog and digital systems when high level signals are present, where there is the possibility that such signals could push the system into overload. With high level signals, analog magnetic tape approaches saturation, and high frequency response drops in proportion to low frequency response. While undesirable, the audible effect of this can be reasonably unobjectionable (Elsea 1996). In contrast, digital PCM recorders show non-benign behaviour in overload (Dunn 2003:65); samples that exceed the peak quantization level are simply truncated, clipping the waveform squarely, which introduces distortion in the form of large quantities of higher-frequency harmonics. The 'softness' of analog tape clipping allows a usable dynamic range that can exceed that of some PCM digital recorders. (PCM, or pulse code modulation, is the coding scheme used in Compact Disc, DAT, PC sound cards, and many studio recording systems.) In principle, PCM digital systems have the lowest level of nonlinear distortion at full signal amplitude. The opposite is usually true of analog systems, where distortion tends to increase at high signal levels. A study by Manson (1980) considered the requirements of a digital audio system for high quality broadcasting. 
It concluded that a 16 bit system would be sufficient, but noted the small reserve the system provided in ordinary operating conditions. For this reason, it was suggested that a fast-acting signal limiter or 'soft clipper' be used to prevent the system from becoming overloaded (Manson 1980:8). With many recordings, high level distortions at signal peaks may be audibly masked by the original signal, thus large amounts of distortion may be acceptable at peak signal levels. The difference between analog and digital systems is the form of high-level signal error. Some early analog-to-digital converters displayed non-benign behaviour when in overload, where the overloading signals were 'wrapped' from positive to negative full-scale. Modern converter designs based on sigma-delta modulation may become unstable in overload conditions. It is usually a design goal of digital systems to limit high-level signals to prevent overload (Dunn 2003:65). To prevent overload, a modern digital system may compress input signals so that digital full-scale cannot be reached (Jones et al. 2003:4). The dynamic range of digital audio systems can exceed that of analog audio systems. Typically, a 16 bit analog-to-digital converter may have a dynamic range of between 90 to 95 dB (Metzler 2005:132), whereas the signal-to-noise ratio (roughly the equivalent of dynamic range, noting the absence of quantization noise but presence of tape hiss) of a professional reel-to-reel 1/4 inch tape recorder would be between 60 and 70 dB at the recorder's rated output (Metzler 2005:111). The benefits of using digital recorders with greater than 16 bit accuracy can be applied to the 16 bits of audio CD. Stuart (n.d.:3) stresses that with the correct dither, the resolution of a digital system is theoretically infinite, and that it is possible, for example, to resolve sounds at -110 dB (below digital full-scale) in a well-designed 16 bit channel. Signal processing After initial recording, it is common for the audio signal to be altered in some way, such as with the use of compression, equalization, delays and reverb. With analog, this comes in the form of outboard hardware components, and with digital, the same is accomplished with plug-ins that are utilized in the user's DAW. A comparison of analog and digital filtering shows technical advantages to both methods, and there are several points that are relevant to the recording process. Analog hardware Many analog units possess unique characteristics that are desirable. Common elements are band shapes and phase response of equalizers and response times of compressors. These traits can be difficult to reproduce digitally because they are due to electrical components which function differently than the algorithmic calculations used on a computer. When altering a signal with a filter, the outputted signal may differ in time from the signal at the input, which is called a change in phase. Many equalizers exhibit this behavior, with the amount of phase shift differing in some pattern, and centered around the band that is being adjusted. This phase distortion can create the perception of a "ringing" sound around the filter band, or other coloration. Although this effect alters the signal in a way other than a strict change in frequency response, this coloration can sometimes have a positive effect on the perception of the sound of the audio signal. 
Signal processing

After initial recording, it is common for the audio signal to be altered in some way, such as with the use of compression, equalization, delays and reverb. With analog, this comes in the form of outboard hardware components, and with digital, the same is accomplished with plug-ins that are utilized in the user's DAW. A comparison of analog and digital filtering shows technical advantages to both methods, and there are several points that are relevant to the recording process.

Analog hardware

Many analog units possess unique characteristics that are desirable. Common elements are the band shapes and phase response of equalizers and the response times of compressors. These traits can be difficult to reproduce digitally because they are due to electrical components which behave differently from the algorithmic calculations used on a computer. When altering a signal with a filter, the output signal may differ in time from the signal at the input, which is called a change in phase. Many equalizers exhibit this behavior, with the amount of phase shift differing in some pattern, and centered around the band that is being adjusted. This phase distortion can create the perception of a "ringing" sound around the filter band, or other coloration. Although this effect alters the signal in a way other than a strict change in frequency response, this coloration can sometimes have a positive effect on the perception of the sound of the audio signal.

Digital filters

One prime example is the linear phase equalizer, which delays all frequency components by the same amount and so introduces no frequency-dependent phase distortion. Digital delays can also be perfectly exact, provided the delay time is some multiple of the time between samples, and so can the summing of a multitrack recording, as the sample values are merely added together. A practical advantage of digital processing is the more convenient recall of settings. Plug-in parameters can be stored on the computer hard disk, whereas parameter details on an analog unit must be written down or otherwise recorded if the unit needs to be reused. This can be cumbersome when entire mixes must be recalled manually using an analog console and outboard gear. When working digitally, all parameters can simply be stored in a DAW project file and recalled instantly. Most modern professional DAWs (such as Avid Pro Tools and Apple Logic Pro) also process plug-ins in real time, which means that processing can be largely non-destructive until final mix-down.

Analog modeling

Many plug-ins exist now, such as those by Waves and iZotope, that incorporate some kind of analog modeling. Some engineers endorse them and feel that they sound comparable to the analog processes they imitate. Digital models also carry some benefits over their analog counterparts, such as the ability to remove noise from the algorithms and add modifications to make the parameters more flexible. On the other hand, other engineers feel that the modeling is still inferior to the genuine outboard components and still prefer to mix "outside the box".

Sound quality

Subjective evaluation

Subjective evaluation attempts to measure how well an audio component performs according to the human ear. The most common form of subjective test is a listening test, where the audio component is simply used in the context for which it was designed. This test is popular with hi-fi reviewers, where the component is used for a length of time by the reviewer who then will describe the performance in subjective terms. Common descriptions include whether the component has a 'bright' or 'dull' sound, or how well the component manages to present a 'spatial image'. Another type of subjective test is done under more controlled conditions and attempts to remove possible bias from listening tests. These sorts of tests are done with the component hidden from the listener, and are called blind tests. To prevent possible bias from the person running the test, the blind test may be done so that this person is also unaware of the component under test. This type of test is called a double-blind test. This sort of test is often used to evaluate the performance of digital audio codecs. Critics of double-blind tests see them as not allowing the listener to feel fully relaxed when evaluating the system component, and argue that listeners therefore cannot judge differences between components as well as they can in sighted (non-blind) tests. Those who employ the double-blind testing method may try to reduce listener stress by allowing a certain amount of time for listener training (Borwick et al. 1994:481-488).

Early digital recordings

Early digital audio machines had disappointing results, with digital converters introducing errors that the ear could detect (Watkinson 1994). Record companies released their first LPs based on digital audio masters in the late 1970s. CDs became available in the early 1980s.
At this time analog sound reproduction was a mature technology. There was a mixed critical response to early digital recordings released on CD. Compared to the vinyl record, it was noticed that CD was far more revealing of the acoustics and ambient background noise of the recording environment (Greenfield et al. 1986). For this reason, recording techniques developed for the analog disc, e.g., microphone placement, needed to be adapted to suit the new digital format (Greenfield et al. 1986). Some analog recordings were remastered for digital formats. Analog recordings made in natural concert hall acoustics tended to benefit from remastering (Greenfield et al. 1990). The remastering process was occasionally criticised for being poorly handled. When the original analog recording was fairly bright, remastering sometimes resulted in an unnatural treble emphasis (Greenfield et al. 1990).

Super Audio CD and DVD Audio

The Super Audio CD (SACD) format was created by Sony and Philips, who were also the developers of the earlier standard audio CD format. SACD uses Direct Stream Digital, which works quite differently from the PCM format discussed in this article. Instead of using a greater number of bits and attempting to record a signal's precise amplitude for every sample cycle, a Direct Stream Digital recorder works by encoding a signal in a series of PWM pulses of fixed amplitude but variable duration and timing. The competing DVD-Audio format uses standard, linear PCM at variable sampling rates and bit depths, which at the very least match and usually greatly surpass those of standard CD audio (16 bits, 44.1 kHz). In the popular hi-fi press, it had been suggested that linear PCM "creates [a] stress reaction in people", and that DSD "is the only digital recording system that does not [...] have these effects" (Hawksford 2001). This claim appears to originate from a 1980 article by Dr John Diamond entitled Human Stress Provoked by Digitalized Recordings. The core of the claim that PCM (the only digital recording technique available at the time) recordings created a stress reaction rested on "tests" carried out using the pseudoscientific technique of Applied Kinesiology, for example by Dr Diamond at an AES 66th Convention (1980) presentation with the same title. Diamond had previously used a similar technique to demonstrate that rock music (as opposed to classical) was bad for your health due to the presence of the "stopped anapestic beat". Dr Diamond's claims regarding digital audio were taken up by Mark Levinson, who asserted that while PCM recordings resulted in a stress reaction, DSD recordings did not. A double-blind subjective test between high resolution linear PCM (DVD-Audio) and DSD did not reveal a statistically significant difference. Listeners involved in this test noted their great difficulty in hearing any difference between the two formats.

Analog warmth

Some audio enthusiasts prefer the sound of vinyl records over that of a CD. Harry Pearson, founder and editor of The Absolute Sound journal, says that "LPs are decisively more musical. CDs drain the soul from music. The emotional involvement disappears". Dub producer Adrian Sherwood has similar feelings about the analog cassette tape, which he prefers because of its warm sound. Those who favour the digital format point to the results of blind tests, which demonstrate the high performance possible with digital recorders. The assertion is that the 'analog sound' is more a product of analog format inaccuracies than anything else.
One of the first and largest supporters of digital audio was the classical conductor Herbert von Karajan, who said that digital recording was "definitely superior to any other form of recording we know". He also pioneered the unsuccessful Digital Compact Cassette and conducted the first recording ever to be commercially released on CD: Richard Strauss's Eine Alpensinfonie.

Was it ever entirely analog or digital?

Complicating the discussion is that recording professionals often mix and match analog and digital techniques in the process of producing a recording. Analog signals can be subjected to digital signal processing or effects, and conversely digital signals are converted back to analog in equipment that can include analog steps such as vacuum tube amplification. For modern recordings, the controversy between analog recording and digital recording is becoming moot. No matter what format the user uses, the recording probably was digital at several stages in its life. In the case of video recordings it is moot for another reason: whether the format is analog or digital, digital signal processing is likely to have been used at some stage of its life, such as digital timebase correction on playback.

An additional complication arises when discussing human perception in the comparison of analog and digital audio, in that the human ear itself is an analog-digital hybrid. The human hearing mechanism begins with the tympanic membrane transferring vibrational motion through the middle ear's mechanical system (the three bones: malleus, incus and stapes) into the cochlea, where hair cells convert the vibrational stimulus into nerve impulses. Auditory nerve impulses are discrete signalling events which cause synapses to release neurotransmitters to communicate with other neurons. The all-or-none quality of the impulse can lead to a misconception that neural signalling is somehow 'digital' in nature, but in fact the timing and rate of these signalling events are not clocked or quantised in any way. Thus the transformation of the acoustic wave is not a process of sampling, in the sense of the word as it applies to digital audio. Instead it is a transformation from one analog domain to another, and this transformation is further processed by the neurons to which the signalling is connected. The brain then processes the incoming information and perceptually reconstructs the original analog input to the ear canal.

It is also worth noting two issues that affect the perception of sound playback. The first is the dynamic range of the human ear, which for practical and hearing-safety reasons might be regarded as about 120 decibels, from a barely audible sound received by the ear in an otherwise silent environment, to the threshold of pain or the onset of damage to the ear's delicate mechanism. The other critical issue is manifestly more complex: the presence and nature of background noise in any listening environment. Background noise subtracts useful hearing dynamic range in any number of ways that depend on the nature of the noise from the listening environment: its spectral content, its coherence or periodicity, angular aspects such as the localization of noise sources with respect to the localization of playback system sources, and so on.
Hybrid systems

While the words analog audio usually imply that the sound is described using a continuous time/continuous amplitudes approach in both the media and the reproduction/recording systems, and the words digital audio imply a discrete time/discrete amplitudes approach, there are methods of encoding audio that fall somewhere between the two, e.g. continuous time/discrete levels and discrete time/continuous levels. While not as common as "pure analog" or "pure digital" methods, these situations do occur in practice. Indeed, all analog systems show discrete (quantized) behaviour at the microscopic scale, and asynchronously operated class-D amplifiers even deliberately incorporate continuous time, discrete amplitude designs. Continuous amplitude, discrete time systems have also been used in many early analog-to-digital converters, in the form of sample-and-hold circuits. The boundary is further blurred by digital systems which statistically aim at analog-like behavior, most often by utilizing stochastic dithering and noise shaping techniques. While vinyl records and common compact cassettes are analog media and use quasi-linear physical encoding methods (e.g. spiral groove depth, tape magnetic field strength) without noticeable quantization or aliasing, there are analog non-linear systems that exhibit effects similar to those encountered on digital ones, such as aliasing and "hard" dynamic floors (e.g. frequency modulated hi-fi audio on videotapes, PWM encoded signals). Although those "hybrid" techniques are usually more common in telecommunications systems than in consumer audio, their existence alone blurs the distinctive line between certain digital and analog systems, at least as regards some of their alleged advantages or disadvantages.

Notes and references

- Note that Laserdisc, despite using a laser optical system that has become commonly associated with digital disc formats, is an old analog format, except for its optional digital audio tracks; the video image portion of the content is always analog.
- Unless imposed DRM restrictions apply.
- It is technically possible to implement analog systems with integrated digital metadata channels.
- "Chapter 21: Filter Comparison". dspguide.com. Retrieved 2012-09-13.
- Sony Europe (2001). Digital Audio Technology 4th edn, edited by J. Maes & M. Vercammen. Focal Press.
- Driscoll, R. (1980). Practical Hi-Fi Sound, 'Analogue and digital', pages 61–64; 'The pick-up, arm and turntable', pages 79–82. Hamlyn. ISBN 0-600-34627-7.
- Technics EPC-100CMK4
- "mastering". Positive-feedback.com. Retrieved 2012-08-15.
- Thompson, Dan. Understanding Audio. Berklee Press, 2005, ch. 14.
- "NHK Laboratories Note No. 486". Nhk.or.jp. Retrieved 2012-08-15.
- Katz, Bob. Mastering Audio: The Art and the Science. Focal Press, 2007.
- "Jitter explained - Part 1.4 [English]". Tnt-audio.com. Retrieved 2012-08-15.
- John Eargle, Chris Foreman. Audio Engineering for Sound Reinforcement, The Advantages of Digital Transmission and Signal Processing. Retrieved 2012-09-14.
- Bo Boddie. "Review: Waves CLA Classic Compressors & Eddie Kramer PIE Compressor and HLS EQ". Retrieved 2012-09-13.
- "An In-Depth Interview with Tony Maserati". Retrieved 2012-09-13.
- "Secrets Of The Mix Engineers: Chris Lord-Alge". Retrieved 2012-09-13.
- http://www.diamondcenter.net/digitalstress.html (dead link)
- Are the Kids All Right?: The Rock Generation and Its Hidden Death Wish, John Grant Fuller, ISBN 0812909704, pp. 130–135
- James Paul. "Last night a mix tape saved my life | Music | The Guardian". Arts.guardian.co.uk. Retrieved 2012-08-15.
- "ABX Testing article". Boston Audio Society. 1984-02-23. Retrieved 2012-08-15.
- "Analog or Digital?". St-andrews.ac.uk. Retrieved 2012-08-15.
- Ashihara, K. et al. (2005). "Detection threshold for distortions due to jitter on digital audio", Acoustical Science and Technology, Vol. 26 (2005), No. 1, pp. 50–54.
- Blech, D. & Yang, M. (2004). "Perceptual Discrimination of Digital Coding Formats", Audio Engineering Society Convention Paper 6086, May 2004.
- Croll, M. (1970). "Pulse Code Modulation for High Quality Sound Distribution: Quantizing Distortion at Very Low Signal Levels", Research Department Report No. 1970/18, BBC.
- Dunn, J. (1998). "The benefits of 96 kHz sampling rate formats for those who cannot hear above 20 kHz", Preprint 4734, presented at the 104th AES Convention, May 1998.
- Dunn, J. (2003). "Measurement Techniques for Digital Audio", Audio Precision Application Note #5, Audio Precision, Inc. USA. Retrieved March 9, 2008.
- Elsea, P. (1996). "Analog Recording of Sound". Electronic Music Studios at the University of California, Santa Cruz. Retrieved 9 March 2008.
- Ely, S. (1978). "Idle-channel noise in p.c.m. sound-signal systems". BBC Research Department, Engineering Division.
- Greenfield, E. et al. (1986). The Penguin Guide to Compact Discs, Cassettes and LPs. Edited by Ivan March. Penguin Books, England.
- Greenfield, E. et al. (1990). The Penguin Guide to Compact Discs. Edited by Ivan March. Preface, viii-ix. Penguin Books, England. ISBN 0-14-046887-0.
- Hawksford, M. (1991). "Introduction to Digital Audio", Images of Audio, Proceedings of the 10th International AES Conference, London, September 1991. Retrieved March 9, 2008.
- Hawksford, M. (1995). "Bitstream versus PCM debate for high-density compact disc", ARA/Meridian web page, November 1995.
- Hawksford, M. (2001). "SDM versus LPCM: The Debate Continues", 110th AES Convention, paper 5397.
- Hicks, C. (1995). "The Application of Dither and Noise-Shaping to Nyquist-Rate Digital Audio: an Introduction", Communications and Signal Processing Group, Cambridge University Engineering Department, United Kingdom.
- Jones, W. et al. (2003). "Testing Challenges in Personal Computer Audio Devices". Paper presented at the 114th AES Convention. Audio Precision, Inc., USA. Retrieved March 9, 2008.
- Kaoru, A. & Shogo, K. (2001). "Detection threshold for tones above 22 kHz", Audio Engineering Society Convention Paper 5401. Presented at the 110th Convention, 2001.
- Knee, A. & Hawksford, M. (1995). "Evaluation of Digital Systems and Digital Recording Using Real Time Audio Data". Paper for the 98th AES Convention, February 1995, preprint 4003 (M-2).
- Lesurf, J. "Analog or Digital?", The Scots Guide to Electronics. Retrieved October 2007.
- Libbey, T. "Digital versus analog: digital music on CD reigns as the industry standard", Omni, February 1995.
- Lipshitz, S. "The Digital Challenge: A Report", The BAS Speaker, Aug-Sept 1984.
- Lipshitz, S. (2005). "The Rise of Digital Audio: The Good, the Bad, and the Ugly". Abstract of Heyser Memorial Lecture given by Prof. Stanley Lipshitz at the 118th AES Convention.
- Liversidge, A. "Analog versus digital: has vinyl been wrongly dethroned by the music industry?", Omni, February 1995.
- Manson, W. (1980).
"Digital Sound: studio signal coding resolution for broadcasting". BBC Research Department, Engineering Division. - Nishiguchi, T. et al. (2004). "Perceptual Discrimination between Musical Sounds with and without Very High Frequency Components", NHK Laboratories Note No. 486, NHK (Japan Broadcasting Corporation). - Pozzoli, G. "DIGITabilis: crash course on digital audio interfaces. Part 1.4 - Enemy Interception. Effects of Jitter in Audio", "TNT-Audio - online HiFi review", 2005. - Pohlmann, K. (2005). Principles of Digital Audio 5th edn, McGraw-Hill Comp. - Rathmell, J. et al. (1997). "TDFD-based Measurement of Analog-to-Digital Converter Nonlinearity", Journal of the Audio Engineering Society, Volume 45, Number 10, pp. 832–840; October 1997. - Rumsey, F. & Watkinson, J. (1995). The Digital Interface Handbook, 2nd edition. Sections 2.5 and 6. Pages 37 and 154-160. Focal Press. - Stark, C. (1989). Encyclopædia Britannica, 15th edition, Volume 27, Macropaedia article 'Sound', section: 'High-fidelity concepts and systems', page 625. - Stuart, J. (n.d.). "Coding High Quality Digital Audio". Meridian Audio Ltd, UK. Retrieved 9 March 2008. This article is substantially the same as Stuart's 2004 JAES article "Coding for High-Resolution Audio Systems", Journal of the Audio Engineering Society, Volume 52 Issue 3 pp. 117–144; March 2004. - Watkinson J. (1994). An Introduction to Digital Audio. Section 1.2 'What is digital audio?', page 3; Section 2.1 'What can we hear?', page 26. Focal Press. ISBN 0-240-51378-9.
http://en.wikipedia.org/wiki/Analog_recording_vs._digital_recording
Equations and Mass Relationships

A balanced chemical equation such as

4NH3(g) + 5O2(g) → 4NO(g) + 6H2O(g)     (1)

not only tells how many molecules of each kind are involved in a reaction, it also indicates the amount of each substance that is involved. Equation (1) says that 4 NH3 molecules can react with 5 O2 molecules to give 4 NO molecules and 6 H2O molecules. It also says that 4 mol NH3 would react with 5 mol O2, yielding 4 mol NO and 6 mol H2O. The balanced equation does more than this, though. It also tells us that 2 × 4 = 8 mol NH3 will react with 2 × 5 = 10 mol O2, and that ½ × 4 = 2 mol NH3 requires only ½ × 5 = 2.5 mol O2. In other words, the equation indicates that exactly 5 mol O2 must react for every 4 mol NH3 consumed. For the purpose of calculating how much O2 is required to react with a certain amount of NH3, therefore, the significant information contained in Eq. (1) is the ratio

5 mol O2 / 4 mol NH3

We shall call such a ratio derived from a balanced chemical equation a stoichiometric ratio and give it the symbol S. Thus, for Eq. (1),

S(O2/NH3) = 5 mol O2 / 4 mol NH3     (2)

The word stoichiometric comes from the Greek words stoicheion, "element," and metron, "measure." Hence the stoichiometric ratio measures one element (or compound) against another.

EXAMPLE 1 Derive all possible stoichiometric ratios from Eq. (1).

Solution Any ratio of amounts of substance given by coefficients in the equation may be used, for example S(NH3/O2) = 4 mol NH3 / 5 mol O2, S(NH3/NO) = 4 mol NH3 / 4 mol NO, S(NH3/H2O) = 4 mol NH3 / 6 mol H2O, S(O2/NO) = 5 mol O2 / 4 mol NO, S(O2/H2O) = 5 mol O2 / 6 mol H2O, and S(NO/H2O) = 4 mol NO / 6 mol H2O. There are six more stoichiometric ratios, each of which is the reciprocal of one of these. [Eq. (2) gives one of them.]

When any chemical reaction occurs, the amounts of substances consumed or produced are related by the appropriate stoichiometric ratios. Using Eq. (1) as an example, this means that the ratio of the amount of O2 consumed to the amount of NH3 consumed must be the stoichiometric ratio S(O2/NH3):

n(O2 consumed) / n(NH3 consumed) = S(O2/NH3) = 5 mol O2 / 4 mol NH3

Similarly, the ratio of the amount of H2O produced to the amount of NH3 consumed must be

n(H2O produced) / n(NH3 consumed) = S(H2O/NH3) = 6 mol H2O / 4 mol NH3

In general we can say that

amount of X consumed or produced / amount of Y consumed or produced = stoichiometric ratio (X/Y)     (3a)

or, in symbols,

n(X consumed or produced) / n(Y consumed or produced) = S(X/Y)     (3b)

Note that in the word Eq. (3a) and the symbolic Eq. (3b), X and Y may represent any reactant or any product in the balanced chemical equation from which the stoichiometric ratio was derived. No matter how much of each reactant we have, the amounts of reactants consumed and the amounts of products produced will be in the appropriate stoichiometric ratios.
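As a quick illustration of Eq. (3b) in code (a sketch of mine, not part of the original text), the amount of O2 consumed for any amount of NH3 is just a multiplication by the ratio S(O2/NH3) from Eq. (2); the three test amounts reproduce the figures quoted above.

#include <cstdio>

int main()
{
    const double S_O2_per_NH3 = 5.0 / 4.0;     // S(O2/NH3) from Eq. (2)
    for (double nNH3 : {4.0, 8.0, 2.0})        // mol NH3 consumed
        std::printf("%.1f mol NH3 consumes %.2f mol O2\n", nNH3, nNH3 * S_O2_per_NH3);
    return 0;   // prints 5, 10 and 2.5 mol O2 respectively
}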
EXAMPLE 2 Find the amount of water produced when 3.68 mol NH3 is consumed according to Eq. (1).

Solution The amount of water produced must be in the stoichiometric ratio S(H2O/NH3) to the amount of ammonia consumed:

n(H2O produced) / n(NH3 consumed) = S(H2O/NH3)

Multiplying both sides by n(NH3 consumed), we have

n(H2O produced) = n(NH3 consumed) × S(H2O/NH3) = 3.68 mol NH3 × (6 mol H2O / 4 mol NH3) = 5.52 mol H2O

This is a typical illustration of the use of a stoichiometric ratio as a conversion factor. Example 2 is analogous to Examples 1 and 2 from Conversion Factors and Functions, where density was employed as a conversion factor between mass and volume. Example 2 is also analogous to Examples 2.4 and 2.6, in which the Avogadro constant and the molar mass were used as conversion factors. As in these previous cases, there is no need to memorize or do algebraic manipulations with Eq. (3) when using the stoichiometric ratio. Simply remember that the coefficients in a balanced chemical equation give stoichiometric ratios, and that the proper choice results in cancellation of units. In road-map form:

amount of NH3 consumed → (via S(H2O/NH3)) → amount of H2O produced

When using stoichiometric ratios, be sure you always indicate moles of what. You can only cancel moles of the same substance. In other words, 1 mol NH3 cancels 1 mol NH3 but does not cancel 1 mol H2O. The next example shows that stoichiometric ratios are also useful in problems involving the mass of a reactant or product.

EXAMPLE 3 Calculate the mass of sulfur dioxide (SO2) produced when 3.84 mol O2 is reacted with FeS2 according to the equation

4 FeS2 + 11 O2 → 2 Fe2O3 + 8 SO2

Solution The problem asks that we calculate the mass of SO2 produced. As we learned in Example 2 of The Molar Mass, the molar mass can be used to convert from the amount of SO2 to the mass of SO2. Therefore this problem in effect is asking that we calculate the amount of SO2 produced from the amount of O2 consumed. This is the same kind of problem as Example 2. It requires the stoichiometric ratio

S(SO2/O2) = 8 mol SO2 / 11 mol O2

The amount of SO2 produced is then

n(SO2 produced) = 3.84 mol O2 × (8 mol SO2 / 11 mol O2) = 2.79 mol SO2

The mass of SO2 is

m(SO2) = 2.79 mol SO2 × 64.06 g/mol = 179 g SO2

With practice this kind of problem can be solved in one step by concentrating on the units. The appropriate stoichiometric ratio will convert moles of O2 to moles of SO2 and the molar mass will convert moles of SO2 to grams of SO2. A schematic road map for the one-step calculation can be written as

amount of O2 → (via S(SO2/O2)) → amount of SO2 → (via molar mass of SO2) → mass of SO2

These calculations can be organized as a table, with entries below the respective reactants and products in the chemical equation. You may verify the additional calculations.

4 FeS2 + 11 O2 → 2 Fe2O3 + 8 SO2

The chemical reaction in this example is of environmental interest. Iron pyrite (FeS2) is often an impurity in coal, and so burning this fuel in a power plant produces sulfur dioxide (SO2), a major air pollutant. Our next example also involves burning a fuel and its effect on the atmosphere.

EXAMPLE 4 What mass of oxygen would be consumed when 3.3 × 10^15 g, 3.3 Pg (petagrams), of octane (C8H18) is burned to produce CO2 and H2O?

Solution First, write a balanced equation:

2 C8H18 + 25 O2 → 16 CO2 + 18 H2O

The problem gives the mass of C8H18 burned and asks for the mass of O2 required to combine with it.
Thinking the problem through before trying to solve it, we realize that the molar mass of octane could be used to calculate the amount of octane consumed. Then we need a stoichiometric ratio to get the amount of O2 consumed. Finally, the molar mass of O2 permits calculation of the mass of O2. Symbolically

m(C8H18) → (via molar mass of C8H18) → n(C8H18) → (via S(O2/C8H18)) → n(O2) → (via molar mass of O2) → m(O2)

m(O2) = 3.3 × 10^15 g C8H18 × (1 mol C8H18 / 114 g C8H18) × (25 mol O2 / 2 mol C8H18) × (32 g O2 / 1 mol O2) ≈ 1.2 × 10^16 g O2

Thus 12 Pg (petagrams) of O2 would be needed. The large mass of oxygen obtained in this example is an estimate of how much O2 is removed from the earth's atmosphere each year by human activities. Octane, a component of gasoline, was chosen to represent coal, gas, and other fossil fuels. Fortunately, the total mass of oxygen in the air (1.2 × 10^21 g) is much larger than the yearly consumption. If we were to go on burning fuel at the present rate, it would take about 100 000 years to use up all the O2. Actually we will consume the fossil fuels long before that! One of the least of our environmental worries is running out of atmospheric oxygen.
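The same road map is easy to wrap up as a single function. The sketch below is my own illustration (with rounded molar masses assumed, 114 g/mol for octane and 32 g/mol for O2); the call reproduces the octane result above.

#include <cstdio>

// mass of X -> moles of X -> moles of Y (via the stoichiometric ratio) -> mass of Y
double massToMass(double massX, double molarMassX, double coeffX, double coeffY, double molarMassY)
{
    const double molesX = massX / molarMassX;
    const double molesY = molesX * coeffY / coeffX;
    return molesY * molarMassY;
}

int main()
{
    // 2 C8H18 + 25 O2 -> 16 CO2 + 18 H2O
    const double massO2 = massToMass(3.3e15, 114.0, 2.0, 25.0, 32.0);
    std::printf("O2 consumed = %.2g g (about 12 Pg)\n", massO2);
    return 0;
}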
http://chempaths.chemeddl.org/services/chempaths/?q=book/General%20Chemistry%20Textbook/Using%20Chemical%20Equations%20in%20Calculations/1198/equations-and-mass-rel
iOS Lesson 02 - First Triangle

After the last lesson, which laid out the basic app structure, we now start with the actual drawing in Lesson 02. Here's a screenshot of what we'll do:

There are several steps performed in drawing geometry to the screen with OpenGL. First we need to tell OpenGL what geometry to draw. This is usually a bunch of triangles specified by 3 vertices each. OpenGL then invokes a vertex shader for every single vertex, which is used to transform the geometry according to rotations and translation, or for simple lighting. The resulting surfaces are then rasterized, which means OpenGL figures out which pixels are covered by each surface. For every covered pixel OpenGL now invokes a fragment shader (if we use multisampling that can happen more than once per pixel, hence it's called a fragment and not a pixel, for generality). The task of the fragment shader is to determine the fragment's color by applying colors, lighting or mapping images onto the geometry. Thus this lesson covers two things: how to draw a triangle and what a simple shader looks like. It's probably best if you download the source code and have a look at it while we go through.

Geometry can be specified as several kinds of geometric primitive: GL_POINTS, GL_LINES or GL_TRIANGLES, where lines and triangles have several variations for drawing a chain or strip of them. Nevertheless, each of these primitives is made up from a set of vertices. A vertex is a point in three dimensional space with a coordinate system where x points to the right, y points upward, and z points out of the display toward the viewer's eye. We will talk about that a lot more in depth when we come to the translations, rotations and projections lesson. But for now it is important that we need 3 floats for the position x, y, and z, and an additional 4th component w to facilitate linear transformations (this is due to matrix multiplications). At the end of the vertex shader every vertex on the screen has to be within a cube ranging from -1.0 to 1.0 in all directions x, y, and z. We don't bother doing complex transformations yet and so we choose our positions to match these criteria. We just store the vertices as a consecutive array of float values in our Lesson02::init method like this:

//create a triangle
//4 floats define one vertex (x, y, z and w), first one is lower left
geometryData.push_back(-0.5); geometryData.push_back(-0.5); geometryData.push_back(0.0); geometryData.push_back(1.0);
//we go counter clockwise, so lower right vertex next
geometryData.push_back( 0.5); geometryData.push_back(-0.5); geometryData.push_back(0.0); geometryData.push_back(1.0);
//top vertex is last
geometryData.push_back( 0.0); geometryData.push_back( 0.5); geometryData.push_back(0.0); geometryData.push_back(1.0);

As you perhaps noticed, we are specifying the vertices in counter clockwise order. This helps OpenGL to discard geometry that we cannot see anyway, by defining that only triangles with counter clockwise vertex order are facing towards us (so the surface is one sided). Imagine you would turn this triangle around: as soon as you go further than 90 degrees, the order of the vertices flips!

Ok, now we have an array of float values. How do we send them to OpenGL? For that we employ a concept called Vertex Buffer Objects (VBO). This basically means allocating a buffer in the video memory and moving the geometry data over there. Then when we draw our geometry it already resides on the chip, thus reducing the bandwidth requirements per frame.
This works fine for geometry that does not deform but is static throughout the application's runtime. As soon as the topology of the geometry changes, the new data has to be copied to the buffer again. All other kinds of transformations like translation, rotation, and scaling can easily be done in the vertex shader without changing the buffer contents.

//generate an ID for our geometry buffer in the video memory and make it the active one
glGenBuffers(1, &m_geometryBuffer);
glBindBuffer(GL_ARRAY_BUFFER, m_geometryBuffer);
//send the data to the video memory
glBufferData(GL_ARRAY_BUFFER, geometryData.size() * sizeof(float), &geometryData[0], GL_STATIC_DRAW);

The first step is to generate such a buffer in video memory. This is done by the call to glGenBuffers, where the parameters are the number of buffers to allocate, and a pointer to the variable which will hold the id of the buffer. m_geometryBuffer is defined as an unsigned int in the header file. Then we bind the buffer so that all following buffer operations are performed on our geometry buffer. GL_ARRAY_BUFFER means that we store plain data in there, and is used always except when we want to index into another buffer. Finally, we send the geometry data to our VBO using glBufferData. The parameters are the type again, then how many bytes we want to copy, a pointer to the data (for a std::vector that is the address of the first element, not of the vector object itself), and a usage hint to allow some optimization of memory usage by the graphics chip. For static geometry this is usually GL_STATIC_DRAW, but might be GL_DYNAMIC_DRAW if we were to upload new data regularly.

Right now we just pass over geometry data, and we could draw a triangle in a solid color. But we can use the same methodology for other per-vertex attributes as well; for instance we can specify a color for each vertex. Color values in most graphical applications are specified as intensities in the red, green, and blue channels. These range from 0 to 255 in most image formats, but as we deal with floating point numbers on the graphics chip, the intensities are specified in the range from 0 to 1 here. The same concept as for the geometry data applies to the color data as well:

//create a color buffer, to make our triangle look pretty
//3 floats define one color value (red, green and blue) with 0 no intensity and 1 full intensity
//each color triplet is assigned to the vertex at the same position in the buffer, so first color -> first vertex
//first vertex is red
colorData.push_back(1.0); colorData.push_back(0.0); colorData.push_back(0.0);
//lower right vertex is green
colorData.push_back(0.0); colorData.push_back(1.0); colorData.push_back(0.0);
//top vertex is blue
colorData.push_back(0.0); colorData.push_back(0.0); colorData.push_back(1.0);
//generate an ID for the color buffer in the video memory and make it the active one
//send the data to the video memory
glBufferData(GL_ARRAY_BUFFER, colorData.size() * sizeof(float), &colorData[0], GL_STATIC_DRAW);

Alright, now that our data is in the video memory, we just need to display it. This goes hand in hand with telling OpenGL about the shader program we want to use. We won't go into detail about how the loading of a shader works as it is a mere technicality. The Shader class is pretty well documented though for those who want to know the details. Let me know in the forums if there's a need for further explanations.
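One note on the color buffer: the matching glGenBuffers/glBindBuffer calls appear to have been dropped from this copy of the lesson. Assuming a second member variable for the color VBO (here called m_colorBuffer, a name not given in the text), the setup would presumably mirror the geometry buffer:

//generate an ID for the color buffer in the video memory and make it the active one (m_colorBuffer is an assumed name)
glGenBuffers(1, &m_colorBuffer);
glBindBuffer(GL_ARRAY_BUFFER, m_colorBuffer);
//send the data to the video memory
glBufferData(GL_ARRAY_BUFFER, colorData.size() * sizeof(float), &colorData[0], GL_STATIC_DRAW);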
//load our shader
m_shader = new Shader("shader.vert", "shader.frag");
if(!m_shader->compileAndLink())
    NSLog(@"Encountered problems when loading shader, application will crash...");
//tell OpenGL to use this shader for all coming rendering
glUseProgram(m_shader->getProgram());

What is basically happening in this bit of code is that we create an object of our class Shader and tell it which files to use as input. As we covered earlier, there is a vertex shader and a fragment shader stage. They both get their own file, which we'll look at in the next section, and then they are loaded, compiled and linked into a shader program, which is a vertex shader and fragment shader pair. That happens in the compileAndLink() method, which returns false on failure. Finally we tell OpenGL to use the shader program that we just generated by invoking glUseProgram. That was easy. Now how do we plug the geometry and color data into this shader?

A vertex shader can cope with several input values per vertex, as stated earlier when we added the color buffer. These values are called vertex attributes. From within the shader these attributes are accessed by globally defining them as e.g. attribute vec4 position; which means we have an attribute called position that is specified as a vector of 4 float values. From the application's point of view, we now need to be able to say that the data in our geometry buffer has to be assigned to an attribute called "position", and the color buffer maps to "color". We can use whatever names we like, but let's stick with those for this lesson.

//get the attachment points for the attributes position and color
m_positionLocation = glGetAttribLocation(m_shader->getProgram(), "position");
m_colorLocation = glGetAttribLocation(m_shader->getProgram(), "color");
//check that the locations are valid, negative value means invalid
if(m_positionLocation < 0 || m_colorLocation < 0)
    NSLog(@"Could not query attribute locations");
//enable these attributes

OpenGL indexes all attributes in a shader program when it is linked, and by calling glGetAttribLocation with the shader program and the name of the attribute in the shader source code we can query that index and store it. m_positionLocation and m_colorLocation are ints, because any invalid attribute name yields a response of -1. Lastly we have to enable passing data to these attributes through glEnableVertexAttribArray.
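The enabling calls that the "//enable these attributes" comment refers to are not shown in this copy of the lesson; based on the surrounding text they are presumably just:

//enable passing data to the two attributes we just located
glEnableVertexAttribArray(m_positionLocation);
glEnableVertexAttribArray(m_colorLocation);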
Up to now we have just added all these steps to our application initialization method. But the actual drawing has to happen every frame. So in our Lesson02::draw method we now have to map our buffered data to the attributes and finally invoke the drawing.

//bind the geometry VBO
glBindBuffer(GL_ARRAY_BUFFER, m_geometryBuffer);
//point the position attribute to this buffer, being tuples of 4 floats for each vertex
glVertexAttribPointer(m_positionLocation, 4, GL_FLOAT, GL_FALSE, 0, NULL);

To map the data, we first bind the geometry buffer to make it active, as we did before when we populated it with data. Then we tell OpenGL that this buffer is the place to take the data for the position attribute from. glVertexAttribPointer does exactly that, and expects the attribute location, how many values make up one element (here 4 floats make one vertex), the data type (float), whether the values should be normalized (usually not), a stride that is used if we interleave data (we don't, so 0), and a last parameter that can be a pointer to an array which would be copied over to the graphics card every frame. But we can also pass NULL, which tells OpenGL to use the currently bound buffer contents.

//bind the color VBO
//this attribute is only 3 floats per vertex
glVertexAttribPointer(m_colorLocation, 3, GL_FLOAT, GL_FALSE, 0, NULL);

The procedure for the color buffer looks the same. Now we're done preparing everything and can draw our triangle:

//initiate the drawing process, we want a triangle, start at index 0 and draw 3 vertices
glDrawArrays(GL_TRIANGLES, 0, 3);

The call to glDrawArrays takes 3 arguments: the primitive we want to draw (can be one of GL_POINTS, GL_LINE_STRIP, GL_LINE_LOOP, GL_LINES, GL_TRIANGLE_STRIP, GL_TRIANGLE_FAN, and GL_TRIANGLES), the offset to the first element in the buffer, and how many vertices we are going to draw. If we were to add another triangle with 3 new vertices to both the geometry and color buffer and wanted to draw both, we would just increase the number of vertices from 3 to 6 and would get two triangles here!

Yay! We've actually drawn the first triangle to the screen!

OpenGL Shading Language

But we didn't look at what happens on the graphics chip yet. The heart of what is called a programmable graphics pipeline are the vertex and fragment shaders, which are specified in the OpenGL ES Shading Language, or GLSL. This looks pretty close to C programming, but has some different data types with a focus on vectors with 2, 3, or 4 components. For float data these vectors are vec2, vec3 and vec4. We are not going to look at all the details of GLSL here, but focus on the simple example for now. A really good and extensive help for coding GLSL are pages 3 and 4 of the quick reference card which can be found here: http://www.khronos.org/opengles/sdk/2.0/docs/reference_cards/OpenGL-ES-2_0-Reference-card.pdf

So what does our vertex shader shader.vert look like? We already talked about the attributes which serve as input:

//the incoming vertex' position
attribute vec4 position;
//and its color
attribute vec3 color;

There we see a 4-component vector for the position and a 3-component one for the color in action. But we actually don't need the color in the vertex shader, we need it in the fragment shader. To pass values from the vertex to the fragment shader they have to be defined globally with the varying qualifier.

//the varying statement tells the shader pipeline that this variable
//has to be passed on to the next stage (so the fragment shader)
varying lowp vec3 colorVarying;

The name colorVarying might sound a bit weird, but it serves a demonstrational purpose. Remember that the vertex shader is called for each vertex, so 3 times for the triangle, but the fragment shader is executed for every covered pixel. The trick is that the value in a varying variable is linearly interpolated in each component. This is why our resulting triangle has these smooth color gradients. The keyword lowp stands for low precision. In GLSL we need to specify a precision for our variables, being either highp, mediump or lowp. For color values precision is not crucial, so we're happy with low precision. For vertex positions you will usually use high precision to prevent weird artifacts at boundaries.

//the shader entry point is the main method
colorVarying = color; //save the color for the fragment shader
gl_Position = position; //copy the position

When the shader is executed, its main() method is called. In the main method we pass on the color value, and copy the position attribute into the predefined output variable gl_Position, which specifies the position used for rasterization.
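Putting the pieces together, the complete shader.vert presumably reads as follows; the main() wrapper is missing from the excerpts above, so treat this as a reconstruction rather than a verbatim listing:

//the incoming vertex' position and color
attribute vec4 position;
attribute vec3 color;

//passed on to (and interpolated for) the fragment shader
varying lowp vec3 colorVarying;

void main()
{
    colorVarying = color;    //save the color for the fragment shader
    gl_Position = position;  //copy the position used for rasterization
}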
Finally the fragment shader shader.frag uses the passed on color value to store it in the fragment shader specific output variable gl_FragColor. As its main method is invoked for every pixel, by setting the color for every covered fragment we get a nicely colored shape on the screen. //incoming values from the vertex shader stage. //if the vertices of a primitive have different values, they are interpolated! varying lowp vec3 colorVarying; //create a vec4 from the vec3 by padding a 1.0 for alpha //and assign that color to be this fragment's color gl_FragColor = vec4(colorVarying, 1.0); gl_FragColor is a vec4 so we need to pad a value for alpha at the end. The alpha channel defines the opacity of the color. We want our triangle to be fully opaque, so 1.0. Note: GLSL does not allow implicit type casting from int to float, meaning you cannot just write a "1" here but have to type "1.0"! With that I conclude this lesson and leave it up to you to play around with the colors, the vertex positions, and adding more triangles. Make sure you understand these basic principles!
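As a footnote to the lesson, here is the whole of shader.frag as it presumably reads once the implied main() wrapper (omitted in the excerpts above) is restored; again a reconstruction, not a verbatim listing:

//incoming value from the vertex shader stage, interpolated across the triangle
varying lowp vec3 colorVarying;

void main()
{
    //create a vec4 from the vec3 by padding a 1.0 for alpha (fully opaque)
    gl_FragColor = vec4(colorVarying, 1.0);
}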
http://nehe.gamedev.net/tutorial/ios_lesson_02__first_triangle/50001/
Centripetal force, weight, stress

by Donald E. Simanek

The acceleration of a body is the change in its velocity divided by the time duration of that change, a = ΔV/Δt. Since velocity is a vector, any change in its size or direction, or both, requires acceleration. A body moving in a curved path is accelerating, even if its speed is constant. Therefore, from Newton's law, we know that the net force on it is non-zero.

The diagram shows a body moving with constant speed V on a curved path of radius R. Two positions are shown, during which the body has moved a distance S along an arc. At the beginning of the time interval the body's velocity is V1. At the end of the interval it is V2. We give them distinguishing subscripts because they have different directions, even though they have the same size. During that time the body moves through an angle α. At the right we show a vector diagram of the relation between these velocities and their vector difference, ΔV.

Now consider the limiting case as the time interval gets very small, approaching zero. The angle approaches zero also. The diagrams have two similar, very skinny triangles. We can write:

ΔS / R = ΔV / V

So: V ΔS = R ΔV, and: V (ΔS / Δt) = R (ΔV / Δt).

But V = ΔS / Δt and a = ΔV / Δt, so we can write V² = R a, which becomes a = V²/R.

This is the size of the centripetal acceleration vector. The direction of the centripetal acceleration vector is inward toward the center of the arc at any instant, and is therefore perpendicular to the velocity vector at that instant, which is always tangent to the curve. We can associate this acceleration with the inward (radial) component of whatever net force happens to be acting on the body (of mass m). We call that component the centripetal force, with size Fcentripetal = ma = mV²/R. It, too, is a vector directed radially inward. Centripetal force is not some new kind of force, but just a convenient name for the radial component of the sum of all of the real forces acting on the body.

A fuller treatment would show that this definition of centripetal force is useful for any kind of motion along a curve of any sort. It is not restricted to circles. Nor is it restricted to constant speed along the path. This works because any physical path is such that a small enough portion of it approximates a circular arc very well. In calculus, we take the limit as the arc becomes zero length, and speak of the relation between the instantaneous radius of the arc at that point on the curve, the instantaneous velocity at that point and the instantaneous acceleration there. The relation still turns out to be a = V²/R. And if the net force has a component tangent to the arc, that causes an increase or decrease in the body's speed along that arc. For a calculus treatment of this and many other physics topics, consult Jess H. Brewer's excellent physics tutorial, The Skeptic's Guide to Physics. See the celestial mechanics chapters.
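The limiting argument can also be checked numerically. The short sketch below (in C++, with arbitrarily chosen values for V and R, which are my own) shows |ΔV|/Δt approaching V²/R as the time interval shrinks:

#include <cmath>
#include <cstdio>

int main()
{
    const double V = 10.0, R = 5.0;   // example values: speed in m/s, radius in m
    for (double dt : {0.1, 0.01, 0.001}) {
        const double angle = V * dt / R;                     // angle swept during dt
        const double dV = 2.0 * V * std::sin(angle / 2.0);   // size of the velocity change
        std::printf("dt = %.3f s: |dV|/dt = %.4f, V*V/R = %.4f\n", dt, dV / dt, V * V / R);
    }
    return 0;   // the ratio |dV|/dt converges to V*V/R = 20
}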
We will assume the mass of the spring itself is negligible compared to the mass of the two blocks of matter. Therefore the spring tension exerts forces of equal size on each of them, as shown. F2 and F1 represent the gravitational forces on the upper and lower blocks. In equilibrium the lower block must experience an upward force of F1 + F2 (from whatever supports it from below). If these were equal in size, then for equilibrium, the tension T would also be that same size. Now suppose that the upper block were made smaller in mass. The tension in the spring must get proportionally smaller to achieve static equilibrium. Therefore the spring must increase in length, separating the two blocks a bit. The same thing would happen if some additional non-uniform gravitational field were applied to the system, such that it exerted a greater downward force on the lower block than it did on the upper one. The tension in the spring would decrease and its length would increase. But what if such an external field caused a stronger upward force on the upper block than on the lower one? Again, the tension in the spring would decrease and its length would increase. This becomes important when one discusses Earth tides due to non-uniform gravitational fields from the Moon.

The figure shows a person of mass m standing on a bathroom scale on the surface of the Earth. The scale exerts a force of size W upward on his feet. We call this scale reading the "weight" of the man. If the Earth were stationary, the man would be in equilibrium, and we would have W = mg, where mg is the gravitational force on the man. Think of the scale as like the spring between the masses. It responds to the stress between feet and floor. The size of the man relative to the Earth's radius is greatly exaggerated in this diagram. The relative sizes of the forces are also.

Newton's law tells us that when a body of mass m has an acceleration a, the net force on the body must be F = ma. So the size of the net force on this man must be F = ma = mv²/R. The figure shows the force vectors W and mg, which are the only forces acting on the man. The vector F is their sum, directed along the radius of the Earth. Being the radial component of the net force (it is the entire net force in this case), its size is F = ma = mv²/R (the centripetal force).

Now compare these two cases. On the non-rotating Earth the man's weight was of size mg. Remember, the weight of an object is the force required to support it, i.e., the force exerted upward by the weighing scale. With the Earth rotating, that force is smaller than before. The contact force between the man's feet and the scale is reduced. But all other such stress forces are reduced as well, within the man, within the scale's springs, within the body of the Earth itself. This causes a slight decompression of these materials, a relaxation of the spring in the scales. In fact, the entire body of the earth expands slightly and the man and scale move outward from the axis of rotation slightly, until forces come into balance with the requirements of rotational stability at the new radius. This is the reason for the equatorial bulge of the Earth due to its own axial rotation.

As we said, the diagrams are exaggerated. A simple calculation shows that the centripetal acceleration is only about 0.3% of g. So the net force is only about 0.3% of W. The man's weight (registered on the scale) is about 0.3% smaller than it would be at one of the Earth's poles. The resulting relaxation of stress in materials is the reason for the equatorial bulge of the Earth, making the equatorial radius 43 kilometers greater than the polar radius.
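For the record, the "simple calculation" mentioned above can be sketched as follows; the rounded constants are values I have assumed, and the result comes out at roughly 0.3% of g, consistent with the figure quoted:

#include <cstdio>

int main()
{
    const double PI = 3.14159265358979;
    const double R = 6.378e6;      // equatorial radius of the Earth, m (assumed)
    const double T = 86164.0;      // sidereal rotation period, s (assumed)
    const double g = 9.81;         // m/s^2
    const double v = 2.0 * PI * R / T;   // speed of a point on the equator
    const double a = v * v / R;          // centripetal acceleration
    std::printf("a = %.3f m/s^2, which is %.2f%% of g\n", a, 100.0 * a / g);
    return 0;
}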
Suppose someone says that the reduction in weight is solely due to the increased radius of the equator and the slightly smaller gravitational field at the larger radius. Then if the earth were perfectly rigid and there were no equatorial bulge, you would logically conclude from this that since the radius doesn't change, then the weight (registered on the scale) doesn't change. But this happens to be false. The weight would decrease, as it must, from the fact of the radial acceleration due to the circular motion and Newton's law F = ma. If someone says that the reduction in weight is due to the decreased stress caused by the stretching of the materials of the Earth, scale and man, we must agree, for that's what the scale mechanism measures, and that's our definition of weight. But this stress reduction is also the reason for the stretch of the earth, and for the increased radius at the equator. And this in turn does decrease the gravitational force due to the Earth at that larger distance. The bottom line is that the gravitational force decreases and the stress decreases, but not in proportion. To say just one of these things is the "cause" of any one of the others is just too simplistic to be useful. And to try to "get by" explaining the equatorial bulge without mentioning stress reduction in materials is somewhat of a cheat.

But inertial frames are very rare. We do encounter many situations where "for all practical purposes" a system seems to be an inertial frame. On the surface of the earth, a laboratory can often be considered an inertial frame, for the net acceleration of the room as it is carried around by the earth's axial rotation and its revolution around the sun, and by the solar system's motion in the galaxy, etc., is very small compared to the larger accelerations we are studying. We do include the effect of axial rotation by "correcting" the gravitational force to include the centripetal acceleration. And if we are dealing with very large scale motions of air and water, we must also include the Coriolis effects due to the acceleration of our reference frame. What this boils down to is that if the net real force on a body in our reference frame satisfies Fnet,real = ma + Fextra, we try to identify that extra force term, subtract it from the "real" forces, and go ahead and use this "corrected" form of Newton's laws. This is equivalent to defining -Fextra as a "fictitious" force, adding that to the real force term, and thereby preserving Newton's law, using it "as if" we were doing the problem in an inertial frame.

In a frame of reference centered at the earth's center and rotating with the earth, one of these fictitious forces on a body at the earth's surface is called the "centrifugal" force. It is just the negative of the centripetal force on that body. It is directed outward from the earth's center. Other important fictitious forces at the earth's surface are the Coriolis forces, significant when dealing with huge masses of air or water in meteorology, oceanography, and long range ballistics.

This lengthy preamble is leading up to an often-asked question. "When we use the term centripetal force, does that mean we are doing the problem in a non-inertial frame?" No, it doesn't. Quite the contrary, for the centripetal force on an object is just the radial component of the real force on it. "Centripetal force" is used primarily when we are doing a problem in which something is moving in an orbit around a fixed point, and we are using a fixed (inertial) coordinate frame centered on that point. We can know we are using such a frame if the vector sum of all real forces acting on a body fixed in that reference frame does add to zero. Planetary orbits are a good example. Generally in these cases, we are using an inertial polar coordinate frame of reference.
But when we choose to fix our coordinate system on a body we know is accelerating, so that the coordinates are also accelerating (perhaps undergoing both rotation and linear acceleration), then we find that the real forces we measure on a body fixed in that (moving) frame do not add to zero. Much of the confusion about tides stems from failure to specify whether we assume an inertial frame of reference or an accelerating frame of reference. It can also arise from forgetting that inertial frames can be either cartesian or polar. Polar coordinates are not limited to rotating frames. Wikipedia has two pages worth looking at: Centrifugal force. Be sure to take the link to "Centrifugal force (rotating reference frame)".

Centripetal force is defined to be the radial component of the net force on a body when the body's position is represented in a polar coordinate system (the coordinates being radius from a fixed center and angle from a fixed reference angle). It is just a label to distinguish the radial component from the tangential component of the net force. But it is a very useful name, for the radial component figures into a very useful equation: F = mv²/r. Centripetal force, being a component of the net force on a body, is a "real" force. Real forces include forces due to other material objects: gravitation, electric attraction and repulsion, magnetic forces, contact forces (deformation from contact, and also forces due to friction).

In high school textbooks one sometimes sees centrifugal force defined as the reaction to the centripetal force. I deplore this with a passion, for it is not necessary and not useful for anything. Furthermore it causes anguish and confusion when students go to college and learn the definition of centrifugal force used when dealing with rotating frames of reference. Rotating reference frames are useful, employed by mechanical engineers and astronomers or anyone who must do actual calculations in rotating reference frames. Centrifugal force is a fictitious force used to simplify such calculations. But this approach is seldom used in introductory courses, so it would be best not to mention it. Yet it is appealing to naive students (and to some textbook authors) because it seems to correspond to the feeling of being "thrown outward" when one is on a rotating platform, such as a carousel, or in an automobile going around a tight curve. It's a "feel-good" explanation for students who will not have to actually do any calculations using the concept. Actually, what a person feels in a situation such as this is the result of the car exerting a force perpendicular to your motion that causes your own motion to depart from straight line motion into a curved path. It is a physiological feeling. Centrifugal force is not a force due to any other physical object. That's why we call it a "fictitious force". Any problem done in a rotating coordinate system can be done in a fixed coordinate system without using the centrifugal force concept. The results, of course, must be identical.

A polar coordinate system can be a fixed, inertial coordinate system. A rotating coordinate system can be either polar or cartesian. "Polar" and "rotating" are not synonyms. If your chosen coordinates are not rotating, the word "centrifugal" is not appropriate.
If you don't know whether your coordinates are rotating, then you need to study the basics to find out, for if you proceed without knowing that fact you have a 50-50 chance of messing up the problem, and any attempt to discuss it will result in confusion for everyone. Newton's F = ma applies only in inertial (fixed, non-accelerating) coordinate systems. Put another way, the word "inertial" applies to any coordinate system in which Newton's law is correct. It also means "any coordinate system that isn't accelerating". A coordinate system moving with acceleration is a non-inertial system. When you do a problem in a rotating non-inertial system, you must modify Newton's law to read Freal + Fcentrifugal = ma, where Fcentrifugal is a fictitious "correction" force to compensate for using a non-inertial coordinate system, and is not a force due to the physical influence of other real objects.

Many discussions I see of this on the internet are ill-considered opinions that people offer without having looked at the broader picture. Indeed, many teachers have not had sufficient exposure to this in college and university. It generally occupies a chapter in a university classical mechanics course, titled "Non-inertial systems". A good treatment can be found in the excellent (and classic) book "Classical Dynamics of Particles and Systems" by Jerry Marion (Academic Press, 1965, and later editions revised by Davidson), Chapter 12, "Motion in a Non-Inertial Reference Frame". Any good university library should have this. In the United States (I hate to admit) most high school teachers have never taken such a course, so they are entirely innocent of the standard methods for doing problems in the non-inertial frame of the earth (necessary in ocean hydrodynamics and meteorology as well as in the long range ballistics of rockets, missiles and spacecraft) and in astrophysics. Engineers need these methods when dealing with gyroscopic effects.
http://www.lhup.edu/~dsimanek/scenario/centrip.htm
Reacting masses and ratios in chemical calculations (not using moles). You can use the ideas of relative atomic, molecular or formula mass AND the law of conservation of mass to do quantitative calculations in chemistry. Underneath an equation you can add the appropriate atomic or formula masses. This enables you to see what mass of what reacts with what mass of other reactants. It also allows you to predict what mass of products is formed (or to predict what is needed to make so much of a particular product). You must take into account the balancing numbers in the equation (e.g. 2Mg), as well, of course, as the numbers in the formula (e.g. O2). Notes: (1) for help in solving ratios, see Appendix 1 and also section 7 on using moles; (2) the symbol equation must be correctly balanced to get the right answer; (3) there are good reasons why, when doing a real chemical preparation-reaction to make a substance, you will not get 100% of what you theoretically calculate (see the discussion in section 14.2); (4) see 6b. for solution concentration and titration calculations based on reacting masses NOT involving moles.

Example 6a.1: 2Mg + O2 ==> 2MgO (atomic masses Mg = 24, O = 16). Converting the equation into reacting masses gives (2 x 24) + (2 x 16) ==> 2 x (24 + 16), and this gives a basic reacting mass ratio of 48g Mg + 32g O2 ==> 80g MgO. The ratio can be used, no matter what the units, to calculate and predict quite a lot, and you don't necessarily have to work out and use all the numbers in the ratio. What you must be able to do is solve a ratio! e.g. 24g Mg will make 40g MgO. Why? 24 is half of 48, so half of 80 is 40.

Example 6a.2: 2NaOH + H2SO4 ==> Na2SO4 + 2H2O (atomic masses Na = 23, O = 16, H = 1, S = 32). The mass ratio is (2 x 40) + (98) ==> (142) + (2 x 18), that is (80) + (98) ==> (142) + (36). (a) Calculate how much sodium hydroxide is needed to make 5g of sodium sulphate. From the reacting mass equation, 142g Na2SO4 is formed from 80g of NaOH, so 5g Na2SO4 is formed from 5 x 80 / 142 = 2.82g of NaOH, by scaling down from 142 to 5. (b) Calculate how much water is formed when 10g of sulphuric acid reacts. From the reacting mass equation, 98g of H2SO4 forms 36g of H2O, so 10g of H2SO4 forms 10 x 36 / 98 = 3.67g of H2O, by scaling down from 98 to 10.

Example 6a.3: 2CuO(s) + C(s) ==> 2Cu(s) + CO2(g) (atomic masses Cu = 64, O = 16, C = 12). The formula mass ratio is 2 x (64 + 16) + (12) ==> 2 x (64) + (12 + 2 x 16), giving a reacting mass ratio of 160 + 12 ==> 128 + 44 (in the calculation, impurities are ignored). (a) In a copper smelter, how many tonne of carbon (charcoal, coke) is needed to make 16 tonne of copper? From the reacting mass equation, 12 of C makes 128 of Cu; scaling down numerically, the mass of carbon needed = 12 x 16 / 128 = 1.5 tonne of C. (b) How many tonne of copper can be made from 640 tonne of copper oxide ore? From the reacting mass equation, 160 of CuO makes 128 of Cu (or, direct from the formulae, 80 CuO ==> 64 Cu); scaling up numerically, the mass of copper formed = 128 x 640 / 160 = 512 tonne Cu.

Example 6a.4: What mass of carbon is required to reduce 20 tonne of iron(III) oxide ore if carbon monoxide is formed in the process as well as iron? (Atomic masses: Fe = 56, C = 12, O = 16.) Fe2O3 + 3C ==> 2Fe + 3CO; the formula mass of Fe2O3 = (2 x 56) + (3 x 16) = 160, so 160 mass units of iron oxide react with 3 x 12 = 36 mass units of carbon. The reacting mass ratio is therefore 160 : 36, and the ratio to solve is 20 : x; scaling down, x = 36 x 20 / 160 = 4.5 tonne of carbon needed. Fe2O3 + 3CO ==> 2Fe + 3CO2 is the other most likely reaction that reduces the iron ore to iron.
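All of the worked examples above come down to solving one ratio. The short helper below is my own sketch (the name scale_mass is mine, not from these notes); it reproduces the answers of examples 6a.2 and 6a.4.

def scale_mass(known_species_mass, known_given_mass, target_species_mass):
    """Solve the reacting-mass ratio: if known_species_mass units of one
    substance go with target_species_mass units of another in the balanced
    equation, how much target corresponds to known_given_mass?"""
    return known_given_mass * target_species_mass / known_species_mass

# Example 6a.2(a): 2NaOH + H2SO4 -> Na2SO4 + 2H2O
# 80 g NaOH forms 142 g Na2SO4; how much NaOH is needed for 5 g Na2SO4?
print(round(scale_mass(142, 5, 80), 2))    # 2.82 g NaOH

# Example 6a.2(b): 98 g H2SO4 forms 36 g H2O; how much water from 10 g acid?
print(round(scale_mass(98, 10, 36), 2))    # 3.67 g H2O

# Example 6a.4: Fe2O3 + 3C -> 2Fe + 3CO, ratio 160 : 36; carbon for 20 tonne ore?
print(round(scale_mass(160, 20, 36), 1))   # 4.5 tonne C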
Example 6a.5: (a) Theoretically, how much copper can be obtained from 2000 tonne of pure chalcopyrite ore, formula CuFeS2? Chalcopyrite is a copper-iron sulphide compound and one of the most important and common ores containing copper. Atomic masses: Cu = 64, Fe = 56 and S = 32. For every one CuFeS2, one Cu can be extracted, and the formula mass of the ore = 64 + 56 + (2 x 32) = 184. Therefore the reacting mass ratio is 184 ==> 64, so, solving the ratio, 2000 CuFeS2 ==> 2000 x 64 / 184 Cu = 695.7 tonne of copper (the maximum that can theoretically be obtained). (b) If only 670.2 tonne of pure copper is finally obtained after further purification by electrolysis, what is the % yield of the overall process? % yield = actual yield x 100 / theoretical yield = 670.2 x 100 / 695.7 = 96.3%. More on % yield in calculations section 14.1.

Example 6a.6: A sample of magnetite iron ore contains 76% of the iron oxide compound Fe3O4 and 24% of waste silicate minerals. (a) What is the maximum theoretical mass of iron that can be extracted from each tonne (1000 kg) of magnetite ore by carbon reduction? [Atomic masses: Fe = 56, C = 12 and O = 16.] The equation is Fe3O4 + 2C ==> 3Fe + 2CO2. Before doing the reacting mass calculation, you need to do a simple calculation to take into account the lack of purity of the ore: 76% of 1 tonne is 0.76 tonne (760 kg). For the reacting mass ratio, 1 Fe3O4 ==> 3 Fe (you can ignore the rest of the equation); in reacting mass units, (3 x 56) + (4 x 16) ==> 3 x 56, so from the reacting mass equation 232 Fe3O4 ==> 168 Fe, and 0.76 tonne Fe3O4 ==> x tonne Fe. Solving the ratio, x = 0.76 x 168 / 232 = 0.55 tonne Fe (550 kg) per tonne (1000 kg) of magnetite ore. (b) What is the atom economy of the carbon reduction reaction? You can use some of the data from part (a). % atom economy = total mass of useful product x 100 / total mass of reactants = 168 x 100 / (232 + 2 x 12) = 168 x 100 / 256 = 65.6%. (c) Will the atom economy be smaller, the same, or greater if the reduction involves carbon monoxide (CO) rather than carbon (C)? Explain. The atom economy will be smaller, because CO has a bigger molecular/reactant mass than C and 4 molecules would be needed per 'molecule' of Fe3O4, so the mass of reactants is greater for the same product mass of iron (i.e. the bottom line is numerically bigger, so the % is smaller). This is bound to be so because the carbon in CO is already chemically bound to some oxygen and can't remove as much oxygen as carbon itself. Fe3O4 + 4CO ==> 3Fe + 4CO2, so the atom economy = 168 x 100 / (232 + 4 x 28) = 48.8%. Atom economy is fully explained in calculations section 14.2b. (A short calculation sketch checking these yield and atom-economy figures appears at the end of this section.)

Another example: On analysis, a sample of hard water was found to contain 0.056 mg of calcium hydrogencarbonate per cm3 (0.056 mg/ml). If the water is boiled, calcium hydrogencarbonate, Ca(HCO3)2, decomposes to give a precipitate of calcium carbonate, CaCO3, water and carbon dioxide. (a) Give the symbol equation of the decomposition, complete with state symbols. (b) Calculate the mass of calcium carbonate in grammes deposited if 2 litres (2 dm3, 2000 cm3 or ml) is boiled in a kettle.
[Atomic masses: Ca = 40, H = 1, C = 12, O = 16.] The ratio is based on Ca(HCO3)2 ==> CaCO3; the formula masses are 162 (40 + 1x2 + 12x2 + 16x6) and 100 (40 + 12 + 16x3) respectively, so the reacting mass ratio is 162 units of Ca(HCO3)2 ==> 100 units of CaCO3. The mass of Ca(HCO3)2 in 2000 cm3 (ml) = 2000 x 0.056 = 112 mg. Solving the ratio 162 : 100 and 112 : z mg CaCO3, where z is the unknown mass of calcium carbonate, z = 112 x 100 / 162 = 69.1 mg CaCO3; since 1 g = 1000 mg, z = 69.1 / 1000 = 0.0691 g of calcium carbonate. Commenting on the result, its consequences, and why it is often referred to as 'limescale': the deposit of calcium carbonate will cause a white/grey deposit to be formed on the side of the kettle, especially on the heating element. Although 0.0691 g doesn't seem much, it will build up appreciably after many cups of tea! The precipitate is calcium carbonate, which occurs naturally as the rock limestone, which dissolved in rain water containing carbon dioxide to give the calcium hydrogencarbonate in the first place. Since the deposit of 'limestone' builds up in layers it is called 'limescale'.

This is a much more elaborate reacting mass calculation involving solution concentrations and extended ideas from the results. In this exemplar question the formulae are used a lot as shorthand. A solution of hydrochloric acid contained 7.3 g HCl/dm3. A solution of a metal hydroxide of formula MOH was prepared by dissolving 4.0 g of MOH in 250 cm3 of water. M is an unknown metal, but it is known that the ionic formula of the hydroxide is M+OH-. 25 cm3 samples of the MOH solution were pipetted into a conical flask and titrated with the hydrochloric acid solution using a burette and a few drops of phenolphthalein indicator. All the MOH is neutralised as soon as the pink indicator colour disappears (i.e. the indicator becomes colourless). On average 19.7 cm3 of the HCl acid solution was required to completely neutralise 25.0 cm3 of the MOH solution. [Atomic masses: H = 1, Cl = 35.5, O = 16, M = ?]

(a) Give the equation for the reaction between the metal hydroxide and the acid: MOH(aq) + HCl(aq) ==> MCl(aq) + H2O(l). You may or may not be required to give the state symbols in (), or you may just be asked to complete the equation given part of it. (b) Calculate the mass of HCl used in each titration: 19.7 cm3 of a solution containing 7.3 g per 1000 cm3 contains 7.3 x 19.7 / 1000 = 0.1438 g HCl. (c) Calculate the mass of MOH that reacts with the mass of HCl calculated in (b): each 25 cm3 sample contains 4.0 x 25 / 250 = 0.40 g MOH. (d) Calculate the formula mass of HCl: 1 + 35.5 = 36.5. (e) Calculate the mass in g of MOH that reacts with 36.5 g of HCl, and hence the formula mass of MOH: 0.1438 g HCl reacts with 0.40 g MOH, so 36.5 g HCl reacts with z g of MOH; solving the ratio for z, z = 36.5 x 0.40 / 0.1438 = 101.5 g MOH. Since the formula mass of HCl is 36.5 and, from the equation, 1 MOH reacts with 1 HCl, the experimental formula mass of MOH is found to be 101.5. (f) What is the atomic mass of the metal?
From the formula information on the metal hydroxide we can deduce the atomic mass of the metal: the formula mass of MOH = M + 16 + 1, so the atomic mass of M = 101.5 - 16 - 1 = 84.5. (For help in solving ratios of this kind, see Appendix 1, Solving Ratios.)
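As promised above, here is a short calculation sketch (my own, not part of the original notes) that checks the yield, atom-economy and titration figures from examples 6a.5, 6a.6 and the titration exercise, using the same rounded formula masses.

# Example 6a.5: CuFeS2 (184) -> Cu (64)
theoretical_cu = 2000 * 64 / 184
print(round(theoretical_cu, 1))                 # 695.7 tonne
print(round(670.2 * 100 / theoretical_cu, 1))   # 96.3 % yield

# Example 6a.6(b, c): atom economy for the reduction of Fe3O4 (232)
print(168 * 100 / (232 + 2 * 12))               # 65.625, i.e. ~65.6 % with carbon
print(round(168 * 100 / (232 + 4 * 28), 1))     # 48.8 % with carbon monoxide

# Titration exercise: 7.3 g/dm3 HCl, 19.7 cm3 used against 0.40 g of MOH
mass_hcl = 7.3 * 19.7 / 1000                    # g of HCl per titration
print(round(mass_hcl, 4))                       # 0.1438 g
formula_mass_moh = 36.5 * 0.40 / mass_hcl       # scale up to 36.5 g HCl
print(round(formula_mass_moh, 1))               # 101.5
print(round(formula_mass_moh - 16 - 1, 1))      # 84.5 = atomic mass of M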
http://www.docbrown.info/page04/4_73calcs06rmc.htm
Curvature of space. The curvature of space is the physical fact that the surface of a 2-sphere (our "ordinary" sphere, the surface of a 3-dimensional ball) of a given surface area may enclose a volume of space different from (4/3)πr3, where r is the radius of the 2-sphere. In our physical universe, which contains energy, such 2-spheres may enclose a larger volume of space than (4/3)πr3, and 3-space with such properties is then called positively curved, or "convex". The 2-spheres in such a 3-space contain more space than the same 2-spheres in Euclidean (flat) 3-space, which contain exactly (4/3)πr3 of 3-space. If a 2-sphere of a given surface area contains a smaller 3-volume of 3-space than (4/3)πr3, the space is negatively curved and is called "concave". Curvatures of spacetime, which is composed of space and time, have to be described by more numbers than just the amount of space contained within a 2-sphere of a given area. Luckily, our physical spacetime is never curved, since it has to be flat, by Noether's theorem, to obey conservation laws. The tensors used in physics are specifically designed to obey these laws. Therefore the physical spacetime must be flat. Being flat, it allows the space alone to be curved, and then the time compensates for its curvature, creating gravitation as described in Gravitation demystified.

According to Einstein, the space of our universe is such "curved space". One of its amazing features is that it can be closed, in the same sense as the surface of the Earth is closed (no edges). But while the surface of the Earth is closed through the third dimension, the space of the universe is already 3-dimensional and there is no 4th dimension in nature to close the space of our universe through. So we have to imagine how it works in 3 dimensions. What would happen if we traveled through the universe, never being able to leave its space, just as when traveling on the two-dimensional surface of the Earth we make a big circle around the Earth and get back to the same point? How can that happen in the "curved space" of the universe? How does the universe have to behave so that such a thing can happen while traveling through it along a straight line? Difficult to imagine, yet it is possible. Let's start by describing the physical features of curved space.

As the first part of Einsteinian Gravitation is about what happens to time in the vicinity of masses (which explains all those things that Newtonian gravitation explains, but without introducing any magical "gravitational attraction"), the second part is about what happens to space around masses. Basically, Einstein's theory states that there is more space around masses than there would be without those masses present. It means that if we make a spherical shell with a volume of, e.g., 1,000 gallons when empty, and then pour water into it, the space inside the sphere becomes bigger than before without the outside shell ever changing its size. It is difficult to believe, yet it seems to be what actually happens. The diameter of the shell as measured from the outside will be the same as it was before, but the space inside will be bigger (measured with the same rulers that we used to measure the shell from the outside). If we measured the diameter from inside, moving along a straight line through the center from one side of the shell to the other, we would find that the diameter of the shell inside is bigger than when measured from the outside. This is what is called "curved space". The space inside the sphere is bigger than the size of the sphere, seen from outside, would suggest.
We may pour into a 1,000 gallon sphere (i.e. its volume when empty), e.g., 1,001 gallons of some (very) heavy liquid, which means that the space inside became bigger by one gallon. To understand this strange thing, with the diameter of the shell being greater inside than outside (which happens to be a physical fact), we may think about it as rulers becoming shorter when put inside the shell containing mass. Just as clocks run slower in the vicinity of mass because time runs slower there (which is called time dilation), rulers become shorter in the vicinity of mass (which is called length contraction). So if our rulers are shorter inside the shell, the diameter of the shell measured from inside will be bigger. But physically the clocks all run at the same speed (maintaining local time); it is time itself which slows down in the vicinity of mass, as seen from the outside. The rulers also remain the same; there is just more space in the areas in the vicinity of mass than there would be in regular (flat, Euclidean) space, so compared to the diameter of the shell, the rulers seem shorter to us seen from the outside of the shell.

The space with such strange properties is called "curved" because the increase of volume inside our shell that contains mass is similar to the increase in surface area inside a ring when the ring is placed on the surface of a ball instead of on a flat table. The area of the surface inside a ring on a ball will be greater than the area inside the same ring placed on a flat table. We say that the surface of the ball inside the ring is curved, as opposed to the surface of the table inside the ring, which is flat. Similarly we say that the space inside our sphere is curved after mass, or energy (since all energy has mass according to the famous E = mc2), showed up inside it, as opposed to flat empty space.

Another interesting thing about the above is that the space in the vicinity of mass gets bigger (or "rulers shrink") by the same relative amount (by the same percentage) as the time slows down. This relation of time to space near masses causes light rays to bend twice as much as predicted by Newtonian gravitation. As was mentioned earlier, Newtonian gravitation only predicts accurately the gravitational effects caused by gravitational time dilation and none caused by the curvature of space. That's why it predicts only half of the deflection of a ray of light, while Einsteinian Gravitation predicts the whole angle as it is observed in the real world. And this is also how we know that the time dilation that causes half the deflection is the same as the increase in the amount of space that causes the other half of the deflection. This equality of spatial and temporal effects shows that spacetime is a more complex creature than space and time considered separately, as they are in Newtonian theory: somehow time depends on space, and space on time. It is similar to a married couple, where the woman and the man depend on each other and make a more complex structure than the two of them considered together but independent from each other.

This interdependency of time and space explains why the universe appears to be expanding. There are masses in the universe: planets, stars, galaxies and all the other observed and not yet observed junk between them. Each of them curves the space a little bit (makes more of it in its vicinity), and when we look through that increased space deeper and deeper into space we see time slowing down more and more.
Such slowing of time simulates a Doppler effect, which makes it seem as if all sources of light in the universe are moving away from us with velocities proportional to their distance from us. Just as before Einstein the behavior of time in the universe simulated the existence of a universal gravitational attraction, post-Einstein it also simulates a universal expansion. The astrophysicists prefer to insist that it is not possible to propose an explanation of the observed phenomena other than a Doppler effect, which according to them forces everybody to believe that the universe is expanding. And since "it is not possible to propose another explanation", all papers on that subject are rejected by all scientific journals without even stating a reason for rejection other than that the papers don't support the idea that the universe is expanding (small wonder). This has already lasted for over two decades. It will probably last well into this century, until some Very Important Person, whose paper nobody will dare to reject, discovers that Einstein's theory explained all of it already. Then the big bang will also disappear from the minds of astrophysicists.

Those of the readers who are interested in this subject enough to have read to this point might have noticed that this illusion of the expansion of the universe is caused by the behavior of time coupled with curved space, and that the behavior of time is reflected in Newtonian gravitation. It might give them the idea that it should therefore be possible to demonstrate, just with the Newtonian formula, that a non-expanding universe should appear to be expanding. That is indeed so, and this is what the author has done to convince astrophysicists (without much success, though) that the expansion of the universe is an illusion. It has been shown with simple Newtonian math what the observed rate of apparent expansion should be if the universe didn't actually expand, and it turns out that the result is as it is really observed. It is also the same result as would be predicted by strictly following Einstein's gravitation and the fact that the time dilation is the same as the curvature of space. Both methods derive Hubble's constant of apparent expansion as the speed of light divided by Einstein's radius of the universe. The details of the derivation of Hubble's constant for our universe are on this site in Essay:Hubble redshift in Einstein's universe. There is also an explanation for the general public, more detailed than the two paragraphs above and with full mathematical support for those who want to see how it is derived, in Gravitation demystified.

The above basically explains all the gravitational phenomena that have been observed to date. There are also some predictions of what we might encounter when we gather more data about the universe. All of them are quite interesting. One such thing is that the more mass is placed inside the shell, the more additional space there will be inside it, but the amount of space increases by a greater amount than the mass that creates that additional space. The math of this mechanism, described by Schwarzschild's solution of the Einstein Field Equations (which describe Einsteinian Gravitation), indicates that for any sphere there exists a certain amount of mass that makes that additional space infinite. If there were such a mass inside our sphere we could pour an infinite amount of water into it. Such an object, with an infinite amount of space inside, is called a black hole.
The name comes from the fact that if there were infinite space inside it, light would need an infinite amount of time to travel through it, get out of it, and come to us. So practically we would never see that light, regardless of how long we looked at that object. We would see something that does not emit light at all, and so it is perfectly black. Another interesting feature of such an object would be that it would be at an infinite distance from us. It would be so because when there is more space around a big mass, and time also slows down in this region, light needs more time to get to that mass and back. If we use radar to measure the distance to that heavy object, the photons we send to it come back after a longer time than they would if the mass of the object were small. In the case of a hypothetical black hole, that time, and therefore the distance too, would be infinite.

It is not known whether such objects as black holes exist in nature. Some scientists maintain that they can't exist (Einstein was one of them) because of those infinities they produce, like the effect of time slowing to a halt at the surface of the sphere, so that no more objects can fall into such a sphere in a reasonable time. So a real black hole couldn't be formed during the lifetime of the universe (regardless of how long that were, unless it were infinite as well). The surface of the sphere which contains enough mass to stop the flow of time is called the event horizon, since at that surface time stands still and so nothing can ever happen; no events are possible on and beyond that surface. Some scientists believe that black holes exist, but no reasonable hypothesis telling how objects may fall into them through the event horizon to form them has been proposed yet (however, unreasonable hypotheses have been proposed, and many scientists, not understanding Einsteinian gravitation, believe that those who proposed those hypotheses did check that they make sense). Consequently black holes are more popular in SF texts than in science. And often this SF is presented to the public as science by naive astrophysicists who don't understand Einsteinian gravitation but believe what they are told by experts, many of whom don't understand it either. Since we want to keep this text as close to science as possible, let's forget about black holes. There are even stranger, and observable, things in nature, so we don't need to get into SF to be baffled.

It should be mentioned that the curvature of space has very little influence on gravitational phenomena in our solar system and next to none in the vicinity of the earth. It applies mostly to the deep space of the universe. To understand what is really going on in the universe it would be good to learn what happens to its space when each mass curves it only a little bit. The main thing that happens is that if space behaves this way it has to be closed. It means that from whatever point in the universe one starts to travel in a straight line, in whichever direction, if one travels long enough along that straight line one is bound to return to the starting point. It seems unbelievable, perhaps even more so than it seemed to people who believed that the earth is flat that going due West one may return one day to the starting point from the East, never even getting to the edge of the earth. Of course we know that the two-dimensional surface of the earth is closed via the third dimension.
The three-dimensional space of the universe can't be closed via a fourth dimension, since it is easy to notice that a fourth spatial dimension does not exist (it is impossible to make four mutually perpendicular directions). So let's try to explain how our space can still be closed despite the lack of a fourth spatial dimension. While reading popular science books about the universe one may find a quasi-explanation of the closed three-dimensional curved space: that "it is the same" as the surface of the earth (or of a balloon), which is also closed, and that space has "just one dimension more", being three-dimensional, while the surface of the earth is two-dimensional. This is a good example of magical thinking, and it is worth explaining on this example so the reader may be aware of it while reading other popular texts about science, which may contain other instances of magical thinking. Magical thinking is very natural to humans, and showed up in human civilizations most likely just after the invention of language. It is thinking about the names of things rather than about the things themselves. Analogies get created based only on similarity of words or ideas. And those analogies are very often false (another name for magical thinking is false analogy), and therefore they don't really explain anything.

Since there are only three spatial dimensions, the curvature of space can't be explained with an analogy to a two-dimensional sphere that is curved into a third dimension. To explain the curvature of three-dimensional space one has to do it within three dimensions. This might seem possible if there is a way of explaining spherical geometry in two dimensions, using only two dimensions, just on a flat surface. It turns out there is, as we'll see below. Besides, for many curved geometries (e.g. for Lobachevskian geometry) a three-dimensional model can't even be constructed: there is no three-dimensional surface in Euclidean space that would have the Lobachevskian geometry, and yet there are phenomena in nature that this geometry describes. So we have to find a way of understanding curved geometries in ways other than as geometries of curved surfaces in Euclidean three-dimensional space. Consider, e.g., the simple geometry of the surface of a ball, or the even simpler one of the surface of a cone, which may be unrolled into a flat Euclidean surface. The mathematicians say that the geometry of the surface of a cone or a cylinder is flat, despite the fact that to a lay person it looks curved. It is flat because all distances between points on such a surface are the same as on the flat surface of a table: bending that surface into a cylinder does not change those distances, while bending it into a sphere would.

So to understand how three-dimensional space can be curved despite there being no fourth dimension, it might be good to understand first how a two-dimensional surface may be curved without being curved into a third dimension, as if the third dimension didn't exist at all. The model of such a curved surface is quite simple. Let's imagine a big flat disk, e.g. of 20,000 km radius, with the property that whatever moves from its center towards its edge keeps its length in the radial direction (towards the center of the disk) but gets a little longer in the perpendicular direction. It gets back to its original size when it returns to the center.
If it is a man walking on that disk, and he makes circles around the center of the disk, the circle that is twice as far from the center won't have twice as long a circumference, but a little less than that. Let's assume that the relation between getting longer (perpendicular to the radial direction) and the distance from the center is such that the circumferences of those circles are exactly the same as those of parallels on the Earth at the same distances from the Earth's pole as the radii of those circles. In such a case the man drawing circles and measuring them may conclude that he is not on the flat surface of a disk but on the curved surface of a sphere the size of the Earth. For him, for all practical purposes, the surface would be a curved surface with the same geometry as the surface of the Earth. So he wouldn't be surprised when, at a distance of 10,000 km from the center, his circle had a length of only 40,000 km instead of about 62,832 km, as it would on a flat surface; or that inside this circle there is more area than within a circle of the same 40,000 km circumference on a flat (Euclidean) surface; or even that all circles with radii greater than 10,000 km become smaller instead of getting bigger. And yet he would be on a "flat" two-dimensional surface of a disk, a surface that is not "curved" into any third dimension. The trick with the changing size of a ruler in the direction perpendicular to the line from the ruler to the center of the disk would make the geometry appear as if the surface were the surface of a sphere. The last circle that he would make, about 20,000 km from the center, would be so small that he could just slide around it and walk beyond it for another 20,000 km, getting back to the center of the disk from the opposite direction. So we see here a model of a curved two-dimensional surface without the necessity of introducing a third dimension (this model is known as the hot plate model in the literature of curved surfaces). It is good to add that since the geometry is the same as the geometry of the surface of a sphere, there is really no center in it, despite the fact that the disk has a center. The trick with changing sizes makes it impossible to tell which point of the disk is the center. All the points look the same, as they look the same on the surface of the Earth. The trick with sizes turns a flat two-dimensional space into the curved, closed two-dimensional surface of a two-dimensional "sphere".

Now this two-dimensional model can be changed to three dimensions: instead of walking and making circles, the man can fly in any direction and build spheres around some center. If sizes change in the same way as before, the surfaces of his spheres will grow a little more slowly than their radii, and the largest of them all will have a circumference of 40,000 km. Then the circumferences of the farther spheres (with larger radii) will be smaller and smaller, until the last one, at 20,000 km from the center, will be so small that he could just pass by it. He could fly beyond it for another 20,000 km, getting back to the starting center from the opposite direction. For all practical purposes he will be in a closed three-dimensional space with its weird properties, going East along a straight line and coming back from the West along the same straight line. The trick with sizes turns a flat three-dimensional space into the curved, closed three-dimensional space of a three-dimensional "sphere". And all of it happens without introducing any fourth dimension.
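The circle sizes quoted above can be reproduced with the standard formula for the circumference of a circle of walked radius r on a sphere whose full circumference is 40,000 km: C(r) = 2πR sin(r/R). The following short sketch is my own check of those numbers, not part of the original essay.

import math

FULL_CIRCUMFERENCE = 40_000.0                   # km, as used in the text
R = FULL_CIRCUMFERENCE / (2 * math.pi)          # sphere radius, about 6366 km

def circle_length(r_km):
    """Circumference of a circle of 'walked' radius r_km on the sphere,
    compared with what flat (Euclidean) geometry would predict."""
    curved = 2 * math.pi * R * math.sin(r_km / R)
    flat = 2 * math.pi * r_km
    return curved, flat

for r in (5_000, 10_000, 15_000, 19_999):
    curved, flat = circle_length(r)
    print(f"r = {r:>6} km   curved: {curved:9.0f} km   flat: {flat:9.0f} km")

# At r = 10,000 km the curved circle is 40,000 km instead of ~62,832 km,
# and beyond that the circles shrink again, vanishing near r = 20,000 km.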
The above shows that it is possible, by playing with sizes, to change a flat space into a curved space. This is what seems to be going on in our universe, except that it is not the size of the objects or rulers that the movement is changing. The distances in that space change, so that rulers appear longer in some places and directions than in others, just as they appeared shorter in the vicinity of some particular mass. Also, the radius of the three-dimensional sphere into which the space is curved is much larger than the radius of the Earth. This radius is called Einstein's radius of the universe, and its size is about 20 billion light years (give or take a few billion). It happens to be the same radius called Einstein's radius of curvature of space, and the speed of light divided by this radius gives Hubble's constant of the apparent expansion of the universe, which, as mentioned above, can also be derived via a Newtonian formula. This shows how many additional questions about the universe, beyond why Mercury moves differently than predicted and why light rays bend more than predicted, Einsteinian Gravitation explains.
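Taking the statement above at face value, that the constant of apparent expansion equals the speed of light divided by Einstein's radius, here is a quick order-of-magnitude check using the roughly 20 billion light-year radius quoted in the essay. The arithmetic and unit conversions are mine, not the author's.

C_KM_S = 299_792.458          # speed of light, km/s
LY_KM = 9.4607e12             # kilometres in one light year
MPC_LY = 3.2616e6             # light years in one megaparsec

R_einstein_ly = 20e9          # ~20 billion light years, as quoted in the text
R_einstein_km = R_einstein_ly * LY_KM

# H = c / R, expressed in the astronomers' usual units of km/s per megaparsec
H_per_s = C_KM_S / R_einstein_km            # 1/s
H_km_s_mpc = H_per_s * (MPC_LY * LY_KM)     # km/s per Mpc
print(f"H ~ {H_km_s_mpc:.0f} km/s per Mpc")  # roughly 49 km/s per Mpc for R = 20 Gly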
http://conservapedia.com/Curvature_of_space
Angles are measured in degrees. The number of degrees tells you how wide open the angle is. You can measure angles with a protractor, and you can buy one at just about any store that carries school items. Degrees are marked by a ° symbol, though I tend to just write it out instead of using the symbol because it's quicker on the computer. There are up to 360 degrees in an angle. As you can see in the picture below, the 360 degrees form a circle.

There are a few more basic things you should know about angles. First of all, the space inside an angle of less than 180 degrees is a convex set, while the space outside of one is a nonconvex set. The opposite is true for an angle of more than 180 degrees (but less than 360 degrees). The side of an angle that is started at would be called the initial side, and the side that an angle ended at would be called the terminal side. The measure of ∠ABC is written m∠ABC. When measuring angles, you usually go counterclockwise, starting where the 3 would be on a clock. That starting position, just a single ray going directly to the right, would be called a zero angle because there is nothing in it.

The next important type of angle is the acute angle. An acute angle is an angle whose measure is between 0 and 90 degrees. An example would be the 45 degree angle in the picture. The next important type of angle is the right angle. This is probably the most important type of angle there is because of all the spifty things that you can do with one. I won't go into all of them here. (I have to save something for later articles!) A right angle is an angle whose measure is exactly 90 degrees. Continuing around the circle, next is the obtuse angle. An obtuse angle is an angle whose measure is between 90 and 180 degrees. The 135 degree angle in the diagram is an example. The last major kind of angle is the straight angle. A straight angle is an angle that measures exactly 180 degrees. Thus the name: the two rays form a straight line. A negative angle is also possible. This just means that you go clockwise instead of counterclockwise. A lot of geometry teachers don't go beyond that, at least at first.

There isn't much else left to explain, but I'll give it a shot. After straight angles, there aren't any more special angles that you need to know about. A 360 degree angle is an angle that does a full circle. It looks just like a zero angle, but instead of having no degrees, it has 360 of them. (Duh. You can't get more basic than that!) It is possible to have an angle with more than 360 degrees. To find out what it looks like, all you do is subtract 360 from it until you have an angle less than or equal to 360. (What?! You want an example? C'mon, you people...) For example, if you have an angle that is 546 degrees, you subtract 360 from 546 to get 186. Thus, the angle is the equivalent of a 186 degree angle.

There are a few more terms that you should also know. Supplementary angles are two angles whose measures combined equal 180 degrees. Complementary angles are two angles whose measures combined equal 90 degrees. Two non-straight and non-zero angles are adjacent if and only if a common side is in the interior of the angle formed by the non-common sides. A linear pair is a pair of adjacent angles whose non-common sides are opposite rays. Vertical angles are two angles that have a common vertex and whose sides form two lines.
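The reduce-and-classify procedure just described is easy to put into a few lines of code; this sketch is mine, not part of the original page.

def reduce_angle(degrees):
    """Subtract 360 repeatedly until the angle is at most 360 degrees."""
    while degrees > 360:
        degrees -= 360
    return degrees

def classify(degrees):
    """Name the basic angle types described above."""
    if degrees == 0:
        return "zero angle"
    if degrees < 90:
        return "acute"
    if degrees == 90:
        return "right"
    if degrees < 180:
        return "obtuse"
    if degrees == 180:
        return "straight"
    return "between 180 and 360 degrees"

print(reduce_angle(546))       # 186, as in the example above
print(classify(45), classify(90), classify(135), classify(180))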
Ray AB is a bisector of ∠DAC if and only if ray AB is in the interior of ∠DAC and m∠DAB = m∠CAB.

A) Unique Measure Assumption: Every angle has a unique measure. The measure could be infinite or negative. B) Two Sides of a Line Assumption: Given any ray EA and any number x, there are unique rays EB and EC such that line BC intersects ray EA and m∠BEA = m∠CEA = x. C) Zero Angle Assumption: If ray EA and ray EB are the same ray, then m∠AEB = 0. D) Straight Angle Assumption: If ray EA and ray EB are opposite rays, then m∠AEB = 180. E) Angle Addition Assumption: If ray EC (except for point E) is in the interior of ∠AEB, then m∠AEC + m∠CEB = m∠AEB.

Linear Pair Theorem: If two angles form a linear pair, then they are supplementary. See proof. Note: While you can usually get away with not knowing the names of theorems, your Geometry teacher will generally require you to know them. Vertical Angle Theorem: If two angles are vertical angles, then they have equal measures. See proof.

- Jaime III. We're really not this boring in person. Honest!
http://library.thinkquest.org/2647/geometry/angle/measure.htm
Statistical inference: drawing conclusions from data. Statistics don't lie — or do they?

OK, you have collected your data, so what does it mean? This question commonly arises when measurements made on different samples yield different values. How well do measurements of mercury concentrations in ten cans of tuna reflect the composition of the factory's entire output? Why can't you just use the average of these measurements? How much better would the results of 100 such tests be? This final lesson on measurement will examine these questions and introduce you to some of the methods of dealing with data. This stuff is important not only for scientists, but also for any intelligent citizen who wishes to independently evaluate the flood of numbers served up by advertisers, politicians, "experts", and yes, by other scientists.

In the previous section of this unit we examined two very simple sets of data. Each of these sets has the same mean value of 40, but the "quality" of the set shown on the right is greater because the data points are less scattered; the precision of the result is greater. The quantitative measure of this precision is given by the standard deviation, whose value works out to 28 and 7 for the two sets illustrated above. A data set containing only two values is far too small for a proper statistical analysis; you would not want to judge the average mercury content of canned tuna on the basis of only two samples, for instance. Suppose, then, for purposes of illustration, that we have accumulated many more data points but the standard deviations of the two sets remain at 28 and 7 as before. What conclusions can we draw about how close the mean value of 40 is likely to come to the "true value" (the population mean μ) in each case?

Although we cannot ordinarily know the value of μ, we can assign to each data point xi a quantity (xi − xm), which we call the deviation from the mean, an index of how far each data point differs from the elusive true value. We now divide this deviation from the mean by the standard deviation of the entire data set: z = (xi − xm) / S. If we plot the values of z that correspond to each data point, we obtain the following curves for the two data sets we are using as examples.

We won't attempt to prove it here, but the mathematical properties of a Gaussian curve are such that its shape depends on the scale of units along the x-axis and on the standard deviation of the corresponding data set. In other words, if we know the standard deviation of a data set, we can construct a plot of z that shows how the measurements would be distributed. An important corollary is that if the data points do not approximate the shape of this curve, then it is likely that the sample is not representative, or that some complicating factor is involved. The latter often happens when a teacher plots a set of student exam scores and gets a curve having two peaks instead of one, representing perhaps the two sub-populations of students who devote their time to studying and partying.

This minor gem was devised by the statistician W.J. Youden and appears in The Visual Display of Quantitative Information, an engaging book by Edward R. Tufte (Graphics Press, Cheshire CT, 1983).

Clearly, the sharper and more narrow the standard error curve for a set of measurements, the more likely it will be that any single observed value approximates the true value we are trying to find.
Because the shape of the curve is determined by S, we can make quantitative predictions about the reliability of our data from its standard deviation. In particular, if we plot z as a function of the number of standard deviations from the mean (rather than as the number of absolute deviations from the mean, as was done above), the shape of the curve depends only on the value of S. That is, the dependence on the particular units of measurement is removed. Moreover, it can be shown that if all measurement error is truly random, 68.3 percent (about two-thirds) of the data points will fall within one standard deviation of the population mean, while 95.4 percent of the observations will differ from the population mean by no more than two standard deviations. This is extremely important, because it allows us to express the reliability of a measurement quantitatively, in terms of confidence intervals. This is as close to the truth as we can get in scientific measurements.

Note carefully: confidence interval (CI) and confidence level (CL) are not the same! A given CI (denoted by the shaded range of 18-33 ppm in the diagram) is always defined in relation to some particular CL; specifying the first without the second is meaningless. If the CI illustrated here is at the 90% CL, then a CI for a higher CL would be wider, while that for a smaller CL would encompass a smaller range of values.

The more measurements we make, the more likely it is that their average value approximates the true value. The width of the confidence interval (expressed in the actual units of measurement) is directly proportional to the standard deviation S and to the value of z (both of these terms are defined above). The confidence interval of a single measurement, in terms of these quantities and of the observed sample mean, is given by CI = xm ± z S. If n replicate measurements are made, the confidence interval becomes smaller: CI = xm ± z S / √n. This relation is often used in reverse, that is, to determine how many replicate measurements n must be carried out in order to obtain a value within a desired confidence interval.

As we pointed out above, any relation involving the quantity z (which the standard error curve is a plot of) is of limited use unless we have some idea of the value of the population mean μ. If we make a very large number of measurements (100 to 1000, for example), then we can expect that our observed sample mean approximates μ quite closely, so there is no difficulty. The shaded area in each plot shows the fraction of measurements that fall within two standard deviations (2S) of the "true" value (that is, the population mean μ). It is evident that the width of the confidence interval diminishes as the number of measurements becomes greater. This is basically a result of the fact that relatively large random errors tend to be less common than smaller ones, and are therefore less likely to cancel out if only a small number of measurements is made.

OK, so larger data sets are better than small ones. But what if it is simply not practical to measure the mercury content of 10,000 cans of tuna? Or if you were carrying out a forensic examination of a tiny chip of paint, you might have only enough sample (or enough time) to do two or three replicate analyses. There are two common ways of dealing with such a difficulty.
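As a small illustration (mine, not part of the lesson), here is the confidence-interval arithmetic just described, using the example data set with mean 40 and standard deviation 7, and z = 1.96 for roughly the 95% confidence level.

import math

def confidence_interval(sample_mean, s, n, z):
    """CI = mean ± z*s/sqrt(n), as in the relation above."""
    half_width = z * s / math.sqrt(n)
    return sample_mean - half_width, sample_mean + half_width

xm, S = 40.0, 7.0
z95 = 1.96   # ~95.4% of a Gaussian lies within 2 standard deviations; ~95% within 1.96

for n in (1, 4, 16, 100):
    lo, hi = confidence_interval(xm, S, n, z95)
    print(f"n = {n:3d}:  {lo:6.2f} .. {hi:6.2f}")

# The interval narrows as 1/sqrt(n): quadrupling the number of replicate
# measurements halves the width of the confidence interval.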
One way of getting around this is to use pooled data; that is, to rely on similar prior determinations, carried out on other comparable samples, to arrive at a standard deviation that is representative of this particular type of determination. The other common way of dealing with small numbers of replicate measurements is to look up, in a table, a quantity t, whose value depends on the number of measurements and on the desired confidence level. For example, for a confidence level of 95%, t would be 4.3 for three samples and 2.8 for five. The magnitude of the confidence interval is then given by CI = xm ± t S / √n. This procedure is not black magic, but is based on a careful analysis of the way that the Gaussian curve becomes distorted as the number of samples diminishes.

Once we have obtained enough information on a given sample to evaluate parameters such as means and standard deviations, we are often faced with the necessity of comparing that sample (or the population it represents) with another sample or with some kind of a standard. The following sections paraphrase some of the typical questions that can be decided by statistical tests based on the quantities we have defined above. It is important to understand, however, that because we are treating the questions statistically, we can only answer them in terms of statistics, that is, to a given confidence level. The usual approach is to begin by assuming that the answer to any of the questions given below is no (this is called the null hypothesis), and then to use the appropriate statistical test to judge the validity of this hypothesis to the desired confidence level. Because our purpose here is to show you what can be done rather than how to do it, the following sections do not present formulas or example calculations, which are covered in most textbooks on analytical chemistry. You should concentrate here on trying to understand why questions of this kind are of importance. For more detail, see Statistics at Square One; this online reference has good descriptions of the t-test and of the other applications described below.

That is, is it likely that something other than ordinary indeterminate error is responsible for this suspiciously different result? Anyone who collects data of almost any kind will occasionally be faced with this question. Very often, ordinary common sense will be sufficient, but if you need some help, two statistical tests, called the Q test and the T test, are widely employed for this purpose. We won't describe them here, but both tests involve computing a quantity (Q or T) for a particular result by means of a simple formula, and then consulting a table to determine the likelihood that the value being questioned is a member of the population represented by the other values in the data set.

This must always be asked when trying a new method for the first time; it is essentially a matter of testing for determinate error. The answer can only be had by running the same procedure on a sample whose composition is known. The deviation of the mean value xm of the known sample from its true value μ is used to compute a Student's t for the desired confidence level. You then apply this value of t to the measurements on your unknown samples.

You wish to compare the means xm1 and xm2 from two sets of measurements in order to assess whether their difference could be due to indeterminate error.
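The table lookup for t can also be done in software. The sketch below assumes the scipy library is available and uses made-up triplicate values just to show the call; the 4.3 and 2.8 quoted above for the 95% confidence level are reproduced by the first two print statements.

import math
from scipy import stats

def small_sample_ci(values, confidence=0.95):
    """Confidence interval xm ± t*s/sqrt(n), using Student's t for small n."""
    n = len(values)
    xm = sum(values) / n
    s = math.sqrt(sum((x - xm) ** 2 for x in values) / (n - 1))
    t = stats.t.ppf(1 - (1 - confidence) / 2, df=n - 1)   # two-sided critical value
    half = t * s / math.sqrt(n)
    return xm, xm - half, xm + half

print(round(stats.t.ppf(0.975, df=2), 1))   # 4.3  (three samples)
print(round(stats.t.ppf(0.975, df=4), 1))   # 2.8  (five samples)

# Hypothetical triplicate mercury results (ppm), just to show the call:
print(small_sample_ci([0.51, 0.47, 0.55]))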
Suppose, for example, that you are comparing the percent of chromium in a sample of paint removed from a car's fender with a sample found on the clothing of a hit-and-run victim. You run replicate analyses on both samples and obtain different mean values, but the confidence intervals overlap. What are the chances that the two samples are in fact identical, and that the difference in the means is due solely to indeterminate error? A fairly simple formula, using Student's t, the standard deviation, and the numbers of replicate measurements made on both samples, provides an answer to this question, but only to a specified confidence level. If this is a forensic investigation that you will be presenting in court, be prepared to have your testimony demolished by the opposing lawyer if the CL is less than 99%.

This is just a variant of the preceding question. Estimation of the detection limit of a substance by a given method begins with a set of measurements on a blank, that is, a sample in which the substance in question is assumed to be absent, but which is otherwise as similar as possible to the actual samples to be tested. We then ask whether any difference between the mean of the blank measurements and of the sample replicates can be attributed to indeterminate error at a given confidence level. For example, a question that arises at every world Olympics event is: what is the minimum level of a drug metabolite that can be detected in an athlete's urine? Many sensitive methods are subject to random errors that can lead to a non-zero result even in a sample known to be entirely free of what is being tested for. So how far from "zero" must the mean value of a test be in order to be certain that the drug was present in a particular sample? A similar question comes up very frequently in environmental pollution studies.

How to Lie with Statistics is the title of an amusing book by Darrell Huff (Norton, 1954); some of Irving Geis's illustrations for this book appear below.

It occasionally happens that a few data values are so greatly separated from the rest that they cannot reasonably be regarded as representative. If these outliers clearly fall outside the range of reasonable statistical error, they can usually be disregarded as likely due to instrumental malfunctions or external interferences such as mechanical jolts or electrical fluctuations. Some care must be exercised when data is thrown away, however; there have been a number of well-documented cases in which investigators who had certain anticipations about the outcome of their experiments were able to bring these expectations about by removing conflicting results from the data set on the grounds that these particular data "had to be wrong".

The probability of ten successive flips of a coin yielding 8 heads is given by (10 choose 8)(1/2)^10 = 45/1024 ≈ 0.044, indicating that it is not very likely, but it can be expected to happen about forty-four times in a thousand runs. But there is no law of nature that says it cannot happen on your first run, so it would clearly be foolish to cry "Eureka" and stop the experiment after one or even a few tries. Or to forget about the runs that did not turn up 8 heads!

The fact that two sets of statistics show the same trend does not prove they are connected, even in cases where a logical correlation could be argued. Thus it has been suggested that, according to the two plots below, "In relative terms, the global temperature seems to be tracking the average global GDP quite nicely over the last 70 years."
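The coin-flip figure can be checked directly from the binomial distribution; this short check is my own, not part of the lesson.

from math import comb

# Probability of exactly 8 heads in 10 fair flips.
p = comb(10, 8) * (0.5 ** 10)
print(p, round(p * 1000))    # 0.0439..., i.e. about 44 runs in a thousand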
The difference between confidence levels of 90% and 95% may not seem like much, but getting it wrong can transform science into junk science — a not-unknown practice by special interests intent on manipulating science to influence public policy; see the excellent 2008 book by David Michaels, "Doubt is Their Product: How Industry's Assault on Science Threatens Your Health".

Make sure you thoroughly understand the following essential ideas which have been presented above. It is especially important that you know the precise meanings of all the highlighted terms in the context of this topic.

Further reading: Statistics for analytical chemistry tutorial (U of Toronto). The blog archive by Samantha Cook on Statistical Modeling, Causal Inference and Social Science is rather technical in places, but has a lot of interesting commentary on the foibles of statistical inference. The interactive t-test simulator allows you to see how changes to the mean, variance, and population size affect the distribution curve.
http://www.chem1.com/acad/webtext/matmeasure/mm5.html
Prisms and their Applications, by Victor Jitlin.

A prism is one or several blocks of glass through which light passes, refracting and reflecting off its flat surfaces. Prisms are used in two fundamentally different ways. One is changing the orientation, location, etc. of an image or its parts; the other is dispersing light, as in refractometers and spectrographic equipment. This project will only deal with the first use.

Consider an image projected onto a screen with parallel rays of light, as opposed to an image formed by the same rays after they are passed through a cubic prism (assume that the amount of light that is reflected is negligible). The rays that pass through the prism will not be refracted, since the angle of refraction = sin-1(sin(0)/n) = 0, or reflected, so the images will be exactly the same. More generally, if the rays enter and leave a prism at right angles (assuming each ray travels through only one medium while passing through the prism), the only effect on the image will be the reflection of the rays off of its surfaces. Since the law of reflection I = -I' (the angle of incidence equals the negative of the angle of reflection) is not affected by the medium, the effect of the prism will be the same as that of reflective surfaces or mirrors placed in the same locations as the reflective surfaces of the prism. It follows that to understand prisms it is important to understand how mirrors can be used to change the direction of rays.

Consider the following example: a horizontal ray is required to undergo a 45º change of direction, and this has to be achieved using a mirror. We need to find how the mirror should be oriented to achieve the desired change of angle. Recall Snell's law, which deals with refraction: n0 sinI0 = n1 sinI1. If we define the incoming and outgoing rays and the normal of the refractive surface as unit vectors, then using a property of the cross product we can say the following:

Q0 x M1 = |Q0||M1| sinI0 = sinI0
Q1 x M1 = |Q1||M1| sinI1 = sinI1
n0 (Q0 x M1) = n1 (Q1 x M1)

If we introduce two new vectors S0 and S1 and let them equal n0 Q0 and n1 Q1 respectively, we get

S0 x M1 = S1 x M1
(S1 - S0) x M1 = 0

This implies that (S1 - S0) is parallel or anti-parallel to M1, which means that we can define a new variable Γ, called the astigmatic constant, with S1 - S0 = Γ M1.

How is this useful for solving our problem? It turns out that reflection can be described using the same law as refraction by letting -n1 = n0 = 1, assuming the ray travels through vacuum (the n constants are similar for vacuum and air). The equation then simplifies to Γ = -2 n1 cosI0. This equation can be generalized to describe a ray reflecting off a sequence of mirrors (see picture), and it is also convenient to introduce a new variable ρ, which equals Γ/(-2). Thus, using a property of the dot product, we have:

Γi = -2(Si-1 · Mi) = -2ρi

In our case, since we are only dealing with one mirror, we can let i = 1:

S1 = S0 + Γ1 M1
ρ1 = S0 · M1

Since the ray is horizontal and is a unit vector we will let S0 = (1, 0), and since the desired ray is at a 45º angle we will let S1 = (√2/2, √2/2), with M1 the unknown. Thus we have

(√2/2, √2/2) = (1, 0) - 2((1, 0) · M1) M1
(√2/2, √2/2) = (1, 0) - 2(Mx) M1

If we split the vector equation into two that involve only scalar values, we get

√2/2 = 1 - 2Mx2
√2/2 = 0 - 2MxMy

Mx = √((1 - √2/2)/2) ≈ 0.382
My = -√2/(4Mx) ≈ -0.924

We can also find the angle of the mirror: a = -tan-1(Mx/My) = -tan-1(0.382/-0.924) = 22.5º, which is intuitively the right answer.
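The single-mirror example can be verified numerically from the reflection relation S1 = S0 - 2(S0·M1)M1 derived above. The short sketch below is my own check of the numbers, not part of the original project.

import math

def reflect(s, m):
    """Reflect unit ray direction s off a mirror with unit normal m:
    S' = S - 2(S.M)M."""
    dot = sum(si * mi for si, mi in zip(s, m))
    return tuple(si - 2 * dot * mi for si, mi in zip(s, m))

# Mirror normal found above for turning a horizontal ray through 45 degrees.
mx = math.sqrt((1 - math.sqrt(2) / 2) / 2)      # ~0.382
my = -math.sqrt(2) / (4 * mx)                   # ~-0.924

print(reflect((1.0, 0.0), (mx, my)))            # ~(0.7071, 0.7071), a 45-degree ray
print(round(math.degrees(-math.atan(mx / my)), 1))   # mirror tilt of 22.5 degrees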
A useful property of multiple-mirror systems can be derived using the equation Si = Si−1 − 2(Si−1 · Mi)Mi. Of course, the 2D and 3D models are completely analogous; here the 3D one will be used. First, if we split the vectors into their individual components,

Six = S(i−1)x − 2Mix(S(i−1)xMix + S(i−1)yMiy + S(i−1)zMiz)
Siy = S(i−1)y − 2Miy(S(i−1)xMix + S(i−1)yMiy + S(i−1)zMiz)
Siz = S(i−1)z − 2Miz(S(i−1)xMix + S(i−1)yMiy + S(i−1)zMiz)

the equation can be written in matrix form. Specifically:

| Six |   | 1 − 2Mix²   −2MixMiy    −2MixMiz  | | S(i−1)x |
| Siy | = | −2MixMiy    1 − 2Miy²   −2MiyMiz  | | S(i−1)y |
| Siz |   | −2MixMiz    −2MiyMiz    1 − 2Miz² | | S(i−1)z |

We can find the determinant of this matrix. For convenience, in the following calculation Mix, Miy and Miz will be referred to as x, y, z respectively:

D = (1 − 2x²)[(1 − 2y²)(1 − 2z²) − 4y²z²] + 2xy[−2xy(1 − 2z²) − 4xyz²] − 2xz[4xy²z + 2xz(1 − 2y²)]
D = (1 − 2x²)(1 − 2y² − 2z²) − 4x²y² − 4x²z²
D = 1 − 2x² − 2y² − 2z² + 4x²y² + 4x²z² − 4x²y² − 4x²z²
D = 1 − 2(x² + y² + z²)

Since M is a unit vector, x² + y² + z² = 1, so D = −1.

If the system involves many mirrors, the transformation has to be applied to the vector multiple times. If we let the above matrix be Ri, then:

(Sn) = [Rn][Rn−1]...[R1](S0)

Thus the determinant of the matrix for the whole system equals −1 if an odd number of matrices are multiplied, i.e. an odd number of mirrors, and it equals 1 if the number of mirrors is even. The interpretation is that the image produced by an even number of mirrors is right-handed, i.e. "R", while an image produced by an odd number of mirrors is left-handed, i.e. "Я". If the right-handed image is plain text, the left-handed one is unreadable. This is a key principle in the design of prisms.

Some Examples of Prisms

Let's look at some simple prisms. One example is the right-angle prism, which produces a left-handed image and turns the rays by 90º (see picture). If the front side is of length A, then side C will be of length 1.414A. It can also be used to reverse an image, i.e. turn it 180º (see picture); in this case, with two reflections, it produces a right-handed image.

Another example is the Dove prism. It is used to invert an image without changing its direction (see picture). Find the dimensions of the Dove prism. Let the height of the front side equal A. The front side is at an angle of 45º. The ray entering the top of the front side will have to travel to the bottom of the back side. Its incident angle is 45º and the refracted angle will be r = sin⁻¹(sin(45º)/n), so its angle to the horizontal is a = 45º − sin⁻¹(sin(45º)/n). Now we can find the length of side B; A is added to account for the offset of the top corner:

B = A/tan(45º − sin⁻¹(sin(45º)/n)) + A

If we let n equal 1.5170, then B = 4.227A and C = 4.227A − 2A = 2.227A. Note that the prism's symmetry implies that rays hitting the bottom of the front side will hit the top of the back side.

Is it possible to make a prism with the same effect as the Dove prism but more compact? In other words, how can the size-to-aperture ratio be reduced? If the prism is rotated around the central ray by 180 degrees it will produce the same image. Thus two Dove prisms with their respective sides B placed together will produce an image twice the size, while retaining the original length. Note that the rays that hit the bottom and the top of the combined prism will take opposite paths. The only remaining concern is that the rays refracted by the bottom side of each prism will produce a double image.
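Both the determinant argument and the Dove prism dimensions are easy to verify numerically. The sketch below is my own illustration (not part of the original project; it assumes NumPy): it builds the 3D reflection matrix R = I − 2MMᵀ for a unit mirror normal M, checks that det R = −1 and that a product of several such matrices has determinant ±1 according to parity, and evaluates the Dove prism length formula for n = 1.5170.

```python
import numpy as np

def reflection_matrix(m):
    """3x3 mirror matrix R = I - 2*M*M^T for a unit normal M (a Householder reflection)."""
    m = np.asarray(m, float)
    m = m / np.linalg.norm(m)
    return np.eye(3) - 2.0 * np.outer(m, m)

# One mirror: determinant is -1, so a single reflection flips image handedness.
R1 = reflection_matrix([0.0, 0.0, 1.0])
print(round(np.linalg.det(R1), 6))           # -1.0

# Several mirrors in sequence: the system determinant is (-1)^k.
rng = np.random.default_rng(0)
normals = rng.normal(size=(4, 3))            # four arbitrary mirror orientations
system = np.eye(3)
for m in normals:
    system = reflection_matrix(m) @ system   # Sn = R4 R3 R2 R1 S0
print(round(np.linalg.det(system), 6))       # +1.0 (even number of mirrors)

# Dove prism length for entrance-face height A = 1 and refractive index n = 1.5170.
n = 1.5170
a = np.radians(45.0) - np.arcsin(np.sin(np.radians(45.0)) / n)  # ray angle to the horizontal
B = 1.0 / np.tan(a) + 1.0
print(round(B, 3), round(B - 2.0, 3))        # ~4.227 and ~2.227 (sides B and C)
```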
This is remedied by silvering the two bottom sides, which ensures that no rays pass through. Another prism that has the same effect as the Dove prism is the Pechan prism (see picture). It has the advantage that it can be used in diverging or converging light.

As noted above, the right-angle prism produces a left-handed image. If the same angle change is required but a right-handed image is needed, a pentaprism can be used (see picture). Find the dimensions of the pentaprism. It is obvious that side A and the top side are going to be of the same length. As we discovered earlier, the angle at which a reflective surface needs to be placed to attain a 45º change of ray direction is 22.5º. If we are guided by the right triangles formed by the sides of the prism and the ray passing the front side at its bottom, we can find side B:

B = A/cos(22.5º) = 1.08239A

We can also find side C:

x = A tan(22.5º)
C = √(2(A tan(22.5º))²) = 0.5857A

Notice that, unlike the Dove prism, the dimensions of the pentaprism are independent of the value of n.

Let's go back to Dove prisms. If we rotate an image about the axis of the central ray, then we know that at each point in time the top of the initial image will correspond to the bottom of the final image. That is, a point on the initial image at height sin(Ri) will correspond to a point at height −sin(Ri) on the final image, where Ri is the angle of rotation. Since the two images are offset by 180 degrees, we can also say that an initial height sin(Ri) corresponds to a final height sin(Ri + 180º). We can define Rp = 180º + Ri, where Rp is the rotation of the image produced by the prism and Ri is the initial rotation. If instead we keep the object stationary and rotate the prism, then the changes in both Ri and Rp will be with respect to the prism; with respect to the initial image, both changes will occur. Thus the effect on the final image due to the rotation of the prism will be Rf = Ri + Rp = 2Ri + 180º, where Ri is now the rotation of the prism. Taking the derivative with respect to Ri we get dRf = 2dRi. This implies that the final image is rotated at twice the speed of the rotation of the prism. For this reason the Dove prism is called a rotator.

Let's see how this fact can be used. A simple periscope can be constructed using just two mirrors, but the two mirrors have to be static relative to each other, and the only way to change the target direction is to rotate the entire periscope, which means that the observer also has to rotate. If space is limited, which is quite likely in a military setting like a trench, this might not be very practical. A better solution is achieved using the Dove prism (see picture). A ray first passes through a right-angle prism, then a Dove prism, and then an Amici roof prism (a right-angle prism cannot be used at the bottom, because then the ray would be reflected 3 times and the final image would be left-handed). Now the top prism can be rotated while the bottom one remains fixed. How should the Dove prism behave? Since it rotates the image at twice the rate of its own rotation, the Dove prism should rotate at half the angular speed of the right-angle prism. This is achieved using differential gears. Keep in mind that normally a periscope would include several lenses to increase the field of view and add a magnification factor.
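As a quick numerical companion (my own sketch, not part of the original project), the following lines evaluate the pentaprism dimensions given above and illustrate the "rotator" relation Rf = 2Ri + 180º that the periscope gearing relies on.

```python
import math

A = 1.0  # entrance-face height, arbitrary units

# Pentaprism dimensions from the text: B = A / cos(22.5 deg), C = sqrt(2) * A * tan(22.5 deg).
B = A / math.cos(math.radians(22.5))
C = math.sqrt(2.0) * A * math.tan(math.radians(22.5))
print(round(B, 5), round(C, 4))   # 1.08239  0.5858  (independent of the refractive index n)

def dove_image_rotation(prism_rotation_deg):
    """Rotation of the image produced by a Dove prism rotated by the given angle
    about the central ray: Rf = 2*Ri + 180 (degrees)."""
    return (2.0 * prism_rotation_deg + 180.0) % 360.0

# Rotating the prism by 10 degrees rotates the image by 20 degrees (on top of the fixed
# 180-degree flip), which is why the periscope's Dove prism is geared to turn at half
# the speed of the head prism.
for ri in (0.0, 10.0, 45.0, 90.0):
    print(ri, "->", dove_image_rotation(ri))
```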
An observer looking at a person taking a photo can guess roughly what the image will show simply by looking in the general direction that the camera is pointing. A photographer using a direct-vision camera (one that has a viewfinder located above the main lens) has a better idea of the direction, but still cannot see exactly the image that will be produced. The most popular solution is an SLR (Single Lens Reflex) camera (see picture), which produces exactly the same image in the viewfinder and on the film. Let's look at how this is achieved. After the rays pass through the lenses, the image is both upside down and reversed left to right. This is inconsequential with respect to the film, but it must be corrected in the viewfinder. Until the moment the picture is taken, the rays are reflected at 90º by a hinged mirror. Just before the shot, the mirror swings out of the way and the rays hit the film, producing the photograph. Otherwise, after being reflected off the mirror, the rays travel vertically and need to be deflected 90º towards the observer. If a right-angle prism were used for this, the resulting image would appear upside down. This can be ascertained by bouncing an imaginary pencil off the surfaces of the mirror and the prism (see picture), keeping in mind that the image is flipped vertically when it passes through the lens. If, however, a pentaprism is used, the image appears oriented the right way vertically. There is still a problem, because the image was also flipped horizontally by the lens, and by the lens only. To get the right horizontal orientation, a penta-roof prism needs to be used: it adds one more reflection and flips the image horizontally. These are just a few samples of prisms and their applications. Many more, much more complicated, examples of prisms exist, and there are many interesting and useful applications that exploit their different properties.
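The viewfinder reasoning above, like the earlier periscope note, comes down to counting reflections and applying the parity rule from the determinant argument. A tiny bookkeeping sketch of that idea (my own illustration, not from the original project):

```python
def image_handedness(num_reflections):
    """Parity rule from the determinant argument: each reflection multiplies the
    system determinant by -1, so an odd count gives a mirror (left-handed) image."""
    return "right-handed (readable)" if num_reflections % 2 == 0 else "left-handed (mirrored)"

# SLR viewfinder path: hinged mirror (1) + pentaprism (2) = 3 reflections -> mirrored,
# which is why a roof surface is added to the pentaprism to contribute one more reflection.
print(image_handedness(1 + 2))      # left-handed (mirrored)
print(image_handedness(1 + 2 + 1))  # right-handed (readable)

# Periscope path: right-angle prism (1) + Dove prism (1) + Amici roof prism (2) = 4 -> readable.
print(image_handedness(1 + 1 + 2))  # right-handed (readable)
```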
http://www.math.ubc.ca/~cass/courses/m309-03a/m309-projects/jitlin/Prisms.html
13
58
The tree line is the edge of the habitat at which trees are capable of growing. Beyond the tree line, trees cannot tolerate the inappropriate environmental conditions (usually cold temperatures or lack of moisture). The tree line should not be confused with the lower timberline or forest line, where trees form a forest with a closed canopy. At the tree line, tree growth is often very stunted, with the last trees forming low, densely matted bushes. Where this is caused by wind, it is known as krummholz formation, from the German for 'twisted wood'. The tree line, like many other natural lines (lake boundaries, for example), appears well-defined from a distance, but upon sufficiently close inspection, it is a gradual transition in most places. Trees grow shorter towards the inhospitable climate until they simply stop growing.

The alpine tree line is the highest elevation that sustains trees; higher up, it is too cold, or snow cover persists for too much of the year, to sustain trees. Usually associated with mountains, the climate above the tree line is called an alpine climate, and the terrain can be described as alpine tundra. In the northern hemisphere, treelines on north-facing slopes are lower than on south-facing slopes, because increased shade means the snowpack takes longer to melt, which shortens the growing season for trees. This is reversed in the southern hemisphere.

The desert tree line marks the driest places at which trees can grow; drier desert areas have insufficient rainfall to sustain trees. These tend to be called the "lower" tree line and occur below about 5,000 ft (1,500 m) elevation in the Desert Southwestern United States. The desert treeline tends to be lower on pole-facing slopes than on equator-facing slopes, because the increased shade on a pole-facing slope keeps those slopes cooler and prevents moisture from evaporating as quickly, giving trees a longer growing season and more access to water. In some mountainous areas, higher elevations above the condensation line, or on equator-facing and leeward slopes, can result in low rainfall and increased exposure to solar radiation. This dries out the soil, resulting in a localized arid environment unsuitable for trees. Many south-facing ridges of the mountains of the Western U.S. have a lower treeline than the northern faces because of increased sun exposure and aridity. Different tree species have different tolerances to drought and cold. Mountain ranges isolated by oceans or deserts may have restricted repertoires of tree species, with gaps that are above the alpine tree line for some species yet below the desert tree line for others. For example, several mountain ranges in the Great Basin of North America have lower belts of Pinyon Pines and Junipers separated by intermediate brushy but treeless zones from upper belts of Limber and Bristlecone Pines. On coasts and isolated mountains the tree line is often much lower than at corresponding altitudes inland and in larger, more complex mountain systems, because strong winds reduce tree growth. In addition, the lack of suitable soil, such as along talus slopes or exposed rock formations, prevents trees from gaining an adequate foothold and exposes them to drought and sun.

The Arctic tree line is the northernmost latitude in the Northern Hemisphere at which trees can grow; farther north, it is too cold all year round to sustain trees. Extremely cold temperatures, especially when prolonged, can result in freezing of the internal sap of trees, killing them.
In addition, permafrost in the soil can prevent trees from getting their roots deep enough for the necessary structural support.

The Antarctic tree line would be the southernmost latitude in the Southern Hemisphere at which trees can grow; further south, it is too cold to sustain trees. It is a theoretical concept that does not have any defined location: no trees grow in Antarctica or on the subantarctic islands. This tree line would be the southernmost point at which trees can no longer grow, except that there are no landmasses that have a true treeline analogous to the Arctic treeline.

Other tree lines occur where the immediate environment is too extreme for trees to grow. This can be caused by geothermal exposure associated with hot springs or volcanoes, such as at Yellowstone; high soil acidity near bogs; high salinity associated with playas or salt lakes; or ground that is saturated with groundwater that excludes oxygen from the soil, which most tree roots need for growth. The margins of muskegs and bogs are common examples of these types of open areas. However, no such line exists for swamps, where trees, such as Bald cypress and the many mangrove species, have adapted to growing in permanently waterlogged soil. In some colder parts of the world there are tree lines around swamps where there are no local tree species that can develop there. There are also man-made pollution tree lines in weather-exposed areas, where new tree lines have developed because of the increased stress of pollution. Examples are found around Nikel in Russia and, previously, in the Erzgebirge.

Some typical Arctic and alpine tree line tree species (note the predominance of conifers):

The alpine tree line at a location is dependent on local variables, such as aspect of slope, rain shadow and proximity to either geographical pole. In addition, in some tropical or island localities, the lack of biogeographical access to species that have evolved in a subalpine environment can result in lower tree lines than one might expect from climate alone. Averaging over many locations and local microclimates, the treeline rises 75 metres (246 ft) when moving 1 degree south from 70 to 50°N, and 130 metres (430 ft) per degree from 50 to 30°N. Between 30°N and 20°S, the treeline is roughly constant, between 3,500 and 4,000 metres (11,500 and 13,000 ft). (A rough numerical sketch of this latitude-elevation relation is given after the tables below.)

Here is a list of approximate tree lines from locations around the globe:
|Location||Approx. latitude||Approx. elevation of tree line (m)||(ft)||Notes|
|Chugach Mountains, Alaska||61°N||700||2,300||Tree line around 1,500 feet (460 m) or lower in coastal areas|
|Norway||61°N||1,100||3,600||Much lower near the coast, down to 500–600 metres (1,600–2,000 ft). At 71°N, in Finnmark county, the tree-line is below sea level (Arctic tree line).|
|Scotland||57°N||500||1,600||Strong maritime influence serves to cool summer and restrict tree growth|
|Olympic Mountains WA, USA||47°N||1,500||4,900||Heavy winter snowpack buries young trees until late summer|
|Mount Katahdin, Maine, USA||46°N||1,150||3,800|
|Eastern Alps, Austria, Italy||46°N||1,750||5,700||More exposure to Russian cold winds than Western Alps|
|Alps of Piedmont, Northwestern Italy||45°N||2,100||6,900|
|New Hampshire, USA||44°N||1,350||4,400||Some peaks have even lower treelines because of fire and subsequent loss of soil, such as Grand Monadnock and Mount Chocorua.|
|Rila and Pirin Mountains, Bulgaria||42°N||2,300||7,500||Up to 2,600 m (8,500 ft) on favorable locations.
Mountain Pine is the most common tree line species.|
|Pyrenees Spain, France, Andorra||42°N||2,300||7,500||Mountain Pine is the tree line species|
|Wasatch Mountains, Utah, USA||40°N||2,900||9,500||Higher (nearly 11,000 feet or 3,400 metres in the Uintas)|
|Rocky Mountain NP, USA||40°N||3,550||11,600||On warm southwest slopes|
| ||3,250||10,700||On northeast slopes|
|Yosemite, USA||38°N||3,200||10,500||West side of Sierra Nevada|
| ||3,600||11,800||East side of Sierra Nevada|
|Sierra Nevada, Spain||37°N||2,400||7,900||Precipitation low in summer|
|Hawaii, USA||20°N||3,000||9,800||Geographic isolation and no local tree species with high tolerance to cold temperatures.|
|Pico de Orizaba, Mexico||19°N||4,000||13,100||
|Mount Kilimanjaro, Tanzania||3°S||3,950||13,000||
|Andes, Peru||11°S||3,900||12,800||East side; on west side tree growth is restricted by dryness|
|Andes, Bolivia||18°S||5,200||17,100||Western Cordillera; highest treeline in the world on the slopes of Sajama Volcano (Polylepis tarapacana)|
| ||4,100||13,500||Eastern Cordillera; treeline is lower because of lower solar radiation (more humid climate)|
|Sierra de Córdoba, Argentina||31°S||2,000||6,600||Precipitation low above trade winds, also high exposure|
|Australian Alps, Australia||36°S||2,000||6,600||West side of Australian Alps|
| ||1,700||5,600||East side of Australian Alps|
|Andes, Laguna del Laja, Chile||37°S||1,600||5,200||Temperature rather than precipitation restricts tree growth|
|Mount Taranaki, North Island, New Zealand||39°S||1,500||4,900||Strong maritime influence serves to cool summer and restrict tree growth|
|Tasmania, Australia||41°S||1,200||3,900||Cold winters, strong cold winds and cool summers with occasional summer snow restrict tree growth|
|Fiordland, South Island, New Zealand||45°S||950||3,100||Cold winters, strong cold winds and cool summers with occasional summer snow restrict tree growth|
|Torres del Paine, Chile||51°S||950||3,100||Strong influence from the Southern Patagonian Ice Field serves to cool summer and restrict tree growth|
|Navarino Island, Chile||55°S||600||2,000||Strong maritime influence serves to cool summer and restrict tree growth|

Like the alpine tree lines shown above, polar tree lines are heavily influenced by local variables such as aspect of slope and degree of shelter. In addition, permafrost has a major impact on the ability of trees to place roots into the ground. When roots are too shallow, trees are susceptible to windthrow and erosion. Trees can often grow in river valleys at latitudes where they could not grow on a more exposed site. Maritime influences such as ocean currents also play a major role in determining how far from the equator trees can grow. Here are some typical polar treelines:

|Location||Approx. longitude||Approx. latitude of tree line||Notes|
|Norway||24°E||70°N||The North Atlantic current makes Arctic climates in this region warmer than other coastal locations at comparable latitude.
In particular, the mild winters prevent permafrost.|
|West Siberian Plain||75°E||66°N|
|Central Siberian Plateau||102°E||72°N||Extreme continental climate means the summer is warm enough to allow tree growth at higher latitudes, extending to the northernmost forests of the world at 72°28'N at Ary-Mas (102°15'E) in the Novaya River valley, a tributary of the Khatanga River, and the more northerly Lukunsky grove at 72°31'N, 105°03'E, east of the Khatanga River.|
|Russian Far East (Kamchatka and Chukotka)||160°E||60°N||The Oyashio Current and strong winds affect summer temperatures to prevent tree growth. The Aleutian Islands are almost completely treeless.|
|Alaska||152°W||68°N||Trees grow north to the south-facing slopes of the Brooks Range. The mountains block cold air coming off the Arctic Ocean.|
|Northwest Territories, Canada||132°W||69°N||Reaches north of the Arctic Circle because of the continental nature of the climate and warmer summer temperatures.|
|Nunavut||95°W||61°N||Influence of the very cold Hudson Bay moves the treeline southwards.|
|Labrador Peninsula||72°W||56°N||Very strong influence of the Labrador Current on summer temperatures as well as altitude effects (much of Labrador is a plateau). In parts of Labrador, the treeline extends as far south as 53°N.|
|Greenland||50°W||64°N||Determined by experimental tree planting in the absence of native trees because of isolation from natural seed sources; a very few trees are surviving, but growing slowly, at Søndre Strømfjord, 67°N.|

Kerguelen Island, Île Saint-Paul, South Georgia, the South Orkney Islands, and other subantarctic islands are all so heavily wind-exposed and have such a cold summer climate (tundra) that none has any indigenous tree species. The Falkland Islands' summer temperature is near the limit, but the islands are also treeless, although some planted trees exist. The Antarctic Peninsula is the northernmost point of Antarctica and has its mildest weather. It is located 1,080 kilometres (670 mi) from Cape Horn on Tierra del Fuego, but no trees live on Antarctica. In fact, only a few species of grass, mosses, and lichens survive on the peninsula. In addition, no trees survive on any of the subantarctic islands near the peninsula. Tierra del Fuego, however, does have trees. Southern Rata forests exist on Enderby Island and the Auckland Islands, and these grow up to an elevation of 370 metres (1,200 ft) in sheltered valleys. These trees seldom grow above 3 m (9.8 ft) in height, and they get smaller as one gains altitude, so that by 180 m (600 ft) they are waist high. These islands have only 600–800 hours of sun annually. Campbell Island, further south, is almost treeless, except for one stunted pine tree near the weather station, which was planted by scientists. The climate on these islands is not severe, but tree growth is limited by almost continual rain and wind. Summers are very cold, with an average January temperature of 9 °C (48 °F); winters are mild, 5 °C (41 °F), but wet. Macquarie Island (Australia) has no vegetation beyond snow grass and alpine grasses and mosses.
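As promised above, here is a rough piecewise sketch of the averaged latitude-elevation relation quoted earlier: about 75 m of treeline rise per degree between 70°N and 50°N, 130 m per degree between 50°N and 30°N, and a roughly constant 3,500-4,000 m band between 30°N and 20°S. This is my own illustration, not from the source article; the 3,750 m anchor is simply the midpoint of the tropical band, and real treelines scatter widely around this average, as the tables show.

```python
def approx_alpine_treeline_m(lat_deg_north):
    """Very rough averaged alpine treeline elevation (metres) at a given latitude
    (degrees north; negative = south), piecing together the per-degree rates quoted
    in the text and anchoring the tropical band at ~3,750 m. Local climate, aspect
    and continentality cause large deviations from this average."""
    lat = lat_deg_north
    if -20.0 <= lat <= 30.0:
        return 3750.0                                        # roughly constant tropical band
    if 30.0 < lat <= 50.0:
        return 3750.0 - 130.0 * (lat - 30.0)                 # ~130 m lower per degree, 30-50 N
    if 50.0 < lat <= 70.0:
        return 3750.0 - 130.0 * 20.0 - 75.0 * (lat - 50.0)   # ~75 m per degree, 50-70 N
    return None  # outside the range the text describes

for lat in (0, 19, 40, 46, 61, 70):
    print(f"{lat:>3} deg N -> ~{approx_alpine_treeline_m(lat):.0f} m")
# The value near 70 deg N drops to around (or below) sea level, consistent with the
# Finnmark note in the first table.
```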
http://www.mashpedia.com/Tree_line
13
52
Solving Simple Algebraic Equations Study Guide

Introduction to Solving Simple Algebraic Equations

Strange as it may sound, the power of mathematics rests on its evasion of all unnecessary thought and on its wonderful saving of mental operations. - Ernst Mach (1838–1916), Austrian physicist and philosopher

In this lesson, you'll learn how to solve one-step algebraic equations. So far, we have seen algebraic terms only in expressions. In this lesson, we will look at algebraic equations. An equation presents two expressions that are equal to each other. 3 + 6 = 9 is an equation, and even 3 = 3 is an equation. An algebraic equation is an equation that includes at least one variable. Working with algebraic expressions has taught us many of the skills we need to solve equations. The goal of solving equations is to get the variable alone on one side of the equation. If the variable is alone on one side, its value must be on the other side, which means the equation is solved.

x + 4 = 10

This equation has one variable, x. We can see that 4 is added to x, and the sum of x and 4 is equal to 10. How do we get x alone on one side of the equation? We need to get rid of that 4. We can do that by subtracting 4 from both sides of the equation. Why do we subtract 4 from both sides? The equal sign in an equation tells us that the quantities on each side of the sign have the same value. If we perform an action on one side, such as subtraction, we must perform the same action on the other side, so that the two sides of the equation stay equal. This is the most important rule when solving equations: whatever we do to one side of an equation, we do the same to the other side of the equation. Let's subtract 4 from both sides:

x + 4 – 4 = 10 – 4
x = 6

Our answer is x = 6. This means that if we substitute 6 for x in the equation x + 4 = 10, the equation will remain true. Some equations have more than one answer, but this equation has just one answer.

y – 3 = 13

The first step in solving an equation is to find the variable. In this equation, the variable is y: y is what we must get alone on one side of the equation. The second step in solving an equation is determining what operation or operations are needed to get the variable alone. In the previous example, we used subtraction. Why? Because a constant, 4, was added to x. We used the opposite of addition, subtraction, to get x alone. Addition and subtraction are opposite operations. Multiplication and division are also opposite operations. The third step is to perform the operation on both sides of the equation. If we are left with the variable alone on one side of the equation, then we have our answer. If not, then we must repeat steps two and three until we have our answer. In the equation y – 3 = 13, 3 is subtracted from y. We must use the opposite of subtraction, addition, to get y alone on one side of the equation. Add 3 to both sides of the equation:

y – 3 + 3 = 13 + 3
y = 16

When a variable in an equation has a coefficient, we must use division to get the variable alone. Remember, a coefficient and a base in a term are multiplied. Division is what we use to undo the multiplication.
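To mirror the inverse-operation steps described in this lesson, here is a small illustrative script. It is my own sketch, not part of the study guide, and the 3z = 12 coefficient example at the end is an assumed illustration of where the lesson is heading.

```python
# Solve one-step equations of the form  x + a = b,  x - a = b,  or  a*x = b
# by applying the opposite operation to both sides, as the lesson describes.

def solve_one_step(op, a, b):
    """Return x for the equation (x op a) = b, where op is '+', '-' or '*'."""
    if op == "+":      # undo addition with subtraction:   x = b - a
        return b - a
    if op == "-":      # undo subtraction with addition:   x = b + a
        return b + a
    if op == "*":      # undo a coefficient with division: x = b / a
        return b / a
    raise ValueError("unsupported operation: " + op)

print(solve_one_step("+", 4, 10))   # x + 4 = 10  ->  6
print(solve_one_step("-", 3, 13))   # y - 3 = 13  ->  16
print(solve_one_step("*", 3, 12))   # 3z = 12     ->  4.0  (hypothetical coefficient example)
```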
http://www.education.com/study-help/article/solving-single-step-algebraic-equations/
13
97
An Introduction to Molecular Biology/DNA the unit of life Genes are made from a long molecule called DNA, which is copied and inherited across generations. DNA is made of simple units that line up in a particular order within this large molecule. The order of these units carries genetic information, similar to how the order of letters on a page carry information. The language used by DNA is called the genetic code, which lets organisms read the information in the genes. This information is the instructions for constructing and operating a living organism. Deoxyribonucleic acid(DNA): Deoxyribonucleic acid (/diˌɒksiˌraɪbɵ.njuːˌkleɪ.ɨk ˈæsɪd/ , or DNA, is a nucleic acid that contains the genetic instructions used in the development and functioning of all known living organisms (with the exception of RNA viruses). The main role of DNA molecules is the long-term storage of information. DNA is often compared to a set of blueprints, like a recipe or a code, since it contains the instructions needed to construct other components of cells, such as proteins and RNA molecules. The DNA segments that carry this genetic information are called genes, but other DNA sequences have structural purposes, or are involved in regulating the use of this genetic information. DNA consists of two long polymers of simple units called nucleotides, with backbones made of sugars and phosphate groups joined by ester bonds. These two strands run in opposite directions to each other and are therefore anti-parallel. Attached to each sugar is one of four types of molecules called bases. It is the sequence of these four bases along the backbone that encodes information. This information is read using the genetic code, which specifies the sequence of the amino acids within proteins. The code is read by copying stretches of DNA into the related nucleic acid RNA, in a process called transcription. The structure of DNA was first discovered by James D. Watson and Francis Crick. It is the same for all species, comprising two helical chains each coiled round the same axis, each with a pitch of 34 Ångströms (3.4 nanometres) and a radius of 10 Ångströms (1.0 nanometres). Within cells, DNA is organized into long structures called chromosomes. These chromosomes are duplicated before cells divide, in a process called DNA replication. Eukaryotic organisms (animals, plants, fungi, and protists) store most of their DNA inside the cell nucleus and some of their DNA in organelles, such as mitochondria or chloroplasts.In contrast, prokaryotes (bacteria and archaea) store their DNA only in the cytoplasm. Within the chromosomes, chromatin proteins such as histones compact and organize DNA. These compact structures guide the interactions between DNA and other proteins, helping control which parts of the DNA are transcribed. The DNA double helix is stabilized by hydrogen bonds between the bases attached to the two strands. The four bases found in DNA are adenine (abbreviated A), cytosine (C), guanine (G) and thymine (T). These four bases are attached to the sugar/phosphate to form the complete nucleotide, as shown for adenosine monophosphate. DNA is a genetic material Griffith's experiment was conducted in 1928 by Frederick Griffith, one of the first experiments suggesting that bacteria are capable of transferring genetic information through a process known as transformation. Griffith used two strains of Streptococcus pneumoniae bacteria which infect mice – a type III-S (smooth) and type II-R (rough) strain. 
The III-S strain covers itself with a polysaccharide capsule that protects it from the host's immune system, resulting in the death of the host, while the II-R strain doesn't have that protective capsule and is defeated by the host's immune system. A German bacteriologist, Fred Neufeld, had discovered the three pneumococcal types (Types I, II, and III) and the Quellung reaction to identify them in vitro. Until Griffith's experiment, bacteriologists believed that the types were fixed and unchangeable from one generation to another. In this experiment, bacteria from the III-S strain were killed by heat, and their remains were added to II-R strain bacteria. While neither alone harmed the mice, the combination was able to kill its host. Griffith was also able to isolate both live II-R and live III-S strains of pneumococcus from the blood of these dead mice. Griffith concluded that the type II-R had been "transformed" into the lethal III-S strain by a "transforming principle" that was somehow part of the dead III-S strain bacteria. Today, we know that the "transforming principle" Griffith observed was the DNA of the III-S strain bacteria. While the bacteria had been killed, the DNA had survived the heating process and was taken up by the II-R strain bacteria. The III-S strain DNA contains the genes that form the protective polysaccharide capsule. Equipped with this gene, the former II-R strain bacteria were now protected from the host's immune system and could kill the host. The exact nature of the transforming principle (DNA) was verified in the experiments done by Avery, MacLeod and McCarty and by Hershey and Chase.

Alfred Hershey and Martha Chase conducted a series of experiments in 1952 confirming that DNA was the genetic material, which had first been demonstrated in the 1944 Avery–MacLeod–McCarty experiment. These experiments are known as the Hershey–Chase experiments. Although the existence of DNA had been known to biologists since 1869, most of them assumed at that time that proteins carried the information for inheritance. Hershey and Chase conducted their experiments on the T2 phage. The phage consists of a protein shell containing its genetic material. The phage infects a bacterium by attaching to its outer membrane and injecting its genetic material, leaving its empty shell attached to the bacterium. In their first set of experiments, Hershey and Chase labeled the DNA of phages with radioactive phosphorus-32 (P32); the element phosphorus is present in DNA but not in any of the 20 amino acids that are the components of proteins. They allowed the phages to infect E. coli, and through several elegant experiments were able to observe the transfer of P32-labeled phage DNA into the cytoplasm of the bacterium. In their second set of experiments, they labeled the phages with radioactive sulfur-35 (sulfur is present in the amino acids cysteine and methionine, but not in DNA). Following infection of E. coli, they sheared the viral protein shells off the infected cells using a high-speed blender and separated the cells and viral coats using a centrifuge. After separation, the radioactive S35 tracer was observed in the protein shells, but not in the infected bacteria, supporting the hypothesis that the genetic material which infects the bacteria is DNA and not protein. Hershey shared the 1969 Nobel Prize in Physiology or Medicine for his "discoveries concerning the genetic structure of viruses."
Oswald T. Avery, Colin MacLeod and Maclyn McCarty, with Francis Crick and James D. Watson

Structure of DNA

Two helical strands form the DNA backbone. Another double helix may be found by tracing the spaces, or grooves, between the strands. These voids are adjacent to the base pairs and may provide a binding site. As the strands are not directly opposite each other, the grooves are unequally sized. One groove, the major groove, is 22 Å wide and the other, the minor groove, is 12 Å wide. The narrowness of the minor groove means that the edges of the bases are more accessible in the major groove. As a result, proteins like transcription factors that can bind to specific sequences in double-stranded DNA usually make contacts with the sides of the bases exposed in the major groove. This situation varies in unusual conformations of DNA within the cell, but the major and minor grooves are always named to reflect the differences in size that would be seen if the DNA were twisted back into the ordinary B form.

Base pairing of DNA

Chargaff's rules, given by the Austrian chemist Erwin Chargaff, state that DNA from any cell of any organism should have a 1:1 ratio of pyrimidine and purine bases and, more specifically, that the amount of guanine is equal to that of cytosine and the amount of adenine is equal to that of thymine. This pattern is found in both strands of the DNA. In molecular biology, two nucleotides on opposite complementary DNA strands that are connected via hydrogen bonds are called a base pair (often abbreviated bp). In the canonical Watson-Crick DNA base pairing, adenine (A) forms a base pair with thymine (T) and guanine (G) forms a base pair with cytosine (C). In RNA, thymine is replaced by uracil (U). Alternate hydrogen bonding patterns, such as the wobble base pair and the Hoogsteen base pair, also occur, particularly in RNA, giving rise to complex and functional tertiary structures.

Purine base

The German chemist Emil Fischer gave the name 'purine' (from purum uricum) in 1884. He synthesized it for the first time in 1899 from uric acid, which had been isolated from kidney stones by Scheele in 1776. Besides DNA and RNA, purines are also components of a number of other important biomolecules, such as ATP, GTP, cyclic AMP, NADH, and coenzyme A. Purine itself has not been found in nature, but it can be produced by organic synthesis. A purine is a heterocyclic aromatic organic compound consisting of a pyrimidine ring fused to an imidazole ring.

Adenine is one of the two purine nucleobases (the other being guanine) used in forming nucleotides of the nucleic acids (DNA or RNA). In DNA, adenine binds to thymine via two hydrogen bonds, which helps stabilize the nucleic acid structure. Adenine forms adenosine, a nucleoside, when attached to ribose, and deoxyadenosine when attached to deoxyribose. It forms adenosine triphosphate (ATP), a nucleotide, when three phosphate groups are added to adenosine.

Guanine, along with adenine and cytosine, is present in both DNA and RNA, whereas thymine is usually seen only in DNA, and uracil only in RNA. In DNA, guanine is paired with cytosine. With the formula C5H5N5O, guanine is a derivative of purine, consisting of a fused pyrimidine-imidazole ring system with conjugated double bonds. Guanine has two tautomeric forms, the major keto form and the rare enol form. It binds to cytosine through three hydrogen bonds. In cytosine, the amino group acts as the hydrogen donor and the C-2 carbonyl and the N-3 amine as the hydrogen-bond acceptors.
Guanine has a group at C-6 that acts as the hydrogen acceptor, while the group at N-1 and the amino group at C-2 act as the hydrogen donors. Pyrimidine base Pyrimidine is a heterocyclic aromatic organic compound similar to benzene and pyridine, containing two nitrogen atoms at positions 1 and 3 of the six-member ring. It is isomeric with two other forms of diazine.Three nucleobases found in nucleic acids, cytosine (C), thymine (T), and uracil (U), are pyrimidine derivatives. A pyrimidine has many properties in common with pyridine, as the number of nitrogen atoms in the ring increases the ring pi electrons become less energetic and electrophilic aromatic substitution gets more difficult while nucleophilic aromatic substitution gets easier. An example of the last reaction type is the displacement of the amino group in 2-aminopyrimidine by chlorine and its reverse. Reduction in resonance stabilization of pyrimidines may lead to addition and ring cleavage reactions rather than substitutions. One such manifestation is observed in the Dimroth rearrangement. Compared to pyridine, N-alkylation and N-oxidation is more difficult, and pyrimidines are also less basic: The pKa value for protonated pyrimidine is 1.23 compared to 5.30 for pyridine. Pyrimidine also is found in meteorites, although scientists still do not know its origin. Pyrimidine also photolytically decomposes into Uracil under UV light. Cytosine can be found as part of DNA, as part of RNA, or as a part of a nucleotide. As cytidine triphosphate (CTP), it can act as a co-factor to enzymes, and can transfer a phosphate to convert adenosine diphosphate (ADP) to adenosine triphosphate (ATP).The nucleoside of cytosine is cytidine. In DNA and RNA, cytosine is paired with guanine. However, it is inherently unstable, and can change into uracil (spontaneous deamination). This can lead to a point mutation if not repaired by the DNA repair enzymes such as uracil glycosylase, which cleaves a uracil in DNA. Cytosine can also be methylated into 5-methylcytosine by an enzyme called DNA methyltransferase or be methylated and hydroxylated to make 5-hydroxymethylcytosine. Active enzymatic deamination of cytosine or 5-methylcytosine by the APOBEC family of cytosine deaminases could have both beneficial and detrimental implications on various cellular processes as well as on organismal evolution. The implications of deamination on 5-hydroxymethylcytosine, on the other hand, remains less understood. Thymine (T, Thy) is one of the four nucleobases in the nucleic acid of DNA that are represented by the letters G–C–A–T. The others are adenine, guanine, and cytosine. Thymine is also known as 5-methyluracil, a pyrimidine nucleobase. As the name suggests, thymine may be derived by methylation of uracil at the 5th carbon. In RNA, thymine is replaced with uracil in most cases. In DNA, thymine(T) binds to adenine (A) via two hydrogen bonds, thus stabilizing the nucleic acid structures. Uracil found in RNA, it base-pairs with adenine and replaces thymine during DNA transcription. Methylation of uracil produces thymine. It turns into thymine to protect the DNA and to improve the efficiency of DNA replication. Uracil can base-pair with any of the bases, depending on how the molecule arranges itself on the helix, but readily pairs with adenine because the methyl group is repelled into a fixed position. Uracil pairs with adenine through hydrogen bonding. Uracil is the hydrogen bond acceptor and can form two hydrogen bonds. 
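Since base pairing is just a mapping between letters, it is easy to illustrate in a few lines of code. The sketch below is my own illustration of the pairing rules described here (not from the original text, and it uses a made-up example sequence): it builds the reverse complement of a DNA strand and checks Chargaff's 1:1 ratios on the resulting double-stranded molecule.

```python
from collections import Counter

PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}   # Watson-Crick pairing in DNA

def reverse_complement(strand):
    """Complementary strand read 5'->3' (the strands are anti-parallel, so reverse it)."""
    return "".join(PAIR[base] for base in reversed(strand))

strand = "ATGCGGCTATTACG"              # hypothetical example sequence
other = reverse_complement(strand)
print(other)                           # CGTAATAGCCGCAT

# Chargaff's rules: counting bases over BOTH strands of the duplex gives A == T and G == C.
duplex_counts = Counter(strand) + Counter(other)
print(duplex_counts)
assert duplex_counts["A"] == duplex_counts["T"]
assert duplex_counts["G"] == duplex_counts["C"]
```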
Uracil can also bind with a ribose sugar to form the ribonucleoside uridine. When a phosphate attaches to uridine, uridine 5'-monophosphate is produced. Nucleosides are glycosylamines consisting of a nucleobase (often referred to as simply base) bound to a ribose or deoxyribose sugar via a beta-glycosidic linkage. Examples of nucleosides include cytidine, uridine, adenosine, guanosine, thymidine and inosine. Nucleosides can be phosphorylated by specific kinases in the cell on the sugar's primary alcohol group (-CH2-OH), producing nucleotides, which are the molecular building-blocks of DNA and RNA. Nucleosides can be produced by de novo synthesis pathways, in particular in the liver, but they are more abundantly supplied via ingestion and digestion of nucleic acids in the diet, whereby nucleotidases break down nucleotides (such as the thymine nucleotide) into nucleosides (such as thymidine) and phosphate. 1. Adenosine is a nucleoside composed of a molecule of adenine attached to a ribose sugar molecule (ribofuranose) moiety via a β-N9-glycosidic bond. 2.Cytidine is a nucleoside molecule that is formed when cytosine is attached to a ribose ring (also known as a ribofuranose) via a β-N1-glycosidic bond. Cytidine is a component of RNA. 3.Guanosine is a purine nucleoside comprising guanine attached to a ribose (ribofuranose) ring via a β-N9-glycosidic bond. Guanosine can be phosphorylated to become guanosine monophosphate (GMP), cyclic guanosine monophosphate (cGMP), guanosine diphosphate (GDP), and guanosine triphosphate (GTP). 4.Thymidine (more precisely called deoxythymidine; can also be labelled deoxyribosylthymine, and thymine deoxyriboside) is a chemical compound, more precisely a pyrimidine deoxynucleoside. Deoxythymidine is the DNA nucleoside T, which pairs with deoxyadenosine (A) in double-stranded DNA. If cytosine is attached to a deoxyribose ring, it is known as a deoxycytidine A nucleotide is composed of a nucleobase (nitrogenous base), a five-carbon sugar (either ribose or 2'-deoxyribose), and one to three phosphate groups. Together, the nucleobase and sugar comprise a nucleoside. The phosphate groups form bonds with either the 2, 3, or 5-carbon of the sugar, with the 5-carbon site most common. Cyclic nucleotides form when the phosphate group is bound to two of the sugar's hydroxyl groups. Ribonucleotides are nucleotides where the sugar is ribose, and deoxyribonucleotides contain the sugar deoxyribose. Nucleotides can contain either a purine or a pyrimidine base. Nucleic acids are polymeric macromolecules made from nucleotide monomers. In DNA, the purine bases are adenine and guanine, while the pyrimidines are thymine and cytosine. RNA uses uracil in place of thymine. Adenine always pairs with thymine by 2 hydrogen bonds, while guanine pairs with cytosine through 3 hydrogen bonds, each due to their unique structures. A deoxyribonucleotide is the monomer, or single unit, of DNA, or deoxyribonucleic acid. Each deoxyribonucleotide comprises three parts: a nitrogenous base, a deoxyribose sugar, and one or more phosphate groups. The nitrogenous base is always bonded to the 1' carbon of the deoxyribose, which is distinguished from ribose by the presence of a proton on the 2' carbon rather than an -OH group. The phosphate groups bind to the 5' carbon of the sugar. When deoxyribonucleotides polymerize to form DNA, the phosphate group from one nucleotide will bond to the 3' carbon on another nucleotide, forming a phosphodiester bond via dehydration synthesis. 
New nucleotides are always added to the 3' carbon of the last nucleotide, so synthesis always proceeds from 5' to 3'. A phosphodiester bond is a group of strong covalent bonds between a phosphate group and two 5-carbon ring carbohydrates (pentoses) over two ester bonds. Phosphodiester bonds are central to most life on Earth, as they make up the backbone of the strands of DNA. In DNA and RNA, the phosphodiester bond is the linkage between the 3' carbon atom of one sugar molecule and the 5' carbon of another, deoxyribose in DNA and ribose in RNA. The phosphate groups in the phosphodiester bond are negatively-charged. Because the phosphate groups have a pKa near 0, they are negatively-charged at pH 7. This repulsion forces the phosphates to take opposite sides of the DNA strands and is neutralized by proteins (histones), metal ions such as magnesium, and polyamines. In order for the phosphodiester bond to be formed and the nucleotides to be joined, the tri-phosphate or di-phosphate forms of the nucleotide building blocks are broken apart to give off energy required to drive the enzyme-catalyzed reaction. When a single phosphate or two phosphates known as pyrophosphates break away and catalyze the reaction, the phosphodiester bond is formed. Hydrolysis of phosphodiester bonds can be catalyzed by the action of phosphodiesterases which play an important role in repairing DNA sequences. In biological systems, the phosphodiester bond between two ribonucleotides can be broken by alkaline hydrolysis because of the free 2' hydroxyl group. Forms of DNA A-DNA: A-DNA is one of the many possible double helical structures of DNA. A-DNA is thought to be one of three biologically active double helical structures along with B- and Z-DNA. It is a right-handed double helix fairly similar to the more common and well-known B-DNA form, but with a shorter more compact helical structure. It appears likely that it occurs only in dehydrated samples of DNA, such as those used in crystallographic experiments, and possibly is also assumed by DNA-RNA hybrid helices and by regions of double-stranded RNA. B-DNAThe most common form of DNA is B DNA. The DNA double helix is a spiral polymer of nucleic acids, held together by nucleotides which base pair together. In B-DNA, the most common double helical structure, the double helix is right-handed with about 10–10.5 nucleotides per turn. The double helix structure of DNA contains a major groove and minor groove, the major groove being wider than the minor groove. Given the difference in widths of the major groove and minor groove, many proteins which bind to DNA do so through the wider major groove. Z-DNA: Z-DNA is one of the many possible double helical structures of DNA. It is a left-handed double helical structure in which the double helix winds to the left in a zig-zag pattern (instead of to the right, like the more common B-DNA form). Z-DNA is thought to be one of three biologically active double helical structures along with A- and B-DNA. Z-DNA is quite different from the right-handed forms. In fact, Z-DNA is often compared against B-DNA in order to illustrate the major differences. The Z-DNA helix is left-handed and has a structure that repeats every 2 base pairs. The major and minor grooves, unlike A- and B-DNA, show little difference in width. 
Formation of this structure is generally unfavourable, although certain conditions can promote it, such as an alternating purine-pyrimidine sequence (especially poly(dGC)2), negative DNA supercoiling, or high salt and some cations (all at physiological temperature, 37 °C, and pH 7.3–7.4). Z-DNA can form a junction with B-DNA (called a "B-to-Z junction box") in a structure which involves the extrusion of a base pair. The Z-DNA conformation has been difficult to study because it does not exist as a stable feature of the double helix. Instead, it is a transient structure that is occasionally induced by biological activity and then quickly disappears.

| ||A-DNA||B-DNA||Z-DNA|
|Diameter||23 Å (2.3 nm)||20 Å (2.0 nm)||18 Å (1.8 nm)|
|Repeating unit||1 bp||1 bp||2 bp|
|Inclination of bp to axis||+19°||−1.2°||−9°|
|Rise/bp along axis||2.3 Å (0.23 nm)||3.32 Å (0.332 nm)||3.8 Å (0.38 nm)|
|Pitch/turn of helix||28.2 Å (2.82 nm)||33.2 Å (3.32 nm)||45.6 Å (4.56 nm)|
|Mean propeller twist||+18°||+16°||0°|
|Glycosyl angle||anti||anti||C: anti, G: syn|
|Sugar pucker||C3'-endo||C2'-endo||C: C2'-endo, G: C3'-endo|
bp = base pair, nm = nanometre. (A short numerical check of these helix parameters is given at the end of this passage.)

Noncoding genomic DNA

In molecular biology, noncoding DNA describes components of an organism's DNA sequences that do not encode protein sequences.

Pseudogenes

Pseudogenes are DNA sequences, related to known genes, that have lost their protein-coding ability or are otherwise no longer expressed in the cell. Pseudogenes arise from retrotransposition or genomic duplication of functional genes, and become "genomic fossils" that are nonfunctional due to mutations that prevent the transcription of the gene, such as within the gene promoter region, or fatally alter the translation of the gene, such as premature stop codons or frameshifts. Pseudogenes resulting from the retrotransposition of an RNA intermediate are known as processed pseudogenes; pseudogenes that arise from the genomic remains of duplicated genes or residues of inactivated genes are nonprocessed pseudogenes. While Dollo's Law suggests that the loss of function in pseudogenes is likely permanent, silenced genes may actually retain function for several million years and can be "reactivated" into protein-coding sequences, and a substantial number of pseudogenes are actively transcribed. Because pseudogenes are presumed to evolve without evolutionary constraint, they can serve as a useful model of the type and frequencies of various spontaneous genetic mutations.

Coiling of DNA

DNA supercoiling is important for DNA packaging within all cells. Because the length of DNA can be thousands of times that of a cell, packaging this genetic material into the cell or nucleus (in eukaryotes) is a difficult feat. Supercoiling of DNA reduces the space and allows for a lot more DNA to be packaged. In prokaryotes, plectonemic supercoils are predominant, because of the circular chromosome and relatively small amount of genetic material. In eukaryotes, DNA supercoiling exists on many levels of both plectonemic and solenoidal supercoils, with solenoidal supercoiling proving most effective in compacting the DNA. Solenoidal supercoiling is achieved with histones to form a 10 nm fiber. This fiber is further coiled into a 30 nm fiber, and further coiled upon itself numerous times more. DNA packaging is greatly increased during nuclear division events such as mitosis or meiosis, where DNA must be compacted and segregated to daughter cells.
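The helix parameters in the comparison table above are related by simple arithmetic (base pairs per turn = pitch / rise; contour length = rise × number of base pairs). The short sketch below is my own check of those relations using the tabulated A-, B- and Z-DNA values; it is not part of the wikibook text.

```python
# Helix geometry from the comparison table: rise per base pair and pitch per turn, in angstroms.
FORMS = {
    "A-DNA": {"rise_per_bp": 2.3,  "pitch": 28.2},
    "B-DNA": {"rise_per_bp": 3.32, "pitch": 33.2},
    "Z-DNA": {"rise_per_bp": 3.8,  "pitch": 45.6},
}

for name, p in FORMS.items():
    bp_per_turn = p["pitch"] / p["rise_per_bp"]       # base pairs per helical turn
    print(f"{name}: ~{bp_per_turn:.1f} bp per turn")  # B-DNA gives ~10, as quoted in the text

# Contour length of a hypothetical 1,000 bp stretch in each form (1 angstrom = 0.1 nm).
n_bp = 1000
for name, p in FORMS.items():
    length_nm = n_bp * p["rise_per_bp"] / 10.0
    print(f"{name}: {n_bp} bp is about {length_nm:.0f} nm long")
```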
Condensins and cohesins are Structural Maintenance of Chromosome (SMC) proteins that aid in the condensation of sister chromatids and the linkage of the centromere in sister chromatids. These SMC proteins induce positive supercoils. Supercoiling is also required for DNA/RNA synthesis. Because DNA must be unwound for DNA/RNA polymerase action, supercoils will result. The region ahead of the polymerase complex will be unwound; this stress is compensated with positive supercoils ahead of the complex. Behind the complex, DNA is rewound and there will be compensatory negative supercoils. It is important to note that topoisomerases such as DNA gyrase (a Type II topoisomerase) play a role in relieving some of the stress during DNA/RNA synthesis.

DNA supercoiling can be described numerically by changes in the 'linking number' Lk. The linking number is the most descriptive property of supercoiled DNA. Lk0, the number of turns in the relaxed (B-type) DNA plasmid/molecule, is determined by dividing the total base pairs of the molecule by the relaxed bp/turn, which, depending on the reference, is 10.4–10.5. Lk is merely the number of crosses a single strand makes across the other in a planar projection. The topology of the DNA is described by the equation

Lk = Tw + Wr

in which the linking number is the sum of Tw, the number of twists or turns of the double helix, and Wr, the number of coils or 'writhes'. For a closed DNA molecule, the sum of Tw and Wr, i.e. the linking number, does not change. However, there may be complementary changes in Tw and Wr without changing their sum. The change in the linking number, ΔLk = Lk − Lk0, is the actual number of turns in the plasmid/molecule, Lk, minus the number of turns in the relaxed plasmid/molecule, Lk0. If the DNA is negatively supercoiled, ΔLk < 0; negative supercoiling implies that the DNA is underwound. A standard expression independent of the molecule size is the "specific linking difference" or "superhelical density", denoted σ = ΔLk/Lk0. σ represents the number of turns added or removed relative to the total number of turns in the relaxed molecule/plasmid, indicating the level of supercoiling.

The linking number is a numerical invariant that describes the linking of two closed curves in three-dimensional space. Intuitively, the linking number represents the number of times that each curve winds around the other. The linking number is always an integer, but may be positive or negative depending on the orientation of the two curves. Since the linking number L of supercoiled DNA is the number of times the two strands are intertwined (and both strands remain covalently intact), L cannot change. The reference state (or parameter) L0 of a circular DNA duplex is its relaxed state. In this state, its writhe W = 0. Since L = T + W, in a relaxed state T = L. Thus, if we have a 400 bp relaxed circular DNA duplex, L ~ 40 (assuming ~10 bp per turn in B-DNA). Then T ~ 40.
- Positive supercoiling:
  - T = 0, W = 0, then L = 0
  - T = +3, W = 0, then L = +3
  - T = +2, W = +1, then L = +3
- Negative supercoiling:
  - T = 0, W = 0, then L = 0
  - T = -3, W = 0, then L = -3
  - T = -2, W = -1, then L = -3

Negative supercoils favor local unwinding of the DNA, allowing processes such as transcription, DNA replication, and recombination. Negative supercoiling is also thought to favour the transition between B-DNA and Z-DNA, and to moderate the interactions of DNA binding proteins involved in gene regulation.
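The linking-number bookkeeping above reduces to a few arithmetic identities (Lk = Tw + Wr, ΔLk = Lk − Lk0, σ = ΔLk/Lk0). The sketch below is my own worked example, not from the wikibook text; it uses the 400 bp plasmid mentioned above and an assumed ΔLk of −4.

```python
def relaxed_linking_number(n_bp, bp_per_turn=10.5):
    """Lk0: turns in the relaxed, closed-circular B-form molecule (the text quotes 10.4-10.5 bp/turn)."""
    return n_bp / bp_per_turn

def superhelical_density(lk, lk0):
    """sigma = delta_Lk / Lk0, the size-independent measure of supercoiling."""
    return (lk - lk0) / lk0

n_bp = 400
lk0 = relaxed_linking_number(n_bp, bp_per_turn=10.0)   # ~40, matching the text's estimate
print("Lk0 =", round(lk0, 1))

lk = lk0 - 4                     # assume an enzyme has removed 4 turns (hypothetical value)
sigma = superhelical_density(lk, lk0)
print("delta Lk =", round(lk - lk0, 1), " sigma =", round(sigma, 3))   # negative => underwound

# Lk = Tw + Wr: the same delta Lk can be partitioned differently between twist and writhe.
for tw, wr in [(lk, 0.0), (lk + 3, -3.0)]:
    print("Tw =", round(tw, 1), " Wr =", wr, " Lk =", round(tw + wr, 1))
```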
Histones: the DNA-binding proteins

Histones were discovered in 1884 by Albrecht Kossel. The word "histone" dates from the late 19th century and is from the German "Histon", of uncertain origin: perhaps from the Greek histanai or from histos. Until the early 1990s, histones were dismissed by most as inert packing material for eukaryotic nuclear DNA, based in part on the "ball and stick" models of Mark Ptashne and others, who believed transcription was activated by protein-DNA and protein-protein interactions on largely naked DNA templates, as is the case in bacteria. During the 1980s, work by Michael Grunstein demonstrated that eukaryotic histones repress gene transcription, and that the function of transcriptional activators is to overcome this repression. We now know that histones play both positive and negative roles in gene expression, forming the basis of the histone code. The discovery of the H5 histone appears to date back to the 1970s, and in classification it has been grouped with the linker histone H1.

The nucleosome core is formed of two H2A-H2B dimers and an H3-H4 tetramer, forming two nearly symmetrical halves by tertiary structure (C2 symmetry; one macromolecule is the mirror image of the other). The H2A-H2B dimers and the H3-H4 tetramer also show pseudodyad symmetry. The 4 'core' histones (H2A, H2B, H3 and H4) are relatively similar in structure and are highly conserved through evolution, all featuring a 'helix turn helix turn helix' motif (which allows easy dimerisation). They also share the feature of long 'tails' on one end of the amino acid structure, this being the location of post-translational modification (see below). It has been proposed that histone proteins are evolutionarily related to the helical part of the extended AAA+ ATPase domain, the C-domain, and to the N-terminal substrate recognition domain of Clp/Hsp100 proteins. Despite the differences in their topology, these three folds share a homologous helix-strand-helix (HSH) motif.

Using an electron paramagnetic resonance spin-labeling technique, British researchers measured the distances between the spools around which eukaryotic cells wind their DNA. They determined that the spacings range from 59 to 70 Å. In all, histones make five types of interactions with DNA:
- Helix-dipoles from alpha-helices in H2B, H3, and H4 cause a net positive charge to accumulate at the point of interaction with negatively charged phosphate groups on DNA
- Hydrogen bonds between the DNA backbone and the amide group on the main chain of histone proteins
- Nonpolar interactions between the histone and deoxyribose sugars on DNA
- Salt bridges and hydrogen bonds between side chains of basic amino acids (especially lysine and arginine) and phosphate oxygens on DNA
- Non-specific minor-groove insertions of the H3 and H2B N-terminal tails into two minor grooves each on the DNA molecule

The highly basic nature of histones, aside from facilitating DNA-histone interactions, contributes to their water solubility. Histones are subject to post-translational modification by enzymes, primarily on their N-terminal tails but also in their globular domains. Such modifications include methylation, citrullination, acetylation, phosphorylation, SUMOylation, ubiquitination, and ADP-ribosylation. This affects their function in gene regulation. In general, genes that are active have less bound histone, while inactive genes are highly associated with histones during interphase.
It also appears that the structure of histones has been evolutionarily conserved, as any deleterious mutations would be severely maladaptive.
Histone-DNA interaction
The core histone proteins contain a characteristic structural motif termed the "histone fold", which consists of three alpha-helices (α1-3) separated by two loops (L1-2). In solution the histones form H2A-H2B heterodimers and H3-H4 heterotetramers. Histones dimerise about their long α2 helices in an anti-parallel orientation, and in the case of H3 and H4, two such dimers form a 4-helix bundle stabilised by extensive H3-H3’ interaction. The H2A/H2B dimer binds onto the H3/H4 tetramer due to interactions between H4 and H2B, which include the formation of a hydrophobic cluster. The histone octamer is formed by a central H3/H4 tetramer sandwiched between two H2A/H2B dimers. Due to the highly basic charge of all four core histones, the histone octamer is only stable in the presence of DNA or very high salt concentrations. Nucleosomes form the fundamental repeating units of eukaryotic chromatin, which is used to pack the large eukaryotic genomes into the nucleus while still ensuring appropriate access to it (in mammalian cells approximately 2 m of linear DNA have to be packed into a nucleus of roughly 10 µm diameter). Nucleosomes are folded through a series of successively higher order structures to eventually form a chromosome; this both compacts DNA and creates an added layer of regulatory control which ensures correct gene expression. Nucleosomes are thought to carry epigenetically inherited information in the form of covalent modifications of their core histones. The nucleosome hypothesis was proposed in 1974 by Don and Ada Olins and by Roger Kornberg. The nucleosome core particle consists of about 146 bp of DNA wrapped in 1.67 left-handed superhelical turns around the histone octamer, consisting of 2 copies each of the core histones H2A, H2B, H3, and H4. Adjacent nucleosomes are joined by a stretch of free DNA termed "linker DNA" (which varies from 10-80 bp in length depending on species and tissue type).
DNA-binding domains
One or more DNA-binding domains (DBDs) are often part of a larger protein consisting of additional domains with differing function. The additional domains often regulate the activity of the DNA-binding domain. The function of DNA binding is either structural or involving transcription regulation, with the two roles sometimes overlapping. DNA-binding domains with functions involving DNA structure have biological roles in the replication, repair, storage, and modification of DNA, such as methylation. Many proteins involved in the regulation of gene expression contain DNA-binding domains. For example, proteins that regulate transcription by binding DNA are called transcription factors. The final output of most cellular signaling cascades is gene regulation. The DBD interacts with the nucleotides of DNA in a DNA sequence-specific or non-sequence-specific manner, but even non-sequence-specific recognition involves some sort of molecular complementarity between protein and DNA. DNA recognition by the DBD can occur at the major or minor groove of DNA, or at the sugar-phosphate DNA backbone (see the structure of DNA). Each specific type of DNA recognition is tailored to the protein's function. For example, the DNA-cutting enzyme DNAse I cuts DNA almost randomly and so must bind to DNA in a non-sequence-specific manner.
But, even so, DNAse I recognizes a certain 3-D DNA structure, yielding a somewhat specific DNA cleavage pattern that can be useful for studying DNA recognition by a technique called DNA footprinting. Many DNA-binding domains must recognize specific DNA sequences, such as DBDs of transcription factors that activate specific genes, or those of enzymes that modify DNA at specific sites, like restriction enzymes and telomerase. The hydrogen bonding pattern in the DNA major groove is less degenerate than that of the DNA minor groove, providing a more attractive site for sequence-specific DNA recognition. The specificity of DNA-binding proteins can be studied using many biochemical and biophysical techniques, such as gel electrophoresis, analytical ultracentrifugation, calorimetry, DNA mutation, protein structure mutation or modification, nuclear magnetic resonance, x-ray crystallography, surface plasmon resonance, electron paramagnetic resonance, cross-linking and Microscale Thermophoresis (MST).
Types of DNA-binding domains
Originally discovered in bacteria, the helix-turn-helix motif is commonly found in repressor proteins and is about 20 amino acids long. In eukaryotes, the homeodomain comprises 2 helices, one of which recognizes the DNA (the recognition helix). They are common in proteins that regulate developmental processes (PROSITE HTH).
Figure: Crystallographic structure (PDB 1R4O) of a dimer of the zinc-finger-containing DBD of the glucocorticoid receptor (top) bound to DNA (bottom). Zinc atoms are represented by grey spheres and the coordinating cysteine sidechains are depicted as sticks.
The zinc finger
This domain is generally between 23 and 28 amino acids long and is stabilized by coordinating zinc ions with regularly spaced zinc-coordinating residues (either histidines or cysteines). The most common class of zinc finger (Cys2His2) coordinates a single zinc ion and consists of a recognition helix and a 2-strand beta-sheet. In transcription factors these domains are often found in arrays (usually separated by short linker sequences) and adjacent fingers are spaced at 3 basepair intervals when bound to DNA.
The basic leucine zipper (bZIP) domain contains an alpha helix with a leucine at every 7th amino acid. If two such helices find one another, the leucines can interact as the teeth in a zipper, allowing dimerization of two proteins. When binding to the DNA, basic amino acid residues bind to the sugar-phosphate backbone while the helices sit in the major grooves. It regulates gene expression. The bZIP family of transcription factors consists of a basic region that interacts with the major groove of a DNA molecule through hydrogen bonding, and a hydrophobic leucine zipper region that is responsible for dimerization.
Consisting of about 110 amino acids, the winged helix (WH) domain has four helices and a two-strand beta-sheet.
Winged helix-turn-helix
The winged helix-turn-helix domain (wHTH; SCOP 46785) is typically 85-90 amino acids long. It is formed by a 3-helical bundle and a 4-strand beta-sheet (wing).
The helix-loop-helix domain is found in some transcription factors and is characterized by two α helices connected by a loop. One helix is typically smaller and, due to the flexibility of the loop, allows dimerization by folding and packing against another helix. The larger helix typically contains the DNA-binding regions. HMG-box domains are found in high mobility group proteins which are involved in a variety of DNA-dependent processes like replication and transcription.
The domain consists of three alpha helices separated by loops.
DNA sequencing
RNA sequencing was one of the earliest forms of nucleotide sequencing. The major landmark of RNA sequencing is the sequence of the first complete gene and the complete genome of Bacteriophage MS2, identified and published by Walter Fiers and his coworkers at the University of Ghent (Ghent, Belgium) between 1972 and 1976. Prior to the development of rapid DNA sequencing methods in the early 1970s by Frederick Sanger at the University of Cambridge in England, and by Walter Gilbert and Allan Maxam at Harvard, a number of laborious methods were used. For instance, in 1973, Gilbert and Maxam reported the sequence of 24 basepairs using a method known as wandering-spot analysis. The chain-termination method developed by Sanger and coworkers in 1975 soon became the method of choice, owing to its relative ease and reliability.
Maxam and Gilbert method
In 1976–1977, Allan Maxam and Walter Gilbert developed a DNA sequencing method based on chemical modification of DNA and subsequent cleavage at specific bases. Although Maxam and Gilbert published their chemical sequencing method two years after the ground-breaking paper of Sanger and Coulson on plus-minus sequencing, Maxam–Gilbert sequencing rapidly became more popular, since purified DNA could be used directly, while the initial Sanger method required that each read start be cloned for production of single-stranded DNA. However, with the improvement of the chain-termination method (see below), Maxam-Gilbert sequencing has fallen out of favour due to its technical complexity prohibiting its use in standard molecular biology kits, extensive use of hazardous chemicals, and difficulties with scale-up. The method requires radioactive labeling at one 5' end of the DNA (typically by a kinase reaction using gamma-32P ATP) and purification of the DNA fragment to be sequenced. Chemical treatment generates breaks at a small proportion of one or two of the four nucleotide bases in each of four reactions (G, A+G, C, C+T). For example, the purines (A+G) are depurinated using formic acid, the guanines (and to some extent the adenines) are methylated by dimethyl sulfate, and the pyrimidines (C+T) are hydrolysed using hydrazine. The addition of salt (sodium chloride) to the hydrazine reaction inhibits the reaction of thymine for the C-only reaction. The modified DNAs are then cleaved by hot piperidine at the position of the modified base. The concentration of the modifying chemicals is controlled to introduce on average one modification per DNA molecule. Thus a series of labeled fragments is generated, from the radiolabeled end to the first "cut" site in each molecule. The fragments in the four reactions are electrophoresed side by side in denaturing acrylamide gels for size separation. To visualize the fragments, the gel is exposed to X-ray film for autoradiography, yielding a series of dark bands each corresponding to a radiolabeled DNA fragment, from which the sequence may be inferred. Also sometimes known as "chemical sequencing", this method led to the Methylation Interference Assay used to map DNA-binding sites for DNA-binding proteins.
Dideoxynucleotide chain-termination methods
Because the chain-terminator method (or Sanger method, after its developer Frederick Sanger) is more efficient and uses fewer toxic chemicals and lower amounts of radioactivity than the method of Maxam and Gilbert, it rapidly became the method of choice.
The key principle of the Sanger method was the use of dideoxynucleotide triphosphates (ddNTPs) as DNA chain terminators. The classical chain-termination method requires a single-stranded DNA template, a DNA primer, a DNA polymerase, normal deoxynucleotide triphosphates (dNTPs), and modified nucleotides (dideoxyNTPs) that terminate DNA strand elongation. These ddNTPs may also be radioactively or fluorescently labelled for detection in automated sequencing machines. The DNA sample is divided into four separate sequencing reactions, containing all four of the standard deoxynucleotides (dATP, dGTP, dCTP and dTTP) and the DNA polymerase. To each reaction is added only one of the four dideoxynucleotides (ddATP, ddGTP, ddCTP, or ddTTP), which are the chain-terminating nucleotides, lacking a 3'-hydroxyl (OH) group required for the formation of a phosphodiester bond between two nucleotides, thus terminating DNA strand extension and resulting in DNA fragments of varying length. The newly synthesized and labelled DNA fragments are heat denatured, and separated by size (with a resolution of just one nucleotide) by gel electrophoresis on a denaturing polyacrylamide-urea gel, with each of the four reactions run in one of four individual lanes (lanes A, T, G, C); the DNA bands are then visualized by autoradiography or UV light, and the DNA sequence can be directly read off the X-ray film or gel image. In the image on the right, X-ray film was exposed to the gel, and the dark bands correspond to DNA fragments of different lengths. A dark band in a lane indicates a DNA fragment that is the result of chain termination after incorporation of a dideoxynucleotide (ddATP, ddGTP, ddCTP, or ddTTP). The relative positions of the different bands among the four lanes are then used to read (from bottom to top) the DNA sequence. Technical variations of chain-termination sequencing include tagging with nucleotides containing radioactive phosphorus for radiolabelling, or using a primer labeled at the 5’ end with a fluorescent dye. Dye-primer sequencing facilitates reading in an optical system for faster and more economical analysis and automation. The later development by Leroy Hood and coworkers of fluorescently labeled ddNTPs and primers set the stage for automated, high-throughput DNA sequencing. Chain-termination methods have greatly simplified DNA sequencing. For example, chain-termination-based kits are commercially available that contain the reagents needed for sequencing, pre-aliquoted and ready to use. Limitations include non-specific binding of the primer to the DNA, affecting accurate read-out of the DNA sequence, and DNA secondary structures affecting the fidelity of the sequence.
Dye-terminator sequencing
Dye-terminator sequencing utilizes labelling of the chain terminator ddNTPs, which permits sequencing in a single reaction, rather than four reactions as in the labelled-primer method. In dye-terminator sequencing, each of the four dideoxynucleotide chain terminators is labelled with a fluorescent dye, each of which emits light at a different wavelength. Owing to its greater expediency and speed, dye-terminator sequencing is now the mainstay in automated sequencing. Its limitations include dye effects due to differences in the incorporation of the dye-labelled chain terminators into the DNA fragment, resulting in unequal peak heights and shapes in the electronic DNA sequence trace chromatogram after capillary electrophoresis (see figure to the left).
This problem has been addressed with the use of modified DNA polymerase enzyme systems and dyes that minimize incorporation variability, as well as methods for eliminating "dye blobs". The dye-terminator sequencing method, along with automated high-throughput DNA sequence analyzers, is now being used for the vast majority of sequencing projects. Common challenges of DNA sequencing include poor quality in the first 15–40 bases of the sequence and deteriorating quality of sequencing traces after 700–900 bases. Base calling software typically gives an estimate of quality to aid in quality trimming. In cases where DNA fragments are cloned before sequencing, the resulting sequence may contain parts of the cloning vector. In contrast, PCR-based cloning and emerging sequencing technologies based on pyrosequencing often avoid using cloning vectors. Recently, one-step Sanger sequencing (combined amplification and sequencing) methods such as Ampliseq and SeqSharp have been developed that allow rapid sequencing of target genes without cloning or prior amplification. Current methods can directly sequence only relatively short (300–1000 nucleotides long) DNA fragments in a single reaction. The main obstacle to sequencing DNA fragments above this size limit is insufficient power of separation for resolving large DNA fragments that differ in length by only one nucleotide. In all cases the use of a primer with a free 3'-hydroxyl end, from which the polymerase can extend, is essential.
Automation and sample preparation
Automated DNA-sequencing instruments (DNA sequencers) can sequence up to 384 DNA samples in a single batch (run) in up to 24 runs a day. DNA sequencers carry out capillary electrophoresis for size separation, detection and recording of dye fluorescence, and data output as fluorescent peak trace chromatograms. Sequencing reactions by thermocycling, cleanup and re-suspension in a buffer solution before loading onto the sequencer are performed separately. A number of commercial and non-commercial software packages can trim low-quality DNA traces automatically. These programs score the quality of each peak and remove low-quality base peaks (generally located at the ends of the sequence). The accuracy of such algorithms is inferior to visual examination by a human operator, but sufficient for automated processing of large sequence data sets.
Polymerase chain reaction
PCR is used to amplify a specific region of a DNA strand (the DNA target). Most PCR methods typically amplify DNA fragments of up to ~10 kilo base pairs (kb), although some techniques allow for amplification of fragments up to 40 kb in size. A basic PCR set-up requires several components and reagents. These components include:
- DNA template that contains the DNA region (target) to be amplified.
- Two primers that are complementary to the 3' (three prime) ends of each of the sense and anti-sense strand of the DNA target.
- Taq polymerase or another DNA polymerase with a temperature optimum at around 70 °C.
- Deoxynucleotide triphosphates (dNTPs), the building-blocks from which the DNA polymerase synthesizes a new DNA strand.
- Buffer solution, providing a suitable chemical environment for optimum activity and stability of the DNA polymerase.
- Divalent cations, magnesium or manganese ions; generally Mg2+ is used, but Mn2+ can be utilized for PCR-mediated DNA mutagenesis, as a higher Mn2+ concentration increases the error rate during DNA synthesis.
- Monovalent cations, typically potassium ions.
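Because every reaction receives the same cocktail, these components are usually combined into a single "master mix" that is then split across tubes. The sketch below scales hypothetical per-reaction volumes for a chosen number of reactions with a small excess for pipetting losses; all volumes are placeholders for illustration, not a validated recipe.

# Sketch: scaling a PCR master mix. Per-reaction volumes (microlitres) are
# hypothetical placeholders; real volumes depend on the kit and polymerase used.

PER_REACTION_UL = {
    "10x buffer":     2.5,
    "dNTP mix":       0.5,
    "forward primer": 0.5,
    "reverse primer": 0.5,
    "Mg2+ stock":     1.5,
    "polymerase":     0.2,
    "water":         18.3,   # brings each reaction to 24 ul before template is added
}

def master_mix(n_reactions, excess=0.1):
    """Total volume of each component for n reactions plus a safety excess."""
    scale = n_reactions * (1 + excess)
    return {name: round(vol * scale, 1) for name, vol in PER_REACTION_UL.items()}

for component, volume in master_mix(24).items():
    print("%-16s %6.1f ul" % (component, volume))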
The PCR is commonly carried out in a reaction volume of 10–200 μl in small reaction tubes (0.2–0.5 ml volumes) in a thermal cycler. The thermal cycler heats and cools the reaction tubes to achieve the temperatures required at each step of the reaction (see below). Many modern thermal cyclers make use of the Peltier effect, which permits both heating and cooling of the block holding the PCR tubes simply by reversing the electric current. Thin-walled reaction tubes permit favorable thermal conductivity to allow for rapid thermal equilibration. Most thermal cyclers have heated lids to prevent condensation at the top of the reaction tube. Older thermocyclers lacking a heated lid require a layer of oil on top of the reaction mixture or a ball of wax inside the tube.
Figure 1: Schematic drawing of the PCR cycle. (1) Denaturing at 94–96 °C. (2) Annealing at ~65 °C. (3) Elongation at 72 °C. Four cycles are shown here. The blue lines represent the DNA template to which primers (red arrows) anneal that are extended by the DNA polymerase (light green circles), to give shorter DNA products (green lines), which themselves are used as templates as PCR progresses.
Typically, PCR consists of a series of 20-40 repeated temperature changes, called cycles, with each cycle commonly consisting of two to three discrete temperature steps (usually three). The cycling is often preceded by a single temperature step (called hold) at a high temperature (>90 °C), and followed by one hold at the end for final product extension or brief storage. The temperatures used and the length of time they are applied in each cycle depend on a variety of parameters. These include the enzyme used for DNA synthesis, the concentration of divalent ions and dNTPs in the reaction, and the melting temperature (Tm) of the primers.
- Initialization step: This step consists of heating the reaction to a temperature of 94–96 °C (or 98 °C if extremely thermostable polymerases are used), which is held for 1–9 minutes. It is only required for DNA polymerases that require heat activation by hot-start PCR.
- Denaturation step: This step is the first regular cycling event and consists of heating the reaction to 94–98 °C for 20–30 seconds. It causes melting of the DNA template by disrupting the hydrogen bonds between complementary bases, yielding single-stranded DNA molecules.
- Annealing step: The reaction temperature is lowered to 50–65 °C for 20–40 seconds, allowing annealing of the primers to the single-stranded DNA template. Typically the annealing temperature is about 3-5 degrees Celsius below the Tm of the primers used. Stable DNA-DNA hydrogen bonds are only formed when the primer sequence very closely matches the template sequence. The polymerase binds to the primer-template hybrid and begins DNA synthesis.
- Extension/elongation step: The temperature at this step depends on the DNA polymerase used; Taq polymerase has its optimum activity temperature at 75–80 °C, and commonly a temperature of 72 °C is used with this enzyme. At this step the DNA polymerase synthesizes a new DNA strand complementary to the DNA template strand by adding dNTPs that are complementary to the template in the 5' to 3' direction, condensing the 5'-phosphate group of the dNTPs with the 3'-hydroxyl group at the end of the nascent (extending) DNA strand. The extension time depends both on the DNA polymerase used and on the length of the DNA fragment to be amplified. As a rule-of-thumb, at its optimum temperature, the DNA polymerase will polymerize a thousand bases per minute.
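Two of the numbers just described (the annealing temperature and the extension time) are routinely estimated before a run is set up. The sketch below uses the simple Wallace rule, Tm ≈ 2(A+T) + 4(G+C) in °C, which is only a rough approximation valid for short primers, together with the "thousand bases per minute" rule of thumb quoted above; the primer sequences and amplicon length are hypothetical.

# Sketch: rough cycling parameters from primer sequences and amplicon length.
# Wallace rule Tm = 2*(A+T) + 4*(G+C) is a crude estimate for short primers.

def wallace_tm(primer):
    primer = primer.upper()
    at = primer.count("A") + primer.count("T")
    gc = primer.count("G") + primer.count("C")
    return 2 * at + 4 * gc              # degrees Celsius

def annealing_temp(fwd, rev, offset=5):
    """Anneal a few degrees below the lower primer Tm (the 3-5 C guideline)."""
    return min(wallace_tm(fwd), wallace_tm(rev)) - offset

def extension_seconds(amplicon_bp, bases_per_minute=1000):
    """Rule of thumb: ~1000 bases per minute at the polymerase's optimum."""
    return 60.0 * amplicon_bp / bases_per_minute

fwd = "AGTCCTGACGTAAGCTTGG"   # hypothetical primer sequences, for illustration only
rev = "TTGGCATCGTAAGCTCAGT"
print(wallace_tm(fwd), wallace_tm(rev))   # rough Tm of each primer
print(annealing_temp(fwd, rev))           # suggested annealing temperature
print(extension_seconds(850))             # 51 s for a hypothetical 850 bp product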
Under optimum conditions, i.e., if there are no limitations due to limiting substrates or reagents, at each extension step the amount of DNA target is doubled, leading to exponential (geometric) amplification of the specific DNA fragment.
- Final elongation: This single step is occasionally performed at a temperature of 70–74 °C for 5–15 minutes after the last PCR cycle to ensure that any remaining single-stranded DNA is fully extended.
- Final hold: This step at 4–15 °C for an indefinite time may be employed for short-term storage of the reaction.
To check whether the PCR generated the anticipated DNA fragment (also sometimes referred to as the amplimer or amplicon), agarose gel electrophoresis is employed for size separation of the PCR products. The size(s) of PCR products is determined by comparison with a DNA ladder (a molecular weight marker), which contains DNA fragments of known size, run on the gel alongside the PCR products.
Facts to be remembered
DNA polymerases are enzymes that synthesize polynucleotide chains from nucleoside triphosphates.
- In 1865 Gregor Mendel's paper, Experiments on Plant Hybridization, was published.
- In 1869, DNA was first isolated by the Swiss physician Friedrich Miescher, who discovered a microscopic substance in the pus of discarded surgical bandages.
- From 1880-1890 Walther Flemming, Eduard Strasburger, and Edouard van Beneden elucidate chromosome distribution during cell division.
- In 1889 Hugo de Vries postulates that "inheritance of specific traits in organisms comes in particles", naming such particles "(pan)genes".
- In 1903 Walter Sutton hypothesizes that chromosomes, which segregate in a Mendelian fashion, are hereditary units.
- In 1905 William Bateson coins the term "genetics" in a letter to Adam Sedgwick and at a meeting in 1906.
- In 1908 the Hardy-Weinberg law is derived.
- In 1910 Thomas Hunt Morgan shows that genes reside on chromosomes.
- In 1913 Alfred Sturtevant makes the first genetic map of a chromosome.
- In 1913 gene maps show chromosomes containing linearly arranged genes.
- In 1918 Ronald Fisher publishes "The Correlation Between Relatives on the Supposition of Mendelian Inheritance"; the modern synthesis of genetics and evolutionary biology starts. See population genetics.
- In 1928 Frederick Griffith discovers that hereditary material from dead bacteria can be incorporated into live bacteria: traits of the "smooth" form of the Pneumococcus could be transferred to the "rough" form of the same bacteria by mixing killed "smooth" bacteria with the live "rough" form (see Griffith's experiment).
- In 1931 crossing over is identified as the cause of recombination.
- In 1933 Jean Brachet is able to show that DNA is found in chromosomes and that RNA is present in the cytoplasm of all cells.
- In 1937 William Astbury produced the first X-ray diffraction patterns that showed that DNA had a regular structure.
- In 1952, Alfred Hershey and Martha Chase in the Hershey–Chase experiment showed that DNA is the genetic material of the T2 phage.
- In 1953, James D. Watson and Francis Crick suggested the double-helix model of DNA structure.
Purines are found in high concentration in meat and meat products, especially internal organs such as liver and kidney. Examples of high-purine sources include: sweetbreads, anchovies, sardines, liver, beef kidneys, brains, meat extracts (e.g., Oxo, Bovril), herring, mackerel, scallops, game meats, beer (from the yeast) and gravy.
- bp = base pair(s); one bp corresponds to circa 3.4 Å of length along the strand
- kb (= kbp) = kilo base pairs = 1,000 bp
- Mb = mega base pairs = 1,000,000 bp
- Gb = giga base pairs = 1,000,000,000 bp
Analysis of DNA topology uses three values:
- L = linking number - the number of times one DNA strand wraps around the other. It is an integer for a closed loop and constant for a closed topological domain.
- T = twist - total number of turns in the double-stranded DNA helix. This will normally tend to approach the number of turns that a topologically open double-stranded DNA helix makes free in solution: number of bases/10.5, assuming there are no intercalating agents (e.g., chloroquine) or other elements modifying the stiffness of the DNA.
- W = writhe - number of turns of the double-stranded DNA helix around the superhelical axis.
L = T + W and ΔL = ΔT + ΔW
Any change of T in a closed topological domain must be balanced by a change in W, and vice versa. This results in higher order structure of DNA. A circular DNA molecule with a writhe of 0 will be circular. If the twist of this molecule is subsequently increased or decreased by supercoiling then the writhe will be appropriately altered, making the molecule undergo plectonemic or toroidal superhelical coiling. When the ends of a piece of double-stranded helical DNA are joined so that it forms a circle, the strands are topologically knotted. This means the single strands cannot be separated by any process that does not involve breaking a strand (such as heating). The task of un-knotting topologically linked strands of DNA falls to enzymes known as topoisomerases. These enzymes are dedicated to un-knotting circular DNA by cleaving one or both strands so that another double or single stranded segment can pass through. This un-knotting is required for the replication of circular DNA and various types of recombination in linear DNA which have similar topological constraints.
- 1972 Development of recombinant DNA technology, which permits isolation of defined fragments of DNA; prior to this, the only accessible samples for sequencing were from bacteriophage or virus DNA.
- 1977 The first complete DNA genome to be sequenced is that of bacteriophage φX174.
- 1977 Allan Maxam and Walter Gilbert publish "DNA sequencing by chemical degradation". Frederick Sanger, independently, publishes "DNA sequencing with chain-terminating inhibitors".
- 1984 Medical Research Council scientists decipher the complete DNA sequence of the Epstein-Barr virus, 170 kb.
- 1986 Leroy E. Hood's laboratory at the California Institute of Technology and Lloyd M. Smith announce the first semi-automated DNA sequencing machine.
- 1987 Applied Biosystems markets the first automated sequencing machine, the model ABI 370.
- 1990 The U.S. National Institutes of Health (NIH) begins large-scale sequencing trials on Mycoplasma capricolum, Escherichia coli, Caenorhabditis elegans, and Saccharomyces cerevisiae (at US$0.75/base).
- 1991 Sequencing of human expressed sequence tags begins in Craig Venter's lab, an attempt to capture the coding fraction of the human genome.
- 1995 Craig Venter, Hamilton Smith, and colleagues at The Institute for Genomic Research (TIGR) publish the first complete genome of a free-living organism, the bacterium Haemophilus influenzae. The circular chromosome contains 1,830,137 bases and its publication in the journal Science marks the first use of whole-genome shotgun sequencing, eliminating the need for initial mapping efforts.
- 1996 Pål Nyrén and his student Mostafa Ronaghi at the Royal Institute of Technology in Stockholm publish their method of pyrosequencing.
- 1998 Phil Green and Brent Ewing of the University of Washington publish "phred" for sequencer data analysis.
- 2001 A draft sequence of the human genome is published.
- 2004 454 Life Sciences markets a parallelized version of pyrosequencing. The first version of their machine reduced sequencing costs 6-fold compared to automated Sanger sequencing, and was the second of a new generation of sequencing technologies, after MPSS.
List of bases found in DNA and RNA: (table not reproduced; its columns were Name, 3-D structure, Abbreviation, Structural formula, Classification, and Found in)
- Griffith experiment
- Hershey–Chase experiment
- Hershey, A.D. and Chase, M. (1952) Independent functions of viral protein and nucleic acid in growth of bacteriophage. J Gen Physiol. 36:39–56.
- Avery–MacLeod–McCarty experiment
- Base pair
- Phosphodiester bond
- Noncoding DNA
- DNA supercoil
- Vologodskii AV, Lukashin AV, Anshelevich VV, et al. (1979). "Fluctuations in superhelical DNA". Nucleic Acids Res 6: 967–682. doi:10.1093/nar/6.3.967.
- H. S. Chawla (2002). Introduction to Plant Biotechnology. Science Publishers. ISBN 1578082285.
- Kayne PS, Kim UJ, Han M, Mullen JR, Yoshizaki F, Grunstein M. Extremely conserved histone H4 N terminus is dispensable for growth but essential for repressing the silent mating loci in yeast. Cell. 1988 Oct 7;55(1):27-39. PMID: 3048701
- Crane-Robinson C, Dancy SE, Bradbury EM, Garel A, Kovacs AM, Champagne M, Daune M (August 1976). "Structural studies of chicken erythrocyte histone H5". Eur. J. Biochem. 67 (2): 379–88. doi:10.1111/j.1432-1033.1976.tb10702.x. PMID 964248.
- Aviles FJ, Chapman GE, Kneale GG, Crane-Robinson C, Bradbury EM (August 1978). "The conformation of histone H5. Isolation and characterisation of the globular segment". Eur. J. Biochem. 88 (2): 363–71. doi:10.1111/j.1432-1033.1978.tb12457.x. PMID 689022.
- DNA sequencing
- Smith LM, Sanders JZ, Kaiser RJ, et al (1986). "Fluorescence detection in automated DNA sequence analysis". Nature 321 (6071): 674–9. doi:10.1038/321674a0. PMID 3713851. "We have developed a method for the partial automation of DNA sequence analysis. Fluorescence detection of the DNA fragments is accomplished by means of a fluorophore covalently attached to the oligonucleotide primer used in enzymatic DNA sequence analysis. A different coloured fluorophore is used for each of the reactions specific for the bases A, C, G and T. The reaction mixtures are combined and co-electrophoresed down a single polyacrylamide gel tube, the separated fluorescent bands of DNA are detected near the bottom of the tube, and the sequence information is acquired directly by computer."
- Smith LM, Fung S, Hunkapiller MW, Hunkapiller TJ, Hood LE (April 1985). "The synthesis of oligonucleotides containing an aliphatic amino group at the 5' terminus: synthesis of fluorescent DNA primers for use in DNA sequence analysis". Nucleic Acids Res. 13 (7): 2399–412. doi:10.1093/nar/13.7.2399. PMID 4000959. PMC 341163. http://nar.oxfordjournals.org/cgi/pmidlookup?view=long&pmid=4000959.
- "Phred - Quality Base Calling". http://www.phrap.com/phred/. Retrieved 2011-02-24.
- "Base-calling for next-generation sequencing platforms — Brief Bioinform". http://bib.oxfordjournals.org/content/early/2011/01/18/bib.bbq077.full. Retrieved 2011-02-24.
- Murphy, K.; Berg, K.; Eshleman, J. (2005). "Sequencing of genomic DNA by combined amplification and cycle sequencing reaction". 
Clinical chemistry 51 (1): 35–39. - Sengupta, D.; Cookson, B. (2010). "SeqSharp: A general approach for improving cycle-sequencing that facilitates a robust one-step combined amplification and sequencing method". The Journal of molecular diagnostics : JMD 12 (3): 272–277. - Polymerase chain reaction
http://en.wikibooks.org/wiki/An_Introduction_to_Molecular_Biology/DNA_the_unit_of_life
13
59
7.3 Confidence Intervals for Means
In chapter 4 we have seen how to compute the mean, median, standard deviation, and other descriptive statistics for a given data set, usually a sample from an underlying population. In this section we want to focus on estimating the mean of a population, given that we can compute the mean of a particular sample. In other words, if a sample of size, say, 100 is selected at random from some population, it is easy to compute the mean of that sample. It is equally easy to then use that sample mean as an estimate for the unknown population mean. But just because it's easy to do does not necessarily mean it's the right thing to do ... For example, suppose we randomly selected 100 people, measured their height, and computed the average height for our sample to be, say, 164.432 cm. If we now wanted to know the average height of everyone in our population (say everyone in the US), it seems reasonable to say that the average height of everyone is 164.432 cm. However, if we think about it, it is of course highly unlikely that the average for the entire population comes out exactly the same as the average for our sample of just 100 people. It is much more likely that our sample mean of 164.432 cm is only approximately equal to the (unknown) population mean. It is the purpose of this chapter to clarify, using probabilities, what exactly we mean by "approximately equal". In other words: can we use a sample mean to estimate an (unknown) population mean, and - most importantly - how accurate is our estimated answer?
Example: Consider some data for approximately 400 cars. We assume that this data has been collected at random. We would like to make predictions about all automobiles, based on that random sample. In particular, the data set lists miles per gallon, engine size, and weight of 400 cars, but we would like to know the average miles per gallon, engine size, and weight of all cars, based on this sample. It is of course simple to compute the mean of the various variables of the sample, using Excel. For our sample data we find that the mean gas mileage of the sample is 23.5 mpg with a standard deviation of 7.82 mpg, using 398 data values. But we need to know how well this sample mean predicts the actual and unknown population mean for the entire distribution. Our best guess is clearly that the average mpg for all cars is 23.5 mpg - it's after all pretty much the only number we have - but how good is that estimate?
In fact, we know more than just the sample mean. We also know that all sample means are distributed normally, according to the Central Limit Theorem, and that the distribution of all sample means (of which ours is just one) is approximately normal with a mean of 23.5 mpg and a standard deviation of 7.82 / sqrt(398). Using that information, let's make a quick detour into "mathematics land" - we will in a minute list a recipe for what we need to do, but for now, bear with me:
- Let's say we want to estimate an (unknown) population mean so that we are, say, 95% certain that the estimate is correct (or 90%, or 99%, or any other pre-determined notion of certainty we might have).
- To provide a reasonable estimate, we need to compute a lower number a and an upper number b in such a way as to be 95% sure that our (unknown) population mean is between a and b.
That interval (a, b) is known as a 95% confidence interval for the unknown mean.
Using standard probability notation we can rephrase this: we want to find a and b so that P(a < m < b) = 0.95, i.e. the probability that the (unknown) mean is between a and b should be 0.95, or 95%. Using symmetry and focusing on the part of the distribution that we can compute with Excel, this is equivalent to finding a value of a such that P(x < a) = 0.025, where x is normally distributed.
If the distribution had mean 0 and standard deviation 1 we could use some trial-and-error in Excel to compute the desired number a - note that if we assume that the mean is 0, a should be negative. In other words, we use Excel to compute NORMDIST(a, 0, 1, TRUE), where we guess some values of a:
- NORMDIST(-0.5,0,1,TRUE) = 0.308537539 (too much probability)
- NORMDIST(-1.5,0,1,TRUE) = 0.066807201 (still too much)
- NORMDIST(-2.0,0,1,TRUE) = 0.022750132 (now it's too little)
- NORMDIST(-1.9,0,1,TRUE) = 0.02871656 (again, too much)
- NORMDIST(-1.95,0,1,TRUE) = 0.02558806 (a little too much)
- NORMDIST(-1.96,0,1,TRUE) = 0.024997895 (just about right)
Thus, if the mean was 0 and the standard deviation was 1, the number a = -1.96 would be just about right, and using symmetry we can conclude that b = +1.96. However, we don't know the mean and standard deviation of our population, so what can we do ... Central Limit Theorem to the rescue! According to the Central Limit Theorem, the mean and standard deviation of the distribution of all sample means is m and s / sqrt(N), where m is the sample mean and s is the sample standard deviation. Thus, the mean we are supposed to use is the sample mean m and the standard deviation s / sqrt(N), according to the Central Limit Theorem. Putting everything together, we have computed a 95% confidence interval as follows: from m - 1.96 * s / sqrt(N) to m + 1.96 * s / sqrt(N). Note: the term s / sqrt(N) is also known as the Standard Error.
The above explanation is perhaps somewhat confusing, and there are some parts where I've glossed over some important details. But the resulting formulas are simple, and those formulas will be what we want to focus on. In addition to the number 1.96 that we have derived for a 95% confidence interval, other numbers can be derived in a similar way for the 90% and 99% confidence intervals:
Confidence Interval for Mean (large sample size N > 30)
Suppose you have a sample with N data points, which has a sample mean m and standard deviation s. Then:
- To compute a 90% confidence interval for the unknown population mean, compute the numbers: m - 1.645 * s / sqrt(N) and m + 1.645 * s / sqrt(N). Then there is a 90% probability that the unknown population mean is between these values.
- To compute a 95% confidence interval for the unknown population mean, compute the numbers: m - 1.96 * s / sqrt(N) and m + 1.96 * s / sqrt(N). Then there is a 95% probability that the unknown population mean is between these values.
- To compute a 99% confidence interval for the unknown population mean, compute the numbers: m - 2.58 * s / sqrt(N) and m + 2.58 * s / sqrt(N). Then there is a 99% probability that the unknown population mean is between these values.
Using these formulas we can now estimate an unknown population mean with 90%, 95%, or 99% certainty. Other percentages are also possible, but these are the most frequently used ones.
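The three recipes above fit in a few lines of code. The following sketch (plain Python, using the same multipliers 1.645, 1.96 and 2.58) reproduces the large-sample confidence intervals; the numbers fed to it are the ones from the worked example that follows.

# Sketch: large-sample (N > 30) confidence interval for a population mean,
# using the z multipliers quoted above.
import math

Z = {90: 1.645, 95: 1.96, 99: 2.58}

def confidence_interval(mean, sd, n, level=95):
    """Return (lower, upper) for the chosen confidence level."""
    margin = Z[level] * sd / math.sqrt(n)   # the margin of error
    return mean - margin, mean + margin

# The gas-mileage sample used in the text: m = 23.5, s = 7.82, N = 398.
for level in (90, 95, 99):
    low, high = confidence_interval(23.5, 7.82, 398, level)
    print("%d%%: %.2f to %.2f" % (level, low, high))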
Returning to our earlier example, where m = 23.5, s = 7.82, and N = 398 we have:
- 90% confidence interval: from 23.5 - 1.645 * 7.82 / sqrt(398) = 22.85 to 23.5 + 1.645 * 7.82 / sqrt(398) = 24.14, thus: we are 90% certain that the average mpg for all cars is between 22.85 and 24.14
- 95% confidence interval: from 23.5 - 1.96 * 7.82 / sqrt(398) = 22.73 to 23.5 + 1.96 * 7.82 / sqrt(398) = 24.27, thus: we are 95% certain that the average mpg for all cars is between 22.73 and 24.27
- 99% confidence interval: from 23.5 - 2.58 * 7.82 / sqrt(398) = 22.49 to 23.5 + 2.58 * 7.82 / sqrt(398) = 24.51, thus: we are 99% certain that the average mpg for all cars is between 22.49 and 24.51
Note that a 99% confidence interval is larger - i.e. includes more numbers - than a 90% confidence interval. That makes sense, since if we want to be more certain, we must allow for more values. Ultimately, a 100% confidence interval would simply consist of all possible numbers, or an interval from -infinity to +infinity. That would certainly be correct, but is not very useful for practical applications. While the above calculations can easily be done with a calculator (or Excel), our favorite computer program Excel provides - yes, you might have guessed it - a quick shortcut to obtain confidence intervals. We will proceed as follows:
- Load the above data into Excel
- Select "Data Analysis..." from the "Tools" menu entry and select "Descriptive Statistics"
- Select as input range the first few columns, including "Miles per Gallon", "Engine Size", "Horse Powers", and "Weight in Pounds". Note that we actually are not interested in "Horse Powers" but the input data range must consist of consecutive cells, so we might as well include "Horse Powers" and ignore it in the final output. We should check the "Labels in First Row" box as well as "Summary Statistics" and "Confidence Level for Mean:" in the "Output options" section. We need to enter a level of confidence for the "Confidence Level for Mean". Common numbers are 90%, 95%, or 99% - we will explain the differences below again, or see the discussion above. For now, make sure that the figures are as indicated above.
- Click on "OK" to see the following descriptive statistics (similar to what we have seen before):
What this means is that the sample mean of, say, "Miles per Gallon" is 23.5145. That sample mean may or may not be the same as the average MPG of all automobiles. But we have also computed a 90% confidence interval, which means, in this case, the following: under certain assumptions on the distribution of the population, we predict - based on our sample of 398 cars - that the average miles per gallon of all cars is somewhere between 23.5145 - 0.6459 = 22.87 and 23.5145 + 0.6459 = 24.16, and we are 90% certain that this answer is correct.
Please note that this 90% confidence interval is slightly different from the confidence interval we computed previously "by hand". That is no coincidence, because the derivation of the formulas for confidence intervals uses the Central Limit Theorem and that theorem, in effect, states that the distribution of the sample means is approximately normal. However, that approximation works best the larger N (the sample size) is. Excel uses a slightly different method to compute confidence intervals.
Example: According to Excel, the average engine size in our sample of size N = 398 is 192.67 cubic inches, with a standard deviation of 104.55 cubic inches. Use these statistics to manually compute a 90% confidence interval.
Then compare it with the figure Excel produces for the same interval.
- To compute a 90% confidence interval manually:
  - from m - 1.645 * s / sqrt(N) to m + 1.645 * s / sqrt(N)
  - from 192.67 - 1.645 * 104.55 / sqrt(398) to 192.67 + 1.645 * 104.55 / sqrt(398)
  - from 192.67 - 8.62 to 192.67 + 8.62
  - from 184.05 to 201.29
- To compute a 90% confidence interval using Excel:
  - as the above output shows, the mean m = 192.67 while the confidence level (90%) is 8.64
  - from 192.67 - 8.64 to 192.67 + 8.64
  - from 184.03 to 201.31
Thus, since the sample size is large (certainly larger than 30), the intervals computed manually and with Excel are virtually identical. For the picky reader, note that Excel's interval is slightly larger, so it's slightly more conservative than the manual computation, but the difference in this case is negligible. In general:
- If N is sufficiently large (30 or more), the "manual" method and Excel's method agree closely. In this case the method is based on the standard normal distribution.
- If N is small (less than 30), the "manual" method is no longer appropriate and you should use Excel's method instead. In this case the method is based on the Student's t distribution.
Similarly, according to Excel the average weight in pounds of all cars is between 2969.5161 - 69.5328 and 2969.5161 + 69.5328, and we are 90% certain that we are correct. To recap: instead of providing a point estimate for an unknown population mean (which would almost certainly be incorrect) we provide an interval instead, called a confidence interval. Three particular confidence intervals are most common: a 90%, a 95%, or a 99% confidence interval. That means that:
- if the interval was computed according to a 90% confidence level, then the true population mean is between the two computed numbers with 90% certainty, and the probability that the true population mean is not inside that interval is less than 10%
- if the interval was computed according to a 95% confidence level, then the true population mean is between the two computed numbers with 95% certainty, and the probability that the true population mean is not inside that interval is less than 5%
- if the interval was computed according to a 99% confidence level, then the true population mean is between the two computed numbers with 99% certainty, and the probability that the true population mean is not inside that interval is less than 1%
Example: Suppose we compute, for the same sample data, both a 90% and a 99% confidence interval. Which one is larger? To answer this question, let's compute both a 90% and a 99% confidence interval for the "Horse Power" in the above data set about cars, using Excel. The procedure of computing the numbers is similar to the above; here are the answers:
- the sample mean for the "Horse Power" is 104.8325
- the 90% confidence level results in 3.1755, so that the 90% confidence interval goes from 104.8325 - 3.1755 to 104.8325 + 3.1755, or from 101.657 to 108.008
- the 99% confidence level results in 4.9851, so that the 99% confidence interval goes from 104.8325 - 4.9851 to 104.8325 + 4.9851, or from 99.84735 to 109.8176
That means, in general, that a 99% confidence interval is larger than a 90% confidence interval. That actually makes sense: if we want to be more sure that we have captured the true (unknown) population mean correctly, we need to make our interval larger. Hence, a 99% confidence interval must include more numbers than a 90% confidence interval; it is therefore wider than a 90% interval.
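Excel's slightly wider interval comes from using the Student's t distribution instead of the standard normal. If SciPy is available (an assumption; the text itself only uses Excel), the two multipliers can be compared directly. For N = 398 they are nearly identical, which is why the manual and Excel intervals above almost coincide.

# Sketch: z-based vs t-based 90% interval for the engine-size example
# (m = 192.67, s = 104.55, N = 398). Requires SciPy, which is assumed here.
import math
from scipy import stats

m, s, n = 192.67, 104.55, 398
conf = 0.90

z = stats.norm.ppf(1 - (1 - conf) / 2)           # standard normal multiplier (~1.645)
t = stats.t.ppf(1 - (1 - conf) / 2, df=n - 1)    # slightly larger than z for finite N

for name, mult in (("normal (manual)", z), ("Student t (Excel)", t)):
    margin = mult * s / math.sqrt(n)
    print("%-18s %.4f -> %.2f to %.2f" % (name, mult, m - margin, m + margin))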
http://pirate.shu.edu/~wachsmut/Teaching/MATH1101/Testing/confidence-mean.html
13
64
Floating-point numbers are represented in computer hardware as base 2 (binary) fractions. For example, the decimal fraction 0.125 has value 1/10 + 2/100 + 5/1000, and in the same way the binary fraction 0.001 has value 0/2 + 0/4 + 1/8. These two fractions have identical values, the only real difference being that the first is written in base 10 fractional notation, and the second in base 2. Unfortunately, most decimal fractions cannot be represented exactly as binary fractions. A consequence is that, in general, the decimal floating-point numbers you enter are only approximated by the binary floating-point numbers actually stored in the machine. The problem is easier to understand at first in base 10. Consider the fraction 1/3. You can approximate that as a base 10 fraction: 0.3, or better, 0.33, or better, 0.333, and so on. No matter how many digits you’re willing to write down, the result will never be exactly 1/3, but will be an increasingly better approximation of 1/3. In the same way, no matter how many base 2 digits you’re willing to use, the decimal value 0.1 cannot be represented exactly as a base 2 fraction. In base 2, 1/10 is the infinitely repeating fraction 0.0001100110011001100110011... (the pattern 0011 repeats forever). Stop at any finite number of bits, and you get an approximation. On a typical machine running Python, there are 53 bits of precision available for a Python float, so the value stored internally when you enter the decimal number 0.1 is the binary fraction 0.00011001100110011001100110011001100110011001100110011010, which is close to, but not exactly equal to, 1/10. It’s easy to forget that the stored value is an approximation to the original decimal fraction, because of the way that floats are displayed at the interpreter prompt. Python only prints a decimal approximation to the true decimal value of the binary approximation stored by the machine. If Python were to print the true decimal value of the binary approximation stored for 0.1, it would have to display
>>> 0.1
0.1000000000000000055511151231257827021181583404541015625
That is more digits than most people find useful, so Python keeps the number of digits manageable by displaying a rounded value instead:
>>> 0.1
0.1
It’s important to realize that this is, in a real sense, an illusion: the value in the machine is not exactly 1/10, you’re simply rounding the display of the true machine value. This fact becomes apparent as soon as you try to do arithmetic with these values:
>>> 0.1 + 0.2
0.30000000000000004
Note that this is in the very nature of binary floating-point: this is not a bug in Python, and it is not a bug in your code either. You’ll see the same kind of thing in all languages that support your hardware’s floating-point arithmetic (although some languages may not display the difference by default, or in all output modes). Other surprises follow from this one. For example, if you try to round the value 2.675 to two decimal places, you get this:
>>> round(2.675, 2)
2.67
The documentation for the built-in round() function says that it rounds to the nearest value, rounding ties away from zero. Since the decimal fraction 2.675 is exactly halfway between 2.67 and 2.68, you might expect the result here to be (a binary approximation to) 2.68. It’s not, because when the decimal string 2.675 is converted to a binary floating-point number, it’s again replaced with a binary approximation, whose exact value is 2.67499999999999982236431605997495353221893310546875. Since this approximation is slightly closer to 2.67 than to 2.68, it’s rounded down. If you’re in a situation where you care which way your decimal halfway-cases are rounded, you should consider using the decimal module.
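As a short sketch of that suggestion (standard library only): building the Decimal from the string keeps the exact decimal digits, so the halfway case rounds the way the decimal digits say it should, while building it from the float drags the binary approximation along.

# Sketch: rounding the *decimal* 2.675 rather than its binary approximation.
from decimal import Decimal, ROUND_HALF_UP

x = Decimal("2.675")                    # built from the string, so it is exact
print(x.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))   # 2.68

# For comparison, Decimal(2.675) -- built from the float -- carries the binary
# approximation shown above and rounds down, just like round(2.675, 2).
y = Decimal(2.675)
print(y.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))   # 2.67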
Incidentally, the decimal module also provides a nice way to “see” the exact value that’s stored in any particular Python float:
>>> from decimal import Decimal
>>> Decimal(2.675)
Decimal('2.67499999999999982236431605997495353221893310546875')
Another consequence is that since 0.1 is not exactly 1/10, summing ten values of 0.1 may not yield exactly 1.0, either:
>>> sum = 0.0
>>> for i in range(10):
...     sum += 0.1
...
>>> sum
0.9999999999999999
Binary floating-point arithmetic holds many surprises like this. The problem with “0.1” is explained in precise detail below, in the “Representation Error” section. See The Perils of Floating Point for a more complete account of other common surprises. As that says near the end, “there are no easy answers.” Still, don’t be unduly wary of floating-point! The errors in Python float operations are inherited from the floating-point hardware, and on most machines are on the order of no more than 1 part in 2**53 per operation. That’s more than adequate for most tasks, but you do need to keep in mind that it’s not decimal arithmetic, and that every float operation can suffer a new rounding error. While pathological cases do exist, for most casual use of floating-point arithmetic you’ll see the result you expect in the end if you simply round the display of your final results to the number of decimal digits you expect. For fine control over how a float is displayed see the str.format() method’s format specifiers in Format String Syntax.
Representation Error
This section explains the “0.1” example in detail, and shows how you can perform an exact analysis of cases like this yourself. Basic familiarity with binary floating-point representation is assumed. Representation error refers to the fact that some (most, actually) decimal fractions cannot be represented exactly as binary (base 2) fractions. This is the chief reason why Python (or Perl, C, C++, Java, Fortran, and many others) often won’t display the exact decimal number you expect:
>>> 0.1 + 0.2
0.30000000000000004
Why is that? 1/10 and 2/10 are not exactly representable as a binary fraction. Almost all machines today (July 2010) use IEEE-754 floating point arithmetic, and almost all platforms map Python floats to IEEE-754 “double precision”. 754 doubles contain 53 bits of precision, so on input the computer strives to convert 0.1 to the closest fraction it can of the form J/2**N where J is an integer containing exactly 53 bits. Rewriting
1 / 10 ~= J / (2**N)
as
J ~= 2**N / 10
and recalling that J has exactly 53 bits (is >= 2**52 but < 2**53), the best value for N is 56:
>>> 2**52
4503599627370496
>>> 2**53
9007199254740992
>>> 2**56/10
7205759403792793
That is, 56 is the only value for N that leaves J with exactly 53 bits. The best possible value for J is then that quotient rounded:
>>> q, r = divmod(2**56, 10)
>>> r
6
Since the remainder is more than half of 10, the best approximation is obtained by rounding up:
>>> q+1
7205759403792794
Therefore the best possible approximation to 1/10 in 754 double precision is that quotient over 2**56, or 7205759403792794 / 72057594037927936. Note that since we rounded up, this is actually a little bit larger than 1/10; if we had not rounded up, the quotient would have been a little bit smaller than 1/10. But in no case can it be exactly 1/10!
So the computer never “sees” 1/10: what it sees is the exact fraction given above, the best 754 double approximation it can get:
>>> .1 * 2**56
7205759403792794.0
If we multiply that fraction by 10**30, we can see the (truncated) value of its 30 most significant decimal digits:
>>> 7205759403792794 * 10**30 // 2**56
100000000000000005551115123125L
meaning that the exact number stored in the computer is approximately equal to the decimal value 0.100000000000000005551115123125. In versions prior to Python 2.7 and Python 3.1, Python rounded this value to 17 significant digits, giving ‘0.10000000000000001’. In current versions, Python displays a value based on the shortest decimal fraction that rounds correctly back to the true binary value, resulting simply in ‘0.1’.
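Two other standard-library views of the same stored value, for readers who want to repeat the analysis above themselves (both should be available in the Python version this document describes, to the best of my knowledge):

# Sketch: two more ways to inspect the exact value stored for 0.1.
from fractions import Fraction

print(Fraction(0.1))   # the exact stored ratio, with a power-of-two denominator
print((0.1).hex())     # the same value in C99 hexadecimal floating-point notation

# Multiplying the exact fraction back out reproduces the decimal expansion
# 0.100000000000000005551... discussed above.
f = Fraction(0.1)
print(f.numerator * 10**30 // f.denominator)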
http://docs.python.org/release/2.7.1/tutorial/floatingpoint.html
13
52
In computing, a serial port is a serial communication physical interface through which information transfers in or out one bit at a time (in contrast to a parallel port). Throughout most of the history of personal computers, data was transferred through serial ports that connected the computer to devices such as terminals and various peripherals. While such interfaces as Ethernet, FireWire, and USB all send data as a serial stream, the term "serial port" usually identifies hardware more or less compliant to the RS-232 standard, intended to interface with a modem or with a similar communication device. Modern computers without serial ports may require serial-to-USB converters to allow compatibility with RS-232 serial devices. Serial ports are still used in applications such as industrial automation systems, scientific instruments, shop till systems and some industrial and consumer products. Server computers may use a serial port as a control console for diagnostics. Network equipment (such as routers and switches) often uses a serial console for configuration. Serial ports are still used in these areas as they are simple, cheap and their console functions are highly standardized and widespread. A serial port requires very little supporting software from the host system. Some computers, such as the IBM PC, used an integrated circuit called a UART, which converted characters to (and from) asynchronous serial form and automatically looked after the timing and framing of data. Very low-cost systems, such as some early home computers, would instead use the CPU to send the data through an output pin, using the bit-banging technique. Before large-scale integration (LSI) UART integrated circuits were common, a minicomputer or microcomputer would have a serial port made of multiple small-scale integrated circuits to implement shift registers, logic gates, counters, and all the other logic for a serial port. Early home computers often had proprietary serial ports with pinouts and voltage levels incompatible with RS-232. Inter-operation with RS-232 devices may be impossible as the serial port cannot withstand the voltage levels produced and may have other differences that "lock in" the user to products of a particular manufacturer. Low-cost processors now allow higher-speed, but more complex, serial communication standards such as USB and FireWire to replace RS-232. These make it possible to connect devices that would not have operated feasibly over slower serial connections, such as mass storage, sound, and video devices. Many personal computer motherboards still have at least one serial port, even if accessible only through a pin header. Small-form-factor systems and laptops may omit RS-232 connector ports to conserve space, but the electronics are still there. RS-232 has been standard for so long that the circuits needed to control a serial port became very cheap and often exist on a single chip, sometimes also with circuitry for a parallel port.
DTE and DCE
The individual signals on a serial port are unidirectional, and when connecting two devices the outputs of one device must be connected to the inputs of the other. Devices are divided into two categories: "data terminal equipment" (DTE) and "data circuit-terminating equipment" (DCE). A line that is an output on a DTE device is an input on a DCE device and vice-versa, so a DCE device can be connected to a DTE device with a straight wired cable. Conventionally, computers and terminals are DTE while modems and peripherals are DCE.
If it is necessary to connect two DTE devices (or two DCE devices but that is more unusual) a special cable known as a null-modem cable must be used. While the RS-232 standard originally specified a 25-pin D-type connector, many designers of personal computers chose to implement only a subset of the full standard: they traded off compatibility with the standard against the use of less costly and more compact connectors (in particular the DE-9 version used by the original IBM PC-AT). The desire to supply serial interface cards with two ports required that IBM reduce the size of the connector to fit onto a single card back panel. A DE-9 connector also fits onto a card with a second DB-25 connector that was similarly changed from the original Centronics-style connector. Starting around the time of the introduction of the IBM PC-AT, serial ports were commonly built with a 9-pin connector to save cost and space. However, presence of a 9-pin D-subminiature connector is not sufficient to indicate the connection is in fact a serial port, since this connector was also used for video, joysticks, and other purposes. Some miniaturized electronics, particularly graphing calculators and hand-held amateur and two-way radio equipment, have serial ports using a phone connector, usually the smaller 2.5 or 3.5 mm connectors and use the most basic 3-wire interface. Many models of Macintosh favored the related RS-422 standard, mostly using German Mini-DIN connectors, except in the earliest models. The Macintosh included a standard set of two ports for connection to a printer and a modem, but some PowerBook laptops had only one combined port to save space. The standard specifies 20 different signal connections. Since most devices use only a few signals, smaller connectors can often be used. For example, the 9 pin DE-9 connector was used by most IBM-compatible PCs since the IBM PC AT, and has been standardized as TIA-574. More recently, modular connectors have been used. Most common are 8P8C connectors. Standard EIA/TIA 561 specifies a pin assignment, but the "Yost Serial Device Wiring Standard" invented by Dave Yost (and popularized by the Unix System Administration Handbook) is common on Unix computers and newer devices from Cisco Systems. Many devices don't use either of these standards. 10P10C connectors can be found on some devices as well. Digital Equipment Corporation defined their own DECconnect connection system which was based on the Modified Modular Jack (MMJ) connector. This is a 6 pin modular jack where the key is offset from the center position. As with the Yost standard, DECconnect uses a symmetrical pin layout which enables the direct connection between two DTEs. Another common connector is the DH10 header connector common on motherboards and add-in cards which is usually converted via a cable to the more standard 9 pin DE-9 connector (and frequently mounted on a free slot plate or other part of the housing). The following table lists commonly used RS-232 signals and pin assignments. |MMJ||8P8C ("RJ45")||10P10C ("RJ50")| |Data Terminal Ready||DTR||●||20||4||1||3||2||2||7||3||9| |Carrier Detect||DCD||●||8||1||—||2||7||7||10||8||10 (alt 2)| |Data Set Ready||DSR||●||6||6||6||1||8||5||9||2 (alt 10)| |Request To Send||RTS||●||4||7||—||8||1||1||4||2||3| |Clear To Send||CTS||●||5||8||—||7||8||5||3||6||8| The signals are named from the standpoint of the DTE, for example, an IBM-PC compatible serial port. 
The ground signal is a common return for the other connections; it appears on two pins in the Yost standard but is the same signal. The DB-25 connector includes a second "protective ground" on pin 1. Connecting this to pin 7 (signal reference ground) is a common practice but not essential. Hardware abstraction Operating systems usually use a symbolic name to refer to the serial ports of a computer. Unix-like operating systems usually label the serial port devices /dev/tty* (TTY is a common trademark-free abbreviation for teletype) where * represents a string identifying the terminal device; the syntax of that string depends on the operating system and the device. On Linux, 8250/16550 UART hardware serial ports are named /dev/ttyS*, USB adapters appear as /dev/ttyUSB* and various types of virtual serial ports do not necessarily have names starting with tty. Common applications for serial ports The RS-232 standard is used by many specialized and custom-built devices. This list includes some of the more common devices that are connected to the serial port on a PC. Some of these such as modems and serial mice are falling into disuse while others are readily available. Serial ports are very common on most types of microcontroller, where they can be used to communicate with a PC or other serial devices. - Dial-up modems - GPS receivers (typically NMEA 0183 at 4,800 bit/s) - Bar code scanners and other point of sale devices - LED and LCD text displays - Satellite phones, low-speed satellite modems and other satellite based transceiver devices - Flat-screen (LCD and Plasma) monitors to control screen functions by external computer, other AV components or remotes - Test and measuring equipment such as digital multimeters and weighing systems - Updating Firmware on various consumer devices. - Some CNC controllers - Uninterruptible power supply - Stenography or Stenotype machines. - Software debuggers that run on a second computer. - Industrial field buses - Computer terminal, teletype - Older digital cameras - Networking (Macintosh AppleTalk using RS-422 at 230.4 kbit/s) - Serial mouse - Older GSM mobile phones - Some Telescopes Since the control signals for a serial port can be easily turned on and off by a switch, some applications used the control lines of a serial port to monitor external devices, without exchanging serial data. A common commercial application of this principle was for some models of uninterruptible power supply which used the control lines to signal "loss of power", "battery low alarm" and other status information. At least some Morse code training software used a code key connected to the serial port, to simulate actual code use. The status bits of the serial port could be sampled very rapidly and at predictable times, making it possible for the software to decipher Morse code. Many settings are required for serial connections used for asynchronous start-stop communication, to select speed, number of data bits per character, parity, and number of stop bits per character. In modern serial ports using a UART integrated circuit, all settings are usually software-controlled; hardware from the 1980s and earlier may require setting switches or jumpers on a circuit board. One of the simplifications made in such serial bus standards as Ethernet, FireWire, and USB is that many of those parameters have fixed values so that users can not and need not change the configuration; the speed is either fixed or automatically negotiated. 
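As a practical illustration of these settings (an addition to this article, which does not mention any particular software), the sketch below uses the third-party pyserial package to open a port with 8 data bits, no parity and 1 stop bit, the common "8-N-1" arrangement discussed below; the device name /dev/ttyUSB0 and the command sent are only examples.

```python
# Minimal sketch using the third-party "pyserial" package (pip install pyserial).
# The port name and the device on the other end are assumptions for illustration.
import serial

port = serial.Serial(
    "/dev/ttyUSB0",                 # a USB-to-RS-232 adapter on Linux (example name)
    baudrate=9600,                  # symbol rate in bauds; both ends must agree
    bytesize=serial.EIGHTBITS,      # 8 data bits
    parity=serial.PARITY_NONE,      # no parity bit
    stopbits=serial.STOPBITS_ONE,   # 1 stop bit -> "8-N-1" framing
    timeout=1.0,                    # seconds to wait when reading
)

port.write(b"AT\r\n")               # e.g. a modem "attention" command
print(port.readline())              # whatever the device echoes back, if anything
port.close()
```

If either end is configured with a different speed or framing, the bytes still arrive, but as nonsense, which is exactly the failure mode described next.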
Often if the settings are entered incorrectly the connection will not be dropped; however, any data sent will be received on the other end as nonsense. Serial ports use two-level (binary) signaling, so the data rate in bits per second is equal to the symbol rate in bauds. A standard series of rates is based on multiples of the rates for electromechanical teleprinters; some serial ports allow many arbitrary rates to be selected. The port speed and device speed must match. The capability to set a bit rate does not imply that a working connection will result. Not all bit rates are possible with all serial ports. Some special-purpose protocols such as MIDI for musical instrument control, use serial data rates other than the teleprinter series. Some serial port systems can automatically detect the bit rate. The speed includes bits for framing (stop bits, parity, etc.) and so the effective data rate is lower than the bit transmission rate. For example with 8-N-1 character framing only 80% of the bits are available for data (for every eight bits of data, two more framing bits are sent). Bit rates commonly supported include 75, 110, 300, 1200, 2400, 4800, 9600, 19200, 38400, 57600 and 115200 bit/s. Crystal oscillators with a frequency of 1.843200 MHz are sold specifically for this purpose. This is 16 times the fastest bit rate and the serial port circuit can easily divide this down to lower frequencies as required. Data bits The number of data bits in each character can be 5 (for Baudot code), 6 (rarely used), 7 (for true ASCII), 8 (for any kind of data, as this matches the size of a byte), or 9 (rarely used). 8 data bits are almost universally used in newer applications. 5 or 7 bits generally only make sense with older equipment such as teleprinters. Most serial communications designs send the data bits within each byte LSB (Least significant bit) first. This standard is also referred to as "little endian." Also possible, but rarely used, is "big endian" or MSB (Most Significant Bit) first serial communications; this was used, for example, by the IBM 2741 printing terminal. (See Bit numbering for more about bit ordering.) The order of bits is not usually configurable within the serial port interface. To communicate with systems that require a different bit ordering than the local default, local software can re-order the bits within each byte just before sending and just after receiving. Parity is a method of detecting errors in transmission. When parity is used with a serial port, an extra data bit is sent with each data character, arranged so that the number of 1 bits in each character, including the parity bit, is always odd or always even. If a byte is received with the wrong number of 1s, then it must have been corrupted. However, an even number of errors can pass the parity check. Electromechanical teleprinters were arranged to print a special character when received data contained a parity error, to allow detection of messages damaged by line noise. A single parity bit does not allow implementation of error correction on each character, and communication protocols working over serial data links will have higher-level mechanisms to ensure data validity and request retransmission of data that has been incorrectly received. The parity bit in each character can be set to none (N), odd (O), even (E), mark (M), or space (S). None means that no parity bit is sent at all. 
Mark parity means that the parity bit is always set to the mark signal condition (logical 1) and likewise space parity always sends the parity bit in the space signal condition. Aside from uncommon applications that use the 9th (parity) bit for some form of addressing or special signalling, mark or space parity is uncommon, as it adds no error detection information. Odd parity is more useful than even, since it ensures that at least one state transition occurs in each character, which makes it more reliable. The most common parity setting, however, is "none", with error detection handled by a communication protocol. Stop bits Stop bits sent at the end of every character allow the receiving signal hardware to detect the end of a character and to resynchronise with the character stream. Electronic devices usually use one stop bit. If slow electromechanical teleprinters are used, one-and-one half or two stop bits are required. Conventional notation The D/P/S (Data/Parity/Stop) conventional notation specifies the framing of a serial connection. The most common usage on microcomputers is 8/N/1 (8N1). This specifies 8 data bits, no parity, 1 stop bit. In this notation, the parity bit is not included in the data bits. 7/E/1 (7E1) means that an even parity bit is added to the seven data bits for a total of eight bits between the start and stop bits. If a receiver of a 7/E/1 stream is expecting an 8/N/1 stream, half the possible bytes will be interpreted as having the high bit set. Flow control A serial port may use signals in the interface to pause and resume the transmission of data. For example, a slow printer might need to handshake with the serial port to indicate that data should be paused while the mechanism advances a line. Common hardware handshake signals (hardware flow control) use the RS-232 RTS/CTS or DTR/DSR signal circuits. Generally, the RTS and CTS are turned off and on from alternate ends to control data flow, for instance when a buffer is almost full. DTR and DSR are usually on all the time and, per the RS-232 standard and its successors, are used to signal from each end that the other equipment is actually present and powered-up. However, manufacturers have over the years built many devices that implemented non-standard variations on the standard, for example, printers that use DTR as flow control. Another method of flow control (software flow control) uses special characters such as XON/XOFF to control the flow of data. The XON/XOFF characters are sent by the receiver to the sender to control when the sender will send data, that is, these characters go in the opposite direction to the data being sent. The circuit starts in the "sending allowed" state. When the receiver's buffers approach capacity, the receiver sends the XOFF character to tell the sender to stop sending data. Later, after the receiver has emptied its buffers, it sends an XON character to tell the sender to resume transmission. These are non-printing characters and are interpreted as handshake signals by printers, terminals, and computer systems. XON/XOFF flow control is an example of in-band signaling, in which control information is sent over the same channel used for the data. If the XON and XOFF characters might appear in the data being sent, XON/XOFF handshaking presents difficulties, as receivers may interpret them as flow control. 
Such characters sent as part of the data stream must be encoded in an escape sequence to prevent this, and the receiving and sending software must generate and interpret these escape sequences. On the other hand, since no extra signal circuits are required, XON/XOFF flow control can be done on a 3-wire interface.

"Virtual" serial ports

A virtual serial port is an emulation of the standard serial port. It is created by software that enables extra serial ports in an operating system without additional hardware installation (such as expansion cards). It is possible to create a large number of virtual serial ports in a PC; the only limitation is the amount of resources, such as operating memory and computing power, needed to emulate many serial ports at the same time. Virtual serial ports emulate all hardware serial port functionality, including baud rate, data bits, parity bits, stop bits, etc. Additionally they allow controlling the data flow, emulating all signal lines (DTR/DSR/CTS/RTS/DCD/RI) and customizing the pinout. Virtual serial ports are common with Bluetooth and are the standard way of receiving data from Bluetooth-equipped GPS modules.

Virtual serial port emulation can be useful when physical serial ports are unavailable or do not meet the current requirements. For instance, virtual serial ports can share data between several applications from one GPS device connected to a serial port. Another option is to communicate with other serial devices via the Internet or a LAN as if they were locally connected to the computer (serial-over-Ethernet technology). Two computers or applications can communicate through an emulated serial port link. Virtual serial port emulators are available for many operating systems including macOS, Linux, and various mobile and desktop versions of Microsoft Windows.
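The parity scheme described above is simple to reproduce in code. Purely as an added illustration (not part of the original article), this Python sketch computes the parity bit a UART would append to a character under even or odd parity.

```python
# Added illustration: computing the parity bit for a character, as a UART would
# when configured for even or odd parity.
def parity_bit(value, scheme="even"):
    ones = bin(value).count("1")      # number of 1 bits in the data
    if scheme == "even":              # total count of 1s (data + parity) must be even
        return ones % 2
    if scheme == "odd":               # total count of 1s must be odd
        return (ones + 1) % 2
    raise ValueError("scheme must be 'even' or 'odd'")

ch = ord("A")                         # 0x41 = 0b1000001, two 1 bits
print(parity_bit(ch, "even"))         # 0 -> the count of 1s is already even
print(parity_bit(ch, "odd"))          # 1 -> one more 1 is needed to make the count odd
```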
http://en.wikipedia.org/wiki/Serial_port
13
66
General Lessons - Length, mass, volume, density, review pages (Student worksheets provided) Conversion Practice (Student worksheets provided) Metric System Lesson Plan Links & Online Resources Need Adobe Acrobat to view the worksheets on this site? Visit the Adobe site for details! My metric unit includes labs on length, mass, volume, density, and temperature as well as conversions (metric to metric and metric to English). Students have many opportunities to use rulers, triple-beam balances, and other science equipment to learn how to use the metric system of measurements. Lesson #1 - Length Length Presentation (PPT) - I use this presentation to review the basic units of length and how to measure distances. Length Worksheet (pdf) - Student worksheet that goes along with the presentation. Length Lab (pdf) - Students are challenged to find the length of various objects in millimeters, centimeters, and meters. Units of Measure - Length (pdf) - Thanks to Christina Bryant for sharing this worksheet. Lesson #2 - Mass Mass Presentation (PPT) - I use this presentation to review the basic units of mass and how to measure mass using a triple-beam balance. Mass Worksheet (pdf) - Student worksheet that goes along with the presentation. Mass Lab (pdf) - For the mass lab, students first estimate the mass of various objects, then find the actual mass using triple-beam balances or other scales. To prepare for the activity you will need to organize various items (coins, paper clips, marbles, rocks, large washers/s-hooks, etc.) and triple-beam balances or scales for each group. They may group items together to reach a targeted mass, such as three pennies for 5 grams, or just use a single item. This lab is always a hit and the students get much needed estimation practice. NOTE: Estimates should be checked before any measuring is allowed! Some students will skip the estimation step and advance to using the scales! Another idea (from Sandra Gasparovich, Central Jr. High, East Peoria, IL) involves using film canisters, a triple-beam balance, and a variety of materials to create a set of masses. The students may use the masses during lab activities or challenge them to take them home and find items with like masses. Another twist is to fill pairs of canisters with various objects (pennies, popcorn, seeds, screws, washers, M&Ms). Give each student one canister and allow time for them to search for their "partner" - without looking into the canister. Once the groups have found their match, the students can check their results by opening the canisters. Lesson #3 - Volume Volume Presentation (PPT) - I use this presentation to review the basic units of volume and how to measure volume of regular and irregular objects. Volume Worksheet (pdf) - Student worksheet that goes along with the presentation. Volume Lab (pdf) - This lab consists of measuring the volume of liquids and regular solids as well as using graduated cylinders and overflow cans to find the volume of irregular objects (rocks). Lesson #4 - Density Mystery Canisters (pdf) - The density lab, known as Mystery Canisters, challenges students to modify three film canisters so that they have one that floats, one that sinks, and one that will remain suspended in the tub of tap water. Materials needed for the lab are: plastic tub of water (or the bottom half of a 2-liter soda bottle), three film canisters (free from Walmart, KMart, etc.), and an assortment of small objects (pennies, paperclips, marbles, etc.) for mass. 
Students will also need equipment to help them measure mass (triple-beam balance) and volume (graduated cylinders and overflow cans.) Students are allowed a few minutes to create the three canisters that will (1) float, (2) sink, and (3) remain suspended. Students may have difficulty getting one of the cansisters perfectly suspended. If the students can get the canister to suspend with less than half of the lid above the surface, they should get numbers that result in a density close to 1.0 g/ml. Once the students have their canisters approved, they find the mass and volume of the canisters and calculate each density. They should notice that the floating vial has a density less than 1 g/ml, the sinking vial has a density greater than 1 g/ml, and the suspended vial has a density close to 1 g/ml. Also available ... Gummy Bear Lab (pdf) - This lab incorporates a variety of metric measurements (length, volume, mass, and density) to record what happens to a gummy bear when it is placed in a water overnight. NOTE: This lab worksheet was based on a gummy bear lab available online; however, the website with the original lab is no longer available. Share your Gummi Bear data - Visit my Gummi Bear wiki to participate! Use gummi as the password to log in. Metric Mania Survey (pdf) - This worksheet is used at the end of the unit to review the material we have studied. Metric Challenge Puzzle (pdf) - Students review key terms from the metric system to discover the answer to a joke. An answer key is provided. One part of my metric unit includes a few lessons related to conversions. My students have difficulty relating the English system of measurement (feet, pounds, and gallons) to the metric units (meter, kilograms, and liters). The first lesson consists of making conversions from one system to the other. During this lesson, students use the information from a measurement chart to convert measurements. This assignment allows my students to connect the English system of measurements used in our daily lives to the metric system units. Thanks to Christina Bryant for sharing her worksheet - Meters, Liters, & Grams (PDF) The second lesson focuses on using a "metric ladder" to calculate conversions within the metric system. In the beginning the lesson is focused on counting the number of "jumps" it takes to move from one metric unit to another. The "jumps" determine the number of times the decimal is moved and in which direction. I remind students to count the number of jumps it would take to move from one unit to another, such as moving from meters to millimeters, rather than counting the number of boxes. To convert from meters to millimeters, it would take 3 jumps to the right which would mean the decimal would need to move 3 jumps to the right. As they learn the process and understand the value of the metric prefixes, I introduce using multiplication and division by 10, 100, and 1000 to accomplish the same conversion. They quickly learn the relationships between metric units, such as 1000 milliliters in 1 liter. Want a great way to help the students remember the order for the metric prefixes? Amy Monroe of Clifford H. Smart School in Commerce Township, Michigan, uses this phrase: "Kids Have Dropped OVER Dead Converting Metrics." The first letter of each word refers to one of the metric prefixes (kilo, hecto, etc.) and the "over" refers to the basic unit (meters, liters, or grams). 
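The "metric ladder" method described above translates almost word for word into code. Here is a small Python sketch (an added illustration, not part of the original lesson materials) that converts between metric prefixes by counting jumps of the decimal point.

```python
# Added illustration: the "metric ladder" as code. Each step to the right is one
# jump of the decimal point (a factor of 10): kilo, hecto, deka, base, deci, centi, milli.
PREFIXES = ["kilo", "hecto", "deka", "", "deci", "centi", "milli"]  # "" = base unit (meter, liter, gram)

def convert(value, from_prefix, to_prefix):
    jumps = PREFIXES.index(to_prefix) - PREFIXES.index(from_prefix)
    return value * 10 ** jumps        # jumps to the right multiply by 10, jumps to the left divide

print(convert(2.5, "", "milli"))      # 2.5 meters -> 2500.0 millimeters (3 jumps right)
print(convert(2500, "milli", ""))     # 2500 mL    -> 2.5 liters         (3 jumps left)
print(convert(1, "kilo", ""))         # 1 kilogram -> 1000.0 grams
```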
Brad Loewen, a competitor in the Science Olympiad event titled Metric Estimation, used the phrase "King Henry Does Drink Chocolate Milk" to help his team finish in fourth place in the event at the state level. Conversion Review - Metric Mania Scavenger Hunt Game Challenge your students to a scavenger hunt with metric conversion problems! I hide 60 game cards with metric conversion problems in my room. Some of the cards are easy to see, while others are hidden under science tables, chairs, the garbage can, behind a curtain or poster, or other places that are easy to search. (I don't hide them inside anything so the kids don't have to go searching through my cabinets or desk drawers.) The kids work in teams to find the cards and solve the problems. Teams can only work on one card a time and must have the answer correct before they can start looking for another card. The team with the most cards at the end of the game wins a special reward - extra credit points or a piece of candy! Since I have more than one science class each day, I allow the kids to "hide" the game cards for the next class. Game Cards - Front of cards (pdf) and Back of cards (pdf) - Print the "Metric Mania" (front of cards) on colored paper or cover stock, then print the problems on the back. I laminated my set so the cards can be used more than once and hold up to repeated use by junior high students. Metric Mania Scavenger Hunt Game Student Worksheet (pdf) - Provides directions, game rules, and an area for students to write the answers. Metric Mania Scavenger Hunt Answer Key (pdf) - An answer key for the problems on the cards. I cross off the numbers as the kids solve the problems so I can keep track of the number of cards that are still hidden. Also available ... Thanks to Deborah Noles-Garcia for sharing her Metric Victims Game. Metric System Lessons Online AAAMath Measurement Lessons - This site provides explanations, interactive practice pages, and challenge games about measurements. Discovery School - A Metric World - This lesson provides students with an opportunity to compare measurement units - worksheets are provided! Dr. Math Measurement Lessons - A collection of math lessons with ideas for length and volume. Math in Daily Life - Cooking by Numbers - Challenge your students to use their metric skills to convert recipes from one system to the other. This site provides information as well as links to recipe collections. Want more recipe conversion info? Visit Bella Online Conversions. Metrics Matter - A ThinkQuest Junior site exploring the metric system! Metric Olympics - Download this PDF with ideas for the Metric Olympics! Metric System Info - Lots of great information for any unit on the metric system! Metric Estimation Game (Teachers.net) - A great game involving the metric system that is played like the TV Game show "The Price Is Right"! MiddleSchoolScience.com - Try Smile Metric style to help your students learn about metric units for length. Science Teaching Ideas - Explore the metric system section of this page for some great lessons and activities! SMILE Math page - The first section of this page contains lesson ideas for measurement. Teach-nology Measurement Lessons - A large collection of links (with descriptions) to sites with lessons for measurement. TheMetricSystem.info - Visit this website for helpful hints and resources for your metric unit! Think Metric - A great resource for information for students and teachers! The site also offers metric posters, rulers, games, and more! 
US Metric Association - A wealth of information for the metric system!
http://sciencespot.net/Pages/classmetric.html
13
72
Completing the square is a method that can be used to transform a quadratic equation in standard form to vertex form. Once in vertex form, a quadratic equation is easy to graph or solve. The method of completing the square has a few simple steps. The main simplification that we aim to make is to collapse the term x² + kx + (k/2)² into (x + k/2)². In other words, these two terms are equal to one another, so we are able to convert from one to the other.

The steps to complete the square can be summarized as follows:
1. Isolate all terms involving x.
2. Make the leading coefficient 1 by dividing or multiplying both sides of the equation by the appropriate constant.
3. Take half the coefficient of x, square it, and add it to both sides of the equation.

We will illustrate each of the steps in completing the square using the following example, y = 2x² − 8x + 10, a typical quadratic equation in standard form. To complete the square on this equation, we must first isolate the terms involving x as y − 10 = 2x² − 8x. Next, we make the leading coefficient 1 by dividing both sides of the equation by 2, giving (y − 10)/2 = x² − 4x. Now we take half the coefficient of x, square it, and add it to both sides: (y − 10)/2 + 4 = x² − 4x + 4. Notice that we have the right hand side in the form x² + kx + (k/2)², where k = −4. We collapse the right hand side to x² − 4x + 4 = (x − 2)². Thus, we have (y − 10)/2 + 4 = (x − 2)². Solving for y gives y = 2(x − 2)² − 8 + 10, that is, y = 2(x − 2)² + 2. Therefore, we have transformed the equation y = 2x² − 8x + 10 in standard form to vertex form as y = 2(x − 2)² + 2. We can expand the equation y = 2(x − 2)² + 2 to check our transformation: 2(x² − 4x + 4) + 2 = 2x² − 8x + 8 + 2 = 2x² − 8x + 10.

Completing the Square to Solve Quadratic Equations

Completing the square can also be used to solve quadratic equations. For example, suppose you were asked to solve the equation (1/2)x² + 3x − 8 = 0. Solving the above equation means to find the values of x that make it a true statement. One way to do this is to complete the square. You would follow the same steps, first isolating all terms involving x: (1/2)x² + 3x = 8. Next, we make the leading coefficient 1 by multiplying both sides of the equation by 2, giving x² + 6x = 16. Now take half the coefficient of x, square it, and add it to both sides of the equation: x² + 6x + 9 = 25. Notice that we have the left hand side in the form x² + kx + (k/2)², where k = 6. We collapse the left hand side to (x + 3)² = 25. To solve for x, we take the square root of both sides: x + 3 = ±5. Thus, we find the solutions of our equation as x = 5 − 3 = 2 and x = −5 − 3 = −8.

Completing the Square to Help Graph a Quadratic Function

Any quadratic function that is not in vertex form can be put in vertex form by completing the square. Once in vertex form, a quadratic equation can be easily plotted by recalling graphical transformations. The function given by y(x) = a(x − h)² + k can be graphed by transforming the base function f(x) = x². For example, g(x) = 3(x + 1)² − 7 is related to the graph of f(x) = x² through the basic transformations. Specifically, the graph of g(x) looks like the graph of f(x) translated to the left 1 unit, stretched vertically by a factor of 3, and finally shifted down by 7. The domain of every quadratic function consists of all real numbers. Knowing where the vertex of a parabola lies also allows you to determine the range of that quadratic function. In particular, if the parabola opens upward (a > 0), the range of the quadratic function f(x) = ax² + bx + c is [k, ∞), where k = f(−b/(2a)) is the y-coordinate of the vertex. If the parabola opens downward (a < 0), the range of f(x) is (−∞, k]. As a specific example, the graph of the function f(x) = −2x² − 4x + 1 has vertex (−1, 3). Since this parabola opens downward (i.e. a < 0), the range of f(x) is (−∞, 3].
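To double-check this kind of algebra, a few lines of code are handy. The following Python sketch (an added illustration, not part of the original lesson) computes the vertex form of y = ax² + bx + c directly, using h = −b/(2a) and k = f(h), which is exactly what completing the square produces.

```python
# Sketch: convert y = a*x^2 + b*x + c to vertex form y = a*(x - h)^2 + k
# by completing the square, and report the vertex and range.
def vertex_form(a, b, c):
    h = -b / (2 * a)           # x-coordinate of the vertex
    k = c - b ** 2 / (4 * a)   # y-coordinate of the vertex, equal to f(h)
    return h, k

a, b, c = 2, -8, 10
h, k = vertex_form(a, b, c)
print(f"y = {a}(x - {h})^2 + {k}")                        # y = 2(x - 2.0)^2 + 2.0
print("range:", f"[{k}, inf)" if a > 0 else f"(-inf, {k}]")
```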
In the next section we will introduce the quadratic formula, and learn how to find the roots (x-intercepts) of quadratic equations. Roots of Quadratic Equations
http://www.biology.arizona.edu/BioMath/tutorials/Quadratic/CompletingtheSquare.html
13
72
Probability tells us how likely something is to happen. If something is absolutely certain to happen, we say its probability is one. If something cannot possibly happen, we say its probability is zero. A probability between zero and one means we don't know for sure what will happen, but the higher the number the more likely something is to happen. If something has a probability of 0.25, most people would say "it probably won't happen". Unfortunately, children have a hard time understanding what a probability of 0.25 suggests. Children are much more adept at understanding probability in terms of percentages (25%) and fractions (i.e., "It will happen about ¼ of the time"), so it is very important that connections between probability and the development of those skills and understandings occur together.

Probability can be difficult to grasp, but it is best taught with models. Two of the most effective and commonly used models for teaching probability are area diagrams and tree diagrams.

Area diagrams give students an idea of how likely something is by virtue of the amount of space reserved for that probability. Consider the toss of a coin: students can imagine that every time they toss a coin, they need to place that coin into the appropriate side of the rectangle. As the number of throws increases, the amount of area needed to enclose the coins will be roughly the same on both sides. The same probability can be shown using a tree diagram: in this simple tree diagram, each line (or event path) will be followed ½ of the time. Note that both diagrams show all of the possibilities, not just the probability of one event.

Both of these models can become more complex. Consider all of the possibilities of throwing two coins in series: in the area rectangle, the instances where heads has been tossed twice in a row are represented in the upper left hand corner. Note that the probability of tossing both heads and tails (in any order) is 50% or 1/2, whereas the probability of tossing two heads in a row is 25% or 1/4. While the tree diagram gives us the same information, the student needs to understand that all of the events at the bottom have an equal chance of occurring (this is more visually evident in an area diagram). However, the tree diagram has the advantage of better showing the chronology of events (in this case the order of events is represented by downward motion). While it may appear that both of these models have strengths and weaknesses, that is not the point. By becoming familiar with these and other models, students gain a more robust and diverse understanding of the nature of probability.

Consider the following problem: if at any time a dog is just as likely to give birth to a male puppy as she is to give birth to a female puppy, and she has a litter of 5 puppies, what is the probability that all of the puppies will be of the same sex? While this problem can be easily answered with a standard formula, 2/2ⁿ, where n is the number of puppies born, this nomenclature is not only unintelligible to students, but teaching it to students gives them no understanding of the underlying concepts. On the other hand, if they have explored problems similar to this with a tree diagram, they should fairly easily be able to see patterns emerging from which they can conjecture the "formula" that will find the correct answer.
Consider that a student has drawn a tree diagram that shows the permutations with three puppies in the litter. If a teacher were to have this student share their work, the rest of the class, looking at this model, would be able to see that there are 8 possible outcomes. They should also notice that the only instances where litters of all males or all females can be found are on the far ends of the tree. Students are always encouraged to look for patterns in many of their mathematical explorations. Here, students start to notice that the number of possibilities can be determined by multiplying the number 2 by itself exactly the number of times there is to be a new outcome. In this case there are three outcomes (puppies born) in a row, so 2 x 2 x 2 = 2³, or 8 possible outcomes. Once the student sees a connection to this pattern, they may conjecture that if there are 5 puppies in a litter, the possible number of outcomes is 2⁵, or 32, of which only 2 (represented by the combinations MMMMM and FFFFF on the extreme ends) will result in a litter of puppies all of the same sex. Teachers are always looking for students to make connections with other skills they are learning. In this case the teacher might ask the students to state the answer as a reduced fraction (1/16) and a percentage (6.25%).

Since we're using math to help us understand what will happen, it may help us to be able to write an equation like this: Pr(X) = y. If this equation is true, we would say "the probability of X happening is y". For example, we can say: X = a coin toss coming up heads, Pr(X) = 0.5. Since the letter X is standing for an event, not a number, and we don't want to get them confused, we'll use capital letters for events and lower-case letters for numbers. The symbol Pr represents a function that has some neat properties, but for now we'll just think of it as a short way of writing "the probability of X". Since probabilities are always numbers between 0 and 1, we will often represent them as percentages or fractions. Percentages seem natural when someone says something like "I am 100% sure that my team will win." However, fractions make more sense for really computing probabilities, and it is very useful to be able to compute probabilities to try to make good decisions, so we will use fractions here.

The basic rule for computing the probability of an event is simple, provided the event is the kind where we are making a definite choice from a known set of choices based on randomness that is fair. Not everything is like that, but things like flipping a coin, rolling a single die, drawing a card, or drawing a ticket from a bag with your eyes closed are random and fair, since each of the possibilities is equally likely. But the probability of things a little more complicated, like the sum of two dice, is not so easy, and that makes it fun! Suppose we have a single die. It has six faces, and as far as we know each is equally likely to be face up if we roll the die. We can now use our basic rule for computing probability to compute the probability of each possible outcome. The basic rule is to take the number of outcomes that represent the event (called X) that you're trying to compute the probability of, and divide it by the total number of possible outcomes.
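To make the counting argument concrete, here is a short Python sketch (an added illustration, not part of the original lesson) that applies the basic rule by brute force: it lists every possible litter of n puppies and divides the number of all-same-sex litters by the total number of outcomes.

```python
from itertools import product

# Enumerate every possible litter of n puppies ('M' or 'F' for each birth)
# and apply the basic rule: favorable outcomes / total outcomes.
def prob_all_same_sex(n):
    litters = list(product("MF", repeat=n))                     # 2**n equally likely outcomes
    same_sex = [lit for lit in litters if len(set(lit)) == 1]   # MMM...M or FFF...F
    return len(same_sex), len(litters)

for n in (3, 5):
    favorable, total = prob_all_same_sex(n)
    print(f"{n} puppies: {favorable}/{total} = {favorable/total}")
# 3 puppies: 2/8 = 0.25
# 5 puppies: 2/32 = 0.0625
```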
So, for rolling a die:

Pr("getting a one") = 1 face that has one dot / 6 faces = 1/6
Pr("getting a two") = 1 face that has two dots / 6 faces = 1/6
Pr("getting a three") = 1 face that has three dots / 6 faces = 1/6
Pr("getting a four") = 1 face that has four dots / 6 faces = 1/6
Pr("getting a five") = 1 face that has five dots / 6 faces = 1/6
Pr("getting a six") = 1 face that has six dots / 6 faces = 1/6

So we have 6 possibilities, and each has a probability of 1/6. We know that 6 * (1/6) = 1. In fact, that is an example of a fundamental rule of probabilities: the sum of the probabilities of all possibilities is equal to one. But you probably know that each face of a die is equally likely to land face-up, so, so far we haven't done anything useful. So let's ask the question: what is the probability of throwing two dice and having their sum equal seven when we add them together? Knowing that can help us win at a lot of games (such as Monopoly, a trademark of Parker Brothers), so it's pretty valuable to know the answer. To find out the answer, we apply our basic rule, but now it is not so easy to know how many possible outcomes of TWO dice thrown together will equal seven. However, it is easy to know how many total possible outcomes there are: 6 times 6 = 36. One way to find out how many possible dice throws add up to seven is just to make a table of all possible dice throws that fills in their sum and count the number of sevens. Here is one:

***| 1 | 2 | 3 | 4 | 5 | 6
===========================
 1 | 2 | 3 | 4 | 5 | 6 | 7
 2 | 3 | 4 | 5 | 6 | 7 | 8
 3 | 4 | 5 | 6 | 7 | 8 | 9
 4 | 5 | 6 | 7 | 8 | 9 | 10
 5 | 6 | 7 | 8 | 9 | 10| 11
 6 | 7 | 8 | 9 | 10| 11| 12

The numbers along the top represent the value on the first die and the values on the left represent the values of the second die when you throw them together. The values inside the table represent the sum of the first and the second dice. (Note that the numbers range from 2 to 12; there's no way to throw two dice and get a number less than two!) So if we look carefully, we can see that there are exactly 6 sevens in this table. So the probability of getting a seven if you throw two dice is 6 out of 36, or:

Pr(getting a 7) = 6 / 36 = 1 / 6. (1/6 is a simpler fraction equal in value to 6/36.)

Maybe that doesn't surprise you, but let's look at all the probabilities:

Pr(getting a 2) = 1 / 36
Pr(getting a 3) = 2 / 36
Pr(getting a 4) = 3 / 36
Pr(getting a 5) = 4 / 36
Pr(getting a 6) = 5 / 36
Pr(getting a 7) = 6 / 36
Pr(getting a 8) = 5 / 36
Pr(getting a 9) = 4 / 36
Pr(getting a 10) = 3 / 36
Pr(getting a 11) = 2 / 36
Pr(getting a 12) = 1 / 36

What do you think we would get if we added all of those probabilities together? They should equal 36 / 36, right? Check that they do. Did you know that the probability of getting a 5 is four times that of getting a 2? Did you know that the probability of getting a five, six, seven or eight is higher than the probability of the other 7 possible sums all together? We can tell things like that by adding the probabilities together, as long as the events we're talking about are mutually exclusive (no two of them can happen on the same throw), as they are in this case. So we can even write:

Pr(getting a 5, 6, 7 or 8) = Pr(getting a 5) + Pr(getting a 6) + Pr(getting a 7) + Pr(getting an 8)

But we can substitute these fractions and add them up easily to get:

Pr(getting a 5, 6, 7, or 8) = (4+5+6+5)/36 = 20/36.
Since we know the probabilities of all possible events must add up to one, we can actually use that to compute the probability:

Pr(getting a 2, 3, 4, 9, 10, 11 or 12) = 1 - (20/36) = (36/36 - 20/36) = 16/36.

This way we didn't have to add up all those individual probabilities; we just subtracted our fraction from 1. Mathematically we could say: Pr(some events) = 1 - Pr(all the other possible events).

You might want to take the time now to get two dice and roll them 3*36 = 108 times, recording the sum of them each time. The count you get for each sum won't be exactly 108 times the probability that we have computed, but it should be pretty close! You should get around 18 sevens. This is fun to do with friends; you can have each person make 36 rolls and then add together all your results for each possible sum. The fact that, if you make a lot of rolls, you are likely to get results close to the probability of an outcome times the number of rolls is called the Law of Large Numbers. It's what ties the math of probability to reality, and what lets you do math to make a good decision about what roll you might get in a game, or anything else for which you can compute or estimate a probability.

Here's a homework exercise for each of you: compute the probabilities of every possible sum of throwing two dice, but two different dice: one that comes up 1, 2, 3 or 4 only, and the other that comes up 1, 2, 3, 4, 5, 6, 7 or 8. Note that with these dice (which you can get at a gaming store, if you want) the lowest and highest possible rolls are the same, but the number of possible outcomes is only 4 * 8 = 32.

***| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8
=====================================
 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10| 11
 4 | 5 | 6 | 7 | 8 | 9 | 10| 11| 12

Pr(getting a 2) = 1 / 32
Pr(getting a 3) = 2 / 32
Pr(getting a 4) = 3 / 32
Pr(getting a 5) = 4 / 32
Pr(getting a 6) = 4 / 32
Pr(getting a 7) = 4 / 32
Pr(getting a 8) = 4 / 32
Pr(getting a 9) = 4 / 32
Pr(getting a 10) = 3 / 32
Pr(getting a 11) = 2 / 32
Pr(getting a 12) = 1 / 32

(You should check our work by making sure that all these probabilities sum to 32/32 = 1.)

So this is interesting: although the possible scores of throwing the two dice are the same, the probabilities are a little different. Since 1/32 is a little more than 1/36, you are actually more likely to get a two or a twelve this way. And since 4/32 = 1/8 is less than 6/36 = 1/6, you are less likely to get a seven with these dice. You might have been able to guess that, but by doing the math you know for sure, and you even know by how much, if you know how to subtract fractions well.

So let's review what we've learned:
- We know what the word probability means.
- We know probabilities are always numbers between 0 and 1.
- We know a basic approach to computing some common kinds of probabilities.
- We know that the probabilities of all possible outcomes should add up to exactly 1.
- We know the probability of a result is equal to one minus the probability of all other results.
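For readers who want to check the homework answer, or to watch the Law of Large Numbers in action, here is an added Python sketch (not part of the original text) that computes the exact sum probabilities for any pair of dice by enumeration and also simulates 108 rolls.

```python
import random
from collections import Counter
from fractions import Fraction
from itertools import product

# Exact probabilities by enumeration (the "table" method), for any pair of dice.
def exact_sum_probabilities(sides_a, sides_b):
    sums = [a + b for a, b in product(range(1, sides_a + 1), range(1, sides_b + 1))]
    total = len(sums)
    return {s: Fraction(n, total) for s, n in sorted(Counter(sums).items())}

# Simulation, to see the Law of Large Numbers at work.
def simulate(sides_a, sides_b, rolls):
    return Counter(random.randint(1, sides_a) + random.randint(1, sides_b)
                   for _ in range(rolls))

print(exact_sum_probabilities(6, 6)[7])   # 1/6 for two ordinary dice
print(exact_sum_probabilities(4, 8)[7])   # 1/8 for the 4-sided plus 8-sided pair
print(simulate(6, 6, 108)[7])             # varies, but usually somewhere near 18
```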
http://en.wikibooks.org/wiki/Primary_Mathematics/Probability
13
119
In mathematics, the absolute value (or modulus) | x | of a real number x is the non-negative value of x without regard to its sign. Namely, | x | = x for a positive x, | x | = −x for a negative x, and | 0 | = 0. For example, the absolute value of 3 is 3, and the absolute value of −3 is also 3. The absolute value of a number may be thought of as its distance from zero. Generalisations of the absolute value for real numbers occur in a wide variety of mathematical settings. For example, an absolute value is also defined for the complex numbers, the quaternions, ordered rings, fields and vector spaces. The absolute value is closely related to the notions of magnitude, distance, and norm in various mathematical and physical contexts.

Terminology and notation

Jean-Robert Argand introduced the term "module", meaning 'unit of measure' in French, in 1806 specifically for the complex absolute value, and it was borrowed into English in 1866 as the Latin equivalent "modulus". The term "absolute value" has been used in this sense since at least 1806 in French and 1857 in English. The notation | x | was introduced by Karl Weierstrass in 1841. Other names for absolute value include "the numerical value" and "the magnitude". The same notation is used with sets to denote cardinality; the meaning depends on context.

Definition and properties

Real numbers

From an analytic geometry point of view, the absolute value of a real number is that number's distance from zero along the real number line, and more generally the absolute value of the difference of two real numbers is the distance between them. Indeed, the notion of an abstract distance function in mathematics can be seen to be a generalisation of the absolute value of the difference (see "Distance" below). Since the square-root notation without sign represents the positive square root, it follows that

    | a | = √(a²),     (1)

which is sometimes used as a definition of absolute value.

The absolute value has the following four fundamental properties:

    | a | ≥ 0     (2) Non-negativity
    | a | = 0 ⟺ a = 0     (3) Positive-definiteness
    | ab | = | a | | b |     (4) Multiplicativeness
    | a + b | ≤ | a | + | b |     (5) Subadditivity

Other important properties of the absolute value include:

    | | a | | = | a |     (6) Idempotence (the absolute value of the absolute value is the absolute value)
    | −a | = | a |     (7) Symmetry
    | a − b | = 0 ⟺ a = b     (8) Identity of indiscernibles (equivalent to positive-definiteness)
    | a − b | ≤ | a − c | + | c − b |     (9) Triangle inequality (equivalent to subadditivity)
    | a / b | = | a | / | b |, for b ≠ 0     (10) Preservation of division (equivalent to multiplicativeness)
    | a − b | ≥ | | a | − | b | |     (11) Reverse triangle inequality (equivalent to subadditivity)

Two other useful properties concerning inequalities are:

    | a | ≤ b ⟺ −b ≤ a ≤ b
    | a | ≥ b ⟺ a ≤ −b or b ≤ a

These relations may be used to solve inequalities involving absolute values. For example, | x − 3 | ≤ 9 ⟺ −9 ≤ x − 3 ≤ 9 ⟺ −6 ≤ x ≤ 12. Absolute value is used to define the absolute difference, the standard metric on the real numbers.

Complex numbers

Since the complex numbers are not ordered, the definition given above for the real absolute value cannot be directly generalised for a complex number. However, the geometric interpretation of the absolute value of a real number as its distance from 0 can be generalised. The absolute value of a complex number is defined as its distance in the complex plane from the origin, using the Pythagorean theorem. More generally, the absolute value of the difference of two complex numbers is equal to the distance between those two complex numbers. For any complex number z = x + iy, where x and y are real numbers, the absolute value or modulus of z is denoted | z | and is given by | z | = √(x² + y²). When the imaginary part y is zero, this is the same as the absolute value of the real number x.
When a complex number z is expressed in polar form as z = r e^(iθ) with r ≥ 0 and θ real, its absolute value is | z | = r.

The absolute value of a complex number can be written in the complex analogue of equation (1) above as | z | = √(z z̄), where z̄ is the complex conjugate of z. The complex absolute value shares all the properties of the real absolute value given in equations (2)–(11) above. Since the positive reals form a subgroup of the complex numbers under multiplication, we may think of absolute value as an endomorphism of the multiplicative group of the complex numbers.

Absolute value function

The real absolute value function is continuous everywhere. It is differentiable everywhere except for x = 0. It is monotonically decreasing on the interval (−∞, 0] and monotonically increasing on the interval [0, +∞). Since a real number and its negative have the same absolute value, it is an even function, and is hence not invertible. Both the real and complex functions are idempotent.

Relationship to the sign function

The absolute value function of a real number returns its value irrespective of its sign, whereas the sign (or signum) function returns a number's sign irrespective of its value. The following equations show the relationship between these two functions: | x | = x sgn(x), and for x ≠ 0, sgn(x) = x / | x | = | x | / x.

The second derivative of | x | with respect to x is zero everywhere except zero, where it does not exist. As a generalised function, the second derivative may be taken as two times the Dirac delta function. The antiderivative (indefinite integral) of the absolute value function is ∫ | x | dx = x | x | / 2 + C, where C is an arbitrary constant of integration.

Distance

The absolute value is closely related to the idea of distance. As noted above, the absolute value of a real or complex number is the distance from that number to the origin (along the real number line for real numbers, or in the complex plane for complex numbers), and more generally, the absolute value of the difference of two real or complex numbers is the distance between them. The standard Euclidean distance between two points a = (a₁, …, aₙ) and b = (b₁, …, bₙ) in Euclidean n-space is defined as

    d(a, b) = √((a₁ − b₁)² + (a₂ − b₂)² + … + (aₙ − bₙ)²).

This can be seen to be a generalisation of | a − b |, since if a and b are real, then by equation (1), | a − b | = √((a − b)²), while if a = a₁ + i a₂ and b = b₁ + i b₂ are complex numbers, then | a − b | = √((a₁ − b₁)² + (a₂ − b₂)²). The above shows that the "absolute value" distance, for the real numbers or the complex numbers, agrees with the standard Euclidean distance they inherit as a result of considering them as the one- and two-dimensional Euclidean spaces respectively.

The properties of the absolute value of the difference of two real or complex numbers (non-negativity, identity of indiscernibles, symmetry and the triangle inequality given above) can be seen to motivate the more general notion of a distance function as follows: a real-valued function d on a set X × X is called a metric (or a distance function) on X if it satisfies the following four axioms:

    d(a, b) ≥ 0     Non-negativity
    d(a, b) = 0 ⟺ a = b     Identity of indiscernibles
    d(a, b) = d(b, a)     Symmetry
    d(a, b) ≤ d(a, c) + d(c, b)     Triangle inequality

Ordered rings

The definition of absolute value given for real numbers above can be extended to any ordered ring. That is, if a is an element of an ordered ring R, then the absolute value of a, denoted by | a |, is defined to be | a | = a if 0 ≤ a, and | a | = −a otherwise, where −a is the additive inverse of a and 0 is the additive identity element.

Fields

The fundamental properties of the absolute value for real numbers given in (2)–(5) above can be used to generalise the notion of absolute value to an arbitrary field, as follows. A real-valued function v on a field F is called an absolute value if it satisfies the following four axioms:

    v(a) ≥ 0     Non-negativity
    v(a) = 0 ⟺ a = 0     Positive-definiteness
    v(ab) = v(a) v(b)     Multiplicativeness
    v(a + b) ≤ v(a) + v(b)     Subadditivity or the triangle inequality

where 0 denotes the additive identity element of F. It follows from positive-definiteness and multiplicativeness that v(1) = 1, where 1 denotes the multiplicative identity element of F.
The real and complex absolute values defined above are examples of absolute values for an arbitrary field. If v is an absolute value on F, then the function d on F × F, defined by d(a, b) = v(a − b), is a metric, and the following are equivalent:
- d satisfies the ultrametric inequality d(x, z) ≤ max(d(x, y), d(y, z)) for all x, y, z in F.
- The set { d(n·1, 0) : n a positive integer } is bounded in R.

Vector spaces

Again the fundamental properties of the absolute value for real numbers can be used, with a slight modification, to generalise the notion to an arbitrary vector space. A real-valued function on a vector space V over a field F, represented as ‖ · ‖, is called an absolute value (or more usually a norm) if it satisfies the following axioms. For all a in F, and v, u in V:

    ‖ v ‖ ≥ 0     Non-negativity
    ‖ v ‖ = 0 ⟺ v = 0     Positive-definiteness
    ‖ a v ‖ = | a | ‖ v ‖     Positive homogeneity or positive scalability
    ‖ u + v ‖ ≤ ‖ u ‖ + ‖ v ‖     Subadditivity or the triangle inequality

The norm of a vector is also called its length or magnitude. In the case of Euclidean space Rⁿ, the function defined by

    ‖ (x₁, x₂, …, xₙ) ‖ = √(x₁² + x₂² + … + xₙ²)

is a norm called the Euclidean norm. When the real numbers R are considered as the one-dimensional vector space R¹, the absolute value is a norm, and is the p-norm for any p. In fact the absolute value is the "only" norm on R¹, in the sense that, for every norm ‖ · ‖ on R¹, ‖ x ‖ = ‖ 1 ‖ · | x |. The complex absolute value is a special case of the norm in an inner product space. It is identical to the Euclidean norm, if the complex plane is identified with the Euclidean plane R².
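As an added, purely illustrative aside (not from the original article): Python's built-in abs() implements both the real absolute value and the complex modulus, so the fundamental properties (2)–(5) can be spot-checked numerically.

```python
# Illustration: spot-check the four fundamental properties of the absolute value
# for random real and complex numbers, using Python's built-in abs().
import math, random

def check_properties(a, b, tol=1e-9):
    assert abs(a) >= 0                                               # non-negativity
    assert (abs(a) == 0) == (a == 0)                                 # positive-definiteness
    assert math.isclose(abs(a * b), abs(a) * abs(b), rel_tol=tol)    # multiplicativeness
    assert abs(a + b) <= abs(a) + abs(b) + tol                       # subadditivity

for _ in range(1000):
    a = random.uniform(-10, 10) + random.uniform(-10, 10) * 1j      # random complex test value
    b = random.uniform(-10, 10) + random.uniform(-10, 10) * 1j
    check_properties(a, b)

print(abs(-3), abs(3 + 4j))   # 3.0 5.0  (real absolute value and complex modulus)
```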
http://en.wikipedia.org/wiki/Absolute_value
13
84
MSP:MiddleSchoolPortal/Measurement Sliced and Diced From Middle School Portal Measurement Sliced and Diced - Introduction Middle school teachers tell us that there are important practical skills and understanding that students need before they engage in the abstractions of algebra. These skills are found in the blurry area where measurement, basic geometry, and the arithmetic of decimals and fractions come together in the real world. To move forward mathematically, middle school students need hands-on experiences with measuring, using scale and proportionality, and estimating with benchmarks. How can online resources support the need for hands-on experiences with measurement? Read on! Students can use web resources to see the quantifying components in life, to visualize mathematics concepts, and to get instant feedback on calculations. The web is a friendly place to practice computational skills with fractions, decimals, and formulas. But perhaps most importantly, the Internet can expand the realm of possible real-world problem solving. Here we feature activities, lesson plans, and projects to help students understand how measurement and mathematical problem solving are part of life. A class measurement project can wrap together many important components of mathematics learning into a very memorable experience. Who can forget measuring their school gym to see how many pennies it could hold or finding the volume of the community swimming pool to see how many ping-pong balls it would take to fill it? A class measurement project allows students to first make choices about which tools and units to use, and then to do the measuring, use the data to find an answer, and communicate results. They apply measurement skills and concepts to solve everyday questions that can involve estimation, decimals, fractions, and proportional reasoning. A solid foundation in measurement in the middle school years enables students to think about their world in quantitative, geometric terms and see the usefulness of mathematics. Background Information for Teachers You are not alone if the idea of teaching measurement makes you uncomfortable; after all, doesn't everyone know how to measure? We selected these resources to help you refresh your approach to teaching measurement. Learning math: measurement Feel like you need a little review? This free online course examines critical concepts related to measurement. The 10 sessions feature video lessons, activities, and online demonstrations to review procedures used in conducting measurements, along with other topics such as the use of nonstandard measurement units, precision and accuracy, and the metric system. Circles, angles, volume formulas, and relationships between units of measurement are also explored. The final session presents case studies that help you examine measurement concepts from the perspective of your students. Measurement fundamentals This module, developed for the Virtual Machine Shop, offers a great deal of practical information about linear and angular measurement and the use of standard and metric units. It introduces the unique math used by engineers and tradespeople and presents a very practical focus to the study of measurement. How Many? A Dictionary of Units of Measurement This handy reference from the Center for Mathematics and Science Education at the University of North Carolina at Chapel Hill features a clickable list of defined terms related to units of measure. You'll also find FAQs, commentary, and news related to measurement units. 
Animations and Interactive Online Activities With these animations and interactive resources, students can find length, area, and perimeter at their own pace with as many repetitions as needed to create understanding. Measuring Henry's cabin When students want to know why they need to learn to measure, show them this cabin blueprint and ask them what they think a builder needs to know to start constructing a building. Students examine the cabin blueprint and find the surface area of the walls. Powers of ten What student isn't interested in very large numbers! Before your very eyes see the perspective expand from a 1-meter view of a rose bush to an expanded vision of 10 to the 26 power and then decrease to 10 to the negative 15. The site, also available in German and Italian, uses the meter as the unit of measurement. This visualization can help students see the results of increasing and decreasing scale. It is an engaging way to demonstrate scale and is a nice illustration of the meaning of exponents. Jigsaw puzzle size-up This online interactive jigsaw puzzle activity requires students to enlarge or shrink puzzle pieces before placing them in a puzzle. The choices for enlarging are 1.5, 2, and 4 times larger, while the sizes for shrinking are one-quarter, one-third, and one-half. These next two resources are from the site Figure This! that features 81 activities for middle schoolers. The activities, presented by colorful, animated characters, feature mathematics found in real-life situations. Students work with paper and pencil to answer the multiple questions posed in each activity. From the Figure This! home page you can go to a math index for a correlation of activities to important math topics. Printable versions of the activities are available in English and Spanish. Access ramp: how steep can a ramp be? The activity opens with an animation of a Figure This! character in a wheelchair using an access ramp over a three-step staircase, with steps 7 inches high and 10 inches wide. Students are challenged to think about dimensions of an access ramp to determine where the ramp should start to go up the three steps at a reasonable slope. Information about handicap accessibility is included. Windshield wipers: it's raining! Who sees more? The driver of the car or the truck? Here's something that the future drivers in middle school will relate to. Geometric shapes are used to compare the areas cleaned by different styles of windshield wipers. Open-Ended Questions and Hands-On Activities Teachers can use the printable open-ended questions and hands-on activities to get students thinking about measurement concepts. Approximating the Circumference and Area of a Circle If you have students who think they know everything about area, perimeter, circles, and pi, make a copy of this Geometry Problem of the Week and see what they can figure out. Wacky ruler Here's a good starter activity for students who lack the most fundamental understanding of measurement. They print out a "wacky ruler" and a page that features eight wiggly pink worms. The model ruler is marked with two, four, and seven units. Students can enter their measurements online to check their accuracy. Measuring (Const) These six worksheets introduce students to fractional units on a standard ruler and millimeters on a metric ruler. There is also a short, colorful PowerPoint slide show that demonstrates the fractional parts of an inch. 
Measure a picture, number 1: inch, half, quarter of an inch This student worksheet is the first in a series of five worksheets offering practical experience reading units on a ruler. Visualizing the Metric System How can you make the metric system more understandable for your students? Tell them to think of a gram as the mass of a jelly bean and a liter as one quart. This list can help students retain a visual picture to approximate various metric units. National Institute of Standards and Technology metric pyramid Take the mystery out of the metric system by having students create their own reference tool. They can use this paper cutout to make a 3-D pyramid printed with metric conversion information for length, mass, area, energy, volume, and temperature. The next four resources can be used to support a student project that explores big trees and the mathematics related to circumference and pi. A student exploration question can be "How big is the biggest tree in our neighborhood?" Big tree: have you ever seen a tree big enough to drive a car through? Even if your students have never seen a tree large enough to drive a car through, they can practice using fractions and decimals and the formula for the circumference of a circle. This activity lists the girth and height of 10 National Champion giant trees and asks students to determine which of the trees is large enough for a car to drive through. NPR: Bushwhacking with a Big-Tree Hunter Some people hunt animals and others hunt for trees. In this National Public Radio story, one in a special series called Big Trees and the Lives They've Changed, visit the Olympic Peninsula and learn about the life of a big tree hunter and the death of a giant Douglas fir. Here is a project idea that can be huge and interdisciplinary with the science or social studies department or that can be a one-day event where students can experience practical measurement. You may want to register as part of an online worldwide one-day event to calculate the circumference of the Earth (see first resource) or simply use all the resources to put together a class activity for replicating Eratosthenes' experiment. However you choose to approach it, you can tie measurement to real life by highlighting the historical connections and relating the activity to the modern technology of global positioning systems. The noon day project This Internet site presents the necessary mathematics and science information teachers need to re-create the measurement of the circumference of the Earth as done by the Greek librarian Eratosthenes more than 2000 years ago. Shadow measurements taken at high noon local time on a designated day in March are posted online and used to calculate the circumference of the Earth. Teachers can sign up and have their students participate in this annual spring event. Measuring the Circumference of the Earth This web page illustrates how data and mathematics were used in Eratosthenes' famous experiment. Money: Large Amounts Project Finally, how about using pennies as a unit of measure and asking big questions such as "What would the national debt look like if it were a pile of pennies—would it reach farther than the moon?" Once you start thinking in terms of using pennies, or any other size coins, to represent quantities, you may decide to start with a smaller quantity than the national debt. In any event, these web sites are a great place to begin. The megapenny project Visit this site to begin to appreciate the magnitude of large numbers. 
It shows and describes arrangements of large quantities of U.S. pennies. You'll see that a stack of 16 pennies measures one inch and a row of 16 pennies is one foot long. The site builds excitement for learning the size of the mass found in one quintillion (written as a one followed by eighteen zeroes) pennies. All pages have tables at the bottom, listing things such as the value of the pennies on the page, size of the pile, weight, and area (if laid flat). All weights and measurements are U.S. standards, not metric. The silver mile Here is a Math Forum middle school Problem of the Week that challenges students to think about the coins involved in creating a mile-long trail of silver coins. The authors include a few rules that require students to use fractions as they construct their mile using nickels, dimes, quarters, half-dollars, or silver dollars in specific proportions. MSP full record SMARTR: Virtual Learning Experiences for Students Visit our student site SMARTR to find related virtual learning experiences for your students! The SMARTR learning experiences were designed both for and by middle school aged students. Students from around the country participated in every stage of SMARTR’s development and each of the learning experiences includes multimedia content including videos, simulations, games and virtual activities. Visit the virtual learning experience on Measurement. The FunWorks Visit the FunWorks STEM career website to learn more about a variety of math-related careers (click on the Math link at the bottom of the home page). NCTM Measurement Standard In the discussion of the measurement standard, the National Council of Teachers of Mathematics states that "In the middle grades, students should build on their formal and informal experiences with measurable attributes like length, area, and volume; with units of measurement; and with systems of measurement." (Principles and Standards for School Mathematics, NCTM, 2000, p. 241) At its simplest, measurement in grades 6-8 begins with what appears to be a basic need for the student to know how to use a ruler to measure length. The reality is that even this apparently simple measurement task requires the use of multistep mathematics thinking. Many students lack the computational skills and conceptual understanding necessary to take on the more sophisticated tasks of finding surface area and volume and using units and converting units in metric and customary systems. The suggested online resources may help your students develop a conceptual understanding of area, perimeter, and volume and learn to use formulas and measurement units. The resources can also help you plan a really worthwhile class project. Check out the nine specific expectations that NCTM describes for middle school students related to the measurement standard. Author and Copyright Judy Spicer is the mathematics education resource specialist for digital library projects at Ohio State University. She has taught mathematics in grades 9-14. Please email any comments to [email protected]. Connect with colleagues at our social network for middle school math and science teachers at http://msteacher2.org. Copyright November 2004 - The Ohio State University. This material is based upon work supported by the National Science Foundation under Grant No. 0424671 and since September 1, 2009 Grant No. 0840824. 
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
http://msp.ehe.osu.edu/wiki/index.php/MSP:MiddleSchoolPortal/Measurement_Sliced_and_Diced
13
65
Critical point (mathematics)
In calculus, a critical point of a function of a real variable is any value in the domain where either the function is not differentiable or its derivative is 0. The value of the function at a critical point is a critical value of the function. These definitions admit generalizations to functions of several variables, differentiable maps between R^m and R^n, and differentiable maps between differentiable manifolds.
Definition for single variable functions
A critical point of a function of a single real variable, f(x), is a value x_0 in the domain of f where either the function is not differentiable or its derivative is 0, f′(x_0) = 0. Any value in the codomain of f that is the image of a critical point under f is a critical value of f. These concepts may be visualized through the graph of f: at a critical point, either the graph does not admit a tangent or the tangent is a vertical or horizontal line. In the last case, the derivative is zero and the point is called a stationary point of the function. By Fermat's theorem, local maxima and minima of a function can occur only at its critical points. However, not every stationary point is a maximum or a minimum of the function — it may also correspond to an inflection point of the graph, as for f(x) = x^3 at x = 0, or the graph may oscillate in the neighborhood of the point, as in the case of the function defined by the formulae f(x) = x^2 sin(1/x) for x ≠ 0 and f(0) = 0, at the point x = 0.
- The function f(x) = x^2 + 2x + 3 is differentiable everywhere, with the derivative f′(x) = 2x + 2. This function has a unique critical point −1, because it is the unique number x_0 for which 2x_0 + 2 = 0. This point is a global minimum of f. The corresponding critical value is f(−1) = 2. The graph of f is a concave up parabola, the critical point is the abscissa of the vertex, where the tangent line is horizontal, and the critical value is the ordinate of the vertex and may be represented by the intersection of this tangent line and the y-axis.
- The function f(x) = x^(2/3) is defined for all x and differentiable for x ≠ 0, with the derivative f′(x) = (2/3)x^(−1/3). Since f′(x) ≠ 0 for x ≠ 0, the only critical point of f is x = 0. The graph of the function f has a cusp at this point with vertical tangent. The corresponding critical value is f(0) = 0.
- The function f(x) = x^3 − 3x + 1 is differentiable everywhere, with the derivative f′(x) = 3x^2 − 3. It has two critical points, at x = −1 and x = 1. The corresponding critical values are f(−1) = 3, which is a local maximum value, and f(1) = −1, which is a local minimum value of f. This function has no global maximum or minimum. Since f(2) = 3, we see that a critical value may also be attained at a non-critical point. Geometrically, this means that a horizontal tangent line to the graph at one point (x = −1) may intersect the graph at an acute angle at another point (x = 2).
- The function f(x) = 1/x has no critical points. The point x = 0 is not considered as a critical point because it is not included in the function's domain.
Several variables
In this section, functions are assumed to be smooth. For a smooth function of several real variables, the condition of being a critical point is equivalent to all of its partial derivatives being zero; for a function on a manifold, it is equivalent to its differential being zero.
If the Hessian matrix at a critical point is nonsingular then the critical point is called nondegenerate, and the signs of the eigenvalues of the Hessian determine the local behavior of the function. In the case of a real function of a real variable, the Hessian is simply the second derivative, and nonsingularity is equivalent to being nonzero. A nondegenerate critical point of a single-variable real function is a maximum if the second derivative is negative, and a minimum if it is positive. For a function of n variables, the number of negative eigenvalues of a critical point is called its index, and a maximum occurs when all eigenvalues are negative (index n, the Hessian is negative definite) and a minimum occurs when all eigenvalues are positive (index zero, the Hessian is positive definite); in all other cases, the critical point can be a maximum, a minimum or a saddle point (index strictly between 0 and n, the Hessian is indefinite). Morse theory applies these ideas to the determination of the topology of manifolds, both of finite and of infinite dimension.
Gradient vector field
In the presence of a Riemannian metric or a symplectic form, to every smooth function is associated a vector field (the gradient or Hamiltonian vector field). These vector fields vanish exactly at the critical points of the original function, and thus the critical points are stationary, i.e. constant trajectories of the flow associated to the vector field.
Definition for maps
For a differentiable map f between R^m and R^n, critical points are the points where the differential of f is a linear map of rank less than n; in particular, every point is critical if m < n. This definition immediately extends to maps between smooth manifolds. The image of a critical point under f is called a critical value. A point in the complement of the set of critical values is called a regular value. Sard's theorem states that the set of critical values of a smooth map has measure zero.
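To make the single-variable examples above concrete, here is a minimal sketch (not part of the original article) that uses the SymPy library to find and classify the critical points of f(x) = x^3 − 3x + 1 with the second derivative test; for several variables the same classification would examine the eigenvalues of the Hessian, as described above.

    # Sketch: find and classify the critical points of f(x) = x^3 - 3x + 1 with SymPy.
    import sympy as sp

    x = sp.symbols('x')
    f = x**3 - 3*x + 1

    for p in sp.solve(sp.diff(f, x), x):        # points where f'(x) = 0
        second = sp.diff(f, x, 2).subs(x, p)    # second derivative test
        if second > 0:
            kind = 'local minimum'
        elif second < 0:
            kind = 'local maximum'
        else:
            kind = 'degenerate (test inconclusive)'
        print(p, f.subs(x, p), kind)
    # Expected: x = -1 has critical value 3 (local maximum),
    #           x =  1 has critical value -1 (local minimum).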
http://en.wikipedia.org/wiki/Critical_point_(mathematics)
13
53
Hello, and welcome to the fourth episode of the Software Carpentry lecture on Python. In this episode, we'll have a look at lists. While loops let us do things many times… …collections let us store many values together, so that we don't have to define new variables for each piece of data we want to work with. The most popular kind of collection in Python is the list, which takes the place of arrays in languages like C and Fortran. To create a list, just put some values in square brackets with commas in between. To fetch the element at a location, put the index of that location in square brackets. For example, we can create a list of the atomic symbols of the first four noble gases… …and then print out the list element at location 1. And yes, Python indexes lists starting at 0, not at 1. There actually was a reason for this back in 1970, when the language C was invented; today, we just have to put up with it. And just as it's an error to try to get the value of a variable that hasn't been defined, it's an error to try to access a list element that doesn't exist. For example, if our list of noble gases has four elements, legal indices for the list are 0, 1, 2, and 3, so trying to access element 4 produces an error. If we don't know how long a list is, we can use the built-in function len to find out. As you'd expect, it returns 4 for our list of gases. And it returns 0 for the empty list, which is written as a pair of square brackets with nothing in between. We said earlier that list indices start at 0, but in fact, some negative indices work as well. values[-1] is the last element of the list, values[-2] is the next-to-last, and so on, counting backward from the end of the list. For example, here's our list of gases again. As you can see, element -1 is krypton (the last in the list), and element -4 is helium. This notation is easier to read than the long-winded alternative… …which means programmers are less likely to make mistakes with it. Lists have two important characteristics. First, they are mutable, i.e., they can be changed after they are created. For example, suppose we misspell the last entry in our list of gases. We can correct our mistake by assigning to that element of the list as if it were any other variable. Sure enough, our list has been updated in place. As you probably expect by now, the location must exist before a value can be assigned to it. If our list has four elements… …then assigning to index 4 produces an error, because the legal indices are 0 to 3 (or -1 to -4 if we're counting from the end). The second important characteristic of lists is that they are heterogeneous, i.e., they can store values of many different types. This makes them different from arrays in C and Fortran, whose entries all have to be the same type. Here for example, we have created two lists… …each of which contains both a string and an integer. This picture shows what's in memory after the second list is created: each list stores a reference to a string, and a reference to an integer. Lists can even store references to other lists. We can, for example, create a list gases whose two entries are references to the lists There's nothing magical about this: if we update our picture of what's in memory, we simply have another two-element list that stores references to other things we've already created. Nesting data structures like this allows us to do some very powerful things. It can also be a rich source of bugs, so we will delay discussion of the details to a later episode. 
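The code on the accompanying slides is not reproduced in this transcript, so here is a small sketch of the operations just described; the particular values are assumptions for illustration, not taken from the original slides.

    # Sketch of the list operations described above (values assumed).
    gases = ['He', 'Ne', 'Ar', 'Kx']   # oops: the last symbol is misspelled
    print(gases[1])        # prints 'Ne' -- indexing starts at 0
    print(len(gases))      # prints 4
    print(gases[-1])       # prints 'Kx' -- negative indices count back from the end
    gases[3] = 'Kr'        # lists are mutable, so we can fix the mistake in place
    helium = ['helium', 2] # lists are heterogeneous: a string and an integer together
    neon = ['neon', 10]
    noble_info = [helium, neon]   # lists can even store references to other lists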
Lists and loops naturally go together: we almost always use a loop of some kind to operate on all the list's elements. For example, we can use a while loop to step through the indices of a list to get each of its elements in turn. Here's a short program that prints the noble gases one by one. We start the loop variable i at 0, which is the first legal list index. Each time through the loop, we add 1 to it, so that we move through the set of legal list indices in order. We keep going as long as i is less than the length of the list, i.e., as long as it's a legal index. And sure enough, this loop prints out each list element in order. This works, but it's tedious to type it all in, time after time. And it's all too easy to forget to increment the loop index, or to get the loop control condition wrong. To make things simpler, Python provides a second kind of loop called a for loop that gives the program each list element in turn. Here, for example, we do in one line (for gas in gases) what took three lines in the previous program. As you can see, the for loop variable is assigned each element of the list in turn… …not each index. Python does this because it's the most common case: most of the time that a program wants to do something with each list element, it doesn't care what that element's location is. As we said a few slides ago, lists are mutable: their elements can be changed in place. We can also delete elements entirely, which shortens the list. Let's set up our noble gas list again… …and then tell Python to delete element 0 using the del statement. If we print gases out afterward, it only has three elements. If we delete element 2 of this list (which is now the last element, since the list's length is 3)… …we're left with a two-element list. And yes, deleting an index that doesn't exist is an error. We can lengthen lists, too, by appending new elements. Let's assign an empty list to …then append the string …and the string …and finally the string Our list now has three elements. dot-append is an example of a method, and most operations on lists (and other things) are expressed this way. A method is a function that "belongs to" (and usually operates on) a specific chunk of data. If the data is stored in thing, then we call the method using the notation "thing dot methodname", passing in any arguments it takes inside parentheses. To show you how this works, here are a few useful list methods. Let's create the gases list again, but with 'He' duplicated at the front. gases.count('He') tells us that 'He' occurs twice in the list. gases.index('Ar') tells us that the index of the first occurrence of 'Ar' is 2. (Remember indexing starts at zero, so element 2 is the third element of the list.) gases.insert takes two arguments: the index where we want to insert something, and the something we want to insert. It doesn't return any value… …but if we print out the list after calling it, we can see that 'Ne' has been put at location 1, and everything above that has been bumped up to make room, leaving us with a list of five elements. Here are two methods that are often used incorrectly. Let's re-set the gases list… …and then print the result of gases.sort(). As you can see, the sort method returns None, which is the special value Python uses for "nothing here". However, if we now print gases, it has been sorted alphabetically. gases.reverse() returns nothing… …but reverses the list in place. 
People often expect these methods to return the sorted or reversed list, which leads to a common bug: gases = gases.sort() does sort the list that gases refers to, but then assigns None to the variable gases, effectively throwing away the data that has just been sorted. The index method tells us where something is in a list, but if we just want to know whether something is there or not, we can use the in operator. Here's our list of gases again. As expected, the expression 'He' in gases is true. in is most often used in if statements, as in this example. Since 'Pu' is not in the list gases, this tells us that the universe is well ordered. The last thing we will introduce in this episode is the range function, which constructs sequences of integers. range(5) produces the list of numbers from 0 to 4… range(2, 6) produces 2, 3, 4, 5… range(0, 10, 3) produces 0, 3, 6, 9, i.e., starts at the first argument, and goes up to but not including the second argument, using the third argument as the step size. range(10, 0) does not produce a list in reverse order: instead, it starts at 10, and tries to go "up to" 0. Since nothing fits that description, it produces the empty list. Since len(list) is the length of a list, and range(N) is the integers from 0 to N-1, range(len(list)) is the integers from 0 to 1 less than the length of the list, i.e., all the legal indices of the list. An example will make this clearer. Here's our list of gases. Its length is 4, so range(len(gases)), i.e., range(4), is 0, 1, 2, and 3. If we use range(len(gases)) in a for loop, it assigns each index of the list to the loop variable in turn… …so we can print out (index, element) pairs one by one. This is a very common idiom in Python for those cases where we really do want to know each element's location as well as its value. We'll see an even better way to do it later.
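Again, the slide code itself is missing from the transcript; the sketch below pulls together the loop, deletion, method, membership, and range examples described in this episode. The exact lists and values are assumptions, and note that range returns a list in the Python 2 used by this lecture but a range object in Python 3.

    # Sketch of the loop and method examples described above (values assumed).
    gases = ['He', 'Ne', 'Ar', 'Kr']

    i = 0
    while i < len(gases):   # the long-winded way: loop over legal indices
        print(gases[i])
        i += 1

    for gas in gases:       # the for loop hands us each element in turn
        print(gas)

    del gases[0]            # delete element 0; three elements remain
    gases.append('Xe')      # lengthen the list again

    gases = ['He', 'He', 'Ar', 'Kr']
    print(gases.count('He'))   # 2
    print(gases.index('Ar'))   # 2, the first occurrence
    gases.insert(1, 'Ne')      # put 'Ne' at location 1; the list now has five elements

    gases.sort()               # sorts in place and returns None
    gases.reverse()            # reverses in place and returns None

    if 'He' in gases:          # membership test with the in operator
        print('helium is present')

    for i in range(len(gases)):
        print(i, gases[i])     # (index, element) pairs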
http://software-carpentry.org/4_0/python/lists.html
13
51
Shading is a process used in drawing for depicting levels of darkness on paper by applying media more densely or with a darker shade for darker areas, and less densely or with a lighter shade for lighter areas. There are various techniques of shading, including cross-hatching, where perpendicular lines of varying closeness are drawn in a grid pattern to shade an area. The closer the lines are together, the darker the area appears. Likewise, the farther apart the lines are, the lighter the area appears.
Computer graphics
In computer graphics, shading refers to the process of altering the color of an object/surface/polygon in the 3D scene, based on its angle to lights and its distance from lights, to create a photorealistic effect. Shading is performed during the rendering process by a program called a shader.
Angle to light source
Shading alters the colors of faces in a 3D model based on the angle of the surface to a light source or light sources. The first image below has the faces of the box rendered, but all in the same color. Edge lines have been rendered here as well, which makes the image easier to see. The second image is the same model rendered without edge lines. It is difficult to tell where one face of the box ends and the next begins. The third image has shading enabled, which makes the image more realistic and makes it easier to see which face is which. Usually, upon rendering a scene a number of different lighting techniques will be used to make the rendering look more realistic. For this purpose, a number of different types of light sources exist to provide customization for the shading of objects.
Ambient lighting
Shading is also dependent on lighting. An ambient light source represents a fixed-intensity and fixed-color light source that affects all objects in the scene equally. Upon rendering, all objects in the scene are brightened with the specified intensity and color. This type of light source is mainly used to provide the scene with a basic view of the different objects in it.
Directional lighting
A directional light source illuminates all objects equally from a given direction, like an area light of infinite size and infinite distance from the scene; there is shading, but there can be no distance falloff.
Point lighting
Light originates from a single point and spreads outward in all directions.
Spotlight lighting
A spotlight originates from a single point and spreads outward in a coned direction.
Area lighting
Light originates from a single plane and illuminates all objects in a given direction beginning from that plane.
Volumetric lighting
An enclosed space lights the objects within that space.
Shading is interpolated based on the angle at which these light sources reach the objects within a scene. Of course, these light sources can be and often are combined in a scene. The renderer then interpolates how these lights must be combined, and produces a 2D image to be displayed on the screen accordingly.
Distance falloff
Theoretically, two surfaces which are parallel are illuminated the same amount from a distant light source, such as the sun. Even though one surface is further away, your eye sees more of it in the same space, so the illumination appears the same. Notice in the first image that the color on the front faces of the two boxes is exactly the same. 
It appears that there is a slight difference where the two faces meet, but this is an optical illusion because of the vertical edge below where the two faces meet. Notice in the second image that the surfaces on the boxes are bright on the front box and darker on the back box. Also the floor goes from light to dark as it gets farther away. This distance falloff effect produces images which appear more realistic without having to add additional lights to achieve the same effect. Distance falloff can be calculated in a number of ways: - None - The light intensity received is the same regardless of the distance between the point and the light source. - Linear - For a given point at a distance x from the light source, the light intensity received is proportional to 1/x. - Quadratic - This is how light intensity decreases in reality if the light has a free path (i.e. no fog or any other thing in the air that can absorb or scatter the light). For a given point at a distance x from the light source, the light intensity received is proportional to 1/x^2. - Factor of n - For a given point at a distance x from the light source, the light intensity received is proportional to 1/x^n. - Any number of other mathematical functions may also be used. Flat shading Flat shading is a lighting technique used in 3D computer graphics to shade each polygon of an object based on the angle between the polygon's surface normal and the direction of the light source, their respective colors and the intensity of the light source. It is usually used for high speed rendering where more advanced shading techniques are too computationally expensive. As a result of flat shading all of the polygon's vertices are colored with one color, allowing differentiation between adjacent polygons. Specular highlights are rendered poorly with flat shading: If there happens to be a large specular component at the representative vertex, that brightness is drawn uniformly over the entire face. If a specular highlight doesn’t fall on the representative point, it is missed entirely. Consequently, the specular reflection component is usually not included in flat shading computation. Smooth shading Smooth shading of a polygon displays the points in a polygon with smoothly-changing colors across the surface of the polygon. This requires you to define a separate color for each vertex of your polygon, because the smooth color change is computed by interpolating the vertex colors across the interior of the triangle with the standard kind of interpolation we saw in the graphics pipeline discussion. Computing the color for each vertex is done with the usual computation of a standard lighting model, but in order to compute the color for each vertex separately you must define a separate normal vector for each vertex of the polygon. This allows the color of the vertex to be determined by the lighting model that includes this unique normal. Types of smooth shading include: Gouraud shading - Determine the normal at each polygon vertex - Apply an illumination model to each vertex to calculate the vertex intensity - Linearly interpolate the vertex intensities over the surface polygon Data structures - Sometimes vertex normals can be computed directly (e.g. height field with uniform mesh) - More generally, need data structure for mesh - Key: which polygons meet at each vertex - Polygons, more complex than triangles, can also have different colors specified for each vertex. In these instances, the underlying logic for shading can become more intricate. 
- Even the smoothness introduced by Gouraud shading may not prevent the appearance of shading differences between adjacent polygons.
- Gouraud shading is more CPU-intensive and can become a problem when rendering real-time environments with many polygons.
- T-junctions with adjoining polygons can sometimes result in visual anomalies. In general, T-junctions should be avoided.
Phong shading
Phong shading is similar to Gouraud shading, except that the normals are interpolated. Thus, the specular highlights are computed much more precisely than in the Gouraud shading model:
- Compute a normal N for each vertex of the polygon.
- From bilinear interpolation compute a normal N_i for each pixel (this must be renormalized each time).
- From N_i compute an intensity I_i for each pixel of the polygon.
- Paint the pixel the shade corresponding to I_i.
Flat vs. smooth shading
Flat shading | Smooth shading
Uses the same color for every pixel in a face, usually the color of the first vertex. | Uses linear interpolation of colors between vertices.
Edges appear more pronounced than they would on a real object because of a phenomenon in the eye known as lateral inhibition. | The edges disappear with this technique.
Same color for any point of the face. | Each point of the face has its own color.
Individual faces are visualized. | The underlying surface is visualized.
Not suitable for smooth objects. | Suitable for any objects.
Less expensive. | More expensive.
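As an illustration of the ideas above, here is a small sketch (not from the original article) of a Lambertian diffuse term with quadratic distance falloff, evaluated once per face for flat shading and once per vertex for Gouraud-style interpolation; the vector helpers and the example positions, normals, and light power are assumptions chosen for the demonstration.

    # Sketch: diffuse (Lambertian) intensity with quadratic distance falloff.
    import math

    def normalize(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)

    def diffuse_intensity(point, normal, light_pos, light_power):
        # intensity = power * max(0, N.L) / distance^2 (quadratic falloff)
        to_light = tuple(l - p for l, p in zip(light_pos, point))
        dist2 = sum(c * c for c in to_light)
        n_dot_l = sum(n * l for n, l in zip(normalize(normal), normalize(to_light)))
        return light_power * max(0.0, n_dot_l) / dist2

    light = (0.0, 5.0, 0.0)

    # Flat shading: one intensity for the whole face, from its single surface normal.
    face_center, face_normal = (0.0, 0.0, 0.0), (0.0, 1.0, 0.0)
    flat = diffuse_intensity(face_center, face_normal, light, 25.0)

    # Gouraud shading: one intensity per vertex (each with its own normal),
    # then linear interpolation across the face, here at fixed barycentric weights.
    vertices = [((-1.0, 0.0, 0.0), (0.0, 1.0, 0.0)),
                ((1.0, 0.0, 0.0), (0.0, 1.0, 0.0)),
                ((0.0, 0.0, 1.0), (0.3, 0.9, 0.3))]
    vertex_intensities = [diffuse_intensity(p, n, light, 25.0) for p, n in vertices]
    weights = (0.2, 0.3, 0.5)   # barycentric weights of one interior pixel
    gouraud = sum(w * i for w, i in zip(weights, vertex_intensities))

    print(flat, vertex_intensities, gouraud)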
http://en.wikipedia.org/wiki/Shading
13
121
The original intent of the metric system is to have one name for each characteristic to be measured. For example, the only unit of distance is the meter. That characteristic could then be magnified or diminished with an appropriate prefix. For example, 'deci' means a tenth, so that a decimeter is one tenth of a meter. The next important thing is to relate the different characteristics by defining their corresponding units so that 'fudge factors' are not needed. The unit of volume is called the liter and is defined as a cubic decimeter. The unit of mass is called the kilogram, as defined by the international prototype. The unit of time is called the second. The unit of force is called the newton. So if I say 'F=ma', the unit of force was chosen so that it requires a force of one newton to accelerate a mass of one kilogram at a rate of one meter per second every second. The energy needed to push an object with a force of one newton for a distance of one meter is one joule. The power needed to push an object with a force of one newton at a velocity of one meter per second is one watt. In the design of the metric system, great effort was made to avoid 'fudge factors'. The metric system was designed to be easy to understand. For each characteristic to be measured [except mass], only one unit is defined. And that unit can be scaled from the very small to the very large. Units for derived characteristics are defined so that fudge factors are not needed. It is for these reasons that we should use the metric system, which is the official United States system of weights and measures. Metric units define the non-metric units of weights and measures used in the United States. The standard measurement of length in the United States is the meter (m). What the meter is has been redefined many times. In 1799 the meter was originally defined as one ten-millionth of the quadrant of the Earth through Paris. Later, in 1872, thirty prototype meters were made on metal bars with marks to indicate the meter. In 1960 the meter was redefined as 1,650,763.73 wavelengths of the orange-red line of krypton-86. And in 1983 the meter was again redefined so that the speed of light in a vacuum was exactly 299,792,458 meters/second. In 1964, a liter [L or l] was defined as one cubic decimeter. The kilogram (kg) is the unit of mass and is equal to the mass of the international prototype of the kilogram. One liter of water at maximum density, about 4 degrees Celsius, standard pressure, has a mass very close to 1 kilogram. With English units, we add a suffix to denote very large or very small quantities, for example, "seven million". In the metric system a prefix is added to the unit, as in "seven megabucks", which would be seven million dollars. Similarly a kilobuck would be $1000, a decibuck would be a dime and a centibuck would be one cent. Below is a list of some of the prefixes used by the metric system, preceded by their abbreviations. Also included for completeness are the numbers shown in scientific notation and their English names. 
Metric Notation   Standard Notation                   Scientific Notation   United States
(Y) yotta         1 000 000 000 000 000 000 000 000   10^24
(Z) zetta         1 000 000 000 000 000 000 000       10^21
(E) exa           1 000 000 000 000 000 000           10^18                 quintillion
(P) peta          1 000 000 000 000 000               10^15                 quadrillion
(T) tera          1 000 000 000 000                   10^12                 trillion
(G) giga          1 000 000 000                       10^9                  billion
(M) mega          1 000 000                           10^6                  million
(k) kilo          1 000                               10^3                  thousand
(h) hecto         100                                 10^2                  hundred
(dk) deca         10                                  10^1                  ten
(d) deci          0.1                                 10^-1                 tenth
(c) centi         0.01                                10^-2                 hundredth
(m) milli         0.001                               10^-3                 thousandth
(µ) micro         0.000 001                           10^-6                 millionth
(n) nano          0.000 000 001                       10^-9                 billionth
(p) pico          0.000 000 000 001                   10^-12                trillionth
(f) femto         0.000 000 000 000 001               10^-15                quadrillionth
(a) atto          0.000 000 000 000 000 001           10^-18                quintillionth
(z) zepto         0.000 000 000 000 000 000 001       10^-21
(y) yocto         0.000 000 000 000 000 000 000 001   10^-24
Note that "µ" is the Greek letter mu. According to Paul Trusten, R.Ph., Public Relations Director, U.S. Metric Association (USMA), Inc., the Official U.S. Metric Page: "... The correct symbol for "microgram" is "µg". However, because the Greek letter "µ" (mu) can be mistaken for other letters when handwritten, the Joint Commission (the accrediting body for U.S. healthcare) has banned the use of the official symbol "µg" in medical records (hospital charts, hospital reports, prescriptions), and has recommended the use of "mcg" instead. ... Once a plan for U.S. metrication is in place, and metric-system education becomes part of the American landscape, I am confident that we will all become fluent in the use of "µ". However, rest assured that, in medicine and pharmacy, both "µg" and "mcg" are understood to be the same shorthand for "microgram." ..." Please DO NOT confuse "m" and "M". Aspirin is measured in mg (milligram) and a Mg (megagram) is called a metric ton. A mm is less than .04 inches but a Mm is more than 621.37 miles, about the distance between San Diego and Salt Lake City. Of course the original definition of 10 Mm was the distance from the equator to the North Pole through Paris. Note that it is improper to mix English notation (thousand, million, billion, etc.) with metric notation. The total electric power generation capability in California is about 47 gigawatts. It is often incorrectly written as 47 thousand megawatts, or 47 million kilowatts. In a desire to keep the digits significant, writing 47,000 megawatts or 47,000,000 kilowatts should be discouraged. Calling a 1 gram tablet of aspirin a 1000 mg tablet is also improper advertising hype. The unit of length is the meter (m). The unit of mass is the gram (g). The unit of time is the second (s). The unit of temperature is the kelvin (K). The unit of current is the ampere (A). The meter (m) is the length of the path travelled by light in vacuum during a time interval of 1/299,792,458 of a second. [i.e. by definition, light travels a distance of 29.9792458 cm in a ns.] The kilogram (kg) is the unit of mass; it is equal to the mass of the international prototype of the kilogram. [Essentially the kg is the mass of a liter of water at 4°C.] The second (s) is the duration [or time] of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the cesium 133 atom. [Duration chosen to equal what was a mean solar day divided by 86400.] The kelvin (K), unit of thermodynamic temperature, is the fraction 1/273.16 of the thermodynamic temperature between the triple point of water and absolute zero. Absolute zero is 0 K. 
The ampere (A) is that constant current which, if maintained in two straight parallel conductors of infinite length, of negligible circular cross section, and placed 1 meter apart in vacuum, would produce between these conductors a force equal to 2 × 10^-7 newton per meter of length. The mole [mol] is the amount of substance of a system which contains as many elementary entities as there are atoms in 0.012 kilogram of carbon 12; its symbol is "mol." The candela [cd] is the luminous intensity, in a given direction, of a source that emits monochromatic radiation of frequency 540 × 10^12 hertz and that has a radiant intensity in that direction of 1/683 watt per steradian. The liter (L), a measure of volume, is a cubic decimeter. The newton (N) is the force needed to accelerate 1 kg at 1 m/sec^2. 100000 dynes are equal to 1 newton. [On the Earth, gravity attracts 1 kg with a force of about 9.8 newtons. This force is the attraction of the 1 kg to all the mass in the universe and is known as the weight of the 1 kg. It varies between the equator and the poles. It also varies with time of day. Standard gravity is defined as 9.80665 m/sec^2.] The joule (J), a measure of energy, is 1 newton-meter of work. It is also equal to 1 watt-second of energy. There are 3.6 megajoules in a kilowatt-hour. To raise a liter of water one meter is about 9.8 joules of work. The watt (W), a measure of power, is one joule per second or one newton-meter per second. On Earth, the power needed to raise one liter of water one meter in one second is about 9.8 watts. One ampere through an electrical resistance of one ohm develops an electric potential of one volt and dissipates one watt of power. The coulomb (C) is the measure of electric charge. One amp-second is one coulomb. [About 6.24 × 10^18 electrons.] The volt (V) is the electrical potential (E) across a resistor dissipating one watt with a current of one amp. That resistor will have a resistance of one ohm. The ohm (Ω) is electrical resistance (R). It is equal to the voltage across a resistor divided by the current through the resistor in amps. The tesla (T) is a measure of magnetic flux density (B). It is webers (Wb) per square meter. Note that 1 tesla is equal to 10,000 gauss. [One gauss = 100 µT] The weber (Wb) is a measure of magnetic flux. It is volt-seconds. The henry (H) is the unit of inductance. It is equal to magnetic flux (Wb) divided by current (A). The voltage across an inductor is equal to the rate of change of current through the inductor times its inductance. The farad (F) is the unit of capacitance. It is equal to charge (C) divided by voltage (V). The current into a capacitor is equal to the rate of change of voltage across the capacitor times the capacitance. The degree Celsius (°C) [aka centigrade] is the same size as the kelvin, but zero on the Celsius scale is at the freezing point of water. The hertz (Hz), a measure of frequency, is the number of times something happens per second. It replaces the old unit 'Cycles per Second' (CPS). The frequency of a periodic function is the reciprocal of the period in seconds. In United States music, 'A' above middle 'C' is 440 Hz. The diopter, a measure of lens power, is the inverse of the focal length in meters. Thus a lens with a focal length of 1 m has a power of 1 diopter and a lens with a focal length of 0.25 m has a power of 4.0 diopters. When lenses are used together, the total power is the sum of the powers of each lens. The bel (B) is a logarithmic measure of power ratio. 
A power ratio of 10 is 1 B, a power ratio of 100 is 2 B, and a power ratio of 1000 is 3 B. A power ratio of 10^0.1 [about 1.26] is 0.1 B, or 1 dB [decibel]. By design, metric units combine to form new units to measure other properties. The following table may prove useful. For all practical purposes:
Volume             Volume       Mass of Water
cubic meter        kiloliter    metric ton
cubic decimeter    liter        kilogram
cubic centimeter   milliliter   gram
cubic millimeter   microliter   milligram
It shows the relationship between volume expressed in cubic meters, the equivalent volume in liters, and the mass of water, at maximum density (which is about 4 degrees centigrade), that will fill that volume. Note that water therefore has a maximum density of about 1 metric ton per cubic meter [kiloliter], and 1 kilogram per liter, and 1 gram per milliliter, and 1 milligram per microliter. Similarly, the unit of pressure called the 'pascal' is a newton per square meter, and is equal to a centinewton per square decimeter, and is equal to 100 micronewtons per square centimeter, and is equal to a micronewton per square millimeter. A measurement consists of two parts, the number and the unit. Both are important. If I were to say: 9.806650 = 32.17405 = 21.93685 you would say I was wrong. And I would be wrong. But if I were to say: 1 G = 9.806650 m/sec^2 = 32.17405 ft/sec^2 = 21.93685 mph/sec then I would be right. [more or less, these are approximations] Note that here "G" represents the standard [average] acceleration of a mass on the surface of the Earth due to the force of gravity. If I multiply 30 cm by 40 cm, the answer is 1200 cm^2. Note that the units are subject to the same algebraic manipulation as are the numbers. Acceleration is the rate of change of velocity. Rate of change, also known as a time derivative, has the unit 1/sec. Distance is the area under the velocity versus time plot. This area, also known as a time integral, has the unit sec. Thus the rate of change of distance with time is velocity, the rate of change of velocity is acceleration, and the rate of change of acceleration is jerk. If the distance is in meters and the time in seconds, the corresponding units are: distance in m, velocity in m/sec, acceleration in m/sec^2, and jerk in m/sec^3. Velocity will always be in the form of unit distance per unit time. Any other unit is wrong. Knowing how to multiply and divide units is essential to understanding science. There are four different ways to divide a circle and measure an angle. There are 2 pi radians, 360 degrees, or 400 gradians in a circle. In addition, computers often represent a circle as a fraction between 0 and 1 using Binary Angular Measure, or BAMs. The 5-bit Gray-coded optical disk shown in Figure 1 is an example of a disk used to encode direction into a computer. 
      code   compass  BAM       deg/min/sec    radians
00    BBBBB  N        0.015625  5° 37' 30"     0.098175
01    BBBBW  NNE      0.046875  16° 52' 30"    0.294524
02    BBBWW  NNE      0.078125  28° 7' 30"     0.490874
03    BBBWB  NE       0.109375  39° 22' 30"    0.687223
04    BBWWB  NE       0.140625  50° 37' 30"    0.883573
05    BBWWW  ENE      0.171875  61° 52' 30"    1.079922
06    BBWBW  ENE      0.203125  73° 7' 30"     1.276272
07    BBWBB  E        0.234375  84° 22' 30"    1.472622
08    BWWBB  E        0.265625  95° 37' 30"    1.668971
09    BWWBW  ESE      0.296875  106° 52' 30"   1.865321
10    BWWWW  ESE      0.328125  118° 7' 30"    2.061670
11    BWWWB  SE       0.359375  129° 22' 30"   2.258020
12    BWBWB  SE       0.390625  140° 37' 30"   2.454369
13    BWBWW  SSE      0.421875  151° 52' 30"   2.650719
14    BWBBW  SSE      0.453125  163° 7' 30"    2.847068
15    BWBBB  S        0.484375  174° 22' 30"   3.043418
16    WWBBB  S        0.515625  185° 37' 30"   3.239767
17    WWBBW  SSW      0.546875  196° 52' 30"   3.436117
18    WWBWW  SSW      0.578125  208° 7' 30"    3.632467
19    WWBWB  SW       0.609375  219° 22' 30"   3.828816
20    WWWWB  SW       0.640625  230° 37' 30"   4.025166
21    WWWWW  WSW      0.671875  241° 52' 30"   4.221515
22    WWWBW  WSW      0.703125  253° 7' 30"    4.417865
23    WWWBB  W        0.734375  264° 22' 30"   4.614214
24    WBWBB  W        0.765625  275° 37' 30"   4.810564
25    WBWBW  WNW      0.796875  286° 52' 30"   5.006913
26    WBWWW  WNW      0.828125  298° 7' 30"    5.203263
27    WBWWB  NW       0.859375  309° 22' 30"   5.399612
28    WBBWB  NW       0.890625  320° 37' 30"   5.595962
29    WBBWW  NNW      0.921875  331° 52' 30"   5.792314
30    WBBBW  NNW      0.953125  343° 7' 30"    5.988661
31    WBBBB  N        0.984375  354° 22' 30"   6.185011
The reason for the Gray code is to prevent errors that might occur if two bands were supposed to change at a given angle, but one change was a very small fraction off. Even though many software programs use BAMs for calculation of angles to describe, for example, direction of travel, computer output is usually converted to degrees and minutes for user display. This is done by multiplying the fractional angle [BAM] by 360 degrees, using the resultant integer as degrees, multiplying the new fraction by 60, then using that integer for minutes. The new fraction could be multiplied by 60 to get the seconds. Unlike this simple example, most encoding disks use 8 or more bits to encode an angle. Computers are binary machines. Every time a bit is added to the memory address space, the amount of addressable memory doubles. So the measurement of computer memory is based on powers of 2. To simplify the description of memory size, the term kilobyte is applied to the size of memory addressable by 10 bits, that is, 1024 bytes. This was done because 1024 is close to 1000. So when talking about computer memory or disk size, use the following table:
Metric Notation   Standard Notation       Computer Notation   United States
(PB) petabyte     1 125 899 906 842 624   2^50                quadrillion
(TB) terabyte     1 099 511 627 776       2^40                trillion
(GB) gigabyte     1 073 741 824           2^30                billion
(MB) megabyte     1 048 576               2^20                million
(kB) kilobyte     1 024                   2^10                thousand
To summarize, the basic units of the Metric System are the meter [m], kilogram [kg] and the second [s]. 100 inches is the same distance as 2.54 meters. A cubic decimeter [0.1 m or about 3.937" on a side] is a liter [l]. Fill that liter with cold water and the water will have a mass very close to a kilogram [kg]. In Saint Louis, Missouri, that kilogram will have a weight of about 9.8 newtons [N]. Raise that kilogram up 1 meter and you will have done 9.8 joules [J] of work. Raising that kilogram up 1 meter in 1 second requires a power of 9.8 watts [W]. The pressure at the bottom of that cubic decimeter of water is .98 kilopascal [kPa].
Weight of a 100 kg mass as a function of location:
North Pole             983.217 N   221.036 lbf
St. Michael, Alaska    982.192 N   220.806 lbf
Paris, France          980.943 N   220.525 lbf
Standard Gravity       980.665 N   220.46223 lbf
New York, New York     980.267 N   220.373 lbf
Key West, Florida      978.970 N   220.081 lbf
Equator                978.039 N   219.872 lbf
Surface of Mars        369.7 N     83.1 lbf
Surface of our Moon    162.7 N     36.5 lbf
Surface of Pluto       65.7 N      14.7 lbf
Mass & weight are NOT the same thing. The weight of an object varies with location, but its mass remains the same wherever it is. The mass of an object is measured on a balance scale by comparing it with other masses. Weight is a force and must be measured on a spring scale. The chart above shows how the weight of a 100 kg mass changes with location. Weight even changes with time of day. In the United States, things weigh the least at noon, in June, with a new moon! The difference is not much, but it is enough to cause the tides. Note that when selling sugar, apples, nuts, etc., 1 lb = .45359237 kg. At the standard gravity of 9.80665 m/sec^2, a 1 lb [one pound mass] has a weight of 1 lbf [one pound force]. Sometimes the confusion between mass and force results in the meaningless unit of kg/m^2 [kilogram per square meter] being used to measure 'pressure'. Pressure must be in N/m^2, which is Pa [pascal]. Below is a list of common metric unit symbols, the name of the unit and a short description.
A     ampere           electric current
B     bel              power ratio
C     coulomb          electric charge
cd    candela          luminous intensity
°     degree           plane angle
°C    degree Celsius   temperature
F     farad            electric capacitance
Gs    gauss            magnetic flux density [cgs]
g     gram             mass
H     henry            electric inductance
Hz    hertz            frequency
J     joule            energy, work, quantity of heat
kg    kilogram         mass
K     kelvin           absolute temperature
L or l  liter          volume
lm    lumen            luminous flux
lx    lux              illuminance
m     meter            length
mol   mole             amount of substance
N     newton           force
Oe    oersted          magnetic field strength [cgs]
Ω     ohm              electric resistance
Pa    pascal           pressure
s     second           time
S     siemens          electric conductance
t     ton              mass [megagram] [metric ton or tonne]
T     tesla            magnetic flux density
V     volt             electric potential difference, EMF
W     watt             power
Wb    weber            magnetic flux
1790 Thomas Jefferson proposed a decimal-based system of measurement for the United States. France's Louis XVI authorized scientific investigations aimed at a reform of French weights and measures. This led to the development of the first "metric" system.
1792 The U.S. Mint was formed to produce decimal currency (the U.S. dollar consisting of 100 cents). $25 was then defined as the worth of 1 oz. of gold.
1795 France officially adopted the metric system.
1866 Congress authorized the use of the metric system in the United States and gave each state a set of standard metric weights and measures.
1875 The United States became one of the original 17 signatory nations to the Treaty of the Meter.
1893 The United States adopted fundamental metric standards for length and mass.
1958 U.S. and imperial yards were adjusted to metric measurement, i.e. the U.S. inch was shortened to exactly 2.54 cm.
1960 The International System of Units, abbreviated SI, was approved by the General Conference on Weights and Measures.
1975 Congress passed the Metric Conversion Act of 1975. The Metric Board was established.
1982 The Metric Board was dissolved.
1988 Congress passed the Omnibus Trade and Competitiveness Act of 1988, which designated the metric system as the preferred system of weights and measures for United States trade and commerce.
U.S. Metric Association (USMA), Inc. A must read!
NIST, National Institute of Standards and Technology, the Official U.S.
Metric Page. The best way to learn the metric system is to see and feel familiar things that have a metric label. Many of us grew up using 35 mm film with a width of 3.5 cm. The standard CD is 12 cm in diameter. Buy a meter stick and use it. The mass of a 1000 mg tablet of aspirin is 1 g. Buy a liter of bottled water and know that the bottle has a volume of 1 dm^3 and the water has a mass of about 1 kg. Two and a half laps around your local high school football field [5 furlongs] is just over 1 km. Tires on a small car often will have a maximum inflation pressure of 300 kPa. Water freezes at 0 °C and boils at 100 °C at sea-level atmospheric pressure. That pressure will support a column of mercury about 76 cm high. Music of a marching band has a 2 Hz beat. In the United States, the 'A' above middle 'C' has a frequency of 440 Hz. If your energy bill says your average energy use is 720 kW-hr/month [30 day month], that is an average power of 1 kW, the power of ten 100 W light bulbs, about the power of bright sunlight on 1 m^2 perpendicular to the sun at the Earth's surface. A kW-hr of energy is 3.6 MJ. Light will travel almost 30 cm in a ns.
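The BAM-to-degrees procedure described above (multiply the fraction of a circle by 360, keep the integer part as degrees, multiply the remaining fraction by 60 for minutes, and once more for seconds) is easy to sketch in code; this example is an illustration, not part of the original page.

    # Sketch: convert a Binary Angular Measure (fraction of a circle) to deg/min/sec.
    def bam_to_dms(bam):
        total_degrees = bam * 360.0
        degrees = int(total_degrees)
        remainder = (total_degrees - degrees) * 60.0
        minutes = int(remainder)
        seconds = (remainder - minutes) * 60.0
        return degrees, minutes, seconds

    # Sector 0 of the 5-bit disk above is centered at BAM 0.015625 (1/64 of a circle).
    print(bam_to_dms(0.015625))   # (5, 37, 30.0), i.e. 5° 37' 30"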
http://science2.fartoomuch.info/measure.htm
13
73
Four color theorem
The four color theorem is a theorem of mathematics. It says that in any plane surface with regions in it (people think of them as maps), the regions can be colored with no more than four colors. Two regions that have a common border must not get the same color. They are called adjacent (next to each other) if they share a segment of the border, not just a point. This was the first theorem to be proved by a computer, in a proof by exhaustion. In proof by exhaustion, the conclusion is established by dividing it into cases, and proving each one separately. The number of cases sometimes may be very large. For example, the first proof of the four color theorem was a proof by exhaustion with 1,936 cases. This proof was controversial because most of the cases were checked by a computer program, not by hand. The shortest known proof of the four color theorem today still has over 600 cases. Even though the problem was first presented as a problem to color political maps of countries, mapmakers are not particularly interested in it. According to an article by the math historian Kenneth May (Wilson 2002, 2), “Maps utilizing only four colors are rare, and those that do usually require only three. Books on cartography and the history of map making do not mention the four-color property.” Many simpler maps can be colored using three colors. The fourth color is required for some maps, such as one in which one region is surrounded by an odd number of others, which touch each other in a cycle. One such example is given in the image. The five color theorem states that five colors are enough to color a map. It has a short, elementary proof and was proven in the late 19th century. (Heawood 1890) Proving that four colors suffice turned out to be significantly more difficult. A number of false proofs and false counterexamples have appeared since the first statement of the four color theorem in 1852.
Exact formulation of the problem
Intuitively, the four color theorem can be stated as 'given any separation of a plane into contiguous regions, called a map, the regions can be colored using at most four colors so that no two regions which are adjacent have the same color'. To be able to correctly solve the problem, it is necessary to clarify some aspects: First, all points that belong to three or more countries must be ignored. Secondly, bizarre maps with regions of finite area and infinite perimeter can require more than four colors. For the purpose of the theorem every "country" has to be a simply connected region, or contiguous. In the real world, this is not true: Alaska as part of the United States, Nakhchivan as part of Azerbaijan, and Kaliningrad as part of Russia are not contiguous. Because the territory of a particular country must be the same color, four colors may not be sufficient. For instance, consider a simplified map, such as the one shown on the left: In this map, the two regions labeled A belong to the same country, and must be the same color. This map then requires five colors, since the two A regions together are contiguous with four other regions, each of which is contiguous with all the others. If A consisted of three regions, six or more colors might be required. In this manner, it is possible to construct maps that require an arbitrarily high number of colors. A similar construction also applies if a single color is used for all bodies of water, as is usual on real maps. An easier-to-state version of the theorem uses graph theory. 
The set of regions of a map can be represented more abstractly as an undirected graph that has a vertex for each region and an edge for every pair of regions that share a boundary segment. This graph is planar: it can be drawn in the plane without crossings by placing each vertex at an arbitrarily chosen location within the region to which it corresponds, and by drawing the edges as curves that lead without crossing within each region from the vertex location to each shared boundary point of the region. Conversely any planar graph can be formed from a map in this way. In graph-theoretic terminology, the four-color theorem states that the vertices of every planar graph can be colored with at most four colors so that no two adjacent vertices receive the same color, or for short, "every planar graph is four-colorable" (Thomas 1998, p. 849; Wilson 2002).

The first person to name the problem was Francis Guthrie, in 1852. He was a law student in England at the time. He found that he needed at least four colors to color a map of the counties of England. Augustus de Morgan first discussed the problem, in a letter he wrote to William Rowan Hamilton in August 1852. In the letter, de Morgan asks whether four colors are really enough to color a map, such that countries that are next to each other get different colors. The English mathematician Arthur Cayley presented the problem to the London Mathematical Society in 1878. Within a year, Alfred Kempe found what looked like a proof of the problem. Eleven years later, in 1890, Percy Heawood showed that Kempe's proof was wrong. Peter Guthrie Tait presented another attempt at a proof in 1880. It took eleven years to show that Tait's proof did not work either; in 1891, Julius Petersen showed this. In falsifying Kempe's proof, Heawood also proved the five color theorem. That theorem says that any such map can be colored with no more than five colors. There are two restrictions: first, any country must be contiguous, with no exclaves. The second restriction is that countries need to have a common border; if they only touch in a point, they can be colored with the same color. Even though Kempe's proof was wrong, he used some of the ideas which would later permit a correct proof.

In the 1960s and 1970s, Heinrich Heesch developed a first sketch of a proof by computer. Kenneth Appel and Wolfgang Haken improved this sketch in 1976 (Robertson et al. 1996). They were able to reduce the number of cases that would need to be tested to 1,936; a later version relied on testing only 1,476 cases. Each case needed to be tested by a computer (Appel & Haken 1977). In 2005, Georges Gonthier and Benjamin Werner developed a formal proof. This was an improvement, because it allowed theorem-proving software to be used for the first time. The software used is called Coq. The four color theorem is the first big mathematical problem that was proved with the help of a computer. Because the proof cannot be checked by hand, some mathematicians did not recognize it as correct. To verify the proof, it is necessary to rely on correctly working software and hardware. Because the proof was done by computer, it is also not very elegant.

Attempts that turned out to be wrong

The four color theorem has been notorious for attracting a large number of false proofs and disproofs in its long history. At first, The New York Times refused to report on the Appel–Haken proof.
The newspaper did this as a matter of policy; it feared that the proof would be shown false like the ones before it (Wilson 2002, p. 209). Some proofs stood for a long time before they were falsified: falsifying Kempe's and Tait's proofs took over a decade each. The simplest counterexamples generally try to create one region which touches all the others. This forces the remaining regions to be colored with only three colors. Because the four color theorem is true, this is always possible; however, because the person drawing the map is focused on the one large region, they fail to notice that the remaining regions can in fact be colored with three colors. This trick can be generalized: if the colors of some regions in a map are selected beforehand, it can become impossible to color the remaining regions in such a way that, in total, only four colors are used. Someone verifying the counterexample may not realize that the colors of those pre-selected regions would need to be changed; this makes the counterexample look valid, even though it is not. Perhaps one effect underlying this common misconception is the fact that the color restriction is not transitive: a region only has to be colored differently from regions it touches directly, not regions touching regions that it touches. If this were the restriction, planar graphs would require arbitrarily large numbers of colors. Other false disproofs violate the assumptions of the theorem in unexpected ways, such as using a region that consists of multiple disconnected parts, or disallowing regions of the same color from touching at a point.

Coloring political maps

In real life, many countries have exclaves or colonies. Since they belong to the country, they need to be colored with the same color as the parent country. This means that usually more than four colors are needed to color such a map. When mathematicians talk about the graph associated with the problem, they say that it is not planar. Even though it is easy to check whether a graph is planar, finding the minimal number of colors needed to color it is very difficult; it is NP-complete, which puts it among the most difficult computational problems known. The minimal number of colors needed to color a graph is known as its chromatic number. Many of the problems that occur when trying to solve the four color theorem are related to discrete mathematics. Methods from algebraic topology are also often used.

Extension to "non-flat" maps

The four color theorem requires the "map" to be on a flat surface, what mathematicians call a plane. In 1890, Percy John Heawood formulated what is today called the Heawood conjecture: it asks the same question as the four color theorem, but for maps on other surfaces. As an example, a torus can be colored with at most seven colors. The Heawood conjecture gives a formula that works for all such surfaces, except the Klein bottle.

Articles and Books

- Allaire, F. (1997), "Another proof of the four colour theorem—Part I", Proceedings, 7th Manitoba Conference on Numerical Mathematics and Computing, Congr. Numer. 20: 3–72 - Appel, Kenneth; Haken, Wolfgang (1977), "Every Planar Map is Four Colorable Part I. Discharging", Illinois Journal of Mathematics 21: 429–490 - Appel, Kenneth; Haken, Wolfgang; Koch, John (1977), "Every Planar Map is Four Colorable Part II.
Reducibility", Illinois Journal of Mathematics 21: 491–567 - Appel, Kenneth; Haken, Wolfgang (October 1977), "Solution of the Four Color Map Problem", Scientific American 237 (4): 108–121, doi:10.1038/scientificamerican1077-108 - Appel, Kenneth; Haken, Wolfgang (1989), Every Planar Map is Four-Colorable, Providence, RI: American Mathematical Society, ISBN 0-8218-5103-9, http://www.ams.org/books/conm/098/conm098-endmatter.pdf - Bernhart, Frank R. (1977), "A digest of the four color theorem.", Journal of Graph Theory 1: 207–225, doi:10.1002/jgt.3190010305 - Borodin, O. V. (1984), "Solution of the Ringel problem on vertex-face coloring of planar graphs and coloring of 1-planar graphs", Metody Diskretnogo Analiza (41): 12–26, 108, MR 832128. - Cayley, Arthur (1879), "On the colourings of maps", Proc. Royal Geographical Society (Blackwell Publishing) 1 (4): 259–261, doi:10.2307/1799998, JSTOR 1799998 - Fritsch, Rudolf; Fritsch, Gerda (1998), The Four Color Theorem: History, Topological Foundations and Idea of Proof, New York: Springer, ISBN 978-0-387-98497-1 - Gonthier, Georges (2008), "Formal Proof—The Four-Color Theorem", Notices of the American Mathematical Society 55 (11): 1382–1393, http://www.ams.org/notices/200811/tx081101382p.pdf - Gonthier, Georges (2005), A computer-checked proof of the four colour theorem, unpublished, http://research.microsoft.com/en-us/um/people/gonthier/4colproof.pdf - Hadwiger, Hugo (1943), "Über eine Klassifikation der Streckenkomplexe", Vierteljschr. Naturforsch. Ges. Zürich 88: 133–143 - Heawood, P. J. (1890), "Map-Colour Theorem", Quarterly Journal of Mathematics, Oxford 24: 332–338 - Magnant, C.; Martin, D. M. (2011), "Coloring rectangular blocks in 3-space", Discussiones Mathematicae Graph Theory 31 (1): 161–170, http://www.discuss.wmie.uz.zgora.pl/php/discuss.php?ip=&url=plik&nIdA=21787&sTyp=HTML&nIdSesji=-1 - Nash-Williams, C. St. J. A. (1967), "Infinite graphs—a survey", J. Combinatorial Theory 3: 286–301, MR 0214501. - O'Connor; Robertson (1996), The Four Colour Theorem, MacTutor archive, http://www-groups.dcs.st-and.ac.uk/~history/HistTopics/The_four_colour_theorem.html - Pegg, A.; Melendez, J.; Berenguer, R.; Sendra, J. R.; Hernandez; Del Pino, J. (2002), "Book Review: The Colossal Book of Mathematics", Notices of the American Mathematical Society 49 (9): 1084–1086, Bibcode 2002ITED...49.1084A, doi:10.1109/TED.2002.1003756, http://www.ams.org/notices/200209/rev-pegg.pdf - Reed, Bruce; Allwright, David (2008), "Painting the office", Mathematics-in-Industry Case Studies 1: 1–8, http://www.micsjournal.ca/index.php/mics/article/view/5 - Ringel, G.; Youngs, J.W.T. (1968), "Solution of the Heawood Map-Coloring Problem", Proc. Nat. Acad. Sci. USA 60 (2): 438–445, Bibcode 1968PNAS...60..438R, doi:10.1073/pnas.60.2.438, PMC 225066, PMID 16591648 - Robertson, Neil; Sanders, Daniel P.; Seymour, Paul; Thomas, Robin (1996), "Efficiently four-coloring planar graphs", Efficiently four-coloring planar graphs, STOC'96: Proceedings of the twenty-eighth annual ACM symposium on Theory of computing, ACM Press, pp. 571–575, doi:10.1145/237814.238005 - Robertson, Neil; Sanders, Daniel P.; Seymour, Paul; Thomas, Robin (1997), "The Four-Colour Theorem", J. Combin. Theory Ser. 
B 70 (1): 2–44, doi:10.1006/jctb.1997.1750 - Saaty, Thomas; Kainen, Paul (1986), "The Four Color Problem: Assaults and Conquest", Science (New York: Dover Publications) 202 (4366): 424, Bibcode 1978Sci...202..424S, doi:10.1126/science.202.4366.424, ISBN 0-486-65092-8 - Swart, ER (1980), "The philosophical implications of the four-color problem", American Mathematical Monthly (Mathematical Association of America) 87 (9): 697–702, doi:10.2307/2321855, JSTOR 2321855, http://mathdl.maa.org/images/upload_library/22/Ford/Swart697-707.pdf - Thomas, Robin (1998), "An Update on the Four-Color Theorem", Notices of the American Mathematical Society 45 (7): 848–859, http://www.ams.org/notices/199807/thomas.pdf - Thomas, Robin (1995), The Four Color Theorem, http://people.math.gatech.edu/~thomas/FC/fourcolor.html - Thomas, Robin, Recent Excluded Minor Theorems for Graphs, p. 14, http://people.math.gatech.edu/~thomas/PAP/bcc.pdf - Wilson, Robin (2002), Four Colors Suffice, London: Penguin Books, ISBN 0-691-11533-8 - Chechulin V. L. About a one proof of a planar's graphs 4-chromatically http://www.uresearch.psu.ru/files/articles/17_89592.doc

Mentioned in this article

- Georges Gonthier (December, 2008). "Formal Proof—The Four-Color Theorem". Notices of the AMS 55 (11): 1382–1393. From this paper: Definitions: A planar map is a set of pairwise disjoint subsets of the plane, called regions. A simple map is one whose regions are connected open sets. Two regions of a map are adjacent if their respective closures have a common point that is not a corner of the map. A point is a corner of a map if and only if it belongs to the closures of at least three regions. Theorem: The regions of any simple planar map can be colored with only four colors, in such a way that any two adjacent regions have different colors. - Hud Hudson (May, 2003). "Four Colors Do Not Suffice". The American Mathematical Monthly 110 (5): 417–423. - In graph theory, the "nodes" of the graph are called vertices, and the lines connecting them are called edges. - Pieter Maritz and Sonja Mouton. "Francis Guthrie: A Colourful Life". The Mathematical Intelligencer 34 (3): 67-75. http://link.springer.com/content/pdf/10.1007%2Fs00283-012-9307-y.
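As an illustration of the graph formulation described in this article, here is a small Python sketch. It is not part of the original article: the region adjacencies below are a made-up example (a central region surrounded by an odd cycle of five neighbours, the kind of map mentioned above that needs a fourth color), and the simple backtracking search is only a way to exhibit a four-coloring, nothing like the actual proof of the theorem.

```python
# Backtracking search for a 4-coloring of a graph given by adjacency lists.
# Illustrative only: the graph below is a made-up map adjacency.

def four_color(adjacency, colors=("red", "green", "blue", "yellow")):
    vertices = list(adjacency)
    assignment = {}

    def assign(i):
        if i == len(vertices):
            return True
        v = vertices[i]
        for color in colors:
            # A color is usable if no already-colored neighbour has it.
            if all(assignment.get(u) != color for u in adjacency[v]):
                assignment[v] = color
                if assign(i + 1):
                    return True
                del assignment[v]
        return False

    return assignment if assign(0) else None

# A region surrounded by an odd cycle of five neighbours.
regions = {
    "center": ["r1", "r2", "r3", "r4", "r5"],
    "r1": ["center", "r2", "r5"],
    "r2": ["center", "r1", "r3"],
    "r3": ["center", "r2", "r4"],
    "r4": ["center", "r3", "r5"],
    "r5": ["center", "r4", "r1"],
}
print(four_color(regions))
```

Running it prints one valid assignment of the four colors; removing "yellow" from the color list makes the search return None, which matches the claim that three colors are not enough for such a map.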
http://simple.wikipedia.org/wiki/Four_color_theorem
1. Name three sets of sufficient conditions for triangles to be congruent. (They are S.A.S., S.S.S., and two angles and a side: A.S.A. or S.A.A.)
2. a) State the hypothesis of Proposition 26. Two triangles have two angles equal to two angles respectively, and one side equal to one side, which may be either the sides between the equal angles or the sides opposite one of them.
2. b) State the conclusion. The remaining sides will equal the remaining sides (namely those that are opposite the equal angles), and the remaining angle will equal the remaining angle.
2. c) Prove the second case, in which the sides opposite one of the equal angles are equal. That is, let angles B and C be equal respectively to angles E and F, and let AB equal DE. Prove that BC is equal to EF. If they are not equal, then assume that BC is greater; make BH equal to EF, and draw AH. Then triangles ABH, DEF are congruent. (S.A.S.) Therefore angle AHB is equal to angle DFE. But, by hypothesis, angle ACB is equal to angle DFE. Therefore, angle AHB is also equal to angle ACB, which is impossible (I. 16). Therefore BC is not unequal to EF. It is equal to it.
3. In quadrilateral ABCD, angles CDB, DBA are equal, and angles ADB, DBC are equal. Prove that AD is equal to BC. Since angles ADB, DBA are equal respectively to angles DBC, CDB, and side DB is common to triangles DBA, DBC, then the remaining sides are equal (A.S.A.), namely those that are opposite the equal angles; side AD is equal to side BC.
4. In the rectangle ABCD, angle ABD is equal to angle BDC. Prove that angle ADB is equal to angle DBC. (A rectangle is a quadrilateral in which all the angles are right angles.) The right angle at A is equal to the right angle at C, because all right angles are equal. And angle ABD is equal to angle BDC, by hypothesis. In triangles ABD, BDC, then, angles DAB, ABD are equal respectively to angles DCB, BDC; and side DB is common; therefore the remaining angles are equal (A.A.S.); angle ADB is equal to angle DBC.
5. In this figure, the angles at B and C are right angles, the straight line BC is bisected at D, and ADE is a straight line. Prove that AB is equal to CE. Angle BDA is equal to angle CDE; (I. 15) angle B is equal to angle C; (Postulate 4) and BD is equal to DC. (Hypothesis) Therefore triangles BDA, CDE are congruent, (A.S.A.) and those sides are equal that are opposite the equal angles: side AB is equal to side CE.
6. Use Proposition 26 to prove Proposition 6 directly: If two angles of a triangle are equal, then the sides opposite those angles will be equal. In triangle ABC, let angle B equal angle C; then side AB is equal to side AC. (Hint: Draw the straight line AD that bisects angle A. (I. 9.)) Angle B is equal to angle C, (Hypothesis) angle BAD is equal to angle CAD, by construction, and side AD is common to triangles BAD, CAD. Therefore those triangles are congruent (A.A.S.), and therefore the remaining sides are equal: side AB is equal to side AC.
The following problems will depend on proving congruence: either S.A.S., S.S.S., A.S.A. or S.A.A.
7. Prove: The straight line that bisects the vertex angle of an isosceles triangle also bisects the base and is perpendicular to it. (That is, it is the perpendicular bisector of the base.) Let triangle ABC be isosceles with side AB equal to side AC; let the straight line AD bisect angle A; then BD is equal to DC, and angles ADB, ADC are right angles.
For, in triangles BAD, CAD, two sides and the included angle are respectively equal; therefore the remaining side is equal to the remaining side: BD is equal to DC. (S.A.S.) And those angles are equal that are opposite the equal sides; angle ADB is equal to angle ADC. And since they are adjacent angles they are right angles. This is what we wanted to prove.
8. In quadrilateral ABCD, the straight line AC is the perpendicular bisector of the straight line BD at the point E.
8. a) Prove that triangles ABD and BCD are both isosceles. BE is equal to ED; (Hypothesis) angles AEB, AED are right angles, (Hypothesis) therefore they are equal; and AE is common to triangles AEB, AED. Therefore the remaining side is equal to the remaining side: AB is equal to AD. (S.A.S.) Triangle ABD therefore is isosceles. (Definition 8) Next, in triangles BEC, DEC, the right angles BEC, DEC are equal; side BE is equal to side ED, and EC is a common side. Therefore the remaining side is equal to the remaining side: BC is equal to DC. (S.A.S.) Therefore triangle BCD is isosceles.
8. b) Prove that angle ABC is equal to angle ADC. Since triangles DAB, BCD are isosceles, the angles at the base are equal: angles ABD, ADB are equal, and angles CBD, CDB are equal. (I. 5) Therefore angles ABE, EBC together are equal to angles ADE, EDC together; (Axiom 2) angle ABC is equal to angle ADC.
9. BDEC is a straight line, AB is equal to AC, and AD is equal to AE. Prove that BD is equal to EC. We will show that triangles ADB, AEC are congruent. First, since AD is equal to AE, (Hypothesis) angle ADE is equal to angle AED because triangle ADE is isosceles. Therefore angle ADB is equal to angle AEC, because they are supplements of equal angles. (I. 13, Problem 6) Next, side AB is equal to side AC; (Hypothesis) therefore in the isosceles triangle ABC, angle B is equal to angle C; and we have shown angle ADB equal to angle AEC; therefore in triangles ADB, AEC the remaining sides are equal: BD is equal to EC. (S.A.A.)
10. Angles EBA and CBD are right angles. EB is equal to BA, and DB is equal to BC. Prove that triangles EBC, ABD are congruent. Angle EBA is equal to angle CBD because they are right angles. To each of them join angle ABC; then angle EBC is equal to angle ABD. Next, side EB in triangle EBC is equal to side AB in triangle ABD, and side BC is equal to side BD. (Hypothesis) Therefore two sides and the included angle of triangle EBC are equal respectively to two sides and the included angle of triangle ABD. Therefore those triangles are congruent. (S.A.S.)
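The congruence facts used in these problems can also be checked numerically. The sketch below is illustrative only and is not part of the original exercises; it assumes the two given angles and the side opposite one of them (the data of the second case of Proposition 26) and uses the law of sines to show that the remaining parts are then completely determined.

```python
import math

def solve_saa(angle_b_deg, angle_c_deg, side_b):
    """Solve a triangle from two angles and the side opposite the first angle.

    Returns (third angle, side opposite angle_c, side opposite the third angle),
    using the law of sines: a / sin A = b / sin B = c / sin C.
    """
    angle_a_deg = 180.0 - angle_b_deg - angle_c_deg
    if angle_a_deg <= 0:
        raise ValueError("the two given angles must sum to less than 180 degrees")
    ratio = side_b / math.sin(math.radians(angle_b_deg))
    side_c = ratio * math.sin(math.radians(angle_c_deg))
    side_a = ratio * math.sin(math.radians(angle_a_deg))
    return angle_a_deg, side_c, side_a

# Two angles and the side opposite one of them fix the triangle completely,
# which is why two triangles sharing these data must be congruent.
print(solve_saa(50.0, 60.0, 7.0))
```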
http://www.themathpage.com/abookI/geoProblems/I-26Prob.htm
Area is a quantity expressing the two-dimensional size of a defined part of a surface, typically a region bounded by a closed curve. The term surface area refers to the total area of the exposed surface of a 3-dimensional solid, such as the sum of the areas of the exposed sides of a polyhedron. Many students find area difficult. They feel overwhelmed with area homework, tests and projects. And it is not always easy to find an area tutor who is both good and affordable. Now finding area help is easy. For your area homework, area tests, area projects, and area tutoring needs, TuLyn is a one-stop solution. You can master hundreds of math topics by using TuLyn. At TuLyn, we have over 2000 math video tutorial clips, including area videos, area practice word problems, area questions and answers, and area worksheets. Our area videos replace text-based tutorials and give you better step-by-step explanations of area. Watch each video repeatedly until you understand how to approach area problems and how to solve them. - Hundreds of video tutorials on area make it easy for you to better understand the concept. - Hundreds of word problems on area give you all the practice you need. - Hundreds of printable worksheets on area let you practice what you have learned by watching the video tutorials. How to do better on area: TuLyn makes area easy. Do you need help with Units Of Area in your Geometry class? Do you need help with Surface Area in your Geometry class? Do you need help with Area of Circles in your Geometry class? Do you need help with Area of Ellipses in your Geometry class? Do you need help with Area of Parallelograms in your Geometry class? Do you need help with Area of Quadrilaterals in your Geometry class? Do you need help with Area of Rectangles in your Geometry class? Do you need help with Area of Rhombus in your Geometry class? Do you need help with Area of Squares in your Geometry class? Do you need help with Area of Trapezoids in your Geometry class? Do you need help with Area of Triangles in your Geometry class?

Geometry: Area Videos

Area Of a Circle When Diameter is Given. Video Clip Length: 5 minutes 9 seconds. This tutorial will teach you how to find the area of a circle when given the diameter. You will also learn how to use the given information, the diameter, to figure out the radius in order to solve for the area. You also learn to multiply decimals with various decimal places. Area And Perimeter Of A Rectangle. Video Clip Length: 2 minutes 28 seconds. This tutorial shows you how to determine the area and perimeter of a rectangle when given the length and width.

Geometry: Area Word Problems

The volume of the soda can: The volume of the soda can is fixed at 400 cubic centimeters. Use the volume with each radius to find the possible heights of different sized soda cans. Once the height column is completed, calculate the surface areas. The results for radius are already given. Round the height to the nearest tenth and the surface area to the nearest whole number. The given radii are: 1 cm, 2 cm, 3 cm, 4 cm, 5 cm, 6 cm, 7 ... (a short computational sketch of this problem appears at the end of this page).

Geometry: Area Practice Questions

Need lots of shapes with an area of 64 square meters. The area of the room is 6 square yards. The perimeter is 10 yards. The room is longer than ... A wall is 7 in. in length and 6 in. in ...

How Others Use Our Site

It will help my student understand difficult areas in math better. I am taking college online and I have been out of school for a long time.
Math is not a strong area for me, and I am hoping that this will explain things where I can understand them. I am going to college for the first time and I am 46 years old. I have forgotten a lot of areas of math. I just want a resource I can refer to. To find areas in measurement for construction. I am hoping I will learn how to do word problems related to Solving Systems of Linear Equations in Two Variables. It is an area I am having the most trouble with, especially related to distance (2 autos traveling at different speeds, etc.) problems where speed is unknown. I teach public school & am looking for ways to clearly deliver instruction & increase my professional knowledge area. It has worksheets aligned to my areas of instruction. It provides extra practice for my students. Identify areas of weakness, and give lessons to overcome. I have developed insight into an area I need to be able to prove. It will give my son great practice sheets on some of the areas in math he is weak in! My weakest area in Math is in expressing the equation/equations based on the given data in the statements of the mathematical problem. With your site it is hoped I can remedy my weakness by practicing to solve the word problems and together with the other students I can check my progress the right way. Thank you for putting up this site. By providing relevant practice in the area and strand area of math content. It appears that whatever I am working on I should be able to pull practice sheets to allow for more practice, remediation, or reteaching in that area. Videos are awesome. A little bit more in-depth explanation will not hurt. For example, why the value of pi is used in calculating the area of a circle but not for any other shape. My 6th grade son needs extra practice in some areas. At this time it's unit prices. The ability to use this site as a tutor-based program will help my students that need additional help in some areas. Greatly in all areas of math. To give me a better understanding of how math works, and to better help me with the written formulas. Area, multiplication, subtraction, division, addition, fractions, and graphs. I have students that are struggling with areas in math that conflict with my Building Trades course and needed extra assessments to help students succeed with tests. I am a high school special education teacher. I teach students with dyscalculia and other learning disabilities. I need many math worksheets to supplement the math instructional strategy that I developed and tested for my students. My strategy is for basic math comprehension problems. I also need geometry and other math worksheets in areas that I have not addressed. Give yet another voice to the various areas of instruction that will help my children with math concepts. I think it will help me improve my math skills in all areas of math. Help me with areas I might be weak in. I think it will provide me with materials to help my students in the areas they most need help, and individualize my instruction and practice. I am a teacher, and I hope I can use it to help kids with trouble areas. Help me with my weak areas. I think all the practice sheets will help me establish the areas in math that I need to work on. I think it will also help me establish the math concepts that I need to improve greatly. Improve my skills in tougher areas of algebra. By tutoring me in the areas that I have no idea on how to start on. I am studying for my GED, and I hope your site will help me in the areas I need to understand.
I am in college online and I am entering my first math class. I have not been in school for 10 years. I think this will help me a lot in areas I might struggle with for any reason. Also, I think it will help in learning to solve new mathematical problems.
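The soda-can word problem quoted above can be worked through directly. The short sketch below is not from the original page; it assumes only the data stated in the problem (a fixed volume of 400 cubic centimeters and radii of 1 cm through 7 cm) and the standard cylinder formulas h = V / (pi * r^2) and surface area = 2*pi*r^2 + 2*pi*r*h.

```python
import math

VOLUME = 400.0  # cubic centimeters, fixed by the problem statement

for radius in range(1, 8):  # the given radii: 1 cm through 7 cm
    height = VOLUME / (math.pi * radius ** 2)            # from V = pi * r^2 * h
    surface = 2 * math.pi * radius ** 2 + 2 * math.pi * radius * height
    # Round as the problem asks: height to the nearest tenth,
    # surface area to the nearest whole number.
    print(f"r = {radius} cm: h = {height:.1f} cm, surface area = {round(surface)} cm^2")
```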
http://www.tulyn.com/geometry/area
Mass spectrometry is a powerful analytical technique used to quantify known materials, to identify unknown compounds within a sample, and to elucidate the structure and chemical properties of different molecules. The complete process involves the conversion of the sample into gaseous ions, with or without fragmentation, which are then characterized by their mass-to-charge ratios (m/z) and relative abundances. The technique essentially studies the effect of ionizing energy on molecules. It depends upon chemical reactions in the gas phase in which sample molecules are consumed during the formation of ionic and neutral species. A mass spectrometer generates multiple ions from the sample under investigation, separates them according to their specific mass-to-charge ratio (m/z), and then records the relative abundance of each ion type. The first step in the mass spectrometric analysis of compounds is the production of gas phase ions of the compound, typically by electron ionization. This molecular ion undergoes fragmentation. Each primary product ion derived from the molecular ion, in turn, undergoes fragmentation, and so on. The ions are separated in the mass spectrometer according to their mass-to-charge ratio, and are detected in proportion to their abundance. A mass spectrum of the molecule is thus produced. It displays the result in the form of a plot of ion abundance versus mass-to-charge ratio. Ions provide information concerning the nature and the structure of their precursor molecule. In the spectrum of a pure compound, the molecular ion, if present, appears at the highest value of m/z (followed by ions containing heavier isotopes) and gives the molecular mass of the compound.

The instrument consists of three major components:
- Ion source: for producing gaseous ions from the substance being studied.
- Analyzer: for resolving the ions into their characteristic mass components according to their mass-to-charge ratio.
- Detector system: for detecting the ions and recording the relative abundance of each of the resolved ionic species.
In addition, a sample introduction system is necessary to admit the samples to be studied to the ion source while maintaining the high vacuum requirements (~10−6 to 10−8 mm of mercury) of the technique, and a computer is required to control the instrument, acquire and manipulate data, and compare spectra to reference libraries.

Figure: Components of a Mass Spectrometer

With all the above components, a mass spectrometer should always perform the following processes: produce ions from the sample in the ionization source; separate these ions according to their mass-to-charge ratio in the mass analyzer; if needed, fragment the selected ions and analyze the fragments in a second analyzer; detect the ions emerging from the last analyzer and measure their abundance with the detector, which converts the ions into electrical signals; and process the signals from the detector, which are transmitted to the computer, and control the instrument using feedback.

Analysis of Biomolecules using Mass Spectrometry

Mass spectrometry is fast becoming an indispensable tool for analyzing biomolecules. Until the 1970s, the only analytical techniques which provided similar information were electrophoretic, chromatographic or ultracentrifugation methods. The results were not absolute, as they were based on characteristics other than the molecular weight.
Thus the only way of knowing the exact molecular weight of a macromolecule remained calculating it from its chemical structure. The development of desorption ionization methods based on the emission of pre-existing ions, such as plasma desorption (PD), fast atom bombardment (FAB) or laser desorption (LD), allowed the application of mass spectrometry to the analysis of complex biomolecules.

Analysis of Glycans

Oligosaccharides are molecules formed by the association of several monosaccharides linked through glycosidic bonds. The determination of the complete structure of oligosaccharides is more complex than that of proteins or oligonucleotides. It involves the determination of additional structural features as a consequence of the isomeric nature of monosaccharides and their capacity to form linear or branched oligosaccharides. Knowing the structure of an oligosaccharide requires not only the determination of its monosaccharide sequence and its branching pattern, but also the isomer position and the anomeric configuration of each of its glycosidic bonds. Advances in glycobiology involve the comprehensive study of the structure, biosynthesis, and biology of sugars and saccharides. Mass spectrometry (MS) is emerging as an enabling technology in the field of glycomics and glycobiology.

Analysis of Lipids

Lipids are made up of many classes of different molecules which are soluble in organic solvents. Lipidomics, a major part of metabolomics, constitutes the detailed analysis and global characterization, both spatial and temporal, of the structure and function of lipids (the lipidome) within a living system. Many new strategies for mass-spectrometry-based analyses of lipids have been developed. The most popular lipidomics methodologies involve electrospray ionization (ESI) sources and triple quadrupole analyzers. Using mass spectrometry, it is possible to determine the molecular weight, the elemental composition, the position of branching and the nature of substituents in the lipid structure.

Analysis of Proteins and Peptides

Proteins and peptides are linear polymers made up of combinations of the 20 amino acids linked by peptide bonds. Proteins undergo several post-translational modifications, extending the range of their function via such modifications. The term proteomics refers to the analysis of the complete protein content of a living system, including co- and post-translationally modified proteins and alternatively spliced variants. Mass spectrometry has now become a crucial technique for almost all proteomics experiments. It allows precise determination of the molecular mass of peptides as well as their sequences. This information can very well be used for protein identification, de novo sequencing, and identification of post-translational modifications.

Analysis of Oligonucleotides

Oligonucleotides (DNA or RNA) are linear polymers of nucleotides, each composed of a nitrogenous base, a sugar (ribose in RNA, deoxyribose in DNA) and a phosphate group. Oligonucleotides may undergo several natural covalent modifications, which are commonly present in tRNA and rRNA, or unnatural ones resulting from reactions with exogenous compounds. Mass spectrometry plays an important role in identifying these modifications and determining their structure as well as their position in the oligonucleotide. It not only allows determination of the molecular weight of oligonucleotides, but also, in a direct or indirect manner, the determination of their sequences.
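Since every application described above ultimately rests on the mass-to-charge ratio m/z, a small numerical sketch may help. It is illustrative only and not taken from this article: the peptide mass is an assumed example value, and the snippet simply applies the standard relation m/z = (M + n*mH) / n for an [M + nH]n+ ion produced by protonation, where mH is approximately 1.00728 Da, the mass of a proton.

```python
PROTON_MASS = 1.007276  # Da, approximate mass of a proton

def mz_for_charge_states(neutral_mass, max_charge=3):
    """Return the m/z expected for the [M + nH]^n+ ion series of a molecule."""
    return {n: (neutral_mass + n * PROTON_MASS) / n
            for n in range(1, max_charge + 1)}

# Assumed example: a peptide with a monoisotopic mass of 1296.685 Da.
for charge, mz in mz_for_charge_states(1296.685).items():
    print(f"charge {charge}+: m/z = {mz:.3f}")
```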
Software for Mass Spectrometric Data Analysis

SimGlycan® predicts the structure of glycans and glycopeptides from the MS/MS data acquired by mass spectrometry, facilitating glycosylation and post-translational modification studies. SimGlycan® accepts the experimental MS profiles of both glycopeptides and released glycans, matches them with its own database and generates a list of probable structures. The software also supports multi-stage mass spectrometry data analysis, which enables structural elucidation and identification of fragmentation pathways. SimLipid is an innovative lipid characterization tool which enables structural elucidation of unknown lipids using MS/MS data. The software analyzes lipid mass spectrometric data for characterizing and profiling lipids. SimLipid can also annotate mass spectra with the identified lipid structures, using abbreviations.
http://www.premierbiosoft.com/tech_notes/mass-spectrometry.html
Radiocarbon dating (or simply carbon dating) is a radiometric dating technique that uses the decay of carbon-14 (14C) to estimate the age of organic materials, such as wood and leather, up to about 58,000 to 62,000 years. Carbon dating was presented to the world by Willard Libby in 1949, and he was later awarded the Nobel Prize in Chemistry for it. Since its introduction it has been used to date many well-known items, including samples of the Dead Sea Scrolls, the Shroud of Turin, enough Egyptian artifacts to supply a chronology of Dynastic Egypt, and Ötzi the Iceman.

The dating method is based on the fact that carbon is found in various forms, including the main stable isotope (12C) and an unstable isotope (14C). Through photosynthesis, plants absorb both forms from carbon dioxide in the atmosphere. When an organism dies, it contains the atmospheric ratio of 14C to 12C, but, as the 14C decays with no possibility of replenishment, the ratio decreases at a regular rate (governed by the half-life of 14C). The measurement of 14C decay provides an indication of the age of any carbon-based material (a raw radiocarbon age). However, over time there are small fluctuations in the ratio of 14C to 12C in the atmosphere, fluctuations that have been noted in natural records of the past, such as sequences of tree rings and cave deposits. These records allow for the fine-tuning, or calibration, of the indications derived from measuring the carbon ratio. A raw radiocarbon age, once calibrated, yields a calendar date. One of the most frequent uses of radiocarbon dating is to estimate the age of organic remains from archaeological sites.

Inventors of the method

The technique of radiocarbon dating was developed by Willard Libby and his colleagues at the University of Chicago in 1949. Emilio Segrè asserted in his autobiography that Enrico Fermi suggested the concept to Libby at a seminar in Chicago that year. Libby estimated that the steady state radioactivity concentration of exchangeable carbon-14 would be about 14 disintegrations per minute (dpm) per gram. In 1960, Libby was awarded the Nobel Prize in Chemistry for this work. He demonstrated the accuracy of radiocarbon dating by accurately estimating the age of wood from a series of samples for which the age was known, including an ancient Egyptian royal barge of 1850 BCE.

Physical and chemical background

Carbon has two stable, nonradioactive isotopes: carbon-12 (12C) and carbon-13 (13C). In addition, there are trace amounts of the unstable radioisotope carbon-14 (14C) on Earth. Carbon-14 has a relatively short half-life of 5,730 years, meaning that the fraction of carbon-14 in a sample is halved over the course of 5,730 years due to radioactive decay. The carbon-14 isotope would vanish from Earth's atmosphere in less than a million years were it not for the unremitting influx of cosmic rays interacting with molecules of nitrogen (N2), or perhaps rather with single nitrogen atoms (free nitrogen atoms, N), in the stratosphere, which constantly replenishes this isotope. The high-energy neutrons resulting from cosmic ray particle interactions with Earth's atmosphere participate in the following nuclear reaction with the nitrogen atoms of N2 molecules in the stratosphere, where n is a neutron and p is a proton: n + 14N → 14C + p. The highest rate of carbon-14 production takes place at altitudes of 9 to 15 km (30,000 to 50,000 ft) and at high geomagnetic latitudes. The 14C then reacts relatively rapidly with oxygen to form carbon dioxide (CO2).
The carbon dioxide containing the C-14 species spreads evenly throughout the atmosphere and the oceans, reacting with water to produce carbonic acid (CO2 + H2O → H2CO3). For approximate analysis it is assumed that the cosmic ray flux is constant over long periods of time; thus carbon-14 is produced at a constant rate and the proportion of radioactive to non-radioactive carbon is constant: ca. 1 part per trillion (600 billion atoms/mole). In 1958, Hessel de Vries showed that the concentration of carbon-14 in the atmosphere varies with both time and locality. In order to obtain the most accurate results in carbon dating, calibration curves must be employed.

Other mechanisms of producing C-14

14C can also be produced at ground level, at a rate of about 1 × 10−4 atoms per gram per second, which is not considered significant enough to affect dating unless another source of neutrons is known to be present.

C-14 uptake in living organisms

Plants and all other photosynthesizing organisms (algae, some bacteria, some protists) use atmospheric carbon dioxide in photosynthesis. The products of photosynthesis are ingested by animals. At the same time, all living organisms fueled by carbon molecules release carbon dioxide in the process of cellular respiration.

The decay of organic matter

This means that almost all living organisms are constantly exchanging carbon-14 atoms with their environment. This exchange stops when the organism dies. Nevertheless, release of CO2 from the organism continues, by processes of molecular decay (disintegration). These processes, however, do not change the fraction of C-14 relative to the other two species of carbon (C-12 and C-13) in decaying organic matter.

Radioactive decay of C-14

It is the process of radioactive decay (beta decay) that gradually decreases the fraction of the C-14 isotope relative to the other two isotopes of carbon. The half-life of C-14 is 5,730 ± 40 years. This means that the fraction of C-14 relative to each of the two other species of carbon (C-12 and C-13) declines by half in approximately 5,730 years. The equation for the radioactive decay of C-14 involves the production of a standard nitrogen atom (N-14), an electron (e−, also called a beta particle, β−), and a subatomic particle called an electron antineutrino (ν̄e): 14C → 14N + e− + ν̄e.

Computation of ages and dates

The number of decays per time period is proportional to the current number of radioactive atoms. This is expressed by the following differential equation, where N is the number of radioactive atoms and λ is a positive number called the decay constant: dN/dt = −λN. As the solution to this equation, the number of radioactive atoms N can be written as a function of time, N(t) = N0 e^(−λt), which describes an exponential decay over a timespan t with an initial condition of N0 radioactive atoms at t = 0. Canonically, t is 0 when the decay started. In this case, N0 is the initial number of 14C atoms when the decay started. For radiocarbon dating of a once living organism, the initial ratio of 14C atoms to the sum of all other carbon atoms at the point of the organism's death, and hence the point when the decay started, is approximately the same ratio as in the atmosphere at that time.
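As a small illustration of the decay law just described, here is a hedged Python sketch (not part of the original article). It takes a measured 14C level expressed as a fraction of the assumed initial level and inverts N(t) = N0·e^(−λt) to obtain a raw, uncalibrated age; the 5,730-year half-life quoted above is used, although, as discussed later in the article, laboratories conventionally report raw ages with the older Libby value of 5,568 years.

```python
import math

HALF_LIFE = 5730.0                        # years (value quoted above);
                                          # labs conventionally use 5568 for raw ages
DECAY_CONSTANT = math.log(2) / HALF_LIFE  # lambda in N(t) = N0 * exp(-lambda * t)

def radiocarbon_age(fraction_remaining):
    """Age in years from the surviving fraction N/N0 of carbon-14."""
    if not 0 < fraction_remaining <= 1:
        raise ValueError("fraction_remaining must be in (0, 1]")
    return -math.log(fraction_remaining) / DECAY_CONSTANT

# A sample retaining 25% of its original 14C is two half-lives old.
print(round(radiocarbon_age(0.25)))  # 11460 years
```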
Two characteristic times can be defined:
- mean- or average-life: the mean or average time each radiocarbon atom spends in a given sample until it decays
- half-life: the time required for half the number of radiocarbon atoms in a given sample to decay
It can be shown that:
- the radiocarbon mean- or average-life is 1/λ = 8,033 years (Libby value)
- the radiocarbon half-life is ln(2)/λ = 5,568 years (Libby value)
Notice that dates are customarily given in years BP, which implies t(BP) = −t, because the time arrow for dates runs in the reverse direction from the time arrow for the corresponding ages. From these considerations and the above equation, it results that, for a raw radiocarbon date, t = (1/λ) ln(N/N0), and for a raw radiocarbon age, t(BP) = −(1/λ) ln(N/N0). After replacing values, the raw radiocarbon age becomes either of the following equivalent formulae: using logs base e and the average life, t(BP) = −8,033 ln(N/N0) years; using logs base 2 and the half-life, t(BP) = −5,568 log2(N/N0) years. Wiggle matching uses the non-linear relationship between the 14C age and calendar age to match the shape of a series of closely sequentially spaced 14C dates with the 14C calibration curve.

Radiocarbon dating of soil organic matter (SOM) is problematic because SOM accumulates from heterogeneous sources. Fractionation of the heterogeneous organic carbon sources limits the application and interpretation of carbon dating of SOM. To remedy the inconsistencies in previous methods of carbon-14 dating of SOM, a high-temperature pyrolysis-combustion technique was used. A combustion system used by the Illinois State Geological Survey (ISGS), operated under vacuum, fractionates the SOM into a volatile and a residual fraction. The volatile fraction contains low-molecular-weight organic compounds, whereas the residual fraction contains high-molecular-weight organic compounds. Before extraction of carbon dioxide from SOM samples, pretreatment is necessary. Each sample must be pretreated with heated 2 N HCl, followed by rinsing with deionized water and vacuum filtration. Drying the sample in a furnace reduces the accumulation of water within the system. The combustion system utilized by the ISGS consists of an inner and an outer quartz tube. To ensure pure CO2 production, a vacuum of −25 psi must be established. During volatile pyrolysis, the inner tube is purged with argon while the outer tube is purged with oxygen. As the oxygen is purged through the outer tube, the volatile compounds released from the sample are carried by the argon into the outer tube, where they are oxidized at 800 degrees Celsius to form carbon dioxide. The CO2 and other gases produced from the volatile fraction are then passed through a cupric oxide furnace and wash traps, including a 0.5 N AgNO3 solution and a solution of 7.3 g Na2Cr2O7 in 50% H2SO4, for purification purposes. After the filtration, the CO2 is passed through a dry ice-isopropanol trap to remove the water, and the CO2 is finally collected in liquid nitrogen traps. The end of the volatile fraction is marked by the disappearance of the flame in the ignition furnace. Once the purified CO2 is transferred, the residual pyrolysis begins with the purging of the inner tube with oxygen and the outer tube with argon. In pyrolysis of large samples, a stainless steel chamber and a crucible furnace connected to the inner tube of the combustion system must be used. The purified CO2 is then converted to benzene for liquid scintillation spectrometry.

Measurements and scales

Measurements are traditionally made by counting the radioactive decay of individual carbon atoms by gas proportional counting or by liquid scintillation counting.
For samples of sufficient size (several grams of carbon), this method is still widely used in the 2000s. Among others, all the tree ring samples used for the calibration curves (see below) were determined by these counting techniques. Such decay counting, however, is relatively insensitive and subject to large statistical uncertainties for small samples. When there is little carbon-14 to begin with, the long radiocarbon half-life means that very few of the carbon-14 atoms will decay during the time allotted for their detection, resulting in few disintegrations per minute. The sensitivity of radiocarbon dating has been greatly increased by the use of accelerator mass spectrometry (AMS). With this technique 14C atoms can be detected and counted directly, as opposed to detecting radioactive decay. Radiocarbon AMS samples are prepared by completely burning the sample, collecting the resulting carbon dioxide, and reducing it to a solid carbon target for sputtering atomic carbon ions into the mass spectrometer. This method allows dating samples containing only a few milligrams of carbon. Raw radiocarbon ages (i.e., those not calibrated) are usually reported in "years Before Present" (BP). This is the number of radiocarbon years before 1950, based on a nominal (and assumed constant – see "calibration" below) level of carbon-14 in the atmosphere equal to the 1950 level. These raw dates are also based on a slightly incorrect historic value for the radiocarbon half-life. This value is used for consistency with earlier published dates (see "Radiocarbon half-life" below). See the section on computation for the basis of the calculations. Radiocarbon dating laboratories generally report an uncertainty for each date. For example, 3000 ± 30 BP indicates a standard deviation of 30 radiocarbon years. Traditionally, this included only the statistical counting uncertainty. However, some laboratories supplied an "error multiplier" that could be multiplied by the uncertainty to account for other sources of error in the measuring process. More recently, laboratories try to quote the overall uncertainty, which is determined from control samples of known age and verified by international intercomparison exercises. As of 2008, a typical uncertainty of better than ±40 radiocarbon years could be expected for samples younger than 10,000 years. This, however, is only a small part of the uncertainty of the final age determination (see section Calibration below). Samples older than the upper age-limit cannot be dated because the small number of remaining intrinsic 14C atoms will be obscured by 14C background atoms introduced into the samples while they still resided in the environment, during sample preparation, or in the detection instrument. As of 2007, the limiting age for a 1 milligram sample of graphite is about ten half-lives, approximately 60,000 years. This age is derived from that of the calibration blanks used in an analysis, whose 14C content is assumed to be the result of contamination during processing (as a result of this, some facilities will not report an age greater than 60,000 years for any sample). A variety of sample processing and instrument-based constraints have been postulated to explain the upper age-limit. To examine instrument-based background activities in the AMS instrument of the W. M. Keck Carbon Cycle Accelerator Mass Spectrometry Laboratory of the University of California, a set of natural diamonds were dated.
Natural diamond samples from different sources within rock formations with standard geological ages in excess of 100 Ma yielded 14C apparent ages of 64,920 ± 430 BP to 80,000 ± 1,100 BP, as reported in 2007.

The need for calibration

Dates may be expressed as either uncalibrated or calibrated years (the latter abbreviated as cal or cal.). A raw BP date cannot be used directly as a calendar date, because the level of atmospheric 14C has not been strictly constant during the span of time that can be radiocarbon dated, producing radiocarbon plateaus. The level is affected by variations in the cosmic ray intensity, which is, in turn, affected by variations in the Earth's magnetosphere. In addition, there are substantial reservoirs of carbon in organic matter, the ocean, ocean sediments (see methane hydrate), and sedimentary rocks. Changes in the Earth's climate can affect the carbon flows between these reservoirs and the atmosphere, leading to changes in the atmosphere's 14C concentration. Calibration data show that the uncalibrated, raw BP date underestimates the actual age by about 3,000 years at 15,000 BP. The underestimation generally runs about 10% to 20%, with 3% of that underestimation attributable to the use of 5,568 years as the half-life of 14C instead of the more accurate 5,730 years. To maintain consistency with a large body of published research, the out-of-date half-life figure is still used in all radiocarbon measurements. An uncalibrated radiocarbon date is abbreviated as 14C yr BP or C14 yr BP or simply BP, although the last is ambiguously also sometimes used with dating methods other than radiocarbon, such as stratigraphy. A calibrated, or calendar, date is abbreviated as cal yr BP or cal BP, interpretable as "calibrated years before present" or "calendar years before present". In academic practice calibrated dates are generally presented along with their source uncalibrated dates, as the accuracy of the presently established calibration curve varies by time period. The standard radiocarbon calibration curve is continuously being refined on the basis of new data gathered from tree rings, coral, and other studies. In addition to the natural variation of the curve throughout time, the carbon-14 level has also been affected by human activities in recent centuries. From the beginning of the industrial revolution in the 18th century to the 1950s, the fractional level of 14C decreased because of the admixture of CO2 into the atmosphere from the combustion of fossil fuels. This decline, which is known as the Suess effect, also affects the 13C isotope. However, atmospheric 14C was almost doubled during the 1950s and 1960s due to atmospheric atomic bomb tests. The raw radiocarbon dates, in BP years, are calibrated to give calendar dates. Standard calibration curves are available, based on comparison of radiocarbon dates of samples that can be dated independently by other methods such as examination of tree growth rings (dendrochronology), deep ocean sediment cores, lake sediment varves, coral samples, and speleothems (cave deposits). The calibration curves can vary significantly from a straight line, so comparison of uncalibrated radiocarbon dates (e.g., plotting them on a graph or subtracting dates to give elapsed time) is likely to give misleading results. There are also significant plateaus in the curves, such as the one from 11,000 to 10,000 radiocarbon years BP, which is believed to be associated with changing ocean circulation during the Younger Dryas period.
Over the historical period (from 0 to 10,000 years BP), the average width of the uncertainty of calibrated dates was found to be 335 years; in well-behaved regions of the calibration curve the width decreased to about 113 years, while in ill-behaved regions it increased to a maximum of 801 years. Significantly, in the ill-behaved regions of the calibration curve, increasing the precision of the measurements does not have a significant effect on increasing the accuracy of the dates. The 2004 version of the calibration curve extends back quite accurately to 26,000 years BP. Any errors in the calibration curve do not contribute more than ±16 years to the measurement error during the historic and late prehistoric periods (0–6,000 yrs BP) and no more than ±163 years over the entire 26,000 years of the curve, although its shape can reduce the accuracy as mentioned above. In late 2009, the journal Radiocarbon announced agreement on the INTCAL09 standard, which extends a more accurate calibration curve to 50,000 years. The results of research on varves in Lake Suigetsu, Japan, announced in 2012, realised this aim. "In most cases, the radiocarbon levels deduced from marine and other records have not been too far wrong. However, having a truly terrestrial record gives us better resolution and confidence in radiocarbon dating," said Bronk Ramsey. "It also allows us to look at the differences between the atmosphere and oceans and study the implications for our understanding of the marine environment as part of the global carbon cycle."

Radiocarbon half-life

Carbon dating was developed by the American scientist Willard Libby and his team at the University of Chicago. Libby calculated the half-life of carbon-14 as 5,568 ± 30 years, a figure now known as the Libby half-life. Following a conference at the University of Cambridge in 1962, a more accurate figure of 5,730 ± 40 years was agreed upon, based on more recent experimental data (this figure is now known as the Cambridge half-life). The chairman of the Cambridge conference, Harry Godwin, wrote to the scientific journal Nature, recommending that the Libby half-life continue to be used for the time being, as the Cambridge figure might itself be improved by future experiments. Laboratories today continue to use the Libby figure to avoid inconsistencies with earlier publications, although the Cambridge half-life is still the most accurate figure that is widely known and accepted. However, the inaccuracy of the Libby half-life is not relevant if calibration is applied: the mathematical term representing the half-life is canceled out as long as the same value is used throughout a calculation.

Carbon exchange reservoir

Libby's original exchange reservoir hypothesis assumed that the exchange reservoir is constant all over the world. The calibration method also assumes that the temporal variation in the 14C level is global, such that a small number of samples from a specific year are sufficient for calibration. However, since Libby's early work was published (1950 to 1958), latitudinal and continental variations in the carbon exchange reservoir have been observed by Hessel de Vries (1958; as reviewed by Lerman et al.). Subsequently, methods have been developed that allow the correction of these so-called reservoir effects, including:
- When CO2 is transferred from the atmosphere to the oceans, it initially shares the 14C concentration of the atmosphere.
However, turnaround times of CO2 in the ocean are similar to the half-life of 14C (making 14C also a dating tool for ocean water). Marine organisms feed on this "old" carbon, and thus their radiocarbon age reflects the time of CO2 uptake by the ocean rather than the time of death of the organism. This marine reservoir effect is partly handled by a special marine calibration curve, but local deviations of several hundred years exist.
- Erosion and immersion of carbonate rocks (which are generally older than 80,000 years and so should not contain measurable 14C) causes an increase in 12C and 13C in the exchange reservoir, which depends on local weather conditions and can vary the ratio of carbon that living organisms incorporate. This is believed to be negligible for the atmosphere and atmosphere-derived carbon, since most erosion will flow into the sea. The atmospheric 14C concentration may differ substantially from the concentration in local water reservoirs. Eroded from CaCO3 or organic deposits, old carbon may be assimilated easily and introduce diluted 14C into trophic chains. So the method is less reliable for such materials, as well as for samples derived from animals with such plants in their food chain.
- Volcanic eruptions eject large amounts of carbon into the air, causing an increase in 12C and 13C in the exchange reservoir, and can vary the exchange ratio locally. This explains the often irregular dating achieved in volcanic areas.
- The earth is not affected evenly by cosmic radiation; the magnitude of the radiation at a particular place depends on both its altitude and the local strength of the earth's magnetic field, thus causing minor variation in the local 14C production. This is accounted for by having calibration curves for different locations of the globe. However, this could not always be performed, as tree rings for calibration were only recoverable from certain locations in 1958.
The rebuttals by Münnich et al. and by Barker both maintain that, while variations of carbon-14 exist, they are about an order of magnitude smaller than those implied by Crowe's calculations. These effects were first confirmed when samples of wood from around the world, which all had the same age (based on tree ring analysis), showed deviations from the dendrochronological age. Calibration techniques based on tree-ring samples have contributed to increased accuracy since 1962, when they were accurate to 700 years at worst.

Speleothem studies extend the 14C calibration

Speleothems (such as stalagmites) are calcium carbonate deposits that form from drips in limestone caves. Individual speleothems can be tens of thousands of years old. Scientists are attempting to extend the record of atmospheric carbon-14 by measuring radiocarbon in speleothems which have been independently dated using uranium-thorium dating. These results are improving the calibration for the radiocarbon technique and extending its usefulness to 45,000 years into the past. Initial results from a cave in the Bahamas suggested a peak in the amount of carbon-14 that was twice as high as modern levels. A recent study does not reproduce this extreme shift and suggests that analytical problems may have produced the anomalous result.
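To make the calibration step described in this article concrete, here is a minimal Python sketch. It is illustrative only: the tiny calibration table is invented for the example and must not be mistaken for real IntCal values; actual work uses the published calibration curves and dedicated calibration software.

```python
# Hedged sketch: convert a raw radiocarbon age to an approximate calendar age
# by linear interpolation on a calibration table. The table below is invented
# purely for illustration; real calibrations use IntCal curves and proper
# treatment of measurement uncertainty.

import bisect

# (radiocarbon age BP, calendar age cal BP) -- hypothetical example points,
# ordered by radiocarbon age.
CAL_TABLE = [
    (1000, 930),
    (2000, 1950),
    (3000, 3200),
    (4000, 4500),
]

def calibrate(raw_age_bp):
    """Linearly interpolate a calendar age from the toy calibration table."""
    ages = [r for r, _ in CAL_TABLE]
    if not ages[0] <= raw_age_bp <= ages[-1]:
        raise ValueError("raw age outside the toy table's range")
    i = bisect.bisect_left(ages, raw_age_bp)
    if ages[i] == raw_age_bp:
        return CAL_TABLE[i][1]
    (r0, c0), (r1, c1) = CAL_TABLE[i - 1], CAL_TABLE[i]
    return c0 + (c1 - c0) * (raw_age_bp - r0) / (r1 - r0)

print(calibrate(2500))  # about 2575 cal BP with these made-up points
```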
- Ancient footprints of Acahualinca - Chauvet Cave - The Dead Sea Scrolls - Eve of Naharon - Haraldskær Woman - Kennewick Man - Shroud of Turin - Skeleton Lake - Thera eruption - Vinland map - Arizona Accelerator Mass Spectrometry Laboratory - Carbon sequestration - Cosmogenic isotopes - Discussion of half-life and average-life or mean-lifetime - Environmental isotopes - Old wood - Plastino, W.; Kaihola, L.; Bartolomei, P.; Bella, F. (2001). "Cosmic Background Reduction In The Radiocarbon Measurement By Scintillation Spectrometry At The Underground Laboratory Of Gran Sasso". Radiocarbon 43 (2A): 157–161. - Doug McDougall (2008). Nature's Clocks: How Scientists Measure the Age of Almost Everything. Berkey & Los Angeles, California: University of California Press. p. 45. - Christopher Bronk Ramsey, Michael W. Dee, Joanne M. Rowland, Thomas F. G. Higham, Stephen A. Harris, Fiona Brock, Anita Quiles, Eva M. Wild, Ezra S. Marcus, Andrew J. Shortland (June 18, 2010). "Radiocarbon-Based Chronology for Dynastic Egypt". Science Magazine. Retrieved January 27, 2013. - "The Incredible Age of the Find". South Tyrol Museum of Archaeology. 2013. Retrieved January 27, 2013. - Thomas Higham. "The 14C Method". Retrieved January 27, 2013. - Arnold, J. R.; Libby, W. F. (1949). "Age Determinations by Radiocarbon Content: Checks with Samples of Known Age". Science 110 (2869): 678–680. Bibcode:1949Sci...110..678A. doi:10.1126/science.110.2869.678. PMID 15407879. - Willard Frank Libby - Münnich KO, Östlund HG, de Vries H (1958). "Carbon-14 Activity during the past 5,000 Years". Nature 182 (4647): 1432–3. Bibcode:1958Natur.182.1432M. doi:10.1038/1821432a0. - McFadgen, B G et al. (1994). "Radiocarbon calibration curve variations and their implications for the interpretation of New Zealand prehistory". Radiocarbon 36 (2): 221–236. "The shape of a distribution of calibrated 14 C dates displays spurious peaks and troughs, brought about by changes in the slope of the calibration curve interacting with the spreading effect of the stochastic distribution of counting errors" - Ramsey, C. B. (2008). "Radiocarbon dating: revolutions in understanding". Archaeometry 50 (2): 249–275. doi:10.1111/j.1475-4754.2008.00394.x. - Wang, Hong; Keith C. Hackley,a Samuel V. Panno,a Dennis D. Coleman,b Jack Chao-li Liu,b and Johnie Brownc (8). "Pyrolysis-combustion 14C dating of soil organic matter". Quaternary Research 60: 348–355. Bibcode:2003QuRes..60..348W. doi:10.1016/j.yqres.2003.07.004. Retrieved 2012-04-23. - Coleman, Dennis (1973). "Illinois State Geological Survey Dates IV". Radiocarbon 15 (1): 75–85. - Goslar, T.; Czernik, J. (2000). "Sample Preparation in the Gliwice Radiocarbon Laboratory". Geochronmetria 18: 1–8. - Scott, EM (2003). "The Fourth International Radiocarbon Intercomparison (FIRI).". Radiocarbon 45: 135–285. - "NOSAMS Radiocarbon Data and Calculations". Woods Hole Oceanographic Institution. - Taylor RE, Southon J (2007). "Use of natural diamonds to monitor 14 C AMS instrument backgrounds". Nuclear Instruments and Methods in Physics Research B 259: 282–28. Bibcode:2007NIMPB.259..282T. doi:10.1016/j.nimb.2007.01.239. 
- Reimer, P.J., Baillie, M.G.L., Bard, E., Bayliss, A., Beck, J.W., Bertrand, C.J.H., Blackwell, P.G., Buck, C.E., Burr, G.S., Cutler, K.B., Damon, P.E., Edwards, R.L., Fairbanks, R.G., Friedrich, M., Guilderson, T.P., Hogg, A.G., Hughen, K.A., Kromer, B., McCormac, G., Manning, S., Bronk Ramsey, C., Reimer, R.W., Remmele, S., Southon, J.R., Stuiver, M., Talamo, S., Taylor, F.W., van der Plicht, J., Weyhenmeyer, C.E. (2004). "IntCal04 Terrestrial radiocarbon age calibration". Radiocarbon 46: 1029–58. - "Atmospheric δ14 C record from Wellington". Carbon Dioxide Information Analysis Center. Retrieved 1 May 2008. 2 record from Vermunt". Carbon Dioxide Information Analysis Center. Retrieved 1 May 2008. - "Radiocarbon dating". Utrecht University. Retrieved 1 May 2008. - Kudela K. and Bobik P. (2004). "Long-Term Variations of Geomagnetic Rigidity Cutoffs". Solar Physics 224: 423–431. Bibcode:2004SoPh..224..423K. doi:10.1007/s11207-005-6498-9. - Radiocarbon Calibration University of Oxford, Radiocarbon Web Info, Version 130 Issued 19/Mar/2012, retrieved 6/July/2012 - Reimer, Paula J.; Brown, Thomas A.; Reimer, Ron W. (2004). "Discussion: Reporting and Calibration of Post-Bomb 14 C Data". Radiocarbon 46 (3): 1299–1304 - These results were obtained from a Monte Carlo analysis calibrating simulated measurements of varying precision using the 1993 version of the calibration curve. The width of the uncertainty represents a 2σ uncertainty (that is, a likelihood of 95% that the date appears between these limits). Niklaus TR, Bonani G, Suter M, Wölfli W (1994). "Systematic investigation of uncertainties in radiocarbon dating due to fluctuations in the calibration curve". Nuclear Instruments and Methods in Physics Research (B ed.) 92: 194–200. Bibcode:1994NIMPB..92..194N. doi:10.1016/0168-583X(94)96004-6. - Reimer Paula J et al. (2004). "INTCAL04 Terrestrial Radiocarbon Age Calibration, 0–26 Cal Kyr BP". Radiocarbon 46 (3): 1029–1058. A web interface is here. - Reimer, P.J.; et. al. (2009). "IntCal09 and Marine09 Radiocarbon Age Calibration Curves, 0–50,000 Years cal BP". Radiocarbon 51 (4): 1111–1150. - Balter, Michael (15 Jan 2010). "Radiocarbon Daters Tune Up Their Time Machine". ScienceNOW Daily News. - "Japanese lake record improves radiocarbon dating". AAAS. 18 Oct 2012. Retrieved 18 Oct 2012. - Godwin, H. (1962). "Half-life of Radiocarbon". Nature 195 (4845): 984. Bibcode:1962Natur.195..984G. doi:10.1038/195984a0. - Libby WF (1955). Radiocarbon dating (2nd ed.). Chicago: University of Chicago Press. - Lerman, J. C.; Mook, W. G.; Vogel, J. C.; de Waard, H. (1969). "Carbon-14 in Patagonian Tree Rings". Science 165 (3898): 1123–1125. Bibcode:1969Sci...165.1123L. doi:10.1126/science.165.3898.1123. PMID 17779805. - McNichol AP, Schneider RJ, von Reden KF, Gagnon AR, Elder KL, NOSAMS, Key RM, Quay PD (October 2000). "Ten years after - The WOCE AMS radiocarbon program". Nuclear Instruments and Methods in Physics Research, Section B: Beam Interactions with Materials and Atoms 172 (1–4): 479–84. Bibcode:2000NIMPB.172..479M. doi:10.1016/S0168-583X(00)00093-8. - Stuiver M, Braziunas TF (1993). "Modelling atmospheric 14 C influences and 14 C ages of marine samples to 10,000 BC". Radiocarbon 35 (1): 137. - Kolchin BA, Shez YA (1972). Absolute archaeological datings and their problems. Moscow: Nauka. - Crowe C (1958). "Carbon-14 activity during the past 5000 years". Nature 182 (4633): 470–1. Bibcode:1958Natur.182..470C. doi:10.1038/182470a0. - Barker H (1958). "Carbon-14 Activity during the past 5,000 Years". 
Nature 182 (4647): 1433. Bibcode:1958Natur.182.1433B. doi:10.1038/1821433a0. - Libby WF (1962). "Radiocarbon; an atomic clock". Annual Science and Humanity Journal. - Wang YJ; Cheng, H; Edwards, RL; An, ZS; Wu, JY; Shen, CC; Dorale, JA (2001). "A High-Resolution Absolute-Dated Late Pleistocene Monsoon Record from Hulu Cave, China". Science 294 (5550): 2345–2348. Bibcode:2001Sci...294.2345W. doi:10.1126/science.1064618. PMID 11743199. - Beck JW; Richards, DA; Edwards, RL; Silverman, BW; Smart, PL; Donahue, DJ; Hererra-Osterheld, S; Burr, GS et al. (2001). "Extremely large variations of atmospheric C-14 concentration during the last glacial period". Science 292 (5526): 2453–2458. Bibcode:2001Sci...292.2453B. doi:10.1126/science.1056649. PMID 11349137. - Hoffmann DL; Beck, J. Warren; Richards, David A.; Smart, Peter L.; Singarayer, Joy S.; Ketchmark, Tricia; Hawkesworth, Chris J. (2010). "Towards radiocarbon calibration beyond 28 ka using speleothems from the Bahamas". Earth and Planetary Science Letters 289: 1–10. Bibcode:2010E&PSL.289....1H. doi:10.1016/j.epsl.2009.10.004. - Jensen MN (2001). "Peering deep into the past". University of Arizona, Department of Physics. - Pennicott K (10 May 2001). "Carbon clock could show the wrong time". PhysicsWeb. - Bowman, Sheridan (1990). Interpreting the Past: Radiocarbon Dating. Berkeley: University of California Press. ISBN 0-520-07037-2. - Currie, L. (2004). "The Remarkable Metrological History of Radiocarbon Dating II". J. Res. Natl. Inst. Stand. Technol. 109: 185–217. - Friedrich, M.; et al. (2004). "The 12,460-Year Hohenheim Oak and Pine Tree-Ring Chronology from Central Europe—a Unique Annual Record for Radiocarbon Calibration and Paleoenvironment Reconstructions". Radiocarbon 46: 1111–1122. - Gove, H. E. (1999) From Hiroshima to the Iceman. The Development and Applications of Accelerator Mass Spectrometry. Bristol: Institute of Physics Publishing. - Kovar, Anton J. (1966). "Problems in Radiocarbon Dating at Teotihuacan". American Antiquity (Society for American Archaeology) 31 (3): 427–430. doi:10.2307/2694748. JSTOR 2694748. - Lorenz, R. D.; Jull, A. J. T.; Lunine, J. I.; Swindle, T. (2002). "Radiocarbon on Titan". Meteoritics and Planetary Science 37 (6): 867–874. Bibcode:2002M&PS...37..867L. doi:10.1111/j.1945-5100.2002.tb00861.x. - Mook, W. G.; van der Plicht, J. (1999). "Reporting 14 C activities and concentrations". Radiocarbon 41: 227–239. - Weart, S. (2004) The Discovery of Global Warming - Uses of Radiocarbon Dating. - Willis, E.H. (1996) Radiocarbon dating in Cambridge: some personal recollections. A Worm's Eye View of the Early Days. - Radiocarbon - The main international journal of record for research articles and date lists relevant to 14C - C14dating.com - General information on Radiocarbon dating - calib.org - Calibration program, Marine Reservoir database, and bomb calibration - NOSAMS: National Ocean Sciences Accelerator Mass Spectrometry Facility at the Woods Hole Oceanographic Institution - Discussion of calibration (from U Oxford) - Several calibration programs can be found at www.radiocarbon.org - CalPal Online (Cologne Radiocarbon Calibration & Paleoclimate Research Package) - OxCal program (Oxford Calibration) - Fairbanks' Radiocarbon Calibration program (for prior to 12400 BP) - Notes on radiocarbon dating, including movies illustrating the atomic physics (from UC Santa Barbara) - Carbon Dating-How it works? - How is physics used in archaeology? (from physics.org)
http://en.wikipedia.org/wiki/Radiocarbon_dating
We are trying to develop an assessment for a science unit based on curriculum from the National Science Resource Center and the Smithsonian. One of the key concepts of this unit states: "A force is any push or pull on an object. An unbalanced force is needed to make a resting object move, to bring a moving object to rest, or to change the direction of a moving object." We do not find a coherent explanation of what an unbalanced force is, or why it is important.
An unbalanced force is one that is not opposed by an equal and opposite force operating directly against the force intended to cause a change in the object's state of motion or rest. Consider this little illustration. Object O is at rest and subjected to a force from the left, as shown. Let ====> O represent the force applied to change the object's state of motion or rest. This unopposed (unbalanced) force will cause the object to move to the right. Let O <==== represent an opposing force of equal magnitude operating on the object from the right. When the forces are opposed and impinging on the object, ====> O <====, the object will not move, because each force is balanced by an equal and opposite force. However, if the forces are unbalanced and aligned thus, ====> O <========, the larger force coming from the right is not balanced by the one from the left. Thus, the object will move toward the left. The picture is more complicated than I can illustrate here because an opposing force may be impinging on the object at an angle. Overall, it is the "net" unbalanced force that will cause the object to move or change its state of motion.
Newton's second law of motion defines force as mass times acceleration. Thus a force acting on an object will induce an acceleration, assuming the mass stays constant. In a frictionless environment, a body can be considered at rest when it is not being subjected to an acceleration. That means that the object can either be completely still or moving at a constant velocity with respect to the observer's frame of reference. To my knowledge, in physics, there is no such thing called an unbalanced force. I can, however, explain what they may be talking about in this case. Our world has friction. It is a force acting on everything that moves. We do not see it or create it, but it is there and generally causes a negative acceleration on moving objects like cars and bikes. To counteract this force of friction, we must generate a force in the opposite direction of the frictional force. If these two forces are equal and opposite, then the net acceleration of the object is zero and the object (car or bike) maintains a constant velocity. Now, if we wish to change this velocity, we must "unbalance" the forces. If we increase the force pushing the object forward, it accelerates in that direction, and vice versa.
An example that I used with my high school physics students is a "tug-o-war". If both sides pull with equal and opposite forces, the flag at the center of the rope does not accelerate to the left or to the right. There are certainly forces being applied to the rope -- team "A" pulling to the right and team "B" pulling to the left -- but there is no "net force", that is, no "unbalanced" force -- so the flag does not accelerate left or right. Likewise, an airplane flying at a constant speed (say 400 mph) in a straight line (say due north) and at a constant altitude (say 30,000 feet) has no unbalanced forces.
The forward thrust of the plane's engines exactly equals the backward air resistance (drag), and the upward lift of the plane's wings exactly equals the downward pull due to gravity. The plane has forces but no net forces (no "unbalanced" forces). To accelerate to a faster speed, the plane's thrust would have to be greater than its drag (thus applying a net force forward). To slow down, the plane's drag would have to be greater than the plane's thrust (applying a net force backward). To go to a higher altitude, the plane's lift would have to be greater than the plane's weight. And so on. This is important because Newton's second law, F = ma, states that an object will only accelerate when a net force is applied. If there is no net force, the object will not accelerate.
Todd Clark, Office of Science, U.S. Department of Energy
You have confronted one of the fundamental problems in physics that is not addressed in most presentations and/or physics texts. The problem is that of the "undefined terms". If you examine the standard definitions of the fundamental terms in physics -- for example, "force", "energy", "mass" -- you will find their definitions are circular, one being defined in terms of the other and the other way around. Circular definitions are logically unacceptable, so how can the circle be broken? The dilemma is resolved by recognizing that "physics" is an "effective theory". The term "effective theory" is used here in a non-standard, specialized way. An "effective theory" is one that recognizes that there are certain fundamental elements of the theory that cannot be defined in terms of elements of the theory. In physical science and mathematics all theories are "effective" (at least all I have come across). Euclidean geometry deals with "points" and "lines", but those terms are not defined within the lexicon of Euclidean geometry. They are the fundamental elements whose BEHAVIOR geometry treats. Thermodynamics deals with "work" and "heat", "energy", "temperature". It is my favorite example of circular definitions. According to Van Nostrand's Scientific Encyclopedia (a typical source): "Heat. The agency whose addition to or removal from a physical system is the cause of thermal changes of several types. These include rise and fall of ..." "Temperature. That property of systems which determines whether they are in thermodynamic equilibrium. Two systems are in equilibrium when their ..." "...This led to the comparison of the states of thermal equilibrium of two bodies in terms of a third body called a thermometer. The temperature scale is a measure of [the] state of thermal equilibrium, and two systems at thermal equilibrium must have the same temperature."
If that does not drive you up the wall, I do not know what will! What is this "agency"? Temperature is that "property"? What is that "property"? The problem is too large to discuss here, but paraphrasing Richard Feynman, he says of "energy" words to the effect: There is this number I know how to calculate (energy) using certain formulas. When I change a system in some experimentally defined way, I always calculate the same value from the formulas. I will call this result of calculating using the formulas "the change in energy". There are other formulas that give rise to quantities that do not change, too. One of them I might call "momentum". So there are quantities that are defined operationally by some instrument or another. The definition of the quantity is what the instrument says it is.
Not everyone will agree with me, but I do not see any other "out" from obscure circular definitions.
What is meant here by an "unbalanced force" is a force on an object that is not balanced by another force on the same object. An object with an unbalanced force is an object with a non-zero net force, an object not in equilibrium. The statement about an unbalanced force is saying that the motion of an object in equilibrium (all forces adding up to zero) cannot change.
Dr. Ken Mellendorf, Illinois Central College
I would prefer to call it the NET force. If you take the vector sum of all the forces acting on an object, that sum is the net force, F, that goes into Newton's 2nd Law: F = ma, where m is the mass of the object and a is the acceleration of the center of mass of that object. Note that if the net force is non-zero, the object will accelerate. This could cause it to start moving (if it had been at rest), or to speed up or slow down, and/or to change direction. Remember that force is a vector quantity, and to find the net force, you must take the VECTOR sum of all the forces acting on the object.
Best, Dick Plano, Professor of Physics emeritus, Rutgers University
An unbalanced force is any force that is not opposed by some other force. For example, gravity exerts a force on your body that is perfectly opposed by the force the ground exerts on the bottoms of your feet. As a result, you do not sink into the ground or fly up into the air, even though there are forces exerted on you that would, if unopposed, cause these things to happen.
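The answers above all reduce to the same bookkeeping: take the vector sum of the forces, then divide by the mass. Here is a minimal Python sketch of the tug-of-war and unbalanced cases (the function names and the force values are illustrative, not from the source):

```python
def net_force(forces):
    """Vector sum of 2D forces, each given as (Fx, Fy) in newtons."""
    return (sum(f[0] for f in forces), sum(f[1] for f in forces))

def acceleration(forces, mass):
    """Newton's second law: a = F_net / m (mass in kg, result in m/s^2)."""
    fx, fy = net_force(forces)
    return (fx / mass, fy / mass)

# Tug-of-war: equal and opposite pulls -> zero net force, zero acceleration.
print(acceleration([(500.0, 0.0), (-500.0, 0.0)], mass=2.0))   # (0.0, 0.0)

# Unbalanced pulls: the larger force from the right wins.
print(acceleration([(400.0, 0.0), (-600.0, 0.0)], mass=2.0))   # (-100.0, 0.0)
```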
http://www.newton.dep.anl.gov/askasci/phy00/phy00847.htm
Tip of the Week [Re-posted from "Simple Science Strategies," April 3, 2013] Struggling to find time to teach science in a day full of math and language arts? Trying to move beyond fun activities to authentic learning tasks that lead to big scientific thinking? Wondering how to take your students beyond the superficial to the higher order thinking of a real scientist? Get a copy of The Essentials of Science and Literacy. Who Would Enjoy The Essentials of Science and Literacy ? - Literacy support teachers who are in classrooms during science instruction; - Teachers in priority districts, where the traditional focus has been on increasing literacy scores; - Teachers who like to use an integrated approach to instruction; - Instructional coaches who are charged with helping teachers improve their practice; - Any teacher who wants to raise the level of rigor and engagement in their literacy and science work. Read a review of The Essentials of Science and Literacy For ordering information: Click on the image, above, for information on ordering this text from Barnes & Noble. , lesson plans , classroom environment , Simple Science Strategies , integrated curriculum Homogeneous groups are a type of instructional group where learners are placed with other students who are alike in some way. Literacy instruction in most public school settings includes time in homogeneous groups, usually based on overall reading level, using instructional level texts as the primary reading material. This structure is based on the work of Marie Clay, and others, who advocate that students should be working on their instructional level, with texts that are just a little higher than what they can read and comprehend independently. Identifying a student’s independent, instructional and frustration levels in reading has made reading instruction much more enjoyable for students, and has led to better targeting of specific foci for instruction, for groups of students. 
While homogeneous groups have their place in instruction, only using homogeneous, leveled groups can lead to some unintended problems in instruction:
- While they may progress within their instructional level groups, students in the lower groups often do not "accelerate" – that is, the lowest performing students do not make up a year and a half's growth over the course of the school year, and end the year as far behind as when they started;
- Learners work with their instructional level texts, but students do not apply learned strategies to more challenging texts, nor learn additional strategies for navigating appropriately complex texts (e.g., grade-level, or above) – and these might not even be the same strategies;
- Although generally supported as a strategy for increasing performance of gifted and talented students, the scientific literature does not necessarily support homogeneous ability grouping for other groups;
- Grouping of students based on literacy level creates de facto tracking, as other content areas may now be "leveled" because of the scheduling of reading instruction;
- Students in the highest groups often do not receive an equitable level of instruction, as they generally are more independent in academic tasks, and can complete grade-level assignments without adult assistance – consequently, while they start out and end the year ahead of their classmates, they do not make a year's growth, as they often receive less actual direct instruction with appropriately challenging material;
- Homogeneous grouping may undermine the use of collaborative groups and student discourse, as the lowest readers might lack sufficient oral language skills or background knowledge to have extended discourse, and the highest students may be accustomed to working in isolation rather than collaboratively.
Reading Level vs. Literacy Level
Using the overall reading level of a student (such as her DRA level), rather than a specific skill area, can also make instruction challenging in homogeneous groups:
- Because reading is a complex task, reading level, alone, is not specific enough to help the teacher select an appropriate instructional focus for a student;
- Students in a "group" may be at the same DRA level for many different reasons, and need different strategies from one another;
- The students in the lowest performing groups may need so many different skills that planning appropriate, targeted instruction for their groups becomes challenging;
- Reading level only measures some components of comprehensive literacy: speaking and listening, word work, writing and other areas are not adequately measured by the DRA or other similar assessments;
- Student groups may become static, as reading level changes more slowly than specific skill proficiency.
Why Focus on Discourse?
Forming mixed instructional groups to foster student-to-student discourse is based on several principles:
- The best peer to model a particular skill or strategy is one who most recently mastered it;
- Listening and speaking come before reading and writing;
- Comprehension of a topic or concept exists separately from being able to decode a text about it;
- Comprehension can be measured by how well a student discusses a topic or concept;
- Talking about a topic or concept is a rehearsal for writing about it;
- Discussion has to be explicitly taught, just like other literacy skills.
Forming Discussion Groups
Here is one way to form groups that foster discourse:
- Rank the students in your class from 1-5 on their English oral language, with 1 being a beginner, and 5 being the highest with regard to their oral language skills (ability to converse, vocabulary use, ability to listen to other speakers, ability to work around sticky points when working with a group, etc.).
- Form groups of students with mixed rankings, with a range of similar (but not the same) abilities (see the graphic for an example), such as 1-2-3 together, 2-3-4 together, and 3-4-5 together.
- To distribute the groups among several classrooms, you can now have a low, medium and high group, but the groups will be heterogeneous. And, because you are grouping on a specific skill, you won't have "predictable" groups: e.g., you might have a student ranked as a 4 or 5, based on oral language and group leadership, who is a struggling decoder.
- Provide literacy tasks at a lower reading level than instructional, because your focus will be on discussion strategies to prepare a group oral response to questions. (Literature circles work well for this.)
To Form Groups across Classrooms
Rank the students for the whole grade level. [NOTE: This example assumes the following distribution of students, when ranked by oral language proficiency, just as an example. Use your own numbers here.]
- Rank 1 – 8 students (new arrivals, with limited English oral language proficiency)
- Rank 2 – 12 students
- Rank 3 – 20 students
- Rank 4 – 16 students
- Rank 5 – 4 students
You will want groups of 5 students for discourse. In this example, there are 20 students per classroom, or 4 groups per classroom. To form "leveled" classrooms (to prioritize assignment of adult supports, for example), concentrate the 1's in one classroom, and the 5's in another (you may have to adjust this, depending on your specific numbers). Distribute students of a given rank across a number of groups, paying attention to dynamics among particular students (remember, you want conversation, so you want to create groups of students who will be supportive of one another). Refer to the diagram, below, for one way to form mixed instructional groups, based on oral language proficiency, across multiple classrooms. (NOTE: This same technique could be used to form mixed groups for any other skill, as well.)
Regardless of the content area (math, science, reading, writing), provide learning tasks that focus on discussion.
- For reading, instead of guided reading groups, create literature circles, and provide discussion prompts.
- For math, provide complex problems with multiple possible solutions, and have groups collaborate to solve the problems.
- For science, provide a scientific claim that students must find evidence to both support AND refute, before they select their stance on the issue.
- For social studies, create a gallery walk of artifacts for students to discuss and respond to.
- For writing, create peer editing/revising groups for process writing assignments.
Grade Level: Upper elementary
I do a lot of work with schools that have high populations of second-language learners, including "newcomers," students who are new arrivals to the United States. In these classrooms, both elementary and secondary, teachers make great use of hands-on activities and visuals to separate the cognitive demands from the language demands of their instruction.
Here's a visual review of some techniques they've used recently: 1. Labeling: Classroom objects are labeled with their English names. In some classrooms, these signs are labeled in English, Spanish, French, Twi, and whatever other first languages are present in this high school classroom. 2. Visual Dictionary: A "menu" book is created for school lunch items in an elementary school, with their names in English and in Spanish. Many times, there is not only a language barrier, but a cultural barrier, because the foods are not ones eaten in their culture. 3. Graphic Organizers: Graphic organizers help students see the relationship between pieces of information, even when the words on the page are not understood, making the input more comprehensible. Shown is a sheltered instruction general math class. 4. Visual Technology: The SmartBoard is not only visually accessible, but interactive, and students can manipulate maps, words, numbers and other figures right on the board. Other helpful technological visual aids include iPads, iPods, student responders, and 5. Student-created Visuals: Timelines, maps, diagrams and other visual aids have added meaning when students work together to create them. Here, the visual of the timeline and the student-drawn images help make the timeline come to life in high school social studies. 6. Directions in Words and Pictures: Using pictures to reinforce written words helps to clarify directions. Directions for routine procedures are posted on large charts, where students can access them independently. Vocabulary in the directions is clear, and high-frequency, Tier 1 vocabulary words ("write", "draw") are used repeatedly. Here, a high school science teacher in a sheltered English class posts the directions for a vocabulary game that students use to reinforce and practice general science vocabulary. 7. Target Language Goals and Words: In this high school English classroom, language goals, and their corresponding vocabulary, are posted for a reminder to the teacher, and students, of the skills to be practiced in speaking and listening. 8. Word Walls: In this high school math class, the focus is on Tier 2 vocabulary words that will be used across content areas, and in all math subjects ("position", "round", "fraction"). In addition, high-concept, Tier 1 words are included ("move", "each"). Common phrases or questions are included. Having the vocabulary in a pocket chart allows students and teachers to manipulate them for activities, or take them to their desks for reference, and allows the teacher the flexibility of rotating through vocabulary words as students master them. My thanks to the students and teachers at East Hartford High School and Franklin H. Mayberry Elementary School, in East Hartford, for sharing their work on this blog. The Race to the Moon, 1960's... Back in the "olden days," before A Nation at Risk, before No Child Left Behind, teachers taught children in a free-form, holistic approach. I remember, as an elementary student, creating a mini-world out of moss and soldier lichens in a mason jar in 3rd grade, creating a cloud in a bottle in 5th grade (with the help of my teacher's lit cigarette -- I know, I know -- but this was the 60's...), learning how to play chess in 5th grade, and dissecting a humongous cow eyeball in 6th grade. We created weather maps, backdrops and props for school plays, and photo albums of our class field trips to a local pond. 
Kids these days still do some of these things, but a greater portion of these activities has been condensed to less and less of the weekly schedule, to make way for more explicit skills instruction. We know the reasons for this, some political, some educational. And we have definitely seen that some specific groups of children have historically been "left behind:" students with disabilities, students of color, urban children, poor children, and students who are learning a second language. For many of these groups, our hyperfocus on skills has produced great gains, and we've learned to be better diagnosticians and better teachers. But, alas, we are seeing some of the unintended consequences of this skills-focus, as well: kids who passively attend classes, waiting to be "filled" with information; students who do not know how to think, question or wonder, or problem-solve; and children (and teachers!) who question the relevance of the material that is being taught to today. The Giant Circle Here we are in 2012. The Common Core State Standards have raised the bar for many educators and their students. The Next Generation Science Education Standards imminent release have schools scrambling to, once again, find time for science instruction in a schedule previously usurped by reading, writing and mathematics, the "testable" subjects. A sluggish American economy has forced districts to more and more with less and less. There is a movement afoot to return to teaching rich topics, and infuse the literacy and numeracy skills required to learn important scientific and historical ideas: a rising number of "theme" public schools as choices in urban areas; a growing number of charter schools devoted to the arts or sciences; STEM magnet schools emerging across the country. Along with this, there are thousands of teachers trying to go back to the "old" way of teaching, with the "new" way of looking at skills and standards infused within. I am having a great time working with teachers all over, as they explore favorite topics through the lens of the Common Core State Standards. Here is the first in a series of articles on creating integrated, standards-based curriculum. Wildflowers and Seeds, an Integrated Study for Fall I am building an elementary unit for the start of the school year, on wildflowers and seeds. I chose this topic because I have a desire to build a series of studies around short nature walks and hikes that can be conducted anywhere. One of the things the students will readily observe in September is an abundance of late summer wildflowers in flower and bearing seeds. I know that I want to emphasize several key ideas: - Nature Study - Rules, Routines and Procedures for a New School Year - Describing with Adjectives Brainstorming, by Content Area - seed dispersal mechanisms - observation strategies - science journals - nature centers (learning centers) - weather data (to accompany the skill of observation) - exploring the 100 grid - exploring the number line - navigating the math text book - exploring number facts (fact families, x tables..) Everyday Math includes many activities such as these as the entire first unit in many grade levels. I want to include the specific learning task, "Numbers All Around." - exploring time lines - exploring maps & globes The overall focus for social studies will be on building a community of learners. 
English Language Arts
- Reading/Literature: Miss Rumphius, by Barbara Cooney (done in "Five in a Row" style)
- Reading/Informative Texts: understanding and using field guides to wildflowers
- Reading/Foundations: using context clues to identify the meaning of unknown words in context
- Writing: perspective of a type of seed (strategy: RAFT paper)
- Language: adjectives and adverbs (strategy: "Dressed Up Sentences")
- Speaking/Listening: asking and answering questions
See links for additional information on the FIAR strategy and RAFT paper details, including purchase information, where applicable.
More About "Five in a Row"
Many homeschoolers develop integrated studies around high-quality children's literature by using a technique called "Five in a Row" (FIAR). This curriculum-building technique, developed by Jane Claire Lambert and Becky Jane Lambert, is an easy, fun way to build a collection of learning tasks that are connected to one another, by using a great book as the connector. In the wildflowers and seeds unit I am developing, I know that I want to use Miss Rumphius as my "spine," because the text has an engaging story line, interesting and deep characters, a moral, and a clear connection to the science topic (seeds and wildflowers). Because of the quality of the literature, I know that I will be able to consider a great many connections to various content areas, giving me (and my kids) many different ideas for an integrated unit.
In FIAR, a piece of literature (or a chapter, if it is a novel) is read (or re-read) every day of the week. Each day, learning tasks are developed which correspond to a particular content area. Let's consider Miss Rumphius for a moment:
- Monday (Social Studies): All About Maine (geography, topography, history, climate and culture, coastlines... whatever fits my grade-level social studies curriculum)
- Tuesday (English Language Arts): A Character Study: Miss Rumphius
- Wednesday (Creative Arts): The Dry-brush Watercolor Technique (art response to literature)
- Thursday (Applied Mathematics): Our Classroom Weather Calendar (sun index, length of day, high/low air temperature, rainfall... whatever fits my grade-level math measurement standards)
More About RAFT Papers
RAFT papers are a Project CRISS strategy for helping students organize their writing. The acronym RAFT stands for Role, Audience, Format, Topic. For a study of wildflowers to go with Miss Rumphius, how about a study of seed dispersal mechanisms?
- Role = a burdock fruit (burr); Audience = a neighborhood cat; Format = a thank you note; Topic = helping the burr move to a new home next door
- Role = a poison ivy berry; Audience = a cedar waxwing (bird); Format = a persuasive letter; Topic = why the two should become friends
- Role = a dandelion tuft; Audience = the local meteorologist; Format = a letter to the editor; Topic = review of the local weather forecasts
- Role = a jewelweed plant; Audience = little kids; Format = instructions; Topic = how to make a seed rocket
With the increased focus on the Common Core State Standards in all content areas, science and social studies teachers are looking for ways to include additional, authentic literacy experiences in their instruction. Journaling and notebooking activities can be used to incorporate more writing tasks within your science classes.
"An Apple a Day" is the first in a series of science journaling pages that follows the apple tree throughout the year. This first set focuses on the formation of the apple fruit from the flower. See Simple Science Strategies as additional sets are posted, including the next set (prepared for October), which will focus on the development of fruit and foliage color in the fall. I recently had the opportunity to work with a group of teachers, grades 4-6, as they were developing literacy centers to help support independent learning during small group intervention times. The focus of our centers work was vocabulary practice. Life in Ancient Rome We chose Ancient Roman times as the focus of our practice work, as a couple of the grade levels have the history of ancient times as part of their social studies curriculum. Our first step was to go through the chapters we were working with, and make a grand list of all kinds of words that we might use when working with the students. Our goal through all our vocabulary work was for students to understand the vocabulary words in conversation and reading, and to use them correctly in speaking and writing. Key Principles When Creating Vocabulary Centers Before we begin developing vocabulary centers, we need to review some guiding principles for learning centers and working with vocabulary words: Principle #1: Independence is the Goal Remember that these are intended to be independent activities. Your grade level standards are the student performances that you would expect by the end of the school year, so whatever you design for the centers should be scaled back appropriately, or scaffolded. When you think of scaffolding for learning centers, remember that you (or another adult) are not supposed to be the scaffolds. So think three "p's": Explicit directions, the kind of organizer you use, hint cards, etc., are all ways to use print to scaffold for independence. Likewise, using real props (books, realia, photographs, audio tapes/CDs, artifacts, manipulatives, pocket charts) provide support for students as they work independently. Peers are an underutilized scaffold in classrooms. I don't mean having more knowledgeable students work with strugglers, but the meaningful grouping of students into collaborative units, where they converse and problem-solve. In my classroom, when I was working with small groups, I used the "Ask Three Before Me" rule for solving problems during centers time: 1) first check with someone in your group; 2) next check with your study buddy 3) finally, check with a class "expert" (we posted these in the centers area, so students new classmates who were "experts" on various topics. Remember that centers are not intended to be silent work, but should promote discourse. Principle #2: Relationships Rule When working with words, students need to look beyond dictionary definitions and really get to know that the REAL meaning of words exists between them . In other words, it is the relationship between words that carries the real meaning in a text. Therefore, the activities that we provide within a vocabulary center should reinforce the relationship between words, the meaning of the word within the context of the passage, and the cognitive process we want students to practice with those words. David Hyerle developed a series of graphic aids that reflect eight different cognitive processes we would want students to practice. He calls these Thinking Maps . 
On the surface, they resemble the graphic organizers with which we have become so familiar in classrooms. One major way that they differ, however, is that students actually construct them, rather than fill them in. And there are only eight of them, as Dr. Hyerle posits that beyond this, organizers are just variations on a theme, and, for simplicity's sake, it is better for students to be very familiar with eight major ways of organizing information than a hundred subtle twists. In the vocabulary training, I worked with teachers to create vocabulary activities that mirrored the various types of thinking maps and the types of thought processes we'd want students to experience with vocabulary about Ancient Rome. - Defining in context - Describing using adjectives - Comparing & contrasting - Showing cause and effect - Classifying and categorizing - Ordering and sequencing - Demonstrating part-whole relationships - Illustrating analogies For example, in the illustration, above, students are sorting vocabulary words regarding life in Ancient Rome, based on whether the word talks about the life of men at that time, women, or both men and women. There is also a category for words that the student is unsure of -- a great way to monitor for increased understanding over time. Principle #3: Think Tier 2 Linguists have divided English words into three categories, or tiers, based on their frequency and manner of use. (NOTE: It should be mentioned that these tiers have nothing to do with the three tiers of RtI [Response to Intervention]). The three tiers of vocabulary can be described as follows: - Tier 1 -- High frequency, basic sight words (e.g., day/night, color words, positional words, etc.) - Tier 2 -- High frequency words that exist across multiple content areas, have multiple meanings, or are frequently used in academic directions (e.g., root, cell, problem, base, array, justify, order). - Tier 3 -- Low frequency words that are subject specific (hypotenuse, ecosystem, fiduciary) When we choose words for explicity vocabular instruction, we should concentrate most heavily on Tier 2 words. Why? Because they are the kinds of words that we hear in mature, adult conversation, they occur in multiple settings and give us an opportunity to discuss meaning in context. In short, they give us the biggest impact in the shortest amount of time. So we look over our big list that we created at the beginning of the session, and we begin sorting our words into Tiers. Our final list is going to be mostly Tier 2, with some really important Tier 3 words, just a few, added. We will add Tier 1 words if the needs of our audience require them (e.g., support of second language learners). Another note about the photo here. You'll notice that I've included a blue card with the set number, the topic ("Gladiators") and the skill focus (concept comparison). I like to think that I'll remember, next year, what I used this packet of words for, but the reality is that this baggie of words looks alarmingly like the other 76 packets that I created last year. And I DON'T always put my stuff away in a timely fashion. So I need to remind myself! Principle #4: "7 plus/minus 3" I once visited a high school English class in a district that had adopted a district-wide focus on vocabulary instruction. 
The reaction of this teacher to the initiative was to write every vocabulary word to be covered that semester on his front white board -- about 150 words, words like alliteration, irony, foreshadowing, soliloquy, allegory, hyperbole, paradox, protagonist... All semester, those words sat there, staring at the students. I wondered if they had turned into one gray blur by day 5 of the semester. Learning theorists have found that there is a magic number of "things" that the human brain can mull over at a given time, and it is far fewer than 150. The general rule of thumb is that new items to be processed should number around seven, give or take three, depending on the age/ability of the audience and the difficulty of the material or the task in which it is imbedded. Think of the long numbers you have to remember: Social Security Numbers, bank account numbers, telephone numbers, passcodes. Notice that they are conveniently divided into groups of three or four digits. If they aren't, we do it ourselves. Some of us can remember a number larger than 10, but most of the important numbers we need to recall are 10 digits or less. So, back to our big list. I like to generate the master list, which may have 30-40 words on it. But, for an individual lesson or activity, I then select a subset of 4-10 words that is a good fit for that activity. In the photo, above, I demonstrated how I would use different colors of index cards for different sets of words, differentiating the difficulty of the words based on the readiness of groups of students. Some words might show up in multiple sets; some sets might include more Tier 1 words, some might have a few more Tier 3 words, some might have more complex forms of the Tier 2 words (e.g., artisans , instead of craftsmen ). But all sets would be limited to the rule of 7 plus/minus 3. I have found that this thinking has an impact even on the vocabulary work of the elementary grades, where students are often given worksheets of 20-25 words and asked to work with them during the week. That's about twice as many as would be recommended. Principle #5: Whole Group Before Independent Whenever you introduce a new activity to students, before they can do the task independently, they need to have the activity defined, modeled and practiced as a whole group. Once they have demonstrated that they can do the task with support of the teacher, then, and only then, do we move on to doing the activity as an independent learning center. Vocabulary activities as described here are a breeze in these days of Smart Boards. One fourth grade teacher that I coached was a master at creating vocabulary word activities, where he, and then the students, dragged virtual vocabulary cards around the Smart Board, as he conducted whole-group practice sessions with any new vocabulary activity. The students could then do the vocabulary activity in partners using the Smart Board (as a center), or using real cards at a learning station. Remember, scaffolding can still be provided in the form of print, props and peers, as students move to complete independence in any activity. One excellent way to accustom students to the activity as a whole group, prior to turning them loose on the activity in a center, is to involve them in the creation of the centers materials, as a class. Teachers thoroughly enjoy creating the activities using colorful index cards and markers -- and students will, too. 
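Whichever way the word sets get made, by the teacher or by the students, the "7 plus/minus 3" rule above is easy to mechanize. Here is a minimal Python sketch (the function name, the word list, and the set size are illustrative, not taken from the source):

```python
def chunk_word_sets(master_list, set_size=7):
    """Split a master vocabulary list into study sets of roughly 7 +/- 3 words."""
    assert 4 <= set_size <= 10, "keep each set within the 7 plus/minus 3 range"
    # A short final set can simply be folded into the previous one by hand.
    return [master_list[i:i + set_size]
            for i in range(0, len(master_list), set_size)]

# Illustrative Ancient Rome words (mostly Tier 2, a few Tier 3):
words = ["artisan", "citizen", "empire", "aqueduct", "toga", "forum", "legion",
         "republic", "senate", "gladiator", "patrician", "plebeian", "villa", "tribute"]
for word_set in chunk_word_sets(words, set_size=7):
    print(word_set)   # two sets of 7 -- one baggie (and index-card color) per set
```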
Rather than spending your evening hours creating students' vocabulary materials, plan to introduce the words through the students' creation of their own word sets. The photo, above, shows all the materials you need to create engaging vocabulary sets for students. I used sandwich sized zip-style baggies to store the word sets. You can use sticky file labels to mark the baggies with student names. Principle #6: Rules, Routines & Procedures Many an excellent learning station has gone south, quickly, when a teacher has neglected to be extremely explicit about how to "work" the station. I never assumed anything, when working with new routines. If I wanted the students to put finished work in a certain basket, I told them, as I physically put the paper in the basket. Then I marked the basket, "Put finished work here!" Otherwise, I found it on desks, under desks, in desks, in backpacks, on the floor, by the water fountain... And not just in elementary classrooms, either. I usually advise teachers to look over their weekly plan books as they ponder them (I usually did this on Sunday afternoon!). Then I identified one or two rules, routines or procedures that students would have to master in order to be successful at the activities I had outlined, and then I planned explicit mini-lessons for those routines: "Working With a Partner," "Selecting 'Just Right' Books," "Signing up for Conferences," "When You're Finished..." It saves hours of time (maybe weeks?) during the school year, to take the time right off the bat, to make sure that we are of one accord in the classroom. The Guiding Reading Book, by Fountas and Pinnell, has mini-lessons for many classroom routines -- ask your literacy coach if your building has a copy of either the elementary or middle grades version. Our Vocabulary Activities We developed an assortment of vocabulary activities that stretched students' thinking beyond the normal "word on one side, dictionary definition on the other" routine (links are provided for downloadable resources for each activity): "Children of Ancient Rome" -- Students work with differentiated word sets, color-coded by the difficulty level of the word sets. Using the VOCAB strategy, students work as a cooperative group to arrange their vocabulary words on the bulletin board, creating a web that shows the relationship of the words to one another. There is no "correct" answer, but students need to be prepared to share their thinking. "Life in Ancient Rome" -- Pairs of students sort their word sets based on whether the word relates to the life of a Roman man, Roman woman, both men and women of Rome or unsure. After sorting their cards, students transcribe their sort onto a sorting frame to submit. "Roman Gladiators" -- Students delve deeper into the various relationships between words, using a concept of definition frame to show what they know about Roman gladiators. Because this is a sophisticated tool, the teacher might pre-"program" portions of the frame to scaffold for independence (remember, "print" is one of our scaffolding techniques in independent learning centers). You will notice that the materials include a summary after each vocabulary activity. It is important, when using any organizer or handout, that students are able to summarize what they overall learning of the activity was. Have a joyous, peaceful, sunshine-y summer! Schools all over the state (and country) are communicating data to stakeholders through the use of data walls. Businesses have been doing this for decades. 
This article reviews some of the considerations when using data in various school settings, based on some of the observations I have made in schools over the past several years, as well as teacher feedback. Personal Data Walls: Teachers have re-discovered the importance and impact of sharing a student's progress with the student, and inviting that student into the data-driven decision-making process. Special educators have had students track their progress (fluency graphs, sight word checklists, DRA levels, etc.) for years. More and more, classroom teachers are having students do the same in the general ed classroom, and are seeing the impace on even the youngest learners. First-grade teachers at O'Brien STEM Elementary School in East Hartford use file folders as personal data walls for their students. Students graph their writing scores five times a year and post the graph on the left side of the folder, as well as keep track of their mastery of their first grade sight words in a graph on the right side of the folder. Photocopies of the graphs are sent home periodically to communicate with parents, and the students use their folders during conferences. The folder format creates a handy way to send scores and student work to the receiving second grade teacher at the end of the year. Classroom Data Displays In a previous post ("Getting Data Teams Up and Running, 2011 "), I shared one of the best classroom data displays that I've seen, where the second grade team at Mayberry Elementary School in East Hartford created a "walking data wall" to show student progress in the DRA2. Teachers at O'Brien found that placing their student reading group table near the display helped keep students focused on their goals. They also met with parents at this table, so that parents could see where their children fell in relation to their peers, in their reading progress. Public Data Displays When it comes to displaying data outside the classroom, teachers and schools have to make some decisions: - Who is the intended audience for this display (parents, other students, other teachers/staff)? - If parents, what is the intended purpose of the display (to inform, to teach, to call to action)? - If students, how will students (in your class and others) use the data? How will their attention be drawn to it? - If other teachers, how will the data be highlighted? What will be the intended action? Recently, I met with teams of teachers at O'Brien STEM Elementary School, in East Hartford, where we discussed how to make hall displays parent friendly. Here are some suggestions for sharing data with the community: - Use a small amount of data to show the reason behind the current classroom work (e.g., a bar graph showing DRA2 data to show why 'retelling' is the current area of focus in Grade 2). Parents can support school focus more easily if they understand why it is important. - Avoid "sorting" words like "below basic," "proficient," "substantially deficient," etc. While we may use these words as teachers, they do not evoke positive and encouraging thoughts in parents. Better to provide the goal, and visually show progress toward the goal. People will get the picture, without the "punishing" words. - Use lots of visuals to show, rather than tell. Classroom teachers can share photos and vignettes of ways that they addressed the data focus, to show parents the kinds of activities that support learners in that area. - Provide a pocket folder with "take-home" ideas. 
One fourth-grade teacher at O'Brien provided parents with ideas on how to support the grade-level focus at home. Other teachers provide a classroom newsletter as part of the display, to make the display interactive.
- Turn your display into a "waiting room." If there was something I wanted parents to see, I placed a desk next to it during parent conferences. Waiting parents could interact with the display while they waited for their turn at conferences.

The photo at left shows one school's approach to making hallway data displays parent-friendly. Click the photo to see their description of the display.

School Data Displays

I was at E. C. Goodwin Technical High School last week, and got a chance to take a good look at the data display they had in their main office conference room before the school data team meeting convened. Here were the components of the display (simply tacked to the bulletin board):
- The School Improvement Plan (front page with main goals showing)
- CAPT data graph, showing 5 years of standardized assessment data (reading, writing, math and science) for 10th graders
- NOCTI data graph, showing 5 years' performance on the standardized trades exam
- The school professional development plan and calendar for the year
- CMT and CAPT (state assessment) data, disaggregated by student graduating cohort
- Guiding questions for the School Data Team
- CAPT data graph showing 5 years of reading scores and 5 years of writing scores
- The Reading Action Plan from the SIP
- The most recent English Department Data Team process summary (i.e., their most recent data team minutes)
- A narrative description of their current strategy focus (a teacher-created strategy to make more meaningful connections to literature)
- Dipstick data on a scoring rubric
- CAPT data graph showing 5 years of math scores
- The Math Action Plan from the SIP
- Math screening data (from STAR Math) -- grade-level profile
- The most recent Math Department Data Team process summary
- A narrative description of their method of selecting a targeted group of struggling ninth graders for a focus group on working with exponents
- Dipstick data (via "quizlets") for the targeted student group
- Office discipline referrals for the year, by month
- A narrative of school-wide strategies being implemented to address ODRs
- CAPT data graph showing 5 years of science scores
- The most recent Science Department Data Team process summary
- A bulleted instructional plan to address the current student learning focus (developing a problem statement) in Science
- Summaries of professional development to address literacy, numeracy and comprehension strategies in the trades

The display clearly showed the alignment between district, school and departmental goals, as linked by their four guiding questions (as shown below). On a monthly basis, the team gathered for brief reports by department, then discussed school-wide strategies to address themes that emerged across disciplines. For example, their most recent debrief revealed a student learning issue around making meaning from text that was technical in nature: assignment directions, math word problems, scientific procedures, technical manual specs, etc. They then discussed the adoption of a school-wide strategy for paraphrasing technical texts (going from part to whole), as well as a school-wide strategy for analyzing problems and procedures (whole to part).

For more examples of public data displays, see my Pinterest board on School Data Walls.
The examples show different formats that schools have chosen to present data. Choose the format that best suits the purpose and culture of your team and school. I will continue to add to the board as I see examples to share, so check back often.

Will You Help Me?

Will you help me out by taking a brief survey on your experience with this page? (You will NOT be directed to an advertiser! This is for my research only!)

What's your favorite recommended read-aloud of all time? Post a link and a short description, with the hashtag #favoritereads, to this page...

We know that there are ways to review quiz results that are less than effective ("Okay, folks, number 4 is x+2... number 5 is 14... number 6 is..."). But we also know that we can't spend a whole class period working through each problem. Here is how one department uses its SmartBoards to create an engaging, effective way for students to review quizzes and assessments.

SmartBoard Item Analysis

Students, especially as they get older, don't like to ask for help or share when they get an answer wrong. But most of us (adults included) don't mind at all when someone asks us which problem we got right. Several science teachers at East Hartford Middle School, in East Hartford, Connecticut, have students come up to the SmartBoard and plot a "smiley" (or a star, in another class) next to the number of each problem that they completed correctly. In this example, the total number of students was 23. Most items had 21 correct responses (91%); the lowest correct response rate was the last item, where 19 students answered correctly (83%).

After students plot their responses, the class can talk about items that were particularly problematic, or which stood out from the others. Because of the anonymity of the task, a teacher can ask, "Why might so many students have given an incorrect response to number three?" and students can begin to analyze the problems that occurred in that item, without talking specifically about their own work:
- "It said to 'justify your answer,' and I wasn't sure what 'justify' meant..."
- "Maybe they weren't sure which were the dependent and independent variables, because it wasn't given..."
- "If they didn't draw their line graph well, they couldn't really find out the intercept..."
- "Maybe they didn't understand what the question was asking..."

At Seymour High School, a physics teacher has students use their cell phones as "responders," taking a common formative assessment that collects the answers in a shared spreadsheet on Google Docs. The teacher can tell what time a student takes his quiz, and from where. He closes the "window" at a certain time -- anyone who hasn't logged in by that time is locked out of the quiz. Then he creates a graph of the responses for each item, which he posts on his SmartBoard for the students to review. He can highlight the correct answer and focus on incorrect responses that seemed to trip up a chunk of students.

Of course, if you are fortunate, you can purchase student responders to use with your SmartBoard, like the Connecticut Technical High School System has. Practice tests and quiz review are never so much fun as when you can see the responses in real time, like a game show. For more articles on technology and feedback in the classroom, see below:

In other news...
Here are the other interesting tidbits that have passed over my desk this week:

Last week I posted some information on data teams, including some links to other schools' websites, with forms, schedules and videos. The Connecticut State Department of Education has posted videos that Connecticut educators can access using their educator identification number (on their State Teaching Certificate). See the CALI (Connecticut Accountability for Learning Initiative) page, sign in using your email and password, then click the "Media" tab.

If you loved Classroom Instruction that Works and The Art and Science of Teaching, then you will really love Visible Learning for Teachers, by John Hattie. A group of teachers and consultants pored over this book as part of a Data Teams training recently, and we were astonished by the true impact (or lack thereof...) of some of our tried-and-true strategies. I won't spoil the surprise. Definitely one for the administrator bookshelf.

We Give Books is a website where students can read children's books online, for free. Books read by online readers are matched with donations of books to one of several charitable causes aimed at putting books in the hands of children around the world. Bookmark the website for your computer center.

"I Need a Strategy to Help My Kids _______..."

This month, I have spent a lot of time with teachers, developing lesson plans, analyzing assessment data, and selecting targeted strategies to meet specific, data-based needs of students. Here are the highlights from March:

Based on the use of 12 Structure words, this strategy for explicitly teaching mental imagery is helpful for teaching students how to infer, develop more specific details in their retellings and writing, and comprehend the author's selection of words and images in the story. Here are some selected links for teachers interested in learning more about this powerful strategy:

QARs (Question-Answer Relationships) (Raphael)

Students often write weak responses to questions when they respond to literature, because they don't understand 1) what the question is asking them to do; 2) what KIND of question they are answering; and 3) where they would look for the right information to answer that question. QARs explicitly teach students how to determine the TYPE of question they are reading, in order to determine where they need to look for the correct information to answer the question. This article contains a clear and concise overview of the research and the use of QARs in the classroom, and includes planning tools to help teachers prepare a variety of question types and to help students work with various questions during guided and independent reading activities. A great resource.

This engaging, simple and effective tool is useful for building background knowledge collaboratively before beginning a new unit of instruction, for monitoring understanding and reflecting during instruction, and as an organizer for cooperatively summarizing learning before students independently summarize their own. I have found this to be a very successful strategy with students from the earliest primary grades through adult learners. And, incidentally, it is one that most of my teachers want to try first.

Looking for an easy way to organize your planning to meet the Common Core State Standards? Wondering how to show students that literacy is a balance of guided reading, self-selected reading, writing and working with words?
The Four Blocks Model uses a simple graphic to organize your planning around these four areas. All of their materials are coded by these four blocks, making it easy for you to see if you have all areas covered, and making it simpler to convey to students what part of literacy they are working on with a given activity. Teachers.Net has a web page dedicated to sharing ideas on using the Four Blocks Model for planning and instruction -- see "4 Blocks Literacy" for access to plans, chat boards and other resources.

Teachers from primary grades to high school struggle to get some of their students to understand main idea and supporting details, or to choose the most powerful and effective evidence to support their arguments in writing. The Four Square Writing Method helps students conceptualize the relationship between a topic, a main idea/thesis statement/argument, and the details that support it. It also helps younger students distinguish between interesting facts and important details when they write and respond to literature. By using a simple four-block graphic organizer, students gradually work from simpler conceptual relationships to longer writing pieces. I used this method when I worked with my students and found it to be a very versatile and effective thinking and writing tool. Teachers at Vermilion Parish Schools developed an online tool to help students use the Four Square Writing Method.

Thinking Maps are learner-created maps that demonstrate the structure of a particular cognitive process. On the surface, they resemble graphic organizers in their non-linguistic format. Unlike graphic organizers, however, they begin with a blank page and are built by the learner, based on the cognitive process we want the learner to use: defining, describing, categorizing, comparing, sequencing, showing cause and effect, showing part-whole relationships, and illustrating analogies.

You will notice that all of these strategies involve talking and thinking more than filling out papers. Why would that be so? What is this telling us about teaching? What is this telling us about our kids?
http://www.northsideconsulting.org/blog/literacy.aspx
A data type tells the computer the kind of value you are going to use. There are different kinds of values for various purposes. Before assigning a data type to a variable, you should know how much space the data type will occupy in memory. Different data types use different amounts of space in memory, and the amount of space used by a data type is measured in bytes.

To specify the data type that will be used for a variable, after typing Dim followed by the name of the variable, type the As keyword, followed by one of the data types we will review next. The formula used is:

Dim VariableName As DataType

This technique allows you to declare one variable on its own line. In many assignments, you will need to declare more than one variable. To do this, you have two alternatives. You can declare each variable on its own line. This would be done as follows:

Dim Variable1 As DataType1
Dim Variable2 As DataType2
Dim Variable3 As DataType3

You can also declare more than one variable on the same line. To do this, use only one Dim keyword but separate each combination of a name and data type with a comma. This would be done as follows:

Dim Variable1 As DataType1, Variable2 As DataType2
Dim Variable3 As DataType3

Microsoft Visual Basic also provides special characters for some data types so that, instead of spelling out the type, you can use that character. We will indicate which character goes with which type.

A variable is considered Boolean if it can hold only one of two values: True or False, 0 or not 0, Yes or No. To declare such a variable, use the Boolean keyword. Here is an example:

Private Sub Form_Load()
    Dim IsMarried As Boolean
End Sub

After declaring the variable and when using it, you can specify its value as True or as False. To convert a value or an expression to Boolean, you can call the CBool() function.

If you are planning to use a numeric value in your program, you have a choice among the different kinds of numbers that Visual Basic can recognize. You can use the Byte data type for a variable that would hold a natural number that ranges from 0 to 255. You can declare it as follows:

Private Sub Form_Load()
    Dim StudentAge As Byte
End Sub

If the user enters a certain value in a control and you want to convert it to a small number, you can use CByte(). The formula to use would be:

Number = CByte(Value to Convert to Byte)

When using CByte(), pass the value to convert between the parentheses.

An integer is a whole number. To declare a variable that would hold a number that ranges from -32768 to 32767, use the Integer data type. The Integer type should always be used when counting things such as books in a library or students in a school; in this case you would not use decimal values. Here is an example of declaring an integer variable:

Private Sub Form_Load()
    Dim Tracks As Integer
End Sub

When declaring an integer variable, you can omit the As Integer expression and terminate the name of the variable with %. Here is an example:

Private Sub Form_Load()
    Dim Tracks%
End Sub

If you have a value that needs to be converted into an integer, you can call CInt() using the following formula:

Number = CInt(Value to Convert)

Between the parentheses of CInt(), enter the value, text, or expression that needs to be converted.
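Before moving on to larger numbers, here is a minimal sketch that pulls together the declarations and conversions reviewed so far; the variable names and the literal values being converted are invented purely for illustration:

Private Sub Form_Load()
    ' Declarations using the data types reviewed above
    Dim IsEnrolled As Boolean
    Dim StudentAge As Byte
    Dim BooksCheckedOut As Integer
    Dim Tracks%                     ' the % suffix is shorthand for As Integer

    ' Conversions from strings or decimal values
    IsEnrolled = CBool("True")      ' becomes True
    StudentAge = CByte("14")        ' becomes the byte value 14
    BooksCheckedOut = CInt("28")    ' becomes the integer 28
    Tracks% = CInt(12.4)            ' rounds to the integer 12
End Sub

Whether you spell out As Integer or use the % suffix is largely a matter of taste; writing the data type out usually makes the code easier for other people to read.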
A long integer is a number that can be used for a field or variable involving greater numbers than integers. To declare a variable that would hold such a large number, use the Long data type. Here is an example:

Private Sub Form_Load()
    Dim Population As Long
End Sub

Alternatively, you can omit the As Long expression and end the variable name with the & symbol to indicate that you are declaring a Long integer variable. Here is an example:

Private Sub Form_Load()
    Dim Population&
End Sub

To convert a value to a long integer, call CLng() using the following formula:

Number = CLng(Value to Convert)

To convert a value to long, enter it in the parentheses of CLng().

In computer programming, a decimal number is one that represents a fraction. Examples are 1.85 or 426.88. If you plan to use a variable that would hold that type of number but precision is not your main concern, declare it using the Single data type. Here is an example:

Private Sub Form_Load()
    Dim Distance As Single
End Sub

If you want, you can omit the As Single expression in the declaration. Instead, you can type ! at the end of the name of the variable to still indicate that you are declaring a Single variable. Here is an example:

Private Sub Form_Load()
    Dim Distance!
End Sub

If you have a value that needs to be converted, use CSng() with the following formula:

Number = CSng(Value to Convert)

If you want to use a decimal number that requires a good deal of precision, declare a variable using the Double data type. Here is an example of declaring a Double variable:

Private Sub Form_Load()
    Dim Distance As Double
End Sub

Instead of using the As Double expression, you can omit it and end the name of the variable with the # character to indicate that you are declaring a Double variable. Here is an example:

Private Sub Form_Load()
    Dim Distance#
End Sub

To convert a value to double-precision, use CDbl() with the following formula:

Number = CDbl(Value to Convert)

In the parentheses of CDbl(), enter the value that needs to be converted.

The Currency data type is used to deal with monetary values. Here is an example of declaring it:

Private Sub Form_Load()
    Dim StartingSalary As Currency
End Sub

If you want to convert a string to a monetary value, use CCur() with the following formula:

Number = CCur(Value to Convert)

To perform this conversion, enter the value in the parentheses of CCur().

In Visual Basic, the Date data type is used to specify a date or time value. Therefore, to declare either a date or a time variable, use the Date data type. Here are two examples:

Private Sub Form_Load()
    Dim DateOfBirth As Date
    Dim KickOffTime As Date
End Sub

If you have a string or an expression that is supposed to hold a date or a time value, to convert it, use CDate() based on the following formula:

Result = CDate(Value to Convert)

In the parentheses of CDate(), enter the value that needs to be converted.

A Variant can be used to declare any kind of variable. You can use a Variant when you can't make up your mind regarding a variable but, as a beginning programmer, you should avoid it.

Here is a table of the data types reviewed above and the amount of memory space each one uses:

Data Type    Memory Used
Byte         1 byte
Boolean      2 bytes
Integer      2 bytes
Long         4 bytes
Single       4 bytes
Double       8 bytes
Currency     8 bytes
Date         8 bytes
Variant      16 bytes or more, depending on the value stored

When naming your variables, besides the rules reviewed previously, you can start a variable's name with a one- to three-letter prefix that identifies the data type used. Here are a few suggestions: bln for Boolean, byt for Byte, int for Integer, lng for Long, sng for Single, dbl for Double, cur for Currency, and dtm for Date.
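To tie the decimal, monetary, and date/time types together, here is a small sketch along the same lines; the variable names and sample values are made up for the example, and MsgBox (a standard Visual Basic statement) is used only to display the results:

Private Sub Form_Load()
    ' Floating-point values: Single for ordinary precision, Double for more precision
    Dim Distance As Single
    Dim Ratio As Double
    ' Monetary and date/time values
    Dim HourlyWage As Currency
    Dim DateHired As Date

    Distance = CSng("12.75")            ' string converted to a Single
    Ratio = CDbl("0.33333333333333")    ' string converted to a Double
    HourlyWage = CCur("22.50")          ' string converted to Currency
    DateHired = CDate("10/24/2005")     ' interpreted using your regional date settings

    ' Display two of the converted values
    MsgBox "Hired " & DateHired & " at " & HourlyWage & " per hour"
End Sub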
In the above sections, we saw how to declare a variable from a built-in data type. Besides these types, Microsoft Access and Microsoft Visual Basic ship with objects, as we introduced them in Lesson 2. Sometimes you will need to refer to such objects in your code. In most cases, you will need to first declare a variable of the desired type before using it.

To declare a variable of an object type, you should first make sure you know the type of object you want. Every object you will use in your application is primarily of type Object. In many cases, you will be able to use the object directly in your application. In some other cases, you will first need to declare the variable and initialize it before using it. Also, in many cases, you can declare a variable and specify its particular type. In some cases, you may not know, or may not need to specify, the particular type of the object you want to use. In that case, when declaring the variable, you can specify its type as Object. When using the Object type to declare a variable, the variable should be one of the existing VBA object types and not one of the basic data types we saw earlier. This would be done as follows:

Dim objVariable As Object

After this declaration, you should then initialize the variable and specify the actual type it would be. To initialize a variable declared as a VBA object, use the Set operator that we will see later.

In Lesson 2, we saw that a Microsoft Access database was an object of type Application. In your code, to declare a variable of this type, you can type:

Dim app As Application

If you want to refer to such an object outside of Microsoft Access, you must qualify it with the Access object. For example, from an application such as Microsoft Word, to declare a variable that refers to a Microsoft Access database, the above declaration would be made as:

Dim app As Access.Application

Even in Microsoft Access, you can use Access.Application.

A constant is a value that does not change while the program is running. There are two types of constants you will use in your programs: those supplied to you and those you define yourself. Visual Basic provides the vbCrLf constant, used to interrupt a line of text and move to the next line. PI is a mathematical constant whose value is approximately equal to 3.1415926535897932. It is widely used in operations that involve circles or geometric variants of a circle: cylinder, sphere, cone, etc.

A variable is said to be Null when its value is invalid or doesn't bear any significant or recognizable value. An expression is said to be false if the result of its comparison is 0. Otherwise, the expression is said to bear a true result.
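To close the lesson, the following sketch previews how an object variable, the Set operator mentioned above, a constant you define yourself, and the vbCrLf constant look in practice. The variable and constant names are invented for the example, and the Application object's Name property is used simply to have something to display:

Private Sub Form_Load()
    ' A constant you define yourself: its value cannot change while the program runs
    Const PI As Double = 3.14159265358979

    ' An object variable is declared with a type, then initialized with Set
    Dim app As Access.Application
    Set app = Application               ' refers to the current Microsoft Access session

    ' vbCrLf moves the rest of the message to the next line
    MsgBox "Running in " & app.Name & vbCrLf & _
           "PI is approximately " & PI

    ' Release the object variable when you are done with it
    Set app = Nothing
End Sub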
http://www.functionx.com/vbaccess2003/Lesson03b.htm