0. In the Beginning... A Summary of Stellar Formation Stars start out as interstellar clouds of gas and dust. If you were inside such a cloud, you probably couldn't tell that there was anything at all there, because the gases which make up the clouds are incredibly thin. Each cubic inch of these clouds contains only a few dozen to a few hundred atoms, while each cubic inch of our atmosphere contains almost a billion trillion atoms. If you had to expand a single cubic inch of our atmosphere until it was as thin as the gases in an interstellar cloud, it would be almost 200 miles on a side. Although the clouds are incredibly rarefied, they are also incredibly big, stretching for trillions of miles in all directions. Because of their huge size, even though there is practically nothing at any given place within them, the huge extent of practically nothing adds up to substantial masses, hundreds of thousands of times greater than the mass of the Earth, like that of the Sun. Because the material of the cloud is spread out over such a huge volume of space, the gravity caused by its mass is incredibly small, and under normal circumstances, it cannot force the cloud to contract to a smaller size. But under some circumstances, the clouds ARE forced to contract to smaller sizes. During such contractions, gravity gradually increases, and if its force becomes large enough, the thin gases within the cloud will not be able to exert enough outward pressure to prevent the gravitational pull from contracting the cloud to still smaller sizes, and so the cloud will continue to contract. Although gravity is trying to make the cloud smaller, the pressure of the gases within the cloud is trying to stop the contraction. At first the pressure is negligible, since the gases are so incredibly thin, but as the cloud gets smaller, the gases become denser and hotter. The inward pull of gravity tries to make the gases move inward with greater and greater speeds, but random collisions between the atoms of the gas tend to convert this inward motion into a random sub-microscopic movement, which we perceive as heat. As the cloud contracts, greater and greater amounts of inward movement are converted into faster and faster microscopic motions, or greater and greater amounts of heat. The greater temperatures which result, combined with the greater density of the gas, create a continually increasing pressure, which fights against gravity. As the heat generated by the contraction of the gases increases, pressure gradually rises, until it equals the gravitational forces, stopping the inward motion of the cloud. If this occurs while the cloud is still very large, and not very warm, no further contraction will occur. But if this balance does not occur until the cloud has contracted a long way, and has therefore generated a large amount of heat, some of the heat will be radiated away in the form of infrared light. The heat lost in this way reduces the ability of the gas to hold up against the pull of gravity, and causes a slow, semi-equilibrium contraction of the cloud. At each stage of the contraction, pressure and gravity are in balance, and if no more heat were radiated away, the contraction would stop, but the continual radiation of infrared light at the outside of the gradually warming cloud prevents this, and allows gravity to have a slow but steady victory over pressure. Although the loss of heat at the outside of the cloud forces it to contract, there is a limit to how far the contraction can go. 
As the cloud continues to contract, temperatures within the cloud continue to rise. By the time that the cloud is as small as a star like the Sun, the central temperatures have risen to many millions of degrees, and a conversion of hydrogen to helium begins, in a process known as thermonuclear fusion. At first, this conversion is slow, and produces only a small amount of energy, but as the star continues to contract, the rate of nuclear fusion increases, producing more and more energy. The heat generated by this fusion helps replace the heat being lost at the outside of the star, slowing the rate of contraction. The closer the star gets to a stable size, the closer the core approaches an equilibrium temperature at which the nuclear reactions produce exactly as much heat as is being lost on the outside. When the star reaches that temperature, there is no longer any net loss of heat, and so the star's contraction finally ends. 1. The Formation of the Solar Nebula A cloud which is trying to contract to become a star has several problems to overcome before it can do so. One of these is the outward pressure, discussed above. In this case, the solution to the problem is to heat the cloud to a high enough temperature so that some of the heat is being continually radiated away, reducing the amount of heat left over to create pressure. Another problem is angular momentum. Even while the cloud is huge, it must have some tiny amount of rotational motion. The random motions which occur in different parts of the cloud will probably nearly cancel each other out, so that any overall motions are small, but it is not likely that they will exactly cancel, and so some tiny rotational motion is to be expected. In many cases, the rotational motion of the cloud may be so large that it is impossible for the gravity of the cloud to overcome it, and the cloud remains as a large cool blob of gases, but the existence of so many stars in our Galaxy shows that there must be various ways in which stars can overcome this problem. One way is for the cloud to break up into two or more blobs, revolving around each other--a binary or multiple star system. Such systems are in fact quite common. Between one-third and one-half of all the stellar systems in our Galaxy are thought to consist of such multiple stars. Since we do not know of any companions to the Sun, it seems that our Solar System solved the problem in a different way. Presumably the amount of rotation in the cloud was fairly small, and as a result, those parts of the cloud which happened to have a smaller amount of rotation could fall inwards more-or-less uniformly, forming a large, roughly spherical ball near the center, which became the Sun, while those parts of the cloud which were rotating the fastest formed a flattened circular disk rotating around the central ball, the Solar Nebula. As the cloud contracted, parts of it which were moving parallel to the axis of rotation (getting closer to the plane of rotation) would not have had their inward motion affected by the rotation, but parts which were moving in the plane of rotation (getting closer to the axis of rotation) would have gradually increased their rotational speeds, just as ice skaters spin faster by pulling their arms towards their bodies. The same thing can be seen in the motion of the planets around the Sun; Kepler's Law of Areas is mathematically equivalent to the Law of Conservation of Angular Momentum which determines how rotating objects speed up as they get closer to their axis of rotation. 
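The equivalence mentioned above between Kepler's Law of Areas and the conservation of angular momentum can be written out in two lines; this is standard textbook material rather than anything specific to this page. For a body of mass m at distance r from the axis, turning with angular velocity ω = dθ/dt, the rate at which area is swept out and the angular momentum are

    \frac{dA}{dt} = \tfrac{1}{2} r^2 \frac{d\theta}{dt} = \frac{L}{2m}, \qquad L = m r^2 \omega .

A constant rate of sweeping out area (Kepler's law) is therefore the same statement as a constant L (conservation of angular momentum), and the ice-skater example follows directly: if r is halved while L stays fixed, ω must increase by a factor of four.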
Below, an artist's impression of the formation of a circumstellar disc around a star. Rotation of material falling toward the star causes it to form a circular disc surrounding the star (in the case of our Sun, this is referred to as the Solar Nebula). As material falls toward the star, it is heated and compressed. For smaller stars, most of the material may become part of the star; but for hot, bright massive stars (such as represented by this image) much of the infalling material is ejected into space, at very high velocity. Because of the density of the material in the circumstellar disc, the ejected material is forced to follow a path perpendicular to the plane of the disc, forming two "polar" jets. (Credit: L. Calçada/M. Kornmesser, ESO) 2. The Formation of the Planetesimals In addition to gases, interstellar clouds contain microscopic particles of solid materials, the interstellar "dust". Most of the material in the clouds can only exist as gases in interstellar space, and so the dust is even more thinly spread out than the gases. Each cubic mile of a typical cloud contains only a dozen or so microscopic dust particles, but again, as in the case of the gases, the huge volumes of the clouds allow large total masses of dust in the range of several hundred Earth masses. Observations of interstellar clouds show that most of the dust probably consists of ices, especially water ice, carbon soots, and metallic oxides, especially silicates. As the cloud contracted to form the Solar Nebula, the continually increasing density of the cloud allowed collisions between the dust grains to occur, rarely at first, then more and more frequently. These collisions gradually built up larger and larger dust grains, which must have consisted of roughly random accumulations of the various types of dust materials. By the time that the cloud had contracted to form the Solar Nebula, it must have contained trillions of trillions of sand-grain and smaller-size particles, made mostly of "dirty" ices. The planets formed from further collisions of these particles, and so the current orbital motions of the planets must reflect the motions and distribution of these small grains at the time that the Solar Nebula had finished its contraction. Because of this, we believe that the Solar Nebula must have been 60 to 80 AUs in diameter, and only a few AUs thick. The plane of the Nebula must have been the same as the current plane of the planets' orbits, and the direction of rotation of the Nebula must have been the same as the eastward revolution that the planets still exhibit. As the Solar Nebula formed, collisions of dust grains would be gradually building up larger and larger dirty snowballs, or "planetesimals". At first, these would be small, but by the time that the Sun had nearly finished its formation, many of them would be tens, hundreds, or even thousands of feet in diameter. The compositions of these planetesimals would depend upon the temperatures in the various regions where they were forming. We can determine the temperatures which must have existed at that time by looking at the compositions of the planets, their satellites, and meteorites (most of which come from the asteroid belt). In the outer solar system, the satellites of the planets are made mostly of ices, and so temperatures there must have been below 0°F. 
Some meteorites, called carbonaceous chondrites, contain carbon compounds and other volatile (easily vaporized) materials which could not have survived temperatures much higher than 600°F, and so the region that they came from cannot have had temperatures warmer than that for very long. The darker asteroids in the outer asteroid belt have optical properties similar to these "primitive" meteorites, and so we believe that the outer asteroid belt must have had temperatures of 200 to 600°F, while the areas closer to the Sun, where the asteroids are lighter-colored, and must not have as many carbon compounds, must have had temperatures of 800 to 1000°F. In a similar way, we can estimate temperatures near the orbit of Mars at 1000 to 1500°F, near the orbit of the Earth at 1500 to 2500°F, and near the orbit of Mercury at 3000 to 4000°F. These seem like very high temperatures, but theories of stellar formation show that in the last stages of its formation the Sun would have been 50 to 200 times brighter than it now is, causing very high temperatures in the regions close to it. Regions further away escaped such high temperatures, aside from their greater distance, because there was so much gas and dust swirling around the Sun that it would have been difficult to see, even from the orbit of the Earth. Under such circumstances, temperatures drop very rapidly in the inner Solar System, and more slowly in the outer Solar System. At the orbit of Jupiter, temperatures would have been a little below freezing, and at the orbit of Neptune, temperatures were probably just about as cold as now. Temperatures in the early solar system (shown in red) must have been much hotter inside the orbit of Jupiter, than now (shown in black). Midway between Mars and Jupiter (vertical line), current temperatures are 200 degrees below zero Fahrenheit, but during the formation of the planetesimals, temperatures in that region must have been around 600 degrees above zero Fahrenheit. In the Sun and other stars, and in the interstellar medium from which they formed, 90% of the atoms are hydrogen atoms, and almost all of the remaining 10% are helium atoms. Under the conditions in the Solar Nebula, these materials could only exist as gases. The remaining 1/10th of 1% of the atoms are mostly carbon, nitrogen, and oxygen. Their compounds with each other, and with hydrogen, can only be solid at low temperatures, and therefore could not exist as solids in the inner solar system. Only metals, such as silicon, iron and titanium, which make up only one ten-thousandth of 1% of the Solar Nebula, can form oxides and other compounds which can withstand the high temperatures in the inner part of the solar system. Within each part of the Solar Nebula, whatever grains could survive the temperatures in that region would be building up into planetesimals. In the outer solar system the temperatures would be low enough so that almost all of the original interstellar dust materials could remain solid, and so in that region dirty snowballs would be forming, but in the inner solar system, only rocky objects could form. Because the metal oxides which the rocks are made of are so rare, the rocky planetesimals would have been relatively small at any stage of the planets' formation, but in the outer solar system, where not only rocky materials, but also carbon compounds and ices could exist as solids, the larger amount of solid materials should have allowed the planetesimals to grow much faster. 
As a result, late in the formation of the planets the inner solar system must have contained fairly small rocky bodies, similar to the current asteroids, while the outer solar system must have contained fairly large dirty snowballs, perhaps even as large as the satellites of the Jovian planets.

3. The Formation of the Jovian Planets and the End of the Solar Nebula

In the early part of their formation, the planetesimals would be too small to have any significant gravity. Even though 99.9% (or more) of the materials in any part of the Solar Nebula would be gases, the planetesimals could not hold on to any part of these gases. As they moved around among the gases, the gases might pile up temporarily around the planetesimals, but their gravities would be too weak to permanently hold onto the gases. This would be especially true in the inner solar system, where the smaller sizes of the rocky planetesimals would cause them to have smaller gravities, but even in the outer solar system, the dirty snowballs would be mostly too small to have any chance of gravitationally holding on to the gases through which they were moving. As they became larger, however, the gravities of the planetesimals would increase, and in some cases, they might become large enough to hold on to the gases through which they were moving. As soon as they did so, they would begin to accumulate mass at a much faster rate, since there were far more gases than solid materials in any region. While they were too small to hold on to such gases, they could only accumulate the less than 1/10th of 1% of the material that they ran into that happened to be solid, but once they could hold on to gases, they could accumulate a much greater fraction of the material that they ran into, allowing them to grow as much as a thousand times faster than before. Within a short period of time they would mushroom to much greater sizes and masses. This is exactly what must have happened to the Jovian planets. In the last stages of their formation they must have become very large dirty snowballs, with gravities large enough to hold on to even hydrogen and helium. As they moved around the Sun, sweeping up everything in their paths, they grew at an enormous rate. This would be especially true for Jupiter and Saturn. Closer to the Sun, they would be running around in a denser part of the Solar Nebula than Uranus and Neptune, and at a faster velocity. So Jupiter and Saturn would accumulate much more gaseous material, ending up with a composition almost equal to that of the huge amounts of gases swept up in the last stages of their formation, while Uranus and Neptune would accumulate smaller amounts of gases, ending up with a composition intermediate between that of the gases and their original cores. At the end of the Sun's formation, strong "winds" developed, probably driven by a very rapid rotation of the young star, carried out into the Solar Nebula by the Sun's magnetic field. Within a hundred thousand years or so, virtually all of the gases in the Nebula would be driven out of the solar system, pushed back into interstellar space. This cannot have happened before the cores of the Jovian planets became large enough to capture light gases, or they would still be just large dirty snowballs. Some lines of evidence seem to indicate that most of the gases of the Jovian planets accumulated in a very short period of time, possibly less than ten thousand years.
Therefore, the Jovian planets' cores must have become large enough to accumulate gases just before the end of the dispersal of the Solar Nebula. This also appears to be verified by the characteristics of the Earth and Venus. At the time that the cores of the Jovian planets were approaching the critical sizes required to accumulate gases, the cores of the Terrestrial planets must also have been approaching their final sizes, but although these planets are too small to hold on to the hydrogen and helium which made up most of the mass of even the inner Solar Nebula, their finished sizes are big enough to hold on to heavier gases, such as carbon dioxide and water vapor. These heavier gases are not nearly as abundant as the lighter gases, but they are still many times more abundant than the rocky materials which make up the Terrestrial planets. As a result, if the Earth and Venus had reached their current size while the Solar Nebula still existed, they would have become several times more massive. As a result, it appears that the planetary system could have ended up quite differently than it did. The buildup of the planetesimals into the final sizes of the planets must have finished at almost the same time that the Solar Nebula was destroyed by the strong solar winds of the newly-formed Sun. If the planets had formed a little later, there would be no Jovian planets. If they had formed a little sooner, there would be one or two more Jovian planets (the Earth and Venus, or a combined single object). 4. Why Did the Terrestrial Planets End Up Different From the Jovian Planets? In some discussions of the origin of the planets it is stated (or at least implied) that the Terrestrial planets formed in a region which was swept clear of gases by the young Sun, while the Jovian planets formed in a region which had lots of gases. The reasons for this statement are that if the Earth and Venus had reached their present sizes while they were still surrounded by gases, they would have ended up with large amounts of those gases, and if the Jovian planets had reached a core size large enough to accumulate those gases only after they were gone, they would still be just large dirty snowballs. The trouble with this idea is that it is easy to take it too far, and to imagine that throughout most of the time that the planets were forming, there were significant differences between the different parts of the solar system. So you may easily get the incorrect impression that during the time that the Terrestrial planets were forming, the inner solar system had very little, if any gases, and that most of the gases were in the outer solar system. Now there must indeed have been many differences between different parts of Solar Nebula. The gases (and other materials) must have been relatively dense close to the Sun, and much more rarefied far from it, with the outermost traces of reasonably dense gases not much more than 30 AUs from the Sun. The gases must have been moving around the Sun at relatively high speeds near the orbit of Mercury, and ten times lower speeds near the orbit of Pluto. And the temperatures must have been very high near the Sun, and very cold far from it, so that close to the Sun, only rocky materials could exist as solids, while far from the Sun carbon compounds and even ices could exist as solids, but although there were differences in the conditions at different distances from the Sun, we don't have to assume that there were significant compositional differences. 
There are no physical or chemical reasons why there should be more of any given type of material in one part of the cloud than in another part. Just because water ice couldn't exist near our orbit doesn't mean the materials which make up water didn't exist here. They would simply have had to exist as gases.

The Composition of the Solar Nebula

One common error in thinking about differences between different parts of the Solar System is the idea that heavy things must have fallen towards the Sun, and lighter things floated away from it. This error is based on the fact that the inner planets are dense, rocky objects, while the ones further away are made of lighter ices and gases. This idea is reinforced by the fact that when the Earth was young, it must have melted and differentiated (heavy things sank to the middle, to form the dense metallic core, and lighter things floated to the top, to form the lighter mantle and crust). But in the case of the Earth, everything is just "sitting" here, and the effects of buoyancy and density can act on its materials with no complications. In the case of the Solar Nebula, everything is actually in orbit around the Sun, and for orbital motions, it doesn't make any difference what things are made of, because gravity acts the same on all orbiting objects, whatever their mass, size, density or composition. As a result, there is absolutely no reason for light objects to end up in one place, and dense objects to end up in another. (To realize that this must be the case, remember that the Sun, which is in the center of the Solar System, is made almost entirely of hydrogen and helium, which would not be the case if heavy things tended to fall towards the Sun more than light things.) As a result, the proportions of different materials in different places should be just the same as in the Sun. Only the percentages which are solid -- which depend upon temperature -- would vary from place to place, and the fact that the denser solid materials are closer to the Sun is not due to the fact that they are dense, but that they can remain solid at high temperatures. You can see how this works in the diagram of the composition of the Solar Nebula (amount of material increasing upwards, distance from the forming Sun increasing to the right). The relative proportions of various materials are the same at all distances from the Sun, as indicated by the parallel nature of the curves showing the amounts of different substances -- primarily hydrogen, with lesser amounts of helium, still lesser amounts of medium-weight atoms (carbon, oxygen, and nitrogen) and their compounds with each other and hydrogen, and still smaller amounts of heavier atoms, such as silicon and iron, and their oxides. However, the solid proportion of these materials, shown by the red curves, is extremely variable. Far from the Sun, where it is cold, there would be tens of Earth masses of icy materials. Closer in, the materials that make up these ices would still be common, but they are all vaporized by the higher temperatures. Rocky materials would exist almost everywhere, because they can withstand higher temperatures, but in the outer Solar System, the relative rarity of the materials which make up the rocks would make them relatively unimportant, in comparison to the much more common icy materials.
So, far from the Sun, we would have large amounts of snow, and small amounts of rocks; and close to the Sun, small amounts of rock; and everywhere, large amounts of gas -- but the gas is of minor importance until, late in the game, the planetesimals get large enough to gravitationally attract it (as discussed below). The most likely explanation of the problem with the lack of gases near the Terrestrial planets is that the planets probably formed at slightly different rates (see diagram below). In the inner solar system, where only rocky bodies could form, the materials out of which they were forming represented only a very tiny fraction of one percent of the total mass of the Nebula, whereas in the outer solar system, where icy materials could also exist as solids, the materials out of which the proto-Jovian planets were forming would have been tens or hundreds of times more abundant. Thus, at the stage where the Jovian planets were getting close to their final size, and just becoming capable of holding on to light gases, the Terrestrial planets were probably no more than half their final sizes, and not capable of holding on to even heavy gases. As stated on the previous page, even the Jovian planets must not have gotten large enough to hold on to hydrogen and helium until near the end of the dispersal of the Solar Nebula by the solar wind, so the Terrestrial planets, which wouldn't have gotten large enough to hold on to gases until a bit later, would simply have reached their final size too late to have any gases left around them. In other words, although we probably have to assume that the Terrestrial planets finished their formation in a region nearly free of any gases, we don't have to assume that they started their formation in such a region. Out of the few million years that it probably took to build up planetary bodies from the originally microscopic dust grains, a lag of a few hundred thousand years in the final formation of the inner planets would be quite adequate to explain how they could have finished their formation after the dispersal of the Solar Nebula, while the outer planets, forming just a little earlier, could have finished their formation just before the end of the dispersal, and been able to hold on to whatever gases were still left in the outer solar system. (More to come on this topic at a later date.)

Rates of Growth in Different Parts of the Solar Nebula
(Diagram showing planetary sizes as a function of time; Bya = billion years ago.)

One way to look at the growth of planets is to consider how rapidly things would grow in different parts of the Solar System. As shown in the diagram above, in the asteroid belt, where there was relatively little material, things grew slowly, and, because there wasn't enough material to allow an effective sweeping up of any materials left over after the initial formation, continued to grow very slowly, all the way to the present time. In the case of the Earth and the other Terrestrial planets, the much larger amounts of material present in the regions where they were forming allowed them to grow to larger sizes in relatively short periods of time, and to eventually become large enough to hold onto heavy gases. In the case of the Jovian planets, which formed in regions with large amounts of solid ices, the growth to large size was even faster, and they were able to reach a size large enough to hold onto gases at just about the same time that the Sun blew away (or finished blowing away) the remnants of the Solar Nebula.
Since there were, even at that time, far more gases than solid materials, the ability to hold onto these gases allowed the Jovian planets to mushroom in size at a very rapid rate. Current estimates are that the Jovian planets got big enough to hold onto gases only a few tens of thousands of years before the Sun finished blowing away the gas in the Solar Nebula; if they had gotten big enough to hold onto gases a few hundred thousand years earlier, Jupiter in particular might have ended up as a small star, instead of as a large planet. If they had gotten big enough to hold onto gases a few hundred thousand years later, there would have been very few gases left for them to pile onto their icy cores, and the largest of them might not have had even as much gas as Neptune. A few million years later, when the Terrestrial planets reached sizes large enough to hold onto gases, those gases had long since been blown out of the Solar System, and so they were unable to accumulate any gases (this of course leads to the question of how the Terrestrial planets ended up with the atmospheres they have, which is discussed in The Formation and Evolution of Planetary Atmospheres).

5. The Age of the Solar System and Its Early History

The Sun and planets must have been formed about 4.5 billion years ago. This date is determined by studying the characteristics of rocks which contain small amounts of radioactive substances. If the mineral grains which contain such materials have not been altered significantly since their formation, the decay products will be trapped in those minerals, but the decay products do not have the same chemistry as the original radioactive materials, and so they stick out like a sore thumb when detailed chemical and physical studies of the minerals are made. By comparing the fraction of radioactive materials which have already decayed to the total amount of such materials, and measuring the rate at which such materials decay in the laboratory, it is possible to determine the "age" of the rock. Of course, this is only an estimate in many cases, and if the rock has been altered in some significant way since the minerals were first formed, it may not be an accurate indication of how long ago that was, but if we look at many samples from various places, the overall results are almost certainly correct. The age of the Solar System is determined by the study of Earth rocks, Moon rocks, and meteorites. The oldest rocks which we have discovered on the Earth only date back to 3.8-3.9 billion years ago. The Earth itself must be somewhat older than that, as these rocks are all sedimentary and metamorphic rocks, meaning that they were formed from the compression and alteration of sediments derived from the weathering and erosion of still older rocks, but no samples of those older rocks are known to still exist. It is therefore difficult to estimate the true age of the Earth from direct study of Earth rocks, but calculations based on the relative distribution of the decay products of radioactive materials in rocks of various ages seem to imply an age somewhere in the range of 4 to 5 billion years. The rocks which the Apollo astronauts brought back from the Moon give us a slightly more accurate estimate of the actual age of the solar system. Most of these rocks are basaltic lavas from the lunar maria, and date only to 3.3 to 3.8 billion years ago, but some of them are heavily fractured granitic rocks which appear to have been blasted off the lunar highlands, and date to over 4.3 billion years ago.
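As a side note on the dating method just described, the arithmetic itself is short. The sketch below is only an illustration: it assumes a closed mineral with no daughter atoms at formation, and it uses the rubidium-87 to strontium-87 system with its textbook half-life of roughly 48.8 billion years; the article does not say which isotope systems were actually used, and real age determinations must also correct for daughter atoms that were present from the start.

# Minimal sketch of the decay arithmetic behind radiometric dating:
# if a mineral started with no daughter atoms, the ratio of daughter to
# surviving parent atoms fixes the age.  Illustrative example only.

import math

def age_from_ratio(daughter_per_parent, half_life_years):
    """Age in years, assuming a closed system and no initial daughter atoms."""
    decay_constant = math.log(2) / half_life_years
    return math.log(1.0 + daughter_per_parent) / decay_constant

if __name__ == "__main__":
    # Rubidium-87 -> strontium-87, half-life about 48.8 billion years
    # (a standard textbook value, used here only as an example).
    half_life = 48.8e9
    # A measured daughter/parent ratio of about 0.066 corresponds to
    # an age of roughly 4.5 billion years:
    print("%.2f billion years" % (age_from_ratio(0.066, half_life) / 1e9))

With these assumptions, a measured daughter-to-parent ratio of about 0.066 corresponds to an age of roughly 4.5 billion years, the figure quoted in the text.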
This 4.3-billion-year date implies that the Moon must have formed a little earlier than that, but again, just how much earlier could be difficult to estimate. The best estimates of the age of the solar system seem to come from certain primitive meteorites, which appear not to have been significantly altered since they were formed. They exhibit a range of ages, but most of their ages cluster closely around a value of 4.5 billion years. Since this is in reasonable agreement with the best estimates that we can make from the Earth and the Moon, we believe that this is the true age of the Solar Nebula, the Sun, and all the other bodies in the solar system. The total history of the accumulation of the planetesimals into planets and other solid bodies probably did not encompass more than a few million years, and in comparison to the 4500 million years or so back to the beginning of the solar system, represents only an instant. So we should probably consider all of the bodies which we now see as being of essentially the same age. Towards the end of their formation, the planets must have undergone a period of melting. Certainly the differentiation of the Earth, with its heavy metallic core, and lighter rocky mantle, requires some such period of melting. The exact time that this occurred can only be estimated, but probably was very close to the initial formation of the planets, as even the Moon seems to have passed through such a molten state at a very early date. Like the Earth, the Moon has a differentiated crust, with a low-density granitic "slag" forming the bulk of the highland surface of the Moon. This implies that the Moon must have melted, differentiated, and then begun to re-solidify before the date, 4.3 to 4.4 billion years ago, which we determine as the "age" of the rocks which were recovered from the Moon. This means that the period of melting must have been within the first 100 million years after the formation of the planetary bodies. The heat required to produce this melting appears to have been caused by the decay of short-lived radioactive materials. These materials are created inside supernova explosions, and one or more such explosions must have occurred in the region near the interstellar cloud which became the solar system within the last few tens of millions of years prior to the formation of the solar system, in order for any significant amounts of such radioactive substances to have still existed at the time that the solar system formed. But this would not be surprising, as we believe that the Sun, like most stars, probably formed in a group or cluster of stars, and if any of those were much more massive than the Sun, they could easily have formed, lived out their lives, and died, all during the time that the cloud which became our solar system was hovering on the edge of contraction. In fact, some primitive meteorites have unusual abundances of very heavy atoms which are therefore thought to be at least partly the decay products of extremely heavy atoms which cannot normally exist in nature, except for short times after they are created in supernova explosions, and before they have had a chance to decay. As a result, we feel certain that at the time the planets were forming, they contained significant amounts of short-lived radioactive substances which would soon decay and disappear.
If those substances were permanently trapped inside small bodies, such as the primitive meteorites, then the heat generated by the decay of these materials would easily leak to the surface and be radiated away into interplanetary space, but if they were trapped inside larger bodies, such as the asteroids or planets, it would take a long time for the heat to leak through the thicker layers of rocky materials, and so heat would build up inside the larger bodies. According to current estimates of the amounts of such radioactive substances in the early solar system, any bodies more than 50 to 100 miles in diameter would soon accumulate so much heat that they would start to melt, allowing the heavier metals to sink to the bottom and the lighter rocky materials to rise to the top. As a result, all of the Terrestrial planets, the Moon, and even the half dozen or so largest asteroids must have become completely molten, differentiated objects. As the short-lived radioactive materials died out, the heat created by their decay would also die out, and the molten bodies would gradually solidify. The crustal materials, being exposed directly to the relatively low temperatures of interplanetary space, would solidify first, while the rocky mantles, insulated by hundreds or thousands of miles of overlying materials, would take considerably longer. So the crust of the Moon could easily have formed within the 200 million years or so allowed by our current knowledge of the ages of highland rocks, but the deep interior of the Moon might well have still been molten at that time (in fact, "fossil" magnetism inside Moon rocks implies that it did have a molten core, and a magnetic field created by that core, for at least a short period of time). Looking at the highland surface of the Moon, we can see that at the time that it solidified, not all of the rocky material in the inner solar system was inside the Moon and planets. At least some small fraction of the planetesimals must have still been moving around in independent orbits, and as these objects ran into the now-solid surfaces of the cooling planets, they blasted out huge craters. The number of objects left in between the planets must have been only a small fraction of the mass of the planets themselves, or else heat generated by the violence of their collisions would have re-melted the surfaces of the planets, but there must still have been a huge number of them, since all the truly ancient planetary surfaces still visible to us, such as the surfaces of our Moon, Callisto and Mercury, are completely covered with craters tens or hundreds of miles in diameter. Eventually, of course, this stage of bombardment of the early planetary surfaces must have come to an end. As the planets gradually swept up the objects not yet in them, the numbers of such objects which were still left would have gradually declined, and so there would be fewer and fewer objects left to cause still other collisions. By around 4 billion years ago, about half a billion years after the start of the solar system, there were so few objects left that the early period of intense bombardment had essentially ended, and surfaces which are younger than that are relatively unscathed by cratering. 6. The Formation of the Asteroids In most cases, the planets seem to have regularly spaced orbits, with each planet being about 1/2 or 2/3 as far from the Sun as the next planet out. 
Even Neptune and Pluto follow this rule, if you only consider their distances from the Sun at the times that Neptune is lapping Pluto (at which times Pluto is always near aphelion, 45 to 50 AUs from the Sun). There is, however, a notable exception to this rule, in the region which we call the asteroid belt. If we wanted the rule to always work, then we would have to suppose that there should be a planet between Mars and Jupiter, with an orbit size of 2.5 to 3 AUs, and in fact, when the first asteroid (Ceres) was discovered, it was thought to be that missing planet, because it orbits in that region, but Ceres turned out to be only the largest out of hundreds of thousands of smaller bodies, so that instead of a single large planet, we have a host of "minor" planets. An early explanation of this difference was that perhaps a single large planet had once existed where the asteroids now are, and had somehow been blown or blasted to bits, and in the early part of the 20th century, when it was realized that the Earth is differentiated, the stony and iron meteorites seemed to lend a measure of proof to this idea. At that time, it was not known that the Earth had been melted by short-lived radioactive materials, and other theories of how a planet could be melted depended on factors which work better for large bodies, which have substantial gravities, than for small objects like the asteroids. Since it can be shown with some certainty that the asteroids must be the parent bodies of the stony and iron meteorites, it seemed that they must have melted and differentiated, but since it was not known how to melt small objects like the asteroids, it was tempting to put some faith in the idea that there had been a single large planet in that region, so that the asteroids themselves could have started out as an object large enough to become differentiated. Now, of course, we know that even middling-size asteroids could have easily been melted by the large amounts of short-lived radioactive materials present in the early history of the solar system, and detailed studies of the compositions of various meteorites seem to divide them into various groups which do not seem likely to have originated in fewer than a half-dozen independent parent bodies. So we now think that the asteroids have always existed as separate objects, and were never a single large object. This leads us to the question of why they never accumulated into a single object, while the planets did. The most likely explanation of this problem has to do with the mass of the asteroids. Although the space within the asteroid belt is often portrayed as being full of rocks, it is actually very nearly empty. There are several thousand asteroids that are 1/4 mile or more in diameter, but they are spread out over a region almost a billion miles in diameter, and tens of millions of miles thick. Because they occupy such a vast region, they are usually quite far apart. In fact, during the several months that it takes a spacecraft to pass through the asteroid belt, an observer on the craft would probably not see any asteroid close enough to look like a tiny disk, and would see even faint distant asteroids only every few days. Because the asteroids are fairly far apart, collisions between them are not nearly as frequent as people normally presume.
Admittedly, there must be some collisions, as some "families" of asteroids have orbital characteristics which imply that they resulted from the collision of two larger objects at some time within the last few millions or tens of millions of years, but such individual collisions probably occur only once in every few millions of years now, while in the early solar system, collisions had to be so frequent that whole planets could be built up in a very short time, so the current rate of asteroid collisions doesn't seem to be a very useful method of building up planets. To explain the planets' formations, we must assume that there were far more objects running around, and into each other, than we now see in the asteroid belt. Looking at just the surface of the Moon, we see more craters than would be caused by throwing all known asteroids into it, and we know that even the huge number of collisions required to form the Moon's craters cannot have been more than a fraction of the mass of the Moon itself. The solution to our problem lies in the small amount of mass found in the asteroid belt. The total mass of all the asteroids is only one thousandth of the mass of the Earth. If we assume that, towards the end of the formation of the solar system, collisions between planetesimals had built up objects similar in size to the current asteroids, then in the asteroid belt there would have been about the same number of objects as now, but near the orbit of the Earth, with a thousand times the mass, there would have been a thousand times as many objects. For each collision in the asteroid belt, there must have been close to a million collisions in the Earth's orbit. (Each object near our orbit would see a thousand times as many targets, so each one would have a thousand times the chance for a collision, and with a thousand times as many objects, each a thousand times as likely to have a collision, the total number of collisions would be a thousand thousand times larger.) At some point, each region would have built up a small number of relatively large objects, which contained most of the mass of that region (indeed, Ceres already contains half the mass of all the asteroids, and the next dozen largest asteroids contain most of the rest of the mass). At that point, since so few objects were left, the number of "accidental" collisions would be tremendously reduced in comparison to earlier stages in the accumulation of material. If we continued to rely only on such collisions, then as in the case of the asteroids we would be at a dead end. In the case of the planets, however, gravity, and their relatively large sizes, can help finish the job. If two asteroids pass fairly close to each other, but do not collide, their small masses produce so little gravitational effect that they continue on with nearly the same orbital motions that they had prior to their encounter. But if objects tens or hundreds times larger were to pass so close, their large sizes would make it far more likely that they would collide, and even if they still missed each other, their larger masses would produce much larger gravitational effects, and their orbits would be significantly altered. In the case of the asteroids, the small changes that they produce in each others' orbital motions mean that unless they are already in intersecting orbits, they will have to wait a long time, if not forever, for the very minor rate of orbital change to allow them to have intersecting orbits and collide with each other. 
Larger masses, by changing each others' orbits faster, could end up with intersecting orbits and collide with each other much sooner. In other words, if the random collisions that are most important while there are still huge numbers of planetesimals can continue long enough to build up good-sized masses, then the size and gravity of those masses, by allowing significant interactions in the present, and additional interactions later on, will continue the process of accumulation until only a single large mass is left in a given region. Of course, this theory explains how, if there is little mass in the asteroid belt, the asteroids can fail to complete the process of planetary growth, but it does not explain why there should have been so little mass. To explain this, we can rely on the fact that the asteroids are not far enough out to allow icy materials to be solid, but too far out for rocky materials to be thickly clustered together, and to a possible gravitational interaction between the huge masses of Jupiter and Saturn, which may have swept materials that might have existed in the asteroid belt into other regions. Since the Jovian planets must have grown faster than the inner planets, they probably affected the final stages of formation of the asteroids and possibly even the inner planets, so at least part of the reason for the low masses in the asteroid belt may be due to such gravitational interactions. However, if that were the only reason, then there should be at least a somewhat greater mass in that region, so part of the answer must be that there just wasn't much solid material in the region in the first place.
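As a footnote to the counting argument made a few paragraphs back (a thousand times as many bodies near the Earth's orbit giving roughly a million times as many collisions), that scaling can be written out explicitly. The assumption, the same one the text uses, is that the collision rate in a region grows roughly as the number of possible pairs of bodies, i.e. roughly as the square of the number of bodies; the population figures below are placeholders, not real counts.

# The text's counting argument: if the collision rate in a region scales
# roughly with the number of pairs of bodies (~ N^2 for large N), then a
# region with 1000x as many bodies has about 1,000,000x as many collisions.
# Purely illustrative numbers.

def relative_collision_rate(n_bodies):
    # Number of distinct pairs; for large N this is roughly N^2 / 2.
    return n_bodies * (n_bodies - 1) / 2

asteroid_belt = 1_000          # stand-in for the belt's population
near_earth_zone = 1_000_000    # a thousand times as many bodies

ratio = relative_collision_rate(near_earth_zone) / relative_collision_rate(asteroid_belt)
print("Collision rate ratio: about %.0fx" % ratio)   # ~ a million times

The printed ratio comes out at about a million, which is just the "thousand times a thousand" of the text.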
http://cseligman.com/text/ssevolve/ssorigin.htm
The general scope of angular momentum covers phenomena that may seem hardly related. The common factor is that angular momentum involves two or more objects that exert a force on each other so that they don't fly apart. A solid object can be thought of as all of its constituent atoms, with the bonding strength of the material providing the required force. If an object spins faster than the bonding strength can handle, the object tears apart. In this article I focus on the case where the motion is circumnavigating, with the attractive force strong enough to actually contract the system. Animation 1 depicts how, when a rotating assembly contracts, the angular velocity increases. Conversely, when the distance to the center of rotation increases, the angular velocity goes down. Image 2 plots the motion of one of the spheres as the assembly contracts. In the diagram the darkest arrow represents the centripetal force. The origin of the coordinate system has been positioned at the center point of the rotation. To visualise the effect of the contracting force it is helpful to think of it as decomposed: one component perpendicular to the instantaneous velocity, and another component tangent to the instantaneous velocity. The perpendicular component causes change of direction; the tangent component causes change of speed. That is, the force that contracts the rotating system will speed it up. What we would like to know is how much angular acceleration there will be because of the contraction.

In the history of physics this problem was very important. Its solution, first found for celestial mechanics, was a breakthrough development. Kepler had hypothesized that the shape of a planetary orbit is an ellipse (with the Sun at one focus), but that was not enough to have a theory of celestial motion; he also needed a rule that describes how the velocity changes over time. As is well known, Kepler discovered a law of areas: as planets circumnavigate the Sun they sweep out equal areas in equal amounts of time. For Kepler the area law was an empirical law; he had no way of knowing whether it was a separate law or an integral part of some larger theory of motion. Newton showed the latter was the case. The very first theorem in Newton's Principia is the area law. In fact, Newton derived a more general form of the area law, showing that it applies not only for the Sun's gravity, but for any central force. I will call this generalized form 'Newton's law of areas'. I will first present Newton's derivation, and then I will show how Newton's area law and the modern concept of conservation of angular momentum are equivalent.

Newton's derivation relies on the following elements. The law of conservation of linear velocity asserts a principle of proportion: in equal intervals of time, an object covers equal distances. In image 5 the thick line depicts the law of conservation of linear velocity. An object moves along the points A, B, C, D, E, covering equal distances in equal intervals of time. It is convenient to take as point S the common center of mass of two objects, R and T (as shown in image 4). The dotted lines mark triangles. Clearly SAB, SBC, SCD and SDE all have the same area: equal areas are swept out in equal intervals of time. This shows how conservation of linear velocity and the area law are interconnected. When no force acts, conservation of linear velocity and the area law are one and the same principle.
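A quick way to confirm the claim about those triangles, using nothing beyond the quantities already introduced: for an object moving with constant velocity v along a straight line, successive positions are equally spaced, so each triangle such as SAB or SBC has the same base length |v| Δt along the line and the same height d, the perpendicular distance from the fixed point S to the line. Hence every such triangle has area

    A_{\text{triangle}} = \tfrac{1}{2}\, |v|\, \Delta t \, d ,

which does not change from one interval to the next: equal areas in equal times, with no force acting.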
Image 6 emphasizes a property of point S, the common center of mass: when object R and object T exert a force on each other, the momentum of the common center of mass is conserved. Image 7 shows Newton's geometric demonstration of the law of areas. The diagram is a slightly modified version of the diagram that Newton gave in the Principia; the mathematics is the same. Object T (not shown in image 7) is moving along the curvilinear trajectory that goes through the points A, B, C, D and E. The force experienced by object T causes change of velocity. For the derivation the continuous change of velocity is approximated with instantaneous changes of velocity that occur at equally spaced points in time. In the limit of ever smaller intervals of time the sequence of instantaneous changes of velocity approaches infinitely close to a continuous change of velocity. Point S is the common center of mass of the objects R and T. Objects R and T exert a distance-dependent force upon each other. The force that objects R and T exert upon each other is mutual: that ensures that the state of motion of point S is inertial motion. For the proof it suffices that point S is in a state of inertial motion and that the force that is exerted upon object T is directed towards point S at all times.

At point B, object T receives an impulse towards point S, resulting in a velocity component towards point S. Had object T not received that impulse, it would have proceeded to point c (in an equal amount of time). The actual displacement BC is the vector sum of the displacements Bc and BV. The triangles SAB and SBc have the same area. Since the lines SB and Cc are parallel, the triangles SBc and SBC have the same area. Hence, the triangles SAB and SBC have the same area. The points B, C and d are on the same line. If object T did not receive an impulse at point C, then in an equal amount of time it would proceed to point d. Since the laws of motion are the same for any orientation in space, the same reasoning can be repeated for the subsequent triangles, thus demonstrating that the triangles SBC, SCD and SDE, which are swept out in equal intervals of time, have equal area. In the limit of ever smaller intervals of time, the line sections BC, CD and DE approach ever closer to the curvilinear trajectory.

The next step is to find an algebraic expression for Newton's law of areas, so that the principle can be put to use in calculations. The area of a triangle is half the product of the base and the height. Here the base of each triangle is r, the radial distance, and the height is r·Δθ (where Δθ is the angle that is swept out during the time interval Δt), so the area swept out in each interval is approximately ½·r·(r·Δθ) = ½·r²·Δθ. Dividing the area by the interval of time gives the amount of area that is swept out per unit of time: ½·r²·(Δθ/Δt). In the limit of Δt going to infinitesimal this becomes ½·r²·(dθ/dt) = ½·r²·ω, where ω is the angular velocity. A constant rate of sweeping out area therefore means that we have the following conserved quantity: r²ω.

The area law expresses a relation between radial distance and angular velocity. At the beginning I raised the question: when the rotating assembly contracts, how much angular acceleration will the contraction cause? The answer to that question is contained in the relation r²ω = constant. The relation is quadratic in r, which means you get a lot of angular acceleration. For instance, when the radial distance has been halved the angular velocity will have been quadrupled. The conserved quantity r²ω forms the basis of the concept of angular momentum.
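The geometric argument above can also be checked numerically. The short sketch below integrates the motion of a single particle under a purely central, inward force and confirms that the quantity x·vy - y·vx, which equals r²ω (and is twice the rate at which area is swept out), stays constant. The spring-type force law and every numerical value in it are illustrative choices of mine, not something taken from this article.

# Minimal numerical check that r^2 * omega (equivalently x*vy - y*vx) stays
# constant for motion under a purely central force, as Newton's area law says.
# The spring-like force law and all numbers here are illustrative choices.

def simulate(steps=200000, dt=1e-4, k=1.0):
    # Start on a non-circular orbit so the radial distance actually changes.
    x, y = 1.0, 0.0
    vx, vy = 0.0, 0.6

    def accel(px, py):
        # Central, attractive "spring" force: acceleration = -k * position
        return -k * px, -k * py

    l_values = []
    for _ in range(steps):
        # Velocity Verlet integration step
        ax, ay = accel(x, y)
        x += vx * dt + 0.5 * ax * dt * dt
        y += vy * dt + 0.5 * ay * dt * dt
        ax2, ay2 = accel(x, y)
        vx += 0.5 * (ax + ax2) * dt
        vy += 0.5 * (ay + ay2) * dt
        # Specific angular momentum = r^2 * omega = x*vy - y*vx;
        # this is twice the rate at which area is swept out.
        l_values.append(x * vy - y * vx)

    return min(l_values), max(l_values)

if __name__ == "__main__":
    lo, hi = simulate()
    print("r^2 * omega stays between %.6f and %.6f" % (lo, hi))

Running it prints a minimum and maximum of r²ω over the whole run that agree to many decimal places, which is the numerical counterpart of the area law for a central force.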
Angular momentum L is defined as the product of the moment of inertia and the angular velocity; for a point mass moving in a circle, L = mr²ω. The convenience of defining angular momentum that way is that it slots in with kinetic energy. The kinetic energy of an object that follows a circular trajectory is ½mv². We can make the substitution v = ωr, which gives an expression for the kinetic energy associated with circular motion: ½mr²ω². The quantity mr² is called the 'moment of inertia', and it is the rotational counterpart of inertia.

Conservation of momentum reflects a spatial symmetry of the laws of physics, rather than a cause-to-effect relation. This can be shown with the example of a cannon being fired. When a cannon is fired, the projectile will shoot out of the barrel towards the target, and the barrel will recoil. It would be wrong to suggest that the projectile leaves the barrel at high velocity because of the recoil of the barrel. The projectile being fired and the recoil of the barrel occur simultaneously, hence neither one can be the cause of the other. The causal mechanism is in the preceding energy conversions: the explosion of the gunpowder converts chemical potential energy into the potential energy of a highly compressed gas. As the gas expands, its high pressure exerts a force on both the projectile and the interior of the barrel. It is through the action of that force that potential energy is converted to kinetic energy of both projectile and barrel. Similarly, in the case of angular acceleration due to contraction of a rotating system, the increase of angular velocity on contraction is consistent with the principle of conservation of angular momentum, but that should not be confused with conservation of angular momentum being a causal agent. The causal agent is the centripetal force doing work. In the next article, Angular Momentum, part 2, I discuss how the conservation of angular momentum can also be derived from the work-energy theorem.

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.
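As a small appended worked example of that last point about the centripetal force doing work (standard algebra, not part of the original article): for a point mass being slowly reeled inward on a circular path, with L = m r² ω held fixed, the kinetic energy can be written

    E = \tfrac{1}{2} m r^2 \omega^2 = \frac{L^2}{2 m r^2} ,

so pulling the mass in from r to r/2 quadruples the kinetic energy. That extra kinetic energy is exactly the work \int \vec{F} \cdot d\vec{r} done by the inward force during the contraction, which is the sense in which the centripetal force, and not the conservation law itself, is the causal agent.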
http://www.cleonis.nl/physics/printopt/angular_momentum.php
Laminins are major proteins in the basal lamina (one of the layers of the basement membrane), a protein network foundation for most cells and organs. The laminins are an important and biologically active part of the basal lamina, influencing cell differentiation, migration, and adhesion, as well as phenotype and survival. Laminins are trimeric proteins that contain an α-chain, a β-chain, and a γ-chain, found in five, four, and three genetic variants, respectively. The laminin molecules are named according to their chain composition. Thus, laminin-511 contains α5, β1, and γ1 chains. Fourteen other chain combinations have been identified in vivo. The trimeric proteins intersect to form a cross-like structure that can bind to other cell membrane and extracellular matrix molecules. The three shorter arms are particularly good at binding to other laminin molecules, which allows them to form sheets. The long arm is capable of binding to cells, which helps anchor organized tissue cells to the membrane. The laminin family of glycoproteins is an integral part of the structural scaffolding in almost every tissue of an organism. They are secreted and incorporated into cell-associated extracellular matrices. Laminin is vital for the maintenance and survival of tissues. Defective laminins can cause muscles to form improperly, leading to a form of muscular dystrophy, lethal skin blistering disease (junctional epidermolysis bullosa) and defects of the kidney filter (nephrotic syndrome).

Fifteen laminin trimers have been identified. The laminins are combinations of different alpha-, beta-, and gamma-chains.
- The five forms of alpha-chains are: LAMA1, LAMA2, LAMA3, LAMA4, LAMA5
- The beta-chains include: LAMB1, LAMB2, LAMB3, LAMB4
- The gamma-chains are: LAMC1, LAMC2, LAMC3
Laminins were previously numbered - e.g. laminin-1, laminin-2, laminin-3 - but the nomenclature was later changed to describe which chains are present in each isoform. For example, laminin-511 contains an α5-chain, a β1-chain and a γ1-chain. Laminins form independent networks and are associated with type IV collagen networks via entactin, fibronectin, and perlecan. They also bind to cell membranes through integrin receptors and other plasma membrane molecules, such as the dystroglycan glycoprotein complex and the Lutheran blood group glycoprotein. Through these interactions, laminins critically contribute to cell attachment and differentiation, cell shape and movement, maintenance of tissue phenotype, and promotion of tissue survival. Some of these biological functions of laminin have been associated with specific amino-acid sequences or fragments of laminin. For example, the peptide sequence [GTFALRGDNGDNGQ], which is located on the alpha-chain of laminin, promotes adhesion of endothelial cells.

Dysfunctional structure of one particular laminin, laminin-211, is the cause of one form of congenital muscular dystrophy. Laminin-211 is composed of an α2, a β1 and a γ1 chain. This laminin's distribution includes the brain and muscle fibers. In muscle, it binds to alpha-dystroglycan and integrin alpha7beta1 via the G domain, and via the other end binds to the extracellular matrix. Abnormal laminin-332, which is essential for epithelial cell adhesion to the basement membrane, leads to a condition called junctional epidermolysis bullosa, characterized by generalized blisters, exuberant granulation tissue of skin and mucosa, and pitted teeth.
Defective laminin-521 in the kidney filter causes leakage of protein into the urine and nephrotic syndrome.

Laminins in cell culture
Several publications have reported that laminins can be used to culture cells, such as pluripotent stem cells, that are difficult to culture on other substrates. Two types of laminin preparation have mostly been used: laminin-111 extracted from mouse sarcomas, and a mixture of laminins 511 and 521 from human placenta. Various laminin isoforms are practically impossible to isolate from tissues in pure form due to extensive cross-linking and the need for harsh extraction conditions, such as proteolytic enzymes or low pH, that cause degradation. However, Professor Tryggvason's group at the Karolinska Institute in Sweden showed in 2000 how to produce recombinant laminins using HEK293 cells (Kortesmaa et al. 2000). This made it possible to test whether laminins could play as significant a role in vitro as they do in the human body. In 2008, two groups independently showed that mouse embryonic stem cells can be grown for months on top of recombinant laminin-511. Later, Rodin et al. showed that recombinant laminin-511 can be used to create a totally xeno-free and defined cell culture environment for culturing human pluripotent ES cells and human iPS cells.

Role in neural development
Laminin-111 is a major substrate along which nerve axons will grow, both in vivo and in vitro. For example, it lays down a path that developing retinal ganglion cells follow on their way from the retina to the tectum. It is also often used as a substrate in cell culture experiments. Interestingly, the presence of laminin-111 can influence how the growth cone responds to other cues. For example, growth cones are repelled by netrin when grown on laminin-111, but are attracted to netrin when grown on fibronectin. This effect of laminin-111 probably occurs through a lowering of intracellular cyclic AMP.

Role in cancer
The majority of transcripts that harbor an internal ribosome entry site (IRES) are involved in cancer development via their corresponding proteins. A crucial event in tumor progression, referred to as epithelial to mesenchymal transition (EMT), allows carcinoma cells to acquire invasive properties. The translational activation of the extracellular matrix component laminin B1 (LamB1) during EMT has been reported, suggesting an IRES-mediated mechanism. In one study (Petz et al. 2011), the IRES activity of LamB1 was determined by independent bicistronic reporter assays. Strong evidence excludes an impact of cryptic promoter or splice sites on IRES-driven translation of LamB1. Furthermore, no other LamB1 mRNA species arising from alternative transcription start sites or polyadenylation signals were detected that could account for its translational control. Mapping of the LamB1 5'-untranslated region (UTR) revealed the minimal LamB1 IRES motif between -293 and -1 upstream of the start codon. Notably, RNA affinity purification showed that the La protein interacts with the LamB1 IRES. This interaction and its regulation during EMT were confirmed by ribonucleoprotein immunoprecipitation. In addition, La was able to positively modulate LamB1 IRES translation. In summary, these data indicate that the LamB1 IRES is activated by binding of La, which leads to translational upregulation during hepatocellular EMT.
Laminin domains
[Domain infoboxes, condensed: Laminin Domain I; Laminin Domain II; Laminin B (Domain IV); Laminin EGF-like (Domains III and V), illustrated by the crystal structure of three consecutive laminin-type EGF-like (LE) modules of the laminin gamma1 chain harboring the nidogen binding site; Laminin G domain, illustrated by the laminin alpha2 chain LG4-5 domain pair (Ca1 site mutant) and by the structure of the ligand-binding domain of neurexin 1beta (regulation of LNS domain function by alternative splicing); Laminin N-terminal (Domain VI).]

Laminin I and Laminin II
Laminins are trimeric molecules; laminin-1 is an alpha1 beta1 gamma1 trimer. It has been suggested that domains I and II from laminin A, B1 and B2 may come together to form a triple-helical coiled-coil structure.

Laminin B
The laminin B domain (also known as domain IV) is an extracellular module of unknown function. It is found in a number of different proteins that include heparan sulphate proteoglycan from basement membrane, a laminin-like protein from Caenorhabditis elegans, and laminin. The laminin IV domain is not found in short laminin chains (alpha4 or beta3).

Laminin EGF-like
Besides different types of globular domains, each laminin subunit contains, in its first half, consecutive repeats of about 60 amino acids in length that include eight conserved cysteines. The tertiary structure of this domain is remotely similar in its N-terminus to that of the EGF-like module. It is also known as a 'LE' or 'laminin-type EGF-like' domain. The number of copies of the laminin EGF-like domain in the different forms of laminin is highly variable; from 3 up to 22 copies have been found. In the mouse laminin gamma-1 chain, the seventh LE domain has been shown to be the only one that binds with high affinity to nidogen. The binding sites are located on the surface within the loops C1-C3 and C5-C6. Long consecutive arrays of laminin EGF-like domains in laminins form rod-like elements of limited flexibility, which determine the spacing in the formation of laminin networks of basement membranes.

Laminin G
The laminin globular (G) domain can be found in one to several copies in various laminin family members, as well as in a large number of other extracellular proteins. The C-terminus of the laminin alpha chain contains a tandem repeat of five laminin G domains, which are critical for heparin-binding and cell attachment activity. Laminin alpha4 is distributed in a variety of tissues including peripheral nerves, dorsal root ganglion, skeletal muscle and capillaries; in the neuromuscular junction, it is required for synaptic specialisation. The structure of the laminin G domain has been predicted to resemble that of pentraxin. Laminin G domains can vary in their function, and a variety of binding functions has been ascribed to different LamG modules. For example, the laminin alpha1 and alpha2 chains each have five C-terminal laminin G domains, of which only LG4 and LG5 contain binding sites for heparin, sulphatides and the cell surface receptor dystroglycan. Laminin G-containing proteins appear to have a wide variety of roles in cell adhesion, signalling, migration, assembly and differentiation.

Laminin N-terminal
Basement membrane assembly is a cooperative process in which laminins polymerise through their N-terminal domain (LN or domain VI) and anchor to the cell surface through their G domains. Netrins may also associate with this network through heterotypic LN domain interactions.
This leads to cell signalling through integrins and dystroglycan (and possibly other receptors) recruited to the adherent laminin. This LN-domain-dependent self-assembly is considered to be crucial for the integrity of basement membranes, as highlighted by genetic forms of muscular dystrophy involving the deletion of the LN module from the alpha 2 laminin chain. The laminin N-terminal domain is found in all laminin and netrin subunits except laminin alpha 3A, alpha 4 and gamma 2.

Human proteins containing laminin domains
Laminin Domain I
Laminin Domain II
Laminin B (Domain IV)
Laminin EGF-like (Domains III and V): AGRIN; ATRN; ATRNL1; CELSR1; CELSR2; CELSR3; CRELD1; HSPG2; LAMA1; LAMA2; LAMA3; LAMA4; LAMA5; LAMB1; LAMB2; LAMB3; LAMB4; LAMC1; LAMC2; LAMC3; MEGF10; MEGF12; MEGF6; MEGF8; MEGF9; NSR1; NTN1; NTN2L; NTN4; NTNG1; NTNG2; RESDA1; SCARF1; SCARF2; SREC; STAB1; USH2A
Laminin G domain: AGRIN; CASPR4; CELSR1; CELSR2; CELSR3; CNTNAP1; CNTNAP2; CNTNAP3; CNTNAP4; CNTNAP5; COL11A1; COL11A2; COL24A1; COL5A1; COL5A3; CRB1; CRB2; CSPG4; EGFLAM; FAT; FAT2; FAT4; GAS6; HSPG2; LAMA1; LAMA2; LAMA3; LAMA4; LAMA5; NELL2; NRXN1; NRXN2; NRXN3; PROS1; RESDA1; SLIT1; SLIT2; SLIT3; USH2A
Laminin N-terminal (Domain VI)

References
- Timpl R et al. (1979). "Laminin – a glycoprotein from basement membranes". J Biol Chem 254 (19): 9933–7. PMID 114518.
- Aumailley M et al. (2005). "A simplified laminin nomenclature". Matrix Biol. 24 (5): 326–32. doi:10.1016/j.matbio.2005.05.006. PMID 15979864.
- M. A. Haralson and John R. Hassell (1995). Extracellular matrix: a practical approach. Ithaca, N.Y: IRL Press. ISBN 0-19-963220-0.
- Yurchenco PD and Patton BL (2009). "Developmental and Pathogenic Mechanisms of Basement Membrane Assembly". Curr Pharm Des. 15 (12): 1277–94. doi:10.2174/138161209787846766. PMC 2978668. PMID 19355968.
- Colognato H, Yurchenco P (2000). "Form and function: the laminin family of heterotrimers". Dev. Dyn. 218 (2): 213–34. doi:10.1002/(SICI)1097-0177(200006)218:2<213::AID-DVDY1>3.0.CO;2-R. PMID 10842354.
- Smith J, Ockleford CD (January 1994). "Laser scanning confocal examination and comparison of nidogen (entactin) with laminin in term human amniochorion". Placenta 15 (1): 95–106. doi:10.1016/S0143-4004(05)80240-1. PMID 8208674.
- Ockleford CD, Bright N, Hubbard A, D'Lacey C, Smith J, Gardiner L, Sheikh T, Albentosa M, Turtle K (October 1993). "Micro-Trabeculae, Macro-Plaques or Mini-Basement Membranes in Human Term Fetal Membranes?". Phil. Trans. R. Soc. Lond. B 342 (1300): 121–136. doi:10.1098/rstb.1993.0142.
- Beck et al., 1999.
- Hall TE, Bryson-Richardson RJ, et al. (2007). "The zebrafish candyfloss mutant implicates extracellular matrix adhesion failure in laminin α2-deficient congenital muscular dystrophy". PNAS 104 (17): 7092–7097. doi:10.1073/pnas.0700942104. PMC 1855385. PMID 17438294.
- Wewer et al. (1983). "Human laminin isolated in a nearly intact, biologically active form from placenta by limited proteolysis". J Biol Chem. 258 (20): 12654–60. PMID 6415055.
- Domogatskaya et al. (2008). "Laminin-511 but not -332, -111, or -411 enables mouse embryonic stem cell self-renewal in vitro". Stem Cells 26 (11): 2800–9. doi:10.1634/stemcells.2007-0389. PMID 18757303.
- Miyazaki et al. (2008). "Recombinant human laminin isoforms can support the undifferentiated growth of human embryonic stem cells". Biochem. Biophys. Res. Commun. 375 (1): 27–32. doi:10.1016/j.bbrc.2008.07.111. PMID 18675790.
- Petz M, Them N, Huber H, Beug H, Mikulits W (October 2011).
"La enhances IRES-mediated translation of laminin B1 during malignant epithelial to mesenchymal transition.". Nucleic Acids Rsearch 39 (18): 01–13. doi:10.1093/nar/gkr717. PMC 3245933. PMID 21896617. - Sasaki M, Kleinman HK, Huber H, Deutzmann R, Yamada Y (November 1988). "Laminin, a multidomain protein. The A chain has a unique globular domain and homology with the basement membrane proteoglycan and the laminin B chains". J. Biol. Chem. 263 (32): 16536–44. PMID 3182802. - Engel J (July 1989). "EGF-like domains in extracellular matrix proteins: localized signals for growth and differentiation?". FEBS Lett. 251 (1-2): 1–7. doi:10.1016/0014-5793(89)81417-6. PMID 2666164. - Stetefeld J, Mayer U, Timpl R, Huber R (April 1996). "Crystal structure of three consecutive laminin-type epidermal growth factor-like (LE) modules of laminin gamma1 chain harboring the nidogen binding site". J. Mol. Biol. 257 (3): 644–57. doi:10.1006/jmbi.1996.0191. PMID 8648630. - Baumgartner R, Czisch M, Mayer U, Poschl E, Huber R, Timpl R, Holak TA (April 1996). "Structure of the nidogen binding LE module of the laminin gamma1 chain in solution". J. Mol. Biol. 257 (3): 658–68. doi:10.1006/jmbi.1996.0192. PMID 8648631. - Mayer U, Poschl E, Gerecke DR, Wagman DW, Burgeson RE, Timpl R (May 1995). "Low nidogen affinity of laminin-5 can be attributed to two serine residues in EGF-like motif gamma 2III4". FEBS Lett. 365 (2-3): 129–32. doi:10.1016/0014-5793(95)00438-F. PMID 7781764. - Beck K, Hunter I, Engel J (February 1990). "Structure and function of laminin: anatomy of a multidomain glycoprotein". FASEB J. 4 (2): 148–60. PMID 2404817. - Yurchenco PD, Cheng YS (August 1993). "Self-assembly and calcium-binding sites in laminin. A three-arm interaction model". J. Biol. Chem. 268 (23): 17286–99. PMID 8349613. - Tisi D, Talts JF, Timpl R, Hohenester E (April 2000). "Structure of the C-terminal laminin G-like domain pair of the laminin alpha2 chain harbouring binding sites for alpha-dystroglycan and heparin". EMBO J. 19 (7): 1432–40. doi:10.1093/emboj/19.7.1432. PMC 310212. PMID 10747011. - Ichikawa N, Kasai S, Suzuki N, Nishi N, Oishi S, Fujii N, Kadoya Y, Hatori K, Mizuno Y, Nomizu M, Arikawa-Hirasawa E (April 2005). "Identification of neurite outgrowth active sites on the laminin alpha4 chain G domain". Biochemistry 44 (15): 5755–62. doi:10.1021/bi0476228. PMID 15823034. - Beckmann G, Hanke J, Bork P, Reich JG (February 1998). "Merging extracellular domains: fold prediction for laminin G-like and amino-terminal thrombospondin-like modules based on homology to pentraxins". J. Mol. Biol. 275 (5): 725–30. doi:10.1006/jmbi.1997.1510. PMID 9480764. - Xu H, Wu XR, Wewer UM, Engvall E (November 1994). "Murine muscular dystrophy caused by a mutation in the laminin alpha 2 (Lama2) gene". Nat. Genet. 8 (3): 297–302. doi:10.1038/ng1194-297. PMID 7874173. - The Laminin Protein - Laminin Precursors as nutritional supplements - BioLamina - The Laminin Experts - Laminin at the US National Library of Medicine Medical Subject Headings (MeSH)
http://en.wikipedia.org/wiki/Laminin
13
56
Introduction to Newton's Law of Gravitation
A little bit on gravity
- We're now going to learn a little bit about gravity. - And just so you know, gravity is something that, especially - in introductory physics or even reasonably advanced - physics, we can learn how to calculate it, we can learn how - to realize what are the important variables in it, but - it's something that's really not well understood. - Even once you learn general relativity, if you do get - there, I have to say, you can kind of say, oh, well, it's - the warping of space time and all of this, but it's hard to - get an intuition of why two objects, just because they - have this thing called mass, they are - attracted to each other. - It's really, at least to me, a little bit mystical. - But with that said, let's learn to deal with gravity. - And we'll do that learning Newton's Law of Gravity, and - this works for most purposes. - So Newton's Law of Gravity says that the force between - two masses, and that's the gravitational force, is equal - to the gravitational constant G times the mass of the first - object times the mass of the second object divided by the - distance between the two objects squared. - So that's simple enough. - So let's play around with this, and see if we can get - some results that look reasonably familiar to us. - So let's use this formula to figure out what the - acceleration, the gravitational acceleration, is - at the surface of the Earth. - So let's draw the Earth, just so we know what - we're talking about. - So that's my Earth. - And let's say we want to figure out the gravitational - acceleration on Sal. - That's me. - And so how do we apply this equation to figure out how - much I'm accelerating down towards the center of Earth or - the Earth's center of mass? - The force is equal to-- so what's this big G thing? - The G is the universal gravitational constant. - Although, as far as I know, and I'm not an expert on this, - I actually think its measurement can change. - It's not truly, truly a constant, or I guess when on - different scales, it can be a little bit different. - But for our purposes, it is a constant, and the constant in - most physics classes, is this: 6.67 times 10 to the negative - 11th meters cubed per kilogram seconds squared. - I know these units are crazy, but all you have to realize is - these are just the units needed, that when you multiply - it times a mass and a mass divided by a distance squared, - you get Newtons, or kilogram meters per second squared. - So we won't worry so much about the units right now. - Just realize that you're going to have to work with meters and - kilograms and seconds. - So let's just write that number down. - I'll change colors to keep it interesting. - 6.67 times 10 to the negative 11th, and we want to know the - acceleration on Sal, so m1 is the mass of Sal. - And I don't feel like revealing my mass in this - video, so I'll just leave it as a variable. - And then what's the mass 2? - It's the mass of Earth. - And I wrote that here. - I looked it up on Wikipedia. - This is the mass of Earth. - So I multiply it times the mass of Earth, times 5.97 - times 10 to the 24th kilograms-- weighs a little - bit, not weighs, is a little bit more massive than Sal-- - divided by the distance squared.
- Now, you might say, well, what's the distance between - someone standing on the Earth and the Earth? - Well, it's zero because they're touching the Earth. - But it's important to realize that the distance between the - two objects, especially when we're talking about the - universal law of gravitation, is the distance between their - center of masses. - For all general purposes, my center of mass, maybe it's - like three feet above the ground, because - I'm not that tall. - It's probably a little bit lower than that, actually. - Anyway, my center of mass might be three feet above the - ground, and where's Earth's center of mass? - Well, it's at the center of Earth, so we have to know the - radius of Earth, right? - So the radius of Earth is-- I also looked it up on - Wikipedia-- 6,371 kilometers. - How many meters is that? - It's 6 million meters, right? - And then, you know, the extra meter to get to my center of - mass, we can ignore for now, because it would be .001, so - we'll ignore that for now. - So it's 6-- and soon. - I'll write it in scientific notation since everything else - is in scientific notation-- 6.371 times 10 to the sixth - meters, right? - 6,000 kilometers is 6 million meters. - So let's write that down. - So the distance is going to be 6.37 times 10 - to the sixth meters. - We have to square that. - Remember, it's distance squared. - So let's see if we can simplify this a little bit. - Let's just multiply those top numbers first. Force is equal - to-- let's bring the variable out. - Mass of Sal times-- let's do this top part. - So we have 6.67 times 5.97 is equal to 39.82. - And I just multiplied this times this, so now I have to - multiply the 10's. - So 10 to the negative 11th times 10 to the 24th. - We can just add the exponents. - They have the same base. - So what's 24 minus 11? - It's 10 to the 13th, right? - And then what does the denominator look like? - It's going to be the 6.37 squared times 10 - to the sixth squared. - So it's going to be-- whatever this is is going to be like 37 - or something-- times-- what's 10 to the sixth squared? - It's 10 to the 12th, right? - 10 to the 12th. - So let's figure out what 6.37 squared is. - This little calculator I have doesn't have squared, so I - have to-- so it's 40.58. - And so simplifying it, the force is equal to the mass of - Sal times-- let's divide, 39.82 divided by 40.58 is - equal to 9.81. - That's just this divided by this. - And then 10 to the 13th divided by 10 to the 12th. - Actually no, this isn't 9.81. - Sorry, it's 0.981. - 0.981, and then 10 to the 13th divided by 10 to the 12th is - just 10, right? - 10 to the first, times 10, so what's 0.981 times 10? - Well, the force is equal to 9.81 times the mass of Sal. - And where does this get us? - How can we figure out the acceleration right now? - Well, force is just mass times acceleration, right? - So that's also going to just be equal to the acceleration - of gravity-- that's supposed to be a small g there-- times - the mass of Sal, right? - So we know the gravitational force is 9.81 times the mass - of Sal, and we also know that that's the same thing as the - acceleration of gravity times the mass of Sal. - We can divide both sides by the mass of Sal, and we have - the acceleration of gravity. - And if we had used the units the whole way, you would have - seen that it is meters per second squared.
- And we have just shown that, at least based on the numbers - that they've given in Wikipedia, the acceleration of - gravity on the surface of the Earth is almost exactly what - we've been using in all the projectile motion problems. - It's 9.8 meters per second squared. - That's exciting. - So let's do another quick problem with gravity, because - I've got two minutes. - Let's say there's another planet called the - planet Small Earth. - And let's say the radius of Small Earth is equal to 1/2 - the radius of Earth and the mass of Small Earth is equal - to 1/2 the mass of Earth. - So what's the pull of gravity on any object, say same - object, on this? - How much smaller would it be on this planet? - Well, actually let me save that to the next video, - because I hate being rushed. - So I'll see you in the next video.
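As a quick check on the arithmetic in the transcript, here is a small Python sketch (not part of the video; the function name is mine). It computes the surface gravity g = G*M/r^2 from the same rounded Wikipedia values used above, and also evaluates the "Small Earth" teaser at the end:

# Sketch only, not from the video: surface gravity from Newton's law of gravitation.
G = 6.67e-11         # gravitational constant, m^3 / (kg s^2)
M_EARTH = 5.97e24    # mass of Earth, kg
R_EARTH = 6.371e6    # radius of Earth, m

def surface_gravity(mass_kg, radius_m):
    """Acceleration due to gravity at the surface, in m/s^2: g = G*M / r^2."""
    return G * mass_kg / radius_m**2

print(round(surface_gravity(M_EARTH, R_EARTH), 2))          # 9.81 m/s^2, as in the video

# "Small Earth": half the mass and half the radius of Earth.
# g scales as M / r^2, so (1/2) / (1/2)^2 = 2 -> gravity is twice as strong.
print(round(surface_gravity(M_EARTH / 2, R_EARTH / 2), 2))  # ~19.62 m/s^2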
http://www.khanacademy.org/science/physics/newton-gravitation/gravity-newtonian/v/introduction-to-newton-s-law-of-gravitation?topic=Physics&sort=1
13
51
Values and Norms
Values and norms are evaluative beliefs that synthesize affective and cognitive elements to orient people to the world in which they live. Their evaluative element makes them unlike existential beliefs, which focus primarily on matters of truth or falsehood, correctness or incorrectness. Their cognitive element makes them unlike motives that can derive from emotions or psychological drives. Values and norms involve cognitive beliefs of approval or disapproval. Although they tend to persist through time and therefore foster continuity in society and human personality, they also are susceptible to change (Moss and Susman 1980; Alwin 1994). The evaluative criteria represented in values and norms influence the behavior of subject units at multiple levels (e.g., individuals, organizations, and societies) as well as judgments about the behavior of others, which also can influence behavior. For example, values and norms affect the evaluation of individuals as suitable marriage partners and in that way influence marital behavior. Values and norms also affect evaluation of the governing policies and practices of societies and thus have an impact on diplomatic relations and the policies of one society’s government toward other societies.

Concept of Value
A value is a belief about the desirability of a mode, means, or end of action (Kluckhohn 1951; Schwartz and Bilsky 1987). It indicates the degree to which something is regarded as good versus bad. A value tends to be general rather than specific, transcending particular types of action and situations. As a general evaluative criterion, it is used to assess specific behaviors in specific situations. The evaluative criteria represented by values derive from conceptions of morality, aesthetics, and achievement. That is, a mode, means, or end of action can be regarded as good or bad for moral, aesthetic, or cognitive reasons and often for a combination of those reasons (Kluckhohn 1951; Parsons and Shils 1951). For example, being considerate of others may be valued positively (i.e., be viewed as desirable or good) for moral reasons, neatness may be valued positively for aesthetic reasons, and intelligence may be valued positively for cognitive reasons. Since the distinguishing characteristic of a value is evaluation as good or bad, a value that has a cognitive basis is a function of cognitive appraisal based on competency and achievement rather than on scientific or utilitarian grounds. For example, the choice of steel rather than iron to construct a building is a decision based on scientific or utilitarian criteria rather than on values.

The concept of a value must be differentiated from other concepts that appear to be similar. One of those concepts is a preference. A value may be thought of as a type of preference, but not all preferences are values. The distinctive characteristic of a value is that it is based on a belief about what is desirable rather than on mere liking. A preference for an equitable rather than inequitable distribution of rewards is a value, but a preference for vanilla rather than chocolate ice cream is not. The concept of a value also bears some similarity to the concept of an attitude. Some analysts have suggested that a value is a type of attitude (Fishbein and Ajzen 1975; Glenn 1980), but there are differences between the two concepts.
An attitude refers to an organization of several beliefs around a specific object or situation, whereas a value refers to a single belief of a specific kind: a belief about desirability that is based in conceptions of morality, aesthetics, or achievement and transcends specific behaviors and situations. Because of its generality, a value occupies a more central and hierarchically important place in human personality and cognitive structure than does an attitude. It is a determinant of attitudes as well as behavior. Thus, evaluations of numerous attitude objects and situations are based on a relatively small number of values. Not all attitudes, however, derive from values. For example, an attitude toward skiing may be based on the extent to which that sport is found to be enjoyable rather than on a value.

The concept of a value also differs from the concept of an interest in much the same way that it differs from the concept of an attitude, since an interest is a type of attitude that results in the directing of one’s attention and action toward a focal object or situation. As is true of attitudes more broadly, some interests derive from values but others do not.

The concept of a value also can be distinguished from the related concept of a motive. The basic property of a motive is the ability to induce valences (incentives) that may be positive or negative. A value has a motive property, involving a predisposition to act in a certain way, because it affects the evaluation of the expected consequences of an action and therefore the choice among possible alternatives; however, it is a less person-centered concept than a motive, which also encompasses emotions and drives. A value is a particular type of motive involving a belief about the desirability of an action that derives from an evaluation of that action’s expected consequences in a situation. A value is a distinctively human motive, unlike motives that operate at both the human and the infrahuman levels.

A value also differs from a need. Although both function as motives because of their ability to induce valences, a need is distinctive in being a requirement for the continued performance of an activity and the attainment of other valued outcomes (Emerson 1987). Some needs have a biological basis; others are psychological, often deriving from the persistent frustration of important goals. Although a value may arise from a need, becoming a cognitive transformation of that need, not all needs are transformed into values and not all values derive from needs. Needs also may derive from the structure of a situation, having a social or economic basis rather than a person-centered biological or psychological basis. For example, a need for income may cause an actor to behave in ways that conflict with his or her values. A need differs from a value in that the continued functioning of the actor and the acquisition of other valued outcomes are contingent on its being met. A need also differs from a value in that it implies a deficit that imposes a requirement, whereas a value implies motivation that is based on a belief about desirability.

Finally, a value can be differentiated from a goal. A value sometimes is thought of as a goal because goals are selected on the basis of values. However, some values focus on modes of action that are personal attributes, such as intelligence, rather than ends of action, or goals. Values are not goals of behavior.
They are evaluative criteria that are used to select goals and appraise the implications of action.

Concept of Norm
Like a value, a norm is an evaluative belief. Whereas a value is a belief about the desirability of behavior, a norm is a belief about the acceptability of behavior (Gibbs 1965; Marini 1984). A norm indicates the degree to which a behavior is regarded as right versus wrong, allowable versus unallowable. It is an evaluative criterion that specifies a rule of behavior, indicating what a behavior ought to be or ought not to be. A prescriptive norm indicates what should be done, and a proscriptive norm indicates what should not be done. Because a norm is a behavioral rule, it produces a feeling of obligation. A value, in contrast, produces a feeling of desirability, of attraction or repulsion. A norm also differs from a value in its degree of specificity. A norm is less general than a value because it indicates what should or should not be done in particular behavioral contexts. Whereas a value is a general evaluative criterion that transcends particular types of action and situations, a norm is linked directly to particular types of action and situations. For example, there may be a norm proscribing the killing of other human beings that is generally applicable except in situations such as war, self-defense, capital punishment, and euthanasia. Situational variability of this type sometimes is referred to as the conditionality of a norm. A norm, like a value, is generally applicable to the types of action and situations on which it focuses, but it is less general than a value because it is less likely to transcend particular types of action and situations. Because norms often derive from values, they have their basis in conceptions of morality, aesthetics, and achievement and often in a combination of those conceptions. The basis of a norm tends to affect its strength, or the importance attached to it. For example, a norm based in morality that differentiates right from wrong is likely to be considered more important than a norm based in aesthetics that differentiates the appropriate from the inappropriate, for example, in matters of dress or etiquette.

A norm, however, differs from a custom in much the same way that a value differs from a preference. A norm involves an evaluation of what an actor should do, whereas a custom involves an expectation of what an actor will do. It may be expected, for example, that people will drink coffee, but it is usually a matter of indifference whether they do. Drinking coffee is therefore a custom, not a norm; it is not based on a belief about what people ought to do.

The Structure of Values and Norms
Multiple values and norms are organized and linked in the cultures of human social systems and also are linked when they are internalized by individuals. Cultural ‘‘value orientations’’ organize and link values and norms to existential beliefs in general views that also might be called worldviews or ideologies (Kluckhohn 1951). They are sets of linked propositions embracing evaluative and existential elements that describe preferred or obligatory states. Values and norms are linked to and buttressed by existential beliefs about human nature, the human condition, interpersonal relations, the functioning of social organizations and societies, and the nature of the world. Since existential beliefs focus on what is true versus untrue, they are to some degree empirically based and verifiable.
In most of the early conceptual and theoretical work on values, values and norms were not differentiated clearly. Later, particularly as attempts to measure values and norms were made, the two concepts were routinely considered distinct, and studies focusing on them have been carried out separately since that time. As a result, the relationship between values and norms rarely has been analyzed theoretically or empirically. Values and norms are closely related because values usually provide the justification for norms. As beliefs about what is desirable and undesirable, values often are associated with normative beliefs that require or preclude certain behavior, establishing boundaries to indicate what is acceptable versus unacceptable. For example, the positive value attached to human safety and security is supported by norms that proscribe doing harm to other persons and their property. Not all values are supported by norms, however. Displaying personal competence in a variety of ways is positively valued, but norms do not always require it. Similarly, not all norms support values. For example, norms in regard to dress and etiquette can be quite arbitrary. Their existence may support values, but the specific rules of behavior they establish may not. Many cultural value orientations organize and link the values and norms that operate as evaluative criteria in human social systems. These orientations are learned and internalized by individuals in unique ways that vary with an individual’s personal characteristics and social history and the interaction between the two. Cultural value orientations and internalized individual value orientations are more comprehensive systems of values and norms than those activated as influences on particular types of behavior. The latent structure of values and norms that characterizes a social system or an individual can be thought of as a map or blueprint (Rokeach 1973). Only a portion of the map or blueprint that is immediately relevant to the behavioral choices being made is consulted, and the rest is ignored temporarily. Different subsets of values and norms that make up different portions of the map or blueprint are activated when different types of behavioral choices are made. For example, the values and norms relevant in choosing a mate differ from those relevant in deciding how to allocate one’s time among various activities. A characteristic of values and norms that is important for understanding their structure is the type of object unit to which they pertain, such as an individual, an organization, or a society. Values and norms establish what is desirable or acceptable for particular types of object units. For example, physical and psychological health are positively valued ends of action for individuals, and norms that proscribe or prescribe action to maintain or promote health govern individual action. Democracy, distributive justice, and world peace are positively valued ends of action for societies, and norms, usually in the form of laws, proscribe and prescribe certain actions on the part of a society’s institutions in support of those values. Individuals may value democracy, justice, and peace, but these are societal values, not individual values, since they pertain to the characteristics of societies, not to those of individuals. 
Differentiating values by their object units is important in conceptualizing and measuring values relevant to the explanation of behavior because correspondence between the actor, or subject unit, and the object unit determines the extent to which behavior by the actor is relevant to achieving a particular end. Individuals differentiate between personal and societal values because they do not have direct influence over social values, thus distinguishing their beliefs on the basis of whether they think those beliefs will lead to action (Braithwaite and Law 1985). As evaluative criteria, values and norms have the ability to induce valences (incentives). They affect evaluation of the behavior of others and involve a predisposition to act in a certain way because they affect the evaluation of the expected consequences of action. The evaluation that occurs on the basis of values and norms derives from two structural properties: the polarity, or directionality, of the value or norm and the standard of comparison that is used. Polarity→ The polarity of a value or norm is the direction of its valence, or motive force, which may be positive or negative. In the case of a value, something that is evaluated as desirable will have a positive valence, whereas something that is evaluated as undesirable will have a negative valence. In the case of a norm, something that should be done will have a positive valence, whereas something that should not be done will have a negative valence. Standard of Comparison→ A value or norm also is characterized by a standard, or level, of aspiration or expectation. This evaluative standard is a reference point with respect to which a behavior and its consequences are evaluated. A subject unit’s own action and that of others, as well as the ends that result or may result from action, are evaluated on the basis of whether they are above or below an evaluative standard. In the case of a value, the evaluative standard determines the neutral point on the value scale at or above which a behavior or its consequences will be evaluated as desirable and below which a behavior or its consequences will be evaluated as undesirable. In both economics and psychology, it has been recognized that there is a utility, or value, function that should be considered nonlinear (Marini provides a discussion of these developments), and there is empirical evidence that it generally is appropriate to assume the existence of a reference point on a utility, or value, scale. This reference point plays a critical role in producing a nonlinear relationship between the value scale and the objective continuum of behavior and its consequences. It has been observed that value functions change significantly at a certain point, which is often, although not always, zero. In the prospect theory of Kahneman and Tversky (1979), outcomes are expressed as positive or negative deviations from a neutral reference outcome that is assigned a value of zero. Kahneman and Tversky propose an S-shaped value function that is concave above the reference point and convex below it but less steep above than below. This function specifies that the effect of a marginal change decreases with the distance from the reference point in either direction but that the response to outcomes below the reference point is more extreme than is the response to outcomes above it. 
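To make the shape of such a value function concrete, here is a small illustrative Python sketch (not part of the original text). The exponent and the loss-aversion coefficient are assumptions, taken from estimates commonly cited from Tversky and Kahneman's later work; the asymmetry the sketch prints is the one discussed next.

# Illustrative prospect-theory-style value function, with the reference point at 0.
# Parameter values (0.88, 2.25) are assumptions based on commonly cited estimates.
def value(outcome, exponent=0.88, loss_aversion=2.25):
    """Subjective value of an outcome measured as a deviation from the reference point."""
    if outcome >= 0:
        return outcome ** exponent                      # concave above the reference point
    return -loss_aversion * ((-outcome) ** exponent)    # convex, and steeper, below it

for x in (100, 200, -100, -200):
    print(x, round(value(x), 1))
# 100 -> 57.5 and 200 -> 105.9: a gain twice as large feels less than twice as good.
# -100 -> -129.5: a loss is felt about 2.25 times as strongly as an equal gain.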
The asymmetry of the value function suggests a stronger aversion to what is evaluated as undesirable, an asymmetry that is consistent with an empirically observed aversion to loss. In the case of a norm, the evaluative standard is set by what is defined to be acceptable versus unacceptable. It is a level of expectation that is determined by the specific behaviors that are regarded as right versus wrong, appropriate versus inappropriate. An important difference between a value and a norm is that whereas there is a continuous, nonlinear relationship between a value scale and the objective continuum of behavior or its consequences above the neutral point set by the evaluative standard, this relationship is not expected between the scale of evaluation based on a normative criterion and the objective continuum of behavior. Because a normative standard establishes a boundary of acceptability or requirement that applies to all those covered by the norm, compliance with a normative expectation is not evaluated as a continuous variable on the basis of variation in behavior above the reference point set by the normative expectation. However, violation of a normative standard is evaluated as a continuous variable on the basis of variation in behavior below the reference point set by the standard. Negative deviations from the standard are likely to be evaluated in much the same way as are negative evaluations from the reference point on a value scale, which is convex below the reference point. Because of the strong aversion to what is evaluated as being below the reference standard, behavior that violates a normative standard is likely to be eliminated from consideration as an option. The level of aspiration or expectation that operates as an evaluative standard for an actor is socially determined to a large degree. It is a ‘‘comparison level’’ learned from others whom the actor takes as referents. As a result of variation in the characteristics of actors, the social environments to which they are exposed, and the interaction between those two factors, the evaluative standards associated with values and norms vary across actors. Even among actors in the same social environment, the evaluative standard is specific to the actor, although there may be a high degree of consensus about it in a social group. The evaluative standards associated with values and norms are subject to change in an individual actor. An important source of change is experience that affects the level of ability, knowledge, or accomplishment of an actor. For example, the evaluative standard for achievement values is affected by an actor’s level of achievement. There is evidence that people tend to raise their value standards with success and lower them with failure. Thus, as a worker learns a job, that worker’s ability to perform the job increases, as does the worker’s evaluative standard. A level of ability that once was aspired to and evaluated as ‘‘extremely good’’ may, after increases in the worker’s ability, come to be viewed as ‘‘mediocre’’ and below the worker’s current evaluative standard for expected performance. Experience also may affect the evaluative standard for norms. For example, there is evidence that the experience of divorce changes normative beliefs about divorce in the direction of increasing its acceptability (Thornton 1985). 
Another source of change in the evaluative standards associated with the values and norms of an actor is an increase in knowledge of the world that alters the existential beliefs connected with values and norms. The evaluative standards associated with values and norms vary not only among actors and over time for the same actor but also with the characteristics of other actors whose behavior is the object of evaluation. These characteristics may differentiate among actors or among the circumstances of the same actor at different times. For example, the value standard used by an adult to evaluate a child’s knowledge will vary for children who have completed different amounts of schooling, such as an elementary school student, a high school student, or a college student: The amount of knowledge evaluated as ‘‘very good’’ for an elementary school student will differ from that evaluated as ‘‘very good’’ for a student at a more advanced stage of schooling. Different value standards will be applied to different students and to the same student at different stages of schooling. Similarly, in a work organization, the value standard used to evaluate performance may vary for different categories of workers: Those with more experience may be evaluated according to a higher standard. Again, these different standards may be applied to different workers who are in different categories or to the same worker as he or she progresses from one category to another. Like a value standard, a normative standard may vary with the characteristics of other actors whose behavior is an object of evaluation. However, there is a difference between a value and a norm in this regard. Because a value is a continuous variable, variation in the value standard with the characteristics of the other actors whose behavior is being evaluated need not have implications for whether the value applies to those actors. In contrast, because a norm is a discrete variable that differentiates what is acceptable from what is unacceptable, variation in the evaluative standard of a norm with the characteristics of other actors whose behavior is being evaluated determines whether the norm applies to other actors with particular characteristics. This variability (that is, variability in whether a value or norm applies based on the characteristics of the actors being evaluated) is a dimension of the importance of a value or norm and is labeled its conditionality.

It is commonly recognized that values and norms differ in their priority, or importance, and that those differences are another aspect of the structure of values and norms. Differences in priority produce a structure that is to some degree hierarchical. Recognition that not all values are of equal importance has led to the use of ranking procedures to measure values (Allport et al. 1960; Rokeach 1973). These procedures have been criticized for forcing respondents to represent their values in a ranked order that does not allow for the possibility that some values may be of equal importance (Alwin and Krosnick 1985; Braithwaite and Law 1985). Although there is a hierarchy among values, there may be sets of values that occupy the same position in the hierarchy. The priority of a value or norm not only has implications for its influence on behavior but also may have implications for the probability that it will change, since values and norms of high priority have been argued to be less likely to change than are those of low priority.
The priority, or importance, of a value or norm can be assessed on a number of dimensions: Although these dimensions are conceptually different, they are likely to overlap empirically to a considerable degree. The extent to which and ways in which they overlap in reflecting the importance of a value or norm are not known. Strength→ The strength of a value or norm can be defined as the maximum strength of the force field it can induce. The strength of the valence reflects its hierarchical position in the latent map or blueprint that characterizes the structure of values and norms for a social system or an individual. Although the strength of a value or norm is likely to display considerable stability, it is also subject to change. At the level of the social system, it may change as a result of long-term changes in social organization and aspects of culture as well as precipitating events. As the social system changes, socializing influences on individuals change. Changes in the values and norms of individuals occur both over the life course (Glenn 1980; Alwin 1994) and as a result of differences between those who are born and move through life in different historical periods. The motivational force of a value at a particular time, however, is not necessarily the maximum strength of its latent force field, because attaining a valued outcome may reduce the subjective utility of additional units of that outcome as a result of diminishing marginal utility, or satiation. In the case of either a value or a norm, whether one attains an outcome also may alter the maximum strength of its latent force field. For example, if attainment is problematic, the importance of a value or norm may decline as a way of reducing cognitive dissonance. Centrality→ The centrality of a value or norm can be defined as the number and variety of behaviors or ends to which it applies. Because a central value or norm contributes more than does a peripheral one to the coherent organization and functioning of the total system, the disappearance of a central value or norm would make a greater difference to the total system than would the disappearance of a peripheral value or norm. A central value or norm is more resistant to change than is a peripheral value or norm; however, if change occurs, the more central the value or norm changed, the more widespread its repercussions (Rokeach 1973, 1985). For individuals and even for social groups, concern and responsibility for the well-being of others is a central value that pertains to a large number and variety of specific behaviors and ends. It is supported by a central proscriptive norm that one should not harm others and a central prescriptive norm that one should help others, particularly if they are in need. These norms pertain to a large number and variety of specific behaviors. In contrast, excitement and adventure are more peripheral values, affecting a smaller number and variety of specific behaviors and ends. In connection with these values, peripheral norms govern the carrying out of specific types of activities that may be sources of excitement and adventure, such as the rules governing sports and potentially dangerous recreational activities. For individuals, life values that pertain to the overall ends, or goals, of life along with the norms that support them tend to be more central than are the values and norms that pertain to particular life domains or social roles. 
Part of the reason for this is that life values affect whether particular life domains or social roles are entered into and the amounts of time and energy a person spends in different domains and roles. They also affect an individual’s domain- and role-specific values and norms. For example, life values include things such as attaining a high material standard of living, having meaningful family relationships and friendships, making the world a better place, and having a good time. Life values of this type are among the factors that influence entry into various life domains and roles, the activities in those domains and roles, and how much investment is made in each one (e.g., marriage, parenthood, employment, friendships, leisure activities and hobbies, community activities, religion). Values and norms pertaining to each of the domains and roles are to some degree a function of overall life values. For example, if an individual places a higher priority on making the world a better place than on material well-being, that individual’s employment values will place a higher priority on the possible influence and significance of the work performed than on the earnings derived from the work. Similarly, if an individual places a higher priority on meaningful relationships than on material well-being, marital values will place a higher priority on love and mutual respect than on the shared material standard of living.

Range→ The range of a value or norm can be defined as the number and variety of actors of a particular type of object unit (e.g., individuals, organizations, and societies) to which it applies. Whereas the dimension of centrality focuses on the characteristics of action and its ends (i.e., the number and variety of behaviors or ends to which a value or norm applies), the dimension of range focuses on the characteristics of actors (i.e., the number and variety of individuals or larger social units to which a value or norm applies). The characteristics of actors used to define the range of a value or norm tend to be ascriptive or group-defining characteristics of individuals or larger social units. In the case of individuals, these are characteristics such as age, sex, nationality, race, and ethnicity. A value or norm with a broad range applies to all actors of a particular type of object unit, whereas a value or norm with a narrow range applies to a very restricted category of actors of that type. For example, concern about and responsibility for the well-being of others is a value with a broad range that applies universally to individuals throughout the world. In contrast, wisdom is a value with a narrower range because although it applies throughout the world, it applies primarily to people of older ages. Similarly, the norm against incest has a broad range because it applies universally to individuals throughout the world. In contrast, the norm prescribing paid employment has a narrower range because it applies primarily to men in particular age categories.

Conditionality→ The conditionality of a value or norm can be defined as the number and variety of situations to which it applies. Whereas the dimension of centrality focuses on the characteristics of action or its ends and the dimension of range focuses on the characteristics of actors, the dimension of conditionality focuses on the characteristics of situations, including a situation’s actors.
When conditionality pertains to the characteristics of a situation’s actors, it usually refers to emergent or potentially changing characteristics of actors that define the situation rather than to ascriptive characteristics that define membership in social groups. Although values are less tied to specific types of situations than norms are, both values and norms vary in the degree to which they are conditioned on the characteristics of situations. For example, some values pertaining to modes of conduct, such as courtesy, cleanliness, and honesty, are applicable across most situations. Others are applicable in many fewer situations or may even be bipolar, with the polarity of the value being conditional on the situation. For example, aggressiveness is positively valued in some types of competitive situations, such as warfare and sports, but negatively valued in some types of cooperative situations, such as conversation and child rearing. The conditionality of a value or norm is evident when a given subject actor who is evaluating a given type of action or end of action makes different evaluations in different types of situations, that is, when the evaluation varies with the characteristics of the situation. For example, friendliness is valued positively, but it is a value characterized by some conditionality, since it is valued negatively when exhibited toward strangers in dangerous environments. Killing other human beings is normatively proscribed in almost all situations, but the norm has some conditionality because killing is not proscribed in warfare, self-defense, capital punishment, and euthanasia. In capital punishment and some types of warfare, killing actually is prescribed. Abortion is believed by some people to be normatively proscribed, and whether it is normatively proscribed often depends on the characteristics of the situation, including how conception occurred, whether the mother’s health is in danger, and whether the mother can care for the child. Opposition to abortion is therefore a norm of higher conditionality than is the proscription against killing other human beings. The conditionality of a value or norm is defined by the number and variety of situations to which it applies consistently, that is, with the same polarity. A value or norm that has the same polarity across many and varied types of situations is a value or norm of low conditionality and therefore of high priority. A value or norm that has the same polarity in only a few similar types of situations is a value or norm of high conditionality and low priority. Intent→ Whether a value applies to a mode, means, or end of action has been labeled its intent (Kluckhohn 1951). Mode values pertain to the manner or style in which an action is carried out and refer to both the action and the actor. They pertain to qualities manifested in the act, and if such qualities are observed consistently over time for a type of action or for an actor, they are applied not just to a single instance of action but to a type of action or to an actor more generally. Adjectives such as ‘‘intelligent,’’ ‘‘independent,’’ ‘‘creative,’’ ‘‘responsible,’’ ‘‘kind,’’ and ‘‘generous’’ describe mode values. Instrumental values focus on necessary means to other ends. They refer to action that constitutes the means or from which the means are derived. For example, a job and the earnings it provides may be viewed as means to other ends such as acquiring the material resources necessary to sustain life. 
Goal values, in contrast, pertain to self-sufficient, or autonomous, ends of action. They are not subordinate to other values and are what an actor values most. Some analysts have argued that they can be defined as what an actor desires without limit. They focus on sources of intrinsic satisfaction or happiness but are distinguished from pleasures, which, except when elevated to become goal values, are satisfactions that are enjoyed incidentally and along the way. Pleasures are not necessarily based on beliefs about desirability, since they can be based on mere liking. A norm may apply to a mode or means of action but not to an end of action. By requiring or prohibiting a way of acting or a type of action, norms limit the modes and means used in accomplishing ends. For example, the values of honesty and fairness govern modes and means of accomplishing ends, and associated with these values are norms that require honest and fair action. Values and norms cannot always be identified as falling into a single category of intent. For some types of action, mode values and norms and instrumental or goal values and norms overlap; choosing an action as a means or to directly achieve an end actually defines the mode of action. For example, accomplishing a task by a means that shows concern for others defines a mode of acting that is kind, considerate, polite, and caring. Choosing to accomplish a task by honest means defines a mode of acting honestly. Acting to achieve an end that benefits others defines a mode of acting that is caring, giving, and generous. Mode values and norms and instrumental or goal values and norms do not always overlap, however. A given mode may be applied to a variety of means and ends, and choosing a means or acting to achieve an end does not necessarily imply or define a mode. For example, for modes that reflect ability or competence, as described by adjectives such as ‘‘intelligent’’, ‘‘creative,’’ ‘‘efficient,’’ ‘‘courageous,’’ ‘‘organized,’’ and ‘‘self-reliant,’’ there may be no necessary connection or only a limited one between the values reflected in the mode and the values reflected in the acts undertaken as means or ends. Differentiating between instrumental values and goal values is difficult because the two types are interdependent. Their relationship is not just one of sequence, since achieving particular ends may require the use of certain means (Kluckhohn 1951; Fallding 1965). Differentiating between instrumental values and goal values also requires reflection by the actor. An important concern of moral philosophy has been identifying the end or ends of action that ultimately bring satisfaction to human beings, that is, that have genuine, intrinsic value (Lovejoy 1950). The focus has been on identifying important goal values and distinguishing them from less important instrumental values. This means–end distinction is not as well developed in the category systems of all cultures as it is in Western culture (Kluckhohn 1951), and even among persons exposed to Western culture, it is not developed equally or similarly in all actors. Not all actors make the distinction or make it in the same way. What are instrumental values to some actors are goal values to others. When mode, instrumental, and goal values are separable, they can all affect behavior. Sometimes they point to identical actions, and sometimes they do not. Similarly, when mode and instrumental norms are separable, both can affect behavior. 
Among values that can pertain to either means or ends, the distinction between instrumental and goal values is a dimension of importance, with goal values being of higher priority than instrumental values (Fallding 1965; Braithwaite and Law 1985). However, values that can pertain only to a mode or means are not necessarily of lower priority than are values that can pertain to ends. Because social structure, as defined both organizationally and culturally, links sets of values and norms, there are patterned relationships among the sets of values and norms held by actors. These relationships can be seen as being influenced by conceptual domain, dimensions of importance, behavioral context, and interdependence. Conceptual Domain→ Values and norms that are conceptually similar are thought of as falling within the same conceptual domain, and a conceptual domain is identified by the observation of strong empirical relationships among sets of values or norms. Domains that are conceptually distinct also can have relationships to one another. Compatible domains are positively related, and contradictory domains are negatively related. Empirical research provides some evidence of the existence of conceptual domains of values and norms and the relationships among them. For example, in Western societies, a value domain emphasizing pleasure, comfort, and enjoyment has a negative relationship to a prosocial value domain that emphasizes concern and responsibility for others. Similarly, a value domain emphasizing the extrinsic attainment of power, money, and position has a negative relationship to the prosocial value domain (Schwartz and Bilsky 1987). Values appear to be organized along at least three broad dimensions. Although there has been less research on the pattern of interrelationships among norms, evidence indicates that norms fall into conceptual domains. Norms pertaining to honesty, for example, are conceptually separable from norms pertaining to personal freedom in family matters, sexuality, and mortality. Dimensions of Importance→ Interrelationships among values and norms also are affected by dimensions of importance, since these dimensions affect their application across object units, social institutions, social roles, and behavioral contexts. Dimensions of importance such as centrality, range, and conditionality are linked to variability in application across object units, social institutions, and social roles. Values and norms that have high importance because they are broadly applicable are more likely to be interrelated than are values and norms that have low importance, which apply more narrowly. Values and norms that apply narrowly are related to each other and to values and norms that apply more broadly only under the conditions in which they apply. Behavioral Context→ Interrelationships among values and norms are influenced not only by conceptual domains and dimensions of importance but also by the behavioral contexts to which they apply. Values and norms that are relevant to the same or related behavioral contexts tend to be interrelated. For example, the values and norms that play a role in interpersonal relationships differ in some respects from those which play a role in educational and occupational performance. The value of concern for others and the norms that support it are of high priority in interpersonal relationships but can be of low priority in the performance of educational and occupational tasks.
Interdependence→ Socially structured or otherwise necessary links among modes, means, and ends of action are a source of interdependence among values and norms. Mode values and norms and instrumental or goal values and norms can overlap, and instrumental and goal values are interdependent when achieving particular ends requires the use of certain means. This interdependence constrains the extent to which the relative priority of values can affect action. For example, attaining a less highly valued means cannot be forgone to attain a more highly valued end if the end cannot be attained without the means. The Origin of Values and Norms Multiple values and norms are organized and linked in the cultures of human social systems, and they become linked to actors when they are internalized by human actors or institutionalized by corporate actors. Social values and norms, in contrast to personal, or internalized, values and norms, refer to the values and norms of a social unit that encompasses more than one person. These may refer to the officially stated or otherwise institutionalized values and norms of an organization or society, or to the collective, or shared, values and norms of the individuals who constitute a social unit such as an informal reference group, a formal organization, a society, or a societal subgroup defined by a shared characteristic. When a social value or norm refers to a collective property of the members of a social unit, it may be held with varying degrees of consensus by those who constitute that unit (Rossi and Berk 1985). An important difference between formal organizations and informal social groups or geographically defined social units is that formal organizations usually come into being for a specific purpose and are dedicated to particular types of activity and to achieving particular ends. As a result, their objectives are both narrower and more varied than those of other social units. The values and norms of individual persons derive from the social environments to which they are exposed. Through socialization, individuals become aware of and internalize social values and norms, which then become important internal determinants of action. An individual's internalized values and norms reflect the values and norms of the society and the various subgroups and organizations within that society to which that individual is exposed, particularly, although not exclusively, in the early stages of the life course. Once social values and norms are internalized, they can direct the behavior of individuals irrespective of external influences. Internalized values and norms are a source of self-expectations and a basis of self-evaluation, with the subjective response to an outcome ensuing from the self-concept. Adherence to self-expectations enhances self-esteem, producing a sense of pride and other favorable self-evaluations. Violation of self-expectations reduces self-esteem, producing guilt, self-depreciation, and other negative self-evaluations. To preserve a sense of self-worth and avoid negative self-evaluations, individuals try to behave in accordance with their internalized values and norms. Sociologists tend to see internalized values and norms as an important influence on human behavior, and they therefore see the social values and norms of society as governing and constraining the choices individuals make. Social values and norms also affect behavior because they are internalized by significant others and thus affect an actor's perception of other people's expectations.
To the extent that actors are motivated to comply with what they perceive the views of others to be, social values and norms become a source of external pressure that exerts an influence that is independent of an individual’s internalized values and norms. Although change in personal values and norms occurs over the life course, there is some evidence that levels of stability are relatively high (Moss and Susman 1980; Sears 1983; Alwin 1994). It has been argued that values and norms that are more closely tied to the self-concept and considered more important are more resistant to change (Rokeach 1973; Glenn 1980). Those values and norms may undergo less change because they are internalized through conditioning-like processes that begin early in life and are strongly linked to existential beliefs. They tend to be tied to shared mental models that are used to construct reality and become embedded central elements of cognitive organization with a strong affective basis. Some types of values, norms, and attitudes (for example, political attitudes) are quite malleable into early adulthood and then become relatively stable. After this ‘‘impressionable,’’ or ‘‘formative,’’ period when change is greatest, they are relatively stable in midlife, and this stability either persists or declines in the later years (Alwin et al. 1991; Alwin 1994). The pattern of life-course change and stability described above has been argued to be due to a number of influences. One is the process of biological and psychological maturation with age, which is most rapid in the early stages of life. As functional capacity develops, influences at that time have the advantage of primacy, and when they are consistent over a period of years, affective ‘‘mass’’ is built up. Nevertheless, some types of values, norms, and attitudes remain malleable into early adulthood, and strong pressure to change or weak earlier socialization can lead to resocialization in late adolescence or early adulthood (Sears 1981; Alwin et al. 1991). It is likely that change declines after early adulthood in part because individuals tend to act on previously formed values, norms, and attitudes as they seek new information and experiences. This selective structuring of new inputs enhances consistency over time, since new inputs tend to reinforce rather than call into question earlier ones. Another influence on life-course change and stability in values and norms is change in social experiences and roles over the life course (Wells and Stryker 1988; Elder and Caspi 1990). These changes are extensive during the transitional years of early adulthood and may increase after retirement. They represent opportunities for change because they bring the individual into contact with new individuals, reference groups, and situations, and change in values and norms is likely to occur through both interaction with others and adaptation to situations. Role change can produce change as a role occupant engages in new behaviors, is exposed to new circumstances and information, and learns the norms governing role behavior. After early adulthood, a decline in the number of changes in social experiences and roles leads to greater stability in values and norms. Change in social values and norms occurs through a variety of processes. One influence is historical change in the conditions of life that occurs through technological innovation, alterations in economic and social organization, and change in cultural ideas and forms. 
Historical change by definition involves ‘‘period effects,’’ but because those effects tend to be experienced differently by different birth cohorts (i.e., those at different ages when a historical change occurs), the influence of historical change on social values and norms occurs to some degree through a process of cohort succession. Change in social values and norms also occurs through change in the social values and norms of subgroups of social units. This change can be of several types. First, change in the presence and size of subgroups with different values and norms produces change in the collective values and norms of the group. For example, the presence of new immigrant groups with different values and norms or a change in the relative size of groups with different values and norms affects the values and norms of the collective unit. Second, change in the degree of similarity or difference in the values and norms of subgroups can produce change in overall values and norms. On the one hand, acculturation through intergroup contact and similar experiences will reduce the distinctiveness of subcultural groups; on the other hand, segregation and increasing divergence in the life experiences of subgroups will widen their cultural distinctiveness. Third, some subcultural groups may be more subject to particular period influences than others are, and this differential responsiveness can increase or decrease differences in values and norms among subgroups. Another source of change in social values and norms is change in exposure to social organizations that exert distinct socializing influences. For example, exposure to religious, educational, or work organizations may produce differences in values and norms between those with such exposure and those without it. The extent to which exposure to different organizational environments is likely to affect personal values and norms depends on the distinctiveness of those environments, which also is subject to change. Thus, social values and norms are affected by both changes in the exposure of the population to different organizations and changes in what is socialized by those organizations. The Role of Values and Norms in Explaining Behavior The ways in which values and norms influence behavior must be understood in a larger explanatory framework, and models of purposive action in all the social sciences provide that framework (Marini 1992). These models rest on the assumption that actors are purposive, acting in ways that tend to produce beneficial results. Although the models of purposive action that have emerged in various social sciences differ in the nature of the assumptions made about purposive action, they share the basic proposition that people are motivated to achieve pleasure and avoid pain and that this motivation leads them to act in ways that, at least within the limits of the information they possess and their ability to predict the future, can be expected to yield greater reward than cost. If reward and cost are defined subjectively and individuals are assumed to act in the service of subjective goals, this proposition links subjective utility, or value, to action. In sociology, a model of purposive action assumes the existence of actors who may be either persons or corporate actors. 
The usefulness of these models in sociology hinges on making appropriate connections between the characteristics of social systems and the behavior of actors (the macro–micro connection) and between the behavior of actors and the systemic outcomes that emerge from the combined actions of multiple actors (the micro–macro connection). In a model of purposive action, an individual actor (person or corporate actor) is assumed to make choices among alternative actions structured by the social system. Choices among those actions are based on the outcomes expected to ensue from those actions, to which the actor attaches some utility, or value, and which the actor expects with some probability. The choices of the actor are governed by beliefs of three types. The choices of the actor also are governed by the actor's preferences, or the subjective utility (rewards and costs) of the consequences expected to result from each alternative. Values and norms are among the preferences of an actor that influence action. As evaluative beliefs that synthesize affective and cognitive elements, they affect the utility of the outcomes expected to ensue from an action. Action often results not from a conscious weighing of the expected future benefits of alternatives but from a less deliberate response to internalized or institutionalized values and norms (Emerson 1987). The actor's finite resources—the human, cultural, social, and material capital available to the actor that enables or precludes action—operate as influences on the choices made by the actor. The component of a model of purposive action that makes the macro–micro connection links the characteristics of the social system to the behavior of actors and models the effects of social structure (both organizational and cultural) on the beliefs and preferences of actors as well as on the available alternatives for action and actors' resources. In this component of the model, characteristics of the micro model are taken as problematic and to be explained. These characteristics include the actors' beliefs and preferences, the alternatives available for action, and the actors' resources. A third component of a model of purposive action makes the micro–macro connection, linking the behavior of individual actors to the systemic outcomes that emerge from the combined actions of multiple actors. This link may occur through a simple mechanism such as aggregation, but it is more likely that outcomes emerge through a complex interaction in which the whole is not just the sum of its parts. The action, or behavior, of the system is usually an emergent consequence of the interdependent actions of the actors that compose it.
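To make the purposive-action framework above concrete, here is a minimal sketch of subjective expected-utility choice. Everything in it is invented for illustration (the action names, outcome probabilities, and utility numbers); it is not drawn from the sociological literature, and it ignores resources and socially structured alternatives.

```python
# Toy sketch of a purposive-action model: an actor chooses the alternative
# whose expected subjective utility (rewards minus costs) is highest.
# All numbers and action names below are invented for illustration.

actions = {
    # action: list of (probability of outcome, subjective utility of outcome)
    "comply_with_norm": [(0.9, 5.0), (0.1, -1.0)],    # pride vs. a small cost
    "violate_norm":     [(0.5, 8.0), (0.5, -10.0)],   # gain vs. guilt/sanction
}

def expected_utility(outcomes):
    """Probability-weighted sum of subjective utilities."""
    return sum(p * u for p, u in outcomes)

best_action = max(actions, key=lambda a: expected_utility(actions[a]))
for a, outcomes in actions.items():
    print(f"{a}: expected utility = {expected_utility(outcomes):.2f}")
print("chosen action:", best_action)
```

In this toy setup the actor complies with the norm because the expected self-evaluative reward outweighs the expected gain from violation; changing the invented numbers changes the choice, which is all the sketch is meant to show.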
General equilibrium theory is a branch of theoretical economics. It seeks to explain the behavior of supply, demand, and prices in a whole economy with several or many interacting markets, by seeking to prove that a set of prices exists that will result in an overall equilibrium, hence general equilibrium, in contrast to partial equilibrium, which only analyzes single markets. As with all models, this is an abstraction from a real economy; it is proposed as being a useful model, both by considering equilibrium prices as long-term prices and by considering actual prices as deviations from equilibrium. General equilibrium theory both studies economies using the model of equilibrium pricing and seeks to determine in which circumstances the assumptions of general equilibrium will hold. The theory dates to the 1870s, particularly the work of French economist Léon Walras in his pioneering 1874 work Elements of Pure Economics. It is often assumed that agents are price takers, and under that assumption two common notions of equilibrium exist: Walrasian (or competitive) equilibrium and its generalization, a price equilibrium with transfers. Broadly speaking, general equilibrium tries to give an understanding of the whole economy using a "bottom-up" approach, starting with individual markets and agents. Macroeconomics, as developed by the Keynesian economists, focused on a "top-down" approach, where the analysis starts with larger aggregates, the "big picture". Therefore, general equilibrium theory has traditionally been classified as part of microeconomics. The difference is not as clear as it used to be, since much of modern macroeconomics has emphasized microeconomic foundations and has constructed general equilibrium models of macroeconomic fluctuations. General equilibrium macroeconomic models usually have a simplified structure that only incorporates a few markets, like a "goods market" and a "financial market". In contrast, general equilibrium models in the microeconomic tradition typically involve a multitude of different goods markets. They are usually complex and require computers to help with numerical solutions. In a market system the prices and production of all goods, including the price of money and interest, are interrelated. A change in the price of one good, say bread, may affect another price, such as bakers' wages. If bakers differ in tastes from others, the demand for bread might be affected by a change in bakers' wages, with a consequent effect on the price of bread. Calculating the equilibrium price of just one good, in theory, requires an analysis that accounts for all of the millions of different goods that are available. The first attempt in neoclassical economics to model prices for a whole economy was made by Léon Walras. Walras' Elements of Pure Economics provides a succession of models, each taking into account more aspects of a real economy (two commodities, many commodities, production, growth, money). Some (for example, Eatwell (1989); see also Jaffe (1953)) think Walras was unsuccessful and that the later models in this series are inconsistent. In particular, Walras's model was a long-run model in which prices of capital goods are the same whether they appear as inputs or outputs and in which the same rate of profits is earned in all lines of industry. This is inconsistent with the quantities of capital goods being taken as data.
But when Walras introduced capital goods in his later models, he took their quantities as given, in arbitrary ratios. (In contrast, Kenneth Arrow and Gérard Debreu continued to take the initial quantities of capital goods as given, but adopted a short run model in which the prices of capital goods vary with time and the own rate of interest varies across capital goods.) Walras was the first to lay down a research program much followed by 20th-century economists. In particular, the Walrasian agenda included the investigation of when equilibria are unique and stable. (Walras himself, in Lesson 7, showed that neither uniqueness, nor stability, nor even existence of an equilibrium is guaranteed.) Walras also proposed a dynamic process by which general equilibrium might be reached, that of the tâtonnement or groping process. The tâtonnement process is a model for investigating stability of equilibria. Prices are announced (perhaps by an "auctioneer"), and agents state how much of each good they would like to offer (supply) or purchase (demand). No transactions and no production take place at disequilibrium prices. Instead, prices are lowered for goods with positive prices and excess supply. Prices are raised for goods with excess demand. The question for the mathematician is under what conditions such a process will terminate in equilibrium where demand equates to supply for goods with positive prices and demand does not exceed supply for goods with a price of zero. Walras was not able to provide a definitive answer to this question (see Unresolved Problems in General Equilibrium below). In partial equilibrium analysis, the determination of the price of a good is simplified by just looking at the price of one good, and assuming that the prices of all other goods remain constant. The Marshallian theory of supply and demand is an example of partial equilibrium analysis. Partial equilibrium analysis is adequate when the first-order effects of a shift in the demand curve do not shift the supply curve. Anglo-American economists became more interested in general equilibrium in the late 1920s and 1930s after Piero Sraffa's demonstration that Marshallian economists cannot coherently account for the forces thought to produce the upward slope of the supply curve for a consumer good. If an industry uses little of a factor of production, a small increase in the output of that industry will not bid the price of that factor up. To a first-order approximation, firms in the industry will not experience increasing costs, and the industry supply curves will not slope up. If an industry uses an appreciable amount of that factor of production, an increase in the output of that industry will exhibit increasing costs. But such a factor is likely to be used in substitutes for the industry's product, and an increased price of that factor will have effects on the supply of those substitutes. Consequently, Sraffa argued, the first-order effects of a shift in the demand curve of the original industry under these assumptions include a shift in the supply curve of substitutes for that industry's product, and consequent shifts in the original industry's supply curve. General equilibrium is designed to investigate such interactions between markets. Continental European economists made important advances in the 1930s. Walras' proofs of the existence of general equilibrium often were based on the counting of equations and variables.
Such arguments are inadequate for non-linear systems of equations and do not imply that equilibrium prices and quantities cannot be negative, a meaningless solution for his models. The replacement of certain equations by inequalities and the use of more rigorous mathematics improved general equilibrium modeling.
Modern concept of general equilibrium in economics
The modern conception of general equilibrium is provided by a model developed jointly by Kenneth Arrow, Gérard Debreu and Lionel W. McKenzie in the 1950s. Gérard Debreu presents this model in Theory of Value (1959) as an axiomatic model, following the style of mathematics promoted by Bourbaki. In such an approach, the interpretation of the terms in the theory (e.g., goods, prices) is not fixed by the axioms. Three important interpretations of the terms of the theory have often been cited. First, suppose commodities are distinguished by the location where they are delivered. Then the Arrow-Debreu model is a spatial model of, for example, international trade. Second, suppose commodities are distinguished by when they are delivered. That is, suppose all markets equilibrate at some initial instant of time. Agents in the model purchase and sell contracts, where a contract specifies, for example, a good to be delivered and the date at which it is to be delivered. The Arrow–Debreu model of intertemporal equilibrium contains forward markets for all goods at all dates. No markets exist at any future dates. Third, suppose contracts specify states of nature which affect whether a commodity is to be delivered: "A contract for the transfer of a commodity now specifies, in addition to its physical properties, its location and its date, an event on the occurrence of which the transfer is conditional. This new definition of a commodity allows one to obtain a theory of [risk] free from any probability concept..." (Debreu, 1959) These interpretations can be combined. So the complete Arrow–Debreu model can be said to apply when goods are identified by when they are to be delivered, where they are to be delivered and under what circumstances they are to be delivered, as well as their intrinsic nature. So there would be a complete set of prices for contracts such as "1 ton of Winter red wheat, delivered on 3rd of January in Minneapolis, if there is a hurricane in Florida during December". A general equilibrium model with complete markets of this sort seems to be a long way from describing the workings of real economies; however, its proponents argue that it is still useful as a simplified guide to how real economies function. Some of the recent work in general equilibrium has in fact explored the implications of incomplete markets, which is to say an intertemporal economy with uncertainty, where there do not exist sufficiently detailed contracts that would allow agents to fully allocate their consumption and resources through time. While it has been shown that such economies will generally still have an equilibrium, the outcome may no longer be Pareto optimal. The basic intuition for this result is that if consumers lack adequate means to transfer their wealth from one time period to another and the future is risky, there is nothing to necessarily tie any price ratio down to the relevant marginal rate of substitution, which is the standard requirement for Pareto optimality.
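The link just described between price ratios and marginal rates of substitution is easiest to see in a complete-markets benchmark. The following sketch is an invented two-consumer, two-good Cobb-Douglas exchange economy (the preference parameters and endowments are made up, and Cobb-Douglas is chosen only because its demands have a closed form): at the competitive equilibrium every consumer's marginal rate of substitution equals the common price ratio, so the marginal rates of substitution are equalized across consumers, which is the interior tangency condition behind the First Welfare Theorem discussed below. Incomplete markets are precisely what break this tie.

```python
# Minimal illustration of the First Welfare Theorem in an invented 2-agent,
# 2-good Cobb-Douglas exchange economy.  At the competitive equilibrium each
# consumer's marginal rate of substitution (MRS) equals the price ratio, so
# MRS are equalized across consumers -- the interior tangency condition for
# Pareto efficiency.  All parameters are made up for illustration.

agents = [
    {"alpha": 0.2, "endowment": (1.0, 2.0)},   # alpha = budget share of good 1
    {"alpha": 0.6, "endowment": (2.0, 1.0)},
]

# Equilibrium price of good 2 with good 1 as numeraire (p1 = 1), in closed
# form: aggregate Cobb-Douglas demand for good 1 must equal its total endowment.
E1 = sum(a["endowment"][0] for a in agents)
num = E1 - sum(a["alpha"] * a["endowment"][0] for a in agents)
den = sum(a["alpha"] * a["endowment"][1] for a in agents)
p1, p2 = 1.0, num / den

totals = [0.0, 0.0]
for i, a in enumerate(agents):
    e1, e2 = a["endowment"]
    wealth = p1 * e1 + p2 * e2
    x1 = a["alpha"] * wealth / p1            # demand for good 1
    x2 = (1 - a["alpha"]) * wealth / p2      # demand for good 2
    mrs = (a["alpha"] / (1 - a["alpha"])) * (x2 / x1)
    totals[0] += x1
    totals[1] += x2
    print(f"agent {i}: allocation = ({x1:.3f}, {x2:.3f}), MRS = {mrs:.3f}")

print(f"markets clear: total demand = {tuple(round(t, 6) for t in totals)}")
print(f"price ratio p1/p2 = {p1 / p2:.3f} (equal to every agent's MRS)")
```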
Under some conditions the economy may still be constrained Pareto optimal, meaning that a central authority limited to the same type and number of contracts as the individual agents may not be able to improve upon the outcome; what would be needed is the introduction of a full set of possible contracts. Hence, one implication of the theory of incomplete markets is that inefficiency may be a result of underdeveloped financial institutions or credit constraints faced by some members of the public. Research still continues in this area.
Properties and characterization of general equilibrium
Basic questions in general equilibrium analysis are concerned with the conditions under which an equilibrium will be efficient, which efficient equilibria can be achieved, when an equilibrium is guaranteed to exist and when the equilibrium will be unique and stable.
First Fundamental Theorem of Welfare Economics
The First Fundamental Welfare Theorem asserts that market equilibria are Pareto efficient. In a pure exchange economy, a sufficient condition for the first welfare theorem to hold is that preferences be locally nonsatiated. The first welfare theorem also holds for economies with production regardless of the properties of the production function. Implicitly, the theorem assumes complete markets and perfect information. In an economy with externalities, for example, it is possible for equilibria to arise that are not efficient. The first welfare theorem is informative in the sense that it points to the sources of inefficiency in markets. Under the assumptions above, any market equilibrium is tautologically efficient. Therefore, when equilibria arise that are not efficient, the market system itself is not to blame, but rather some sort of market failure.
Second Fundamental Theorem of Welfare Economics
While every equilibrium is efficient, it is clearly not true that every efficient allocation of resources will be an equilibrium. However, the second theorem states that every efficient allocation can be supported by some set of prices. In other words, all that is required to reach a particular outcome is a redistribution of initial endowments of the agents, after which the market can be left alone to do its work. This suggests that the issues of efficiency and equity can be separated and need not involve a trade-off. The conditions for the second theorem are stronger than those for the first, as consumers' preferences now need to be convex (convexity roughly corresponds to the idea of diminishing rates of marginal substitution, or to preferences where "averages are better than extrema"). Even though every equilibrium is efficient, neither of the above two theorems says anything about the equilibrium existing in the first place. To guarantee that an equilibrium exists, it suffices that consumer preferences be convex (although with enough consumers this assumption can be relaxed both for existence and the second welfare theorem). Similarly, but less plausibly, convex feasible production sets suffice for existence; convexity excludes economies of scale. Proofs of the existence of equilibrium traditionally rely on fixed-point theorems such as the Brouwer fixed-point theorem for functions (or, more generally, the Kakutani fixed-point theorem for set-valued functions).
In fact, the converse holds, according to Uzawa's derivation of Brouwer's fixed point theorem from Walras's law. Following Uzawa's theorem, many mathematical economists consider proving existence a deeper result than proving the two Fundamental Theorems.
Nonconvexities in large economies
Ross M. Starr (1969) applied the Shapley–Folkman–Starr theorem to prove that even without convex preferences there exists an approximate equilibrium. The Shapley–Folkman–Starr results bound the distance from an "approximate" economic equilibrium to an equilibrium of a "convexified" economy, when the number of agents exceeds the dimension of the goods. Following Starr's paper, the Shapley–Folkman–Starr results were "much exploited in the theoretical literature", according to Guesnerie (page 112), who wrote the following: some key results obtained under the convexity assumption remain (approximately) relevant in circumstances where convexity fails. For example, in economies with a large consumption side, nonconvexities in preferences do not destroy the standard results of, say, Debreu's theory of value. In the same way, if indivisibilities in the production sector are small with respect to the size of the economy, [ . . . ] then standard results are affected in only a minor way. (page 99) To this text, Guesnerie appended the following footnote: The derivation of these results in general form has been one of the major achievements of postwar economic theory. Although generally (assuming convexity) an equilibrium will exist and will be efficient, the conditions under which it will be unique are much stronger. While the issues are fairly technical, the basic intuition is that the presence of wealth effects (which is the feature that most clearly delineates general equilibrium analysis from partial equilibrium) generates the possibility of multiple equilibria. When a price of a particular good changes, there are two effects. First, the relative attractiveness of various commodities changes; and second, the wealth distribution of individual agents is altered. These two effects can offset or reinforce each other in ways that make it possible for more than one set of prices to constitute an equilibrium. A result known as the Sonnenschein–Mantel–Debreu theorem states that the aggregate (excess) demand function inherits only certain properties of individuals' demand functions, and that these (continuity, homogeneity of degree zero, Walras' law and boundary behavior when prices are near zero) are the only real restrictions one can expect from an aggregate excess demand function: any such function can be rationalized as the excess demand of an economy. In particular, uniqueness of equilibrium should not be expected. There has been much research on conditions under which the equilibrium will be unique, or which at least limit the number of equilibria. One result states that under mild assumptions the number of equilibria will be finite (see regular economy) and odd (see index theorem). Furthermore, if an economy as a whole, as characterized by an aggregate excess demand function, has the revealed preference property (which is a much stronger condition than revealed preferences for a single individual) or the gross substitute property, then likewise the equilibrium will be unique. All methods of establishing uniqueness can be thought of as establishing that each equilibrium has the same positive local index, in which case by the index theorem there can be but one such equilibrium.
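Two of the properties listed above (homogeneity of degree zero and Walras' law), and the role of the gross substitutes condition in delivering uniqueness, can be spot-checked on the same toy Cobb-Douglas economy used in the earlier sketch. This is only an illustration with invented parameters; Cobb-Douglas demand happens to satisfy gross substitutability, so this particular economy has a unique equilibrium, which says nothing about the general case discussed in the text.

```python
# Spot-check, on the toy Cobb-Douglas exchange economy used earlier
# (invented parameters), of two properties aggregate excess demand always
# has -- homogeneity of degree zero and Walras' law -- plus a brute-force
# look at why *this* economy has a unique equilibrium: with good 1 as
# numeraire, excess demand for good 2 is strictly decreasing in its price
# (a gross-substitutes feature of Cobb-Douglas demand), so it can cross
# zero at most once.  None of this is a general theorem.

agents = [
    {"alpha": 0.2, "endowment": (1.0, 2.0)},   # alpha = budget share of good 1
    {"alpha": 0.6, "endowment": (2.0, 1.0)},
]

def excess_demand(p1, p2):
    """Aggregate demand minus aggregate endowment, per good."""
    z1 = z2 = 0.0
    for a in agents:
        e1, e2 = a["endowment"]
        wealth = p1 * e1 + p2 * e2
        z1 += a["alpha"] * wealth / p1 - e1
        z2 += (1 - a["alpha"]) * wealth / p2 - e2
    return z1, z2

# Homogeneity of degree zero: scaling all prices leaves excess demand unchanged.
z = excess_demand(1.0, 0.7)
z_scaled = excess_demand(5.0, 3.5)
print("homogeneous of degree zero:",
      all(abs(a - b) < 1e-9 for a, b in zip(z, z_scaled)))

# Walras' law: the value of excess demand is zero at any strictly positive prices.
print("Walras' law:", abs(1.0 * z[0] + 0.7 * z[1]) < 1e-9)

# Monotonicity in this example: z2 falls as p2 rises, so at most one equilibrium.
grid = [0.2 * k for k in range(1, 51)]                 # p2 from 0.2 to 10.0
z2_values = [excess_demand(1.0, p2)[1] for p2 in grid]
print("z2 strictly decreasing on grid:",
      all(a > b for a, b in zip(z2_values, z2_values[1:])))
```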
Given that equilibria may not be unique, it is of some interest to ask whether any particular equilibrium is at least locally unique. If so, then comparative statics can be applied as long as the shocks to the system are not too large. As stated above, in a regular economy equilibria will be finite, hence locally unique. One reassuring result, due to Debreu, is that "most" economies are regular. Recent work by Michael Mandler (1999) has challenged this claim. The Arrow-Debreu-McKenzie model is neutral between models of production functions as continuously differentiable and as formed from (linear combinations of) fixed coefficient processes. Mandler accepts that, under either model of production, the initial endowments will not be consistent with a continuum of equilibria, except for a set of Lebesgue measure zero. However, endowments change with time in the model and this evolution of endowments is determined by the decisions of agents (e.g., firms) in the model. Agents in the model have an interest in equilibria being indeterminate: "Indeterminacy, moreover, is not just a technical nuisance; it undermines the price-taking assumption of competitive models. Since arbitrary small manipulations of factor supplies can dramatically increase a factor's price, factor owners will not take prices to be parametric." (Mandler 1999, p. 17) When technology is modeled by (linear combinations of) fixed coefficient processes, optimizing agents will drive endowments to be such that a continuum of equilibria exist: "The endowments where indeterminacy occurs systematically arise through time and therefore cannot be dismissed; the Arrow-Debreu-McKenzie model is thus fully subject to the dilemmas of factor price theory." (Mandler 1999, p. 19) Critics of the general equilibrium approach have questioned its practical applicability based on the possibility of non-uniqueness of equilibria. Supporters have pointed out that this aspect is in fact a reflection of the complexity of the real world and hence an attractive realistic feature of the model. In a typical general equilibrium model the prices that prevail "when the dust settles" are simply those that coordinate the demands of various consumers for various goods. But this raises the question of how these prices and allocations have been arrived at, and whether any (temporary) shock to the economy will cause it to converge back to the same outcome that prevailed before the shock. This is the question of stability of the equilibrium, and it can be readily seen that it is related to the question of uniqueness. If there are multiple equilibria, then some of them will be unstable. Then, if an equilibrium is unstable and there is a shock, the economy will wind up at a different set of allocations and prices once the convergence process terminates. However, stability depends not only on the number of equilibria but also on the type of the process that guides price changes (for a specific type of price adjustment process, see the tâtonnement process described above). Consequently, some researchers have focused on plausible adjustment processes that guarantee system stability, i.e., that guarantee convergence of prices and allocations to some equilibrium. When more than one stable equilibrium exists, where one ends up will depend on where one begins.
Unresolved problems in general equilibrium
Research building on the Arrow–Debreu–McKenzie model has revealed some problems with the model.
The Sonnenschein-Mantel-Debreu results show that, essentially, any restrictions on the shape of excess demand functions are stringent. Some think this implies that the Arrow-Debreu model lacks empirical content. At any rate, Arrow-Debreu-McKenzie equilibria cannot be expected to be unique, or stable. A model organized around the tâtonnement process has been said to be a model of a centrally planned economy, not a decentralized market economy. Some research has tried to develop general equilibrium models with other processes. In particular, some economists have developed models in which agents can trade at out-of-equilibrium prices and such trades can affect the equilibria to which the economy tends. Particularly noteworthy are the Hahn process, the Edgeworth process and the Fisher process. The data determining Arrow-Debreu equilibria include initial endowments of capital goods. If production and trade occur out of equilibrium, these endowments will be changed, further complicating the picture. "In a real economy, however, trading, as well as production and consumption, goes on out of equilibrium. It follows that, in the course of convergence to equilibrium (assuming that occurs), endowments change. In turn this changes the set of equilibria. Put more succinctly, the set of equilibria is path dependent... [This path dependence] makes the calculation of equilibria corresponding to the initial state of the system essentially irrelevant. What matters is the equilibrium that the economy will reach from given initial endowments, not the equilibrium that it would have been in, given initial endowments, had prices happened to be just right" (Franklin Fisher, as quoted by Petri (2004)). The Arrow–Debreu model in which all trade occurs in futures contracts at time zero requires a very large number of markets to exist. It is equivalent under complete markets to a sequential equilibrium concept in which spot markets for goods and assets open at each date-state event (they are not equivalent under incomplete markets); market clearing then requires that the entire sequence of prices clears all markets at all times. A generalization of the sequential market arrangement is the temporary equilibrium structure, where market clearing at a point in time is conditional on expectations of future prices which need not be market clearing ones. Although the Arrow–Debreu–McKenzie model is set out in terms of some arbitrary numéraire, the model does not encompass money. Frank Hahn, for example, has investigated whether general equilibrium models can be developed in which money enters in some essential way. One of the essential questions he introduces, often referred to as Hahn's problem, is: "Can one construct an equilibrium where money has value?" The goal is to find models in which existence of money can alter the equilibrium solutions, perhaps because the initial position of agents depends on monetary prices. Some critics of general equilibrium modeling contend that much research in these models constitutes exercises in pure mathematics with no connection to actual economies. "There are endeavors that now pass for the most desirable kind of economic contributions although they are just plain mathematical exercises, not only without any economic substance but also without any mathematical value" (Nicholas Georgescu-Roegen 1979). Georgescu-Roegen cites as an example a paper that assumes more traders in existence than there are points in the set of real numbers.
Although modern models in general equilibrium theory demonstrate that under certain circumstances prices will indeed converge to equilibria, critics hold that the assumptions necessary for these results are extremely strong. As well as stringent restrictions on excess demand functions, the necessary assumptions include perfect rationality of individuals; complete information about all prices both now and in the future; and the conditions necessary for perfect competition. However, some results from experimental economics suggest that even in circumstances where there are few, imperfectly informed agents, the resulting prices and allocations may wind up resembling those of a perfectly competitive market (although certainly not a stable general equilibrium in all markets). Frank Hahn defends general equilibrium modeling on the grounds that it provides a negative function. General equilibrium models show what the economy would have to be like for an unregulated economy to be Pareto efficient.
Computing general equilibrium
Until the 1970s general equilibrium analysis remained theoretical. With advances in computing power and the development of input-output tables, it became possible to model national economies, or even the world economy, and attempts were made to solve for general equilibrium prices and quantities empirically. Applied general equilibrium (AGE) models were pioneered by Herbert Scarf in 1967, and offered a method for solving the Arrow-Debreu general equilibrium system in a numerical fashion. This was first implemented by John Shoven and John Whalley (students of Scarf at Yale) in 1972 and 1973, and AGE models were a popular method up through the 1970s. In the 1980s, however, AGE models faded from popularity due to their inability to provide a precise solution and their high cost of computation. Also, Scarf's method was proven non-computable to a precise solution by Velupillai (2006). (See the AGE model article for the full references.) Computable general equilibrium (CGE) models surpassed and replaced AGE models in the mid-1980s, as the CGE model was able to provide relatively quick and large computable models for a whole economy, and became the preferred method of governments and the World Bank. CGE models are heavily used today, and while 'AGE' and 'CGE' are used interchangeably in the literature, Scarf-type AGE models have not been constructed since the mid-1980s, and the current CGE literature is not based on Arrow-Debreu and general equilibrium theory as discussed in this article. CGE models, and what is today referred to as AGE models, are based on static, simultaneously solved, macro balancing equations (from the standard Keynesian macro model), giving a precise and explicitly computable result (Mitra-Kahn 2008).
Other schools
General equilibrium theory is a central point of contention and influence between the neoclassical school and other schools of economic thought, and different schools have varied views on general equilibrium theory. Some, such as the Keynesian and Post-Keynesian schools, strongly reject general equilibrium theory as "misleading" and "useless"; others, such as the Austrian school, show more influence and acceptance of general equilibrium thinking, though the extent is debated. Other schools, such as new classical macroeconomics, developed from general equilibrium theory.
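Before turning to how those schools assess the theory, the computational thread above can be made concrete. The sketch below is only a toy: it is not Scarf's simplicial algorithm and nothing like a real AGE/CGE model. It finds the market-clearing relative price of the small Cobb-Douglas exchange economy used in the earlier sketches, once by a tâtonnement-style price adjustment and once by handing the market-clearing condition to an off-the-shelf root finder (SciPy is assumed to be available).

```python
from scipy.optimize import brentq

# Two toy ways to "compute" the equilibrium of the small Cobb-Douglas
# exchange economy used earlier (invented parameters).  Neither is Scarf's
# algorithm nor anything like a real AGE/CGE model; both only illustrate
# solving the market-clearing condition numerically.

agents = [
    {"alpha": 0.2, "endowment": (1.0, 2.0)},
    {"alpha": 0.6, "endowment": (2.0, 1.0)},
]

def excess_demand_good2(p2, p1=1.0):
    """Aggregate excess demand for good 2, with good 1 as the numeraire."""
    z2 = 0.0
    for a in agents:
        e1, e2 = a["endowment"]
        wealth = p1 * e1 + p2 * e2
        z2 += (1 - a["alpha"]) * wealth / p2 - e2
    return z2

# 1) Tatonnement-style groping: raise the price of a good in excess demand,
#    lower it when in excess supply; no trade takes place until prices settle.
p2 = 0.5
for _ in range(5000):
    z2 = excess_demand_good2(p2)
    if abs(z2) < 1e-10:
        break
    p2 += 0.1 * z2
print(f"tatonnement price  p2/p1 = {p2:.6f}")

# 2) Direct numerical solution: find the zero of excess demand with a
#    bracketing root finder.
p2_root = brentq(excess_demand_good2, 0.01, 100.0)
print(f"root-finder price  p2/p1 = {p2_root:.6f}")
```

Both routes land on the same relative price; the difference is that the tâtonnement loop mimics Walras's groping story, while the root finder treats equilibrium purely as a system of equations to be solved simultaneously, which is closer in spirit to how modern computable models are handled.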
Keynesian and Post-Keynesian
Keynesian and Post-Keynesian economists, and their Underconsumptionist predecessors, criticize general equilibrium theory specifically and as part of broader criticisms of neoclassical economics. They argue that general equilibrium theory is neither accurate nor useful: economies are not in equilibrium, equilibrium may be slow and painful to achieve, modeling by equilibrium is "misleading", and the resulting theory is not a useful guide, particularly for understanding economic crises.
Let us beware of this dangerous theory of equilibrium which is supposed to be automatically established. A certain kind of equilibrium, it is true, is reestablished in the long run, but it is after a frightful amount of suffering. —Simonde de Sismondi, New Principles of Political Economy, vol. 1 (1819), 20-21.
The long run is a misleading guide to current affairs. In the long run we are all dead. Economists set themselves too easy, too useless a task if in tempestuous seasons they can only tell us that when the storm is past the ocean is flat again. —John Maynard Keynes, A Tract on Monetary Reform, 1923, Ch. 3
It is as absurd to assume that, for any long period of time, the variables in the economic organization, or any part of them, will "stay put," in perfect equilibrium, as to assume that the Atlantic Ocean can ever be without a wave. —Irving Fisher, The Debt-Deflation Theory of Great Depressions, 1933, p. 339
Robert Clower and others have argued for a reformulation of theory toward disequilibrium analysis to incorporate how monetary exchange fundamentally alters the representation of an economy, which can no longer be treated as though it were a barter system. More methodologically, it is argued that general equilibrium is a fundamentally static analysis, rather than a dynamic analysis, and thus is misleading and inapplicable. The theory of dynamic stochastic general equilibrium seeks to address this criticism.
Austrian economics
Whether Austrian economics supports or rejects general equilibrium theory, and what the precise relationship is, remains unclear. Different Austrian economists have advocated differing positions, which have changed as Austrian economics developed over time. Some new classical economists argue that the work of Friedrich Hayek in the 1920s and 1930s was in the general equilibrium tradition and was a precursor to business cycle equilibrium theory. Others argue that while there are clear influences of general equilibrium on Hayek's thought and he used it in his early work, he came to substantially reject it in his later work, after 1937. It is also argued by some that Friedrich von Wieser, along with Hayek, worked in the general equilibrium tradition, while others reject this, finding the influences of general equilibrium on the Austrian economists superficial.
New classical macroeconomics
While general equilibrium theory and neoclassical economics generally were originally microeconomic theories, new classical macroeconomics builds a macroeconomic theory on these bases. In new classical models, the macroeconomy is assumed to be at its unique equilibrium, with full employment and potential output, and this equilibrium is assumed always to have been achieved via price and wage adjustment (market clearing).
The best-known such model is Real Business Cycle Theory, in which business cycles are considered to be largely due to changes in the real economy; unemployment is not due to the failure of the market to achieve potential output, but to equilibrium potential output having fallen and equilibrium unemployment having risen.
Socialist economics
Within socialist economics, a sustained critique of general equilibrium theory (and neoclassical economics generally) is given in Anti-Equilibrium (Kornai 1971), based on the experiences of János Kornai with the failures of Communist central planning.
- Walras, Elements of Pure Economics (trans. Jaffé), Irwin, 1954
- Starr, Ross M. (1969). "Quasi-equilibria in markets with non-convex preferences". Econometrica 37 (1): 25–38. doi:10.2307/1909201. JSTOR 1909201.
- Page 138 in Guesnerie: Guesnerie, Roger (1989). "First-best allocation of resources with nonconvexities in production". In Bernard Cornet and Henry Tulkens, Contributions to Operations Research and Economics: The twentieth anniversary of CORE (Papers from the symposium held in Louvain-la-Neuve, January 1987). Cambridge, MA: MIT Press. pp. 99–143. ISBN 0-262-03149-3. MR 1104662.
- See pages 392-399 for the Shapley-Folkman-Starr results and page 188 for applications in Arrow & Hahn: Arrow, Kenneth J.; Hahn, Frank H. (1971). "Appendix B: Convex and related sets". General Competitive Analysis. Mathematical economics texts [Advanced textbooks in economics] (6). San Francisco, CA: Holden-Day, Inc. [North-Holland]. pp. 375–401. ISBN 0-444-85497-5. MR 439057.
- Pages 52-55 with applications on pages 145-146, 152-153, and 274-275 in Mas-Colell, Andreu (1985). "1.L Averages of sets". The Theory of General Economic Equilibrium: A Differentiable Approach. Econometric Society Monographs (9). Cambridge UP. ISBN 0-521-26514-2. MR 1113262.
- Hildenbrand, Werner (1974). Core and Equilibria of a Large Economy. Princeton Studies in Mathematical Economics (5). Princeton, N.J.: Princeton University Press. pp. viii+251. ISBN 978-0-691-04189-6. MR 389160.
- See section 7.2, Convexification by numbers, in Salanié: Salanié, Bernard (2000). "7 Nonconvexities". Microeconomics of Market Failures (English translation of the 1998 French Microéconomie: Les défaillances du marché, Economica, Paris). Cambridge, MA: MIT Press. pp. 107–125. ISBN 0-262-19443-0.
- An "informal" presentation appears in pages 63-65 of Laffont: Laffont, Jean-Jacques (1988). "3 Nonconvexities". Fundamentals of Public Economics. MIT. ISBN 0-585-13445-6.
- Debtwatch No 34: The Confidence Trick, Steve Keen, May 4, 2009. http://www.debtdeflation.com/blogs/2009/05/04/debtwatch-no-34-the-confidence-trick/
- DebtWatch No 29 December 2008, Steve Keen, November 30, 2008
- Robert W. Clower (1965). "The Keynesian Counter-Revolution: A Theoretical Appraisal," in F.H. Hahn and F.P.R. Brechling, ed., The Theory of Interest Rates. Macmillan. pp. 34-58. Reprinted in Clower (1987), Money and Markets. Cambridge. • _____ (1967). "A Reconsideration of the Microfoundations of Monetary Theory," Western Economic Journal, 6(1), pp. 1-8. • _____ and Peter W. Howitt (1996). "Taking Markets Seriously: Groundwork for a Post-Walrasian Macroeconomics", in David Colander, ed., Beyond Microfoundations, pp. 21-37. • Herschel I. Grossman (1971). "Money, Interest, and Prices in Market Disequilibrium," Journal of Political Economy, 79(5), pp. 943-961.
"Non-Walrasian Equilibria, Money, and Macroeconomics," Handbook of Monetary Economics, v. 1, ch. 4, pp. 103-169. Table of contents. • _____ (1993). "Nonclearing Markets: Microeconomic Concepts and Macroeconomic Applications," Journal of Economic Literature, 31(2), pp. 732-761 (press +). • _____ (2008). "non-clearing markets in general equilibrium," in The New Palgrave Dictionary of Economics, 2nd Edition. Abstract. - Debunking economics: the naked emperor of the social sciences, Steve Keen, ISBN 978-1-85649-992-7 - Lucas and Laidler, referenced in Butos - Butos, William N. (October 1985). "Hayek and General Equilibrium Analysis". Southern Economic Journal (Southern Economic Association) 52 (2): 332–343. doi:10.2307/1059619. JSTOR 1059619. More than one of - Friedrich von Wieser and Friedrich A. Hayek: The General Equilibrium Tradition in Austrian Economics, Joseph T. Salerno - Caldwell, Bruce (2004). Hayek’s Challenge: An Intellectual Biography of F.A. Hayek. Chicago: University of Chicago Press. ISBN 978-0-226-09193-8 - Arrow, K. J., and Hahn, F. H. (1971). General Competitive Analysis, San Francisco: Holden-Day. - Arrow K. J. and G. Debreu (1954). "The Existence of an Equilibrium for a Competitive Economy" Econometrica, vol. XXII, 265-90 - Black, Fischer (1995). Exploring General Equilibrium. Cambridge Mass: MIT Press. p. 318. ISBN 0-262-02382-2 - Debreu, G. (1959). Theory of Value, New York: Wiley. - Eaton, Eaton and Allen, "Intermediate Microeconomics" Chapters 13 and 18. - Eatwell, John (1987). "Walras's Theory of Capital", The New Palgrave: A Dictionary of Economics (Edited by Eatwell, J., Milgate, M., and Newman, P.), London: Macmillan. - Geanakoplos, John (1987). "Arrow-Debreu model of general equilibrium," The New Palgrave: A Dictionary of Economics, v. 1, pp. 116–24. - Georgescu-Roegen, Nicholas (1979). "Methods in Economic Science", Journal of Economic Issues, V. 13, N. 2 (June): 317-328. - Grandmont, J. M. (1977). "Temporary General Equilibrium Theory", Econometrica, V. 45, N. 3 (Apr.): 535-572. - Hicks, John R. (1939, 2nd ed. 1946). Value and Capital. Oxford: Clarendon Press. - Jaffe, William (1953). "Walras's Theory of Capital Formation in the Framework of his Theory of General Equilibrium", Economie Appliquee, V. 6 (Apr.-Sep.): 289-317. - Kornai, János (1971). Anti-Equilibrium. - Kubler, Felix (2008). "Computation of general equilibria (new developments)," The New Palgrave Dictionary of Economics. 2nd Edition Abstract. - Mandler, Michael (1999). Dilemmas in Economic Theory: Persisting Foundational Problems of Microeconomics, Oxford: Oxford University Press. - Mas-Colell, A., Whinston, M. and Green, J. (1995). Microeconomic Theory, Oxford University Press - McKenzie, Lionel W. (1981). "The Classical Theorem on Existence of Competitive Equilibrium", Econometrica. - _____ (1983). "Turnpike Theory, Discounted Utility, and the von Neumann Facet", Journal of Economic Theory, 1983. - _____ (1987). "General equilibrium", The New Palgrave; : A Dictionary of Economics, 1987, v. 2, pp. 498–512. - _____ (1987). Turnpike theory," The New Palgrave: A Dictionary of Economics, 1987, v. 4, pp. 712-20 - _____ (1999). "Equilibrium, Trade, and Capital Accumulation", Japanese Economic Review. - Mitra-Kahn, Benjamin H., 2008, "Debunking the Myths of Computable General Equilibrium Models", SCEPA Working Paper 01-2008 - Petri, Fabio (2004). General Equilibrium, Capital, and Macroeconomics: A Key to Recent Controversies in Equilibrium Theory, Edward Elgar. - Samuelson, Paul A. 
(1941) "The Stability of Equilibrium: Comparative Statics and Dynamics," Econometrica, 9(2), p p. 97-120.] - _____ (1947, Enlarged ed. 1983). Foundations of Economic Analysis, Harvard University Press. ISBN 0-674-31301-1(1947, Enlarged ed. 1983). Foundations of Economic Analysis, Harvard University Press. ISBN 0-674-31301-1 - Scarf, Herbert E. (2008). "Computation of general equilibria," The New Palgrave Dictionary of Economics, 2nd Edition. Abstract. - Schumpeter, Joseph A. (1954, History of Economic Analysis, Oxford University Press, - Walras, Léon (1877, trans. 1954). Elements of Pure Economics. ISBN 0-678-06028-2 Scroll to chapter-preview links. - Selected entries on general equilibrium theory from The New Palgrave Dictionary of Economics, 2nd Edition, 2008 with Abstract links: - "Arrow–Debreu model of general equilibrium" by John Geanakoplos. - "general equilibrium" by Lionel W. McKenzie. - "general equilibrium (new developments" by William Zame. - "non-clearing markets in general equilibrium" by Jean-Pascal Bénassy. - overlapping generations model of general equilibrium" by John Geanakoplos. See also - Applied general equilibrium or AGE models - Cobweb model - Convex preferences - Computable general equilibrium or CGE models - Decision theory - Dynamic stochastic general equilibrium or DSGE - Game theory - Mechanism design theory - Partial equilibrium A portion of the proceeds from advertising on Digplanet goes to supporting Wikipedia.
Begin learning about spherical geometry with:
- Spherical Easel Exploration. This exploration uses Spherical Easel (a Java applet) to explore the basics of spherical geometry.
- Spherical Geometry Exploration. Using a ball and markers, this is a hands-on exploration of spherical geometry.
- Spherical Geometry: Polygons. What type of polygons exist on the sphere? Use of Spherical Easel is recommended.
- Spherical Triangles Exploration. Explore properties of spherical triangles with Kaleidotile.
- Regular Spherical Tessellations Exploration. Find the regular tessellations of the sphere.
- Spherical Geometry: Isometry Exploration. Explore the properties of translations, rotations, reflections and glide reflections on the sphere.
- Spherical versus Euclidean Polygons Exploration
For Platonic solids and duality turn to:
Points and Lines
Spherical geometry is nearly as old as Euclidean geometry. In fact, the word geometry means "measurement of the Earth", and the Earth is (more or less) a sphere. The ancient Greek geometers knew the Earth was spherical, and in about 235 BC Eratosthenes of Cyrene calculated the Earth's circumference to within about 15%. Navigation motivated the study of spherical geometry, because even 2000 years ago the fact that the earth was curved had a noticeable effect on mapmaking. Even more importantly, the sky can be (and often was) thought of as a spherical shell enclosing the earth, with sun, moon, and stars dancing about on its surface. Navigation and timekeeping required a thorough understanding of how the heavenly bodies moved, and that required spherical geometry. In geometry there are undefined terms. There are also first principles "the truth of which it is not possible to prove", according to Aristotle. These first principles are called postulates. In Euclidean geometry we assume that we know what is meant by "point" and "line" – these are undefined terms. To do geometry on a sphere, we need to make sense of these terms. You can try this yourself with the Spherical Geometry Exploration. In spherical geometry, the "points" are points on the surface of the sphere. We are not concerned with the "inside" of the sphere. A soap bubble makes a good mental image. When thinking about the Earth, it's helpful to realize that if you shrank the Earth and dried off the oceans with a towel, the planet would be as smooth as a pool ball, and one's elevation off the surface would be too small to notice. Lines in spherical geometry are more subtle. Since the surface is curved, there are no straight lines on it, in the usual sense of the word straight. Because of this, we use the word geodesic instead of line when talking about spherical geometry:
- A geodesic in non-Euclidean geometry plays the role that a straight line plays in Euclidean geometry.
We expect geodesics in spherical geometry to behave like straight lines in Euclidean geometry. In particular, there are two essential features of a straight line in Euclidean geometry that we expect geodesics to have:
- The shortest distance between two points is a straight line.
- Walking forwards, without turning, one should follow a straight line.
- Great Circle: A great circle is a circle on a sphere which divides the sphere into two equal hemispheres.
A person walking on the surface of a sphere without turning will follow a great circle. The shortest distance between two points on a sphere also lies along a great circle. Because of this: Geodesics in spherical geometry are great circles.
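Since geodesics are great circles, the shortest travel distance between two points on a sphere is an arc of the great circle through them. As a quick illustration (the two cities and the 6371 km value, an approximate mean Earth radius, are just example inputs), the spherical law of cosines gives that arc length from latitudes and longitudes:

```python
from math import radians, sin, cos, acos

# The shortest path between two points on a sphere runs along the great
# circle through them.  This sketch computes that great-circle distance
# from latitude/longitude using the spherical law of cosines; the example
# coordinates and the radius 6371 km are illustrative inputs only.

def great_circle_distance(lat1, lon1, lat2, lon2, radius=6371.0):
    """Arc length along the great circle joining two points, in the same
    units as `radius`."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dlon = radians(lon2 - lon1)
    central_angle = acos(sin(phi1) * sin(phi2) +
                         cos(phi1) * cos(phi2) * cos(dlon))
    return radius * central_angle

# Example: roughly London (51.5 N, 0.1 W) to New York (40.7 N, 74.0 W)
print(f"{great_circle_distance(51.5, -0.1, 40.7, -74.0):.0f} km")
```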
We will treat geodesics in spherical geometry as we treat straight lines in Euclidean geometry. Consider the statement "two points determine a line". This is a postulate of Euclidean geometry, which means we accept its truth without proof. In spherical geometry, it is not quite true. Consider the Earth's North and South poles. These points are joined by many great circles, which are known as meridians or lines of longitude. In fact, leaving the North pole in any direction and heading straight will take you to the South pole along a geodesic. The North and South poles are not the only points with this property:
- Antipodal points - Two points which are opposite each other on the sphere are called antipodal points.
In spherical geometry, we can say "two points determine a geodesic, unless they are antipodal points, in which case there are infinitely many geodesics joining them". This is less elegant than Euclidean geometry but fairly typical for spherical geometry, where there are often exceptions for antipodal points.
- Geodesic segment - A geodesic segment is an arc of a geodesic and its two endpoints.
When saying "two points determine a line", one usually thinks of the line segment joining the two points. On a sphere, two points lying on a geodesic create two geodesic segments, since the geodesics are circles. Unless the points are antipodal, there will be a short segment and a long segment which "goes around the back of the sphere".
Angle Sum and Area
- Spherical Polygon - A polygon in spherical geometry is a sequence of points and geodesic segments joining those points. The geodesic segments are called the sides of the polygon.
A triangle in spherical geometry is a polygon with three sides, a quadrilateral is a polygon with four sides, and so on, as in Euclidean geometry. One fundamental result of Euclidean geometry is that the sum of the angles in any triangle is 180°. To see this, we used properties of parallel lines. However, in spherical geometry there are no parallel lines, because any pair of geodesics intersect at two (antipodal) points. Instead, in spherical geometry we have: The sum of the angles in any spherical triangle is more than 180°. To justify this statement, take a spherical triangle and then draw a flat triangle with the same vertices, as in the figure. The flat triangle has angle sum 180°, and since the spherical triangle bulges out from the flat one, its angles must be larger.
- The defect of a spherical triangle is (angle sum of the triangle) - 180°.
The more area a triangle covers, the more it bulges, the more its angles differ from a Euclidean triangle, and the larger its defect. There is a direct mathematical relationship between a triangle's area and its defect. We measure the area as a fraction of the total area of the sphere, and find that the fraction of the sphere covered by a triangle is the triangle's defect divided by 720°. As a formula: Area fraction = defect / 720°. The Spherical Triangles Exploration should help you understand this formula. To find the actual area covered by a triangle, you need to know the radius R of the sphere and then use the fact that the total surface area of a sphere of radius R is 4πR². Example: The triangle shown in the figure has two 90° angles and one 45° angle. Its angle sum is 90° + 90° + 45° = 225°, and its defect is 225° – 180° = 45°. It covers 45/720 = 1/16 of the sphere. Can you see how 16 of these triangles would cover the whole sphere?
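The area formula is short enough to restate as a few lines of Python. This is a minimal sketch; the function name and argument conventions are mine, not the text's.

import math

def spherical_triangle_area(angles_deg, radius=1.0):
    """Area of a spherical triangle, given its corner angles in degrees."""
    defect = sum(angles_deg) - 180.0           # the triangle's defect, in degrees
    fraction = defect / 720.0                  # fraction of the sphere covered
    return fraction * 4.0 * math.pi * radius ** 2

print(spherical_triangle_area([90, 90, 45]))   # ~0.7854, i.e. pi/4 on the unit sphere

For the 90°-90°-45° triangle in the example, the result is pi/4, which is indeed 1/16 of the 4π surface area of the unit sphere.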
We saw in Tessellations by Polygons#euclidean-angle-sum that a Euclidean polygon with n sides has angle sum (n − 2)·180°, by cutting the polygon into n − 2 triangles. A spherical polygon with n sides can be cut in the same way into n − 2 spherical triangles, each of which has angle sum more than 180°, and so the angle sum of a spherical n-gon is more than (n − 2)·180°. Put another way, the angle sum of a spherical polygon always exceeds the angle sum of a Euclidean polygon with the same number of sides. The amount (in degrees) of excess is called the defect of the polygon. The fraction of the sphere covered by a polygon is equal to its defect divided by 720°, just as for triangles.
Spherical Tessellations and Polyhedra
A tessellation of the sphere is a covering of the sphere by tiles, with no overlapping tiles and no gaps. We focus exclusively on tessellations by tiles which are polygons. As a first step, we look for regular tessellations. Look for them yourself with the Regular Spherical Tessellations Exploration.
Regular Polygons on the Sphere
Recall that a regular polygon is a polygon with all sides the same length and all angles equal. We keep the same definition in non-Euclidean geometry. In Euclidean geometry, the angle sum for a polygon with n sides is (n − 2)·180°, and this forces the corner angles of a regular n-gon to be (n − 2)·180°/n. This means there is only one shape of Euclidean regular n-gon. In spherical geometry there are many regular n-gons. There is a regular n-gon with any angle sum larger than (n − 2)·180° (up to a maximum size). So, there is a regular n-gon with any choice of corner angle larger than (n − 2)·180°/n (again, up to some maximum size). The maximum sizes aren't as important, and are left for the exercises. This table summarizes the corner angles of some regular polygons on the sphere:
|Name||Number of Sides||Corner Angle|
To make a regular tessellation of the sphere, we need to pick one regular polygon and use it to cover the sphere. As with regular tessellations of the plane, the difficulty is to fit corner angles around a vertex, which requires the corner angle to divide evenly into 360°. This means that the possibilities for corner angles are 360/2 = 180°, 360/3 = 120°, 360/4 = 90°, 360/5 = 72°, 360/6 = 60°, and so on. Compare the corner angles needed for tessellating with the corner angles of spherical polygons in the table. Most spherical polygons have corner angles too large to fit together at a vertex. Ignoring biangles and 180° corner angles for the moment, there are only five possibilities for regular spherical tessellations:
- triangles with 72° angles, five meeting at a vertex
- triangles with 90° angles, four meeting at a vertex
- triangles with 120° angles, three meeting at a vertex
- quadrilaterals with 120° angles, three meeting at a vertex
- pentagons with 120° angles, three meeting at a vertex
When making tessellations of the Euclidean plane, it was not so surprising that once we had six 60°-60°-60° triangles around one vertex we were able to fill out the rest of the plane. With spherical geometry, we can fit five 72°-72°-72° triangles around a vertex, but as we fill up the sphere with triangles we have to hope that they come together on the back and actually close up. Amazingly, in all five cases listed above, the polygons do cover the sphere and we get a regular tessellation. Here's what they look like:
|120° Triangles||90° Triangles||72° Triangles|
|120° Quadrilaterals||120° Pentagons|
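The list of five possibilities can also be found by a brute-force search. The Python sketch below is illustrative only: for small n and k it checks whether the corner angle 360°/k needed to fit k tiles around a vertex exceeds the Euclidean corner angle (n − 2)·180°/n, which (ignoring the maximum-size caveat mentioned above) is the condition for a spherical regular n-gon with that corner angle to exist.

# Brute-force search for regular spherical tessellations: k regular n-gons
# meeting at each vertex need corner angles of 360/k degrees, and a spherical
# regular n-gon must have corner angle strictly greater than the Euclidean
# value (n - 2) * 180 / n.
for n in range(3, 10):           # number of sides of the tile
    for k in range(3, 10):       # number of tiles meeting at each vertex
        corner = 360.0 / k
        if corner > (n - 2) * 180.0 / n:
            print(f"{n}-gons with {corner:g} degree corners, {k} at each vertex")
# Prints exactly five combinations: (n, k) = (3, 3), (3, 4), (3, 5), (4, 3), (5, 3).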
Degenerate Regular Tessellations
Two strange "degenerate" types of regular tessellations show up in spherical geometry. The first is by polygons with corner angles equal to 180°. A 180° corner doesn't look like a corner at all, and a regular n-gon with 180° corner angles simply looks like a hemisphere with n evenly spaced dots on its edge for the "vertices". Two of these fit together to cover the sphere. One can argue about whether this should be a polygon at all, but we'll see that it fits very nicely in a larger picture of regular tessellations and is worth including. The other degenerate tessellation is a "beach ball" made with k biangles which have 360°/k corner angles. The beach ball with k = 7 is shown, and there is one for any choice of k > 1.
The five non-degenerate regular tessellations have been known for thousands of years, albeit in their alter egos as polyhedra.
- A polyhedron is a three-dimensional solid with a surface made of polygons. The polygons are known as the faces of the polyhedron.
Spherical tessellations and polyhedra are closely related. Starting with a spherical tessellation by polygons, we can often replace the spherical (curved) polygons by flat polygons (lying inside the sphere) with the same vertices. The resulting solid is a polyhedron. Doing this for the regular tessellations of the sphere results in five polyhedra known as the Platonic solids:
From left to right we see: Tetrahedron, Cube, Octahedron, Dodecahedron, Icosahedron.
Plato did not discover these solids, but in the dialogue Timaeus he discusses the construction of the universe and (at some length) associates the cube, tetrahedron, octahedron and icosahedron with the elemental ideas of earth, fire, air and water. The dodecahedron, he claims, "God used in the delineation of the universe". The attachment of mystical or spiritual properties to the Platonic solids is, in some sense, a tribute to their mathematical perfection, and continues to this day. The 17th century astronomer Johannes Kepler wrote extensively about them, and attempted (more or less unsuccessfully) to explain the orbits of the planets as coming from the radii of a nested set of Platonic solids. His reasoning was that God must have created the universe according to the Platonic solids because of their mathematical perfection. In Kepler's time, this was a somewhat heretical stance, since it suggested that God was bound by rules of mathematics discovered by science. Make friends with the Platonic solids by doing the Platonic Solids Exploration.
In 1750, the Swiss mathematician Leonhard Euler discovered a remarkable formula involving the number of faces f, edges e, and vertices v of a polyhedron: v − e + f = 2. As a first step to understanding this equation, we will calculate v, e, and f for the Platonic solids and check that v − e + f = 2 in these cases. We can count the faces of each Platonic solid by considering the corresponding spherical tessellation. For example, in the tessellation corresponding to the dodecahedron there are three pentagons at each vertex, so that each pentagon has 120° corner angles. The five angles give an angle sum of 5*120° = 600°. Since a Euclidean pentagon has angle sum 540°, these spherical pentagons have defect equal to 600° − 540° = 60°. Each pentagon therefore covers 60°/720° = 1/12 of the sphere, and so there are 12 faces on the dodecahedron. In the spherical tessellation corresponding to the octahedron, four triangles meet at a vertex. Therefore these are 90°-90°-90° triangles, which have defect 270° − 180° = 90°. Each one covers 90°/720° = 1/8 of the sphere, so there are 8 faces on the octahedron.
To count the edges of the dodecahedron, notice that each of the 12 faces has 5 edges. Since each edge is shared by two faces, there are 12*5/2 = 30 edges on the dodecahedron. Another way to understand this calculation is to imagine each edge cut into a left and a right half. Then each face contributes 5 half-edges, and 12 * 5 * 1/2 = 30. As another example, let's count the edges of the octahedron (which you can probably do by inspection). There are eight triangles, each with three edges, so there are 8*3/2 = 12 edges in the octahedron. We'll count the vertices in a similar manner. For the dodecahedron, each of the 12 faces has 5 vertices. Since each vertex is shared by three faces, there are 12*5/3 = 20 vertices on the dodecahedron. Another way to understand this calculation is to imagine each vertex cut into three 120° wedges. Then one corner of one face is 1/3 of a vertex, and 12*5*1/3 = 20. For the octahedron, there are 8 faces with 3 vertices each, and each vertex is shared by 4 faces. There are 8*3/4 = 6 vertices. Similar calculations establish the number of faces, edges, and vertices on the tetrahedron, cube, and icosahedron. It is also possible to simply count by inspection. We arrive at the following table:
|Polyhedron||# of vertices (v)||# of edges (e)||# of faces (f)||v − e + f|
|Tetrahedron||4||6||4||2|
|Cube||8||12||6||2|
|Octahedron||6||12||8||2|
|Dodecahedron||20||30||12||2|
|Icosahedron||12||30||20||2|
We calculate one more example, the "tetrakis hexahedron", which is the basis for Escher's Sphere with Angels and Devils. Each face is a triangle, but this is not a regular tessellation of the sphere, since these are not equilateral triangles. In the corresponding spherical tessellation, two of the triangle's corners are 60° angles, since six of them come together at those points. The other corner is 90°, since four triangles come together at that point. The defect is 60° + 60° + 90° − 180° = 30°. The area fraction is 30°/720° = 1/24, so 24 of these triangles cover the sphere. Since the tetrakis hexahedron has 24 faces and each face has 3 edges, it has 24*3/2 = 36 edges. The easiest way to count the vertices of the tetrakis hexahedron is to use v − e + f = 2. Since f = 24 and e = 36, v must be 14. As a check, we count another way. Each triangle contributes 1/4 of one vertex, and 1/6 each of two others. Since there are 24 triangles, the total number of vertices is v = 24 * (1/4 + 1/6 + 1/6) = 14. Having seen some evidence that v − e + f is 2, we try to make a convincing argument that it is always 2 for spherical tessellations. We write the Greek letter chi as a shorthand, χ = v − e + f. The plan is to start with a sphere with one dot on it. Since there is one face (the sphere) and one vertex (the dot), χ = v − e + f = 1 − 0 + 1 = 2 in this case. Now we build whatever tessellation we desire by using one of the following two moves:
- Move I: Add a new dot and an edge connecting it to an existing dot.
- Move II: Add an edge connecting two (different) existing dots.
The key point is that neither Move I nor Move II changes χ. Move I adds one vertex and one edge, which cancel in v − e + f. Move II adds one new edge, and cuts one face into two, creating a net increase of one face. Again, e increasing by 1 and f increasing by 1 cancel in v − e + f. To make this argument into a rigorous mathematical proof, we would need to argue that any spherical tessellation can be built from the one-dot-sphere via a series of Moves I and II. While not a difficult argument, it is too technical for this discussion. However, it is not hard to draw specific tessellations by hand in this way.
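The counting arguments above fit into a few lines of code. The Python sketch below (names and structure are mine, written as an illustration) recomputes v, e, and f for each Platonic solid from the pair (n, k), meaning n-gon faces with k meeting at each vertex, using the defect formula for f and the sharing arguments for e and v, and confirms that v − e + f = 2 in every case.

# Each Platonic solid is described by the pair (n, k): n-gon faces, k meeting
# at each vertex. Its spherical tile has corner angle 360/k, so its defect is
# n*(360/k) - (n - 2)*180 and it covers defect/720 of the sphere, giving f.
# Each edge is shared by 2 faces and each vertex by k faces, giving e and v.
platonic = {"tetrahedron": (3, 3), "cube": (4, 3), "octahedron": (3, 4),
            "dodecahedron": (5, 3), "icosahedron": (3, 5)}

for name, (n, k) in platonic.items():
    defect = n * 360.0 / k - (n - 2) * 180.0
    f = round(720.0 / defect)
    e = f * n // 2
    v = f * n // k
    print(name, v, e, f, v - e + f)   # the last number is always 2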
The quantity χ = v − e + f is called the Euler characteristic. It is always 2 for spherical tessellations (and for polyhedra), but can actually be different from 2 on other surfaces. A torus is a surface with one hole, for example a donut or an inner tube. A careful count of v, e, and f on a torus made of polygons shows that its Euler characteristic is 0. The number χ can detect the shape of a surface without noticing size or other deformations such as stretching (e.g. a sphere stretched into a football shape). Because of this, it is considered part of a branch of mathematics called topology. Every tessellation by polygons has a dual. The duality process works in Euclidean geometry, non-Euclidean geometry, and even with polyhedra. We start with Euclidean geometry first, to get the idea. To find the dual to a tessellation, start with a tessellation by polygons, and put a point inside each polygon of the tessellation. Connect these new points by line segments, connecting two points when their enclosing polygons share an edge. The dual tessellation is made up of these points and the edges connecting them. Sometimes the line joining two new points will have to cross through other tiles of the tessellation, but that’s fine. Another question is what to do near the edges of a tessellation, and as you can see in the example we left some polygons open to avoid this. For tessellations that cover the entire plane, and for spherical tessellations, this isn’t an issue so we won’t worry about it. In the dualizing process, we get one new vertex for every tile of the original tessellation. We also get one edge in our dual for every edge in the original, since two points are connected by an edge if their faces had an edge in common. Finally, we get one new tile in the dual for each vertex of the original. This is because the edges leaving an original vertex get turned into edges of a polygon that surrounds the original vertex. Repeating the dual returns to the original tessellation, which means that neither one is particularly more original than the other, so we just say that two tessellations are dual. For example, the tessellation by equilateral triangles and the tessellation by regular hexagons are dual: Duality works for spherical tessellations and for polyhedra. With polyhedra, the process forms the dual polyhedron inside the original, but altering the sizes can lead to pleasing “compound” solids. As an example, the octahedron and cube are dual. |Octahedron inside cube; Cube-octahedron compound; Cube inside octahedron.||A Japanese temari ball showing duality of the cube (orange) and octahedron (purple). (D. Abolt)| Practice with duality by doing the Duality Exploration. The cube-octahedron compound is visible in the upper left corner of Escher’s Stars. The duality in Escher's Double Planetoid works on two levels. First, there is a mathematical duality between the two tetrahedra, while at the same time there is the natural duality between the urban city and the wild jungle landscape. Duality is a recurring theme in Escher's work. His early works emphasize duality using rotation or reflection symmetry. Scapegoat juxtaposes good and evil with an order 2 rotation, while Paradise depicts man and woman almost as mirror images. Many of Escher's tessellations feature two opposing figures, and are used in prints where one figure dominates half the image while the other figure dominates in the other half. 
For example, in Day and Night, black birds flying left dominate the daytime half of the print, while white birds flying right dominate the nighttime half. In the center of the print, both sets of birds fit together and balance the print. Escher takes a similar approach in Sky and Water I and Sky and Water II, though these are less developed than Day and Night. As a print artist, and particularly a maker of woodcut prints, Escher developed a strong sense of the duality between figure (foreground) and ground (background). When making a woodcut, the artist carves away wood where the block will not print, leaving the printing areas untouched. In other words, to create figure, the artist removes the ground. Sun and Moon is a beautiful example of the duality between figure and ground. One can see it as a picture of grey birds flying and partially obscuring a golden sun. Alternately, it is a picture of brightly colored birds flying against the backdrop of a night sky. Switching back and forth between these interpretations takes some mental effort, but demonstrates that either set of birds functions equally well as the figure or the ground.
Symmetries in Spherical Geometry
Rotations are Translations: Spherical rotations involve spinning the sphere around an axis line that goes through the center of the sphere. A spherical rotation has two points that don't move, where the rotation axis hits the sphere at a pair of antipodal points. For example, the Earth (idealized a bit) rotates on its axis, and the North and South poles don't move. Translations on the sphere are exactly the same as rotations. A translation should slide along a geodesic. The geodesics are great circles, and if you slide along a great circle the sphere rotates around an axis. Picture the Earth's equator: as the world turns, it appears that points near the equator are being translated east. Note that translations of the sphere do differ quite a bit from translations of the plane. In the Euclidean plane translations and rotations are distinct isometries, while on the sphere they can be thought of as the same rigid motion of the sphere. An added peculiarity is that on the sphere, translating through a distance greater than the circumference of the sphere would result in the image circling the sphere before reaching its destination.
Reflections: You can reflect a sphere using a geodesic as your reflection line. The reflection exchanges the two hemispheres. Reflections play an important role in planar geometry. It can be shown that composing reflections across parallel mirror lines results in a translation, while reflections across two intersecting lines result in a rotation about the intersection point. On the sphere we do not have any parallel lines, and hence the composition of two distinct reflections always results in a rotation about the intersection point of the two mirror lines. But by the comments above, it follows that we could also interpret this as a spherical translation if we wanted.
Glide-Reflections: As in Euclidean geometry, the combination of a reflection and a translation is a new kind of symmetry. We saw above that translations on the sphere are really rotations, and hence a glide-reflection could also be called a rotation-reflection.
Symmetry Groups of the Sphere
Symmetry groups of plane figures were completely classified into the rosette, frieze, and wallpaper groups. There are infinitely many rosette groups, in two types, depending on whether reflection symmetry is present.
The frieze and wallpaper groups were finite lists, harder to classify and less structured. For the sphere, the classification of symmetry groups is closely tied to the Platonic solids. In fact: Every symmetry group of a spherical figure comes about by selecting some of the symmetries of a regular spherical tessellation. For example, there are two symmetry groups coming from the octahedron: one which has all symmetries of the octahedron and a second which has only the rotations, and none of the reflections. Escher's Sphere with Fish has symmetry of this reflectionless octahedron type. Two more spherical symmetry groups come from the icosahedron, in exactly the same way. Because of duality, the cube and dodecahedron contribute nothing new - they have the same symmetries as their duals. The tetrahedron contributes three: one with no reflections, one with all six possible reflections, and a third with three of the reflections. The rest of the symmetry groups possible on the sphere come from the degenerate regular tessellations. All symmetries in these groups keep one axis of the sphere fixed in place, much like the rosette groups have a fixed center point. Because there are infinitely many degenerate regular tessellations, there are also infinitely many symmetry groups derived from them. Again, the situation parallels that of rosette groups. For more details, notation, and a complete list of all these groups, see Wikipedia's list of spherical symmetry groups.
Relevant examples from Escher's work
- Concentric Rinds
- Sphere with Fish, Sphere with Angels and Devils, Sphere with Eight Grotesques, Sphere with Reptiles, Carved Polyhedron with Flowers
- Sphere Surface with Fish, Sphere Spirals
- Double Planetoid, Tetrahedral Planetoid
- Four Regular Solids (Stereometric Figure)
- Crystal, Order and Chaos, Order and Chaos II
- Stars and Study for Stars
- Spherical Easel web applet for spherical geometry.
- The Geometry of the Sphere, by John C. Polking.
- KaleidoTile, by Jeff Weeks.
- Archimedean Kaleidoscope, by Jim Morey.
- Temari, a Japanese textile art, by Deborah Abolt
- Google Planimeter, measures area of triangles on the Earth.
Platonic Solids and Polyhedra
- wikipedia:Platonic solid
- Polyhedra and Art examples through history, by George W. Hart.
- Platonic Solids from Geometry in Art and Architecture by Paul Calter.
- 3D models, by Evgeny Demidov.
- Paper models of polyhedra, by Korthals Altes.
- MatHSoliD Java applet for viewing and unfolding Platonic and Archimedean polyhedra, by E. daSilva and H. Bortolossi.
- Animated Platonic solids, with duals, from http://www.platonicsolids.info
- Applet for Platonic solids with duals, by G.M. Todesco.
- D.E. Smith, History of Mathematics, Vol. II, pg. 280
http://mathcs.slu.edu/escher/index.php/Spherical_Geometry
13
51
|Cardinal||one hundred|
|Ordinal||100th (hundredth / one hundredth)|
|Divisors||1, 2, 4, 5, 10, 20, 25, 50, 100|
|Roman numeral||C|
|Roman numeral (Unicode)||C, ⅽ|
|Tamil||௱, க00|
|Thai||ร้อย, ๑๐๐|
100 (one hundred; Roman numeral C, for centum) is the natural number following 99 and preceding 101. It is the square of 10; in scientific notation it is written as 10².
The standard SI prefix for a hundred is "hecto-". One hundred is the basis of percentages (per cent meaning "per hundred" in Latin), with 100% being a full amount.
100 is the sum of the first nine prime numbers, as well as the sum of four pairs of prime numbers (47 + 53, 17 + 83, 3 + 97, 41 + 59), and the sum of the cubes of the first four integers (100 = 1³ + 2³ + 3³ + 4³). Also, 2⁶ + 6² = 100, thus 100 is a Leyland number. One hundred is also an 18-gonal number. It is divisible by the number of primes below it, 25 in this case. But it cannot be expressed as the difference between any integer and the total of coprimes below it, making it a noncototient. However, it can be expressed as a sum of some of its divisors, making it a semiperfect number. 100 is a Harshad number in base 10, and also in base 4, and in that base it is a self-descriptive number. There are exactly 100 prime numbers whose digits are in strictly ascending order (e.g. 239, 2357, etc.). 100 is the smallest number whose common logarithm is a prime number.
One hundred is the atomic number of fermium. On the Celsius scale, 100 degrees is the boiling temperature of pure water at sea level.
- There are 100 blasts of the Shofar heard in the service of Rosh Hashana, the Jewish New Year.
- A religious Jewish person is expected to utter at least 100 blessings daily.
Most of the world's currencies are divided into 100 subunits; for example, one euro is one hundred cents and one pound sterling is one hundred pence. The U.S. hundred-dollar bill has Benjamin Franklin's portrait; the "Benjamin" is the largest U.S. bill in print. American savings bonds of $100 have Thomas Jefferson's portrait, while American $100 treasury bonds have Andrew Jackson's portrait.
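Several of the arithmetic claims above are easy to verify directly. The short Python sketch below is illustrative only and not part of the original article; the helper name is_prime is my own.

def is_prime(m):
    # Trial division is enough for these small checks.
    return m > 1 and all(m % d for d in range(2, int(m ** 0.5) + 1))

first_nine_primes = [p for p in range(2, 30) if is_prime(p)][:9]
print(sum(first_nine_primes))                    # 100: sum of the first nine primes
print(sum(i ** 3 for i in range(1, 5)))          # 100: 1^3 + 2^3 + 3^3 + 4^3
print(2 ** 6 + 6 ** 2)                           # 100: a Leyland number, x**y + y**x
print(100 % sum(int(d) for d in str(100)) == 0)  # True: Harshad number in base 10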
In other fields
- The number of years in a century
- The number of pounds in an American short hundredweight
- The number of tiles in a standard Scrabble set for the languages English, Arabic, Catalan, Czech, Danish, Esperanto, Finnish, Hungarian, Irish, Latin, Malaysian, Norwegian, Polish, Romanian, Slovak, Slovenian, Spanish, Swedish, and Turkish
- In Greece, India, Israel and Nepal, 100 is the police telephone number.
- In Belgium, 100 is the ambulance and firefighter telephone number.
- In the United Kingdom, 100 is the telephone number for the operator.
- Hundred Days, a.k.a. the Waterloo Campaign
- "The First Hundred Days" is an arbitrary benchmark of a President of the United States' performance at the beginning of his or her term.
- 100 is the HTTP status code indicating that the client should continue with its request.
- When a TV series reaches 100 episodes, it is generally considered viable for syndication. (For shows picked up midseason, this point is generally reached during a prime time series' 5th season.)
- The number of yards in an American football field (not including the end zones).
- The number of runs required for a cricket batsman to score a century, a significant milestone.
- The number of points required for a snooker cueist to score a century break, a significant milestone.
- The record number of points scored in one NBA game by a single player, set by Wilt Chamberlain of the Philadelphia Warriors on March 2, 1962.
- The minimum distance in yards for a Par 3 on a golf course.
- AFI's 100 Years...
- 100 Greatest Britons
- 100 Greatest Christmas Moments
- 100 Greatest Kids' TV shows
- Hundred (division)
- Hundred (word)
- Hundred Days
- Hundred Years' War
- List of highways numbered 100
- Top 100
- The Top 100 Crime Novels of All Time
- Top 100 winning pitchers of all time
- 100 Most Influential Books Ever Written
http://www.absoluteastronomy.com/topics/100_(number)
13
61
Coleoptera is an order of insects commonly called beetles. The word "coleoptera" is from the Greek κολεός, koleos, "sheath", and πτερόν, pteron, "wing", thus "sheathed wing". Coleoptera contains more species than any other order, constituting almost 25% of all known life-forms. About 40% of all described insect species are beetles (about 400,000 species), and new species are discovered frequently. Some estimates put the total number of species, described and undescribed, as high as 100 million, but 1 million is a more accepted figure. The largest taxonomic family, the Curculionidae (the weevils or snout beetles), also belongs to this order. The diversity of beetles is very wide-ranging. They are found in almost all types of habitats, but are not known to occur in the sea or in the polar regions. They interact with their ecosystems in several ways. They often feed on fungi, break down animal and plant debris, and eat other invertebrates. Some species are prey of various vertebrates, including birds and mammals. Certain species are agricultural pests, such as the Colorado potato beetle Leptinotarsa decemlineata, the boll weevil Anthonomus grandis, the red flour beetle Tribolium castaneum, and the mungbean or cowpea beetle Callosobruchus maculatus, while other species of beetles are important controls of agricultural pests. For example, beetles in the family Coccinellidae ("ladybirds" or "ladybugs") consume aphids, scale insects, thrips, and other plant-sucking insects that damage crops. Species in this order are generally characterized by a particularly hard exoskeleton and hard forewings (elytra, singular elytron). These elytra distinguish beetles from most other insect species, except for a few species of Hemiptera. The beetle's exoskeleton is made up of numerous plates called sclerites, separated by thin sutures. This design creates the armored defenses of the beetle while maintaining flexibility. The general anatomy of a beetle is quite uniform, although specific organs and appendages may vary greatly in appearance and function between the many families in the order. Like all insects, beetles' bodies are divided into three sections: the head, the thorax, and the abdomen. Coleopteran internal morphology is similar to that of other insects, although there are several examples of novelty. Such examples include species of water beetle which use air bubbles to dive under the water, and can remain submerged thanks to passive diffusion allowing oxygen to transfer from the water into the bubble. Beetles are endopterygotes, which means they undergo complete metamorphosis, a biological process by which an animal physically develops after birth or hatching, undergoing a series of conspicuous and relatively abrupt changes in its body structure. Coleopteran species have extremely intricate behavior when mating, using such methods as pheromones for communication to locate potential mates. Males may fight for females using very elongated mandibles, causing a strong divergence between males and females in sexual dimorphism. Coleoptera comes from the Greek koleopteros, literally "sheath-wing", from koleos meaning "sheath", and pteron, meaning "wing". The name was given to the group by Aristotle for their elytra, hardened shield-like forewings. The English name "beetle" comes from the Old English word bitela, literally meaning "small biter", deriving from the word bitel, which means biting.
In addition to names including the word "beetle", individual species of Coleoptera have a variety of common names, including fireflies, June bugs, ladybugs and weevils.
Distribution and diversity
Beetles are one of the largest orders of insects, with 350,000–400,000 species in four suborders (Adephaga, Archostemata, Myxophaga, and Polyphaga), making up about 40% of all insect species described. Even though classification at the family level is a bit unstable, there are about 500 recognized families and subfamilies. One of the first proposed estimates of the total number of beetle species on the planet is based on field data rather than on catalog numbers. The technique used for the original estimate, possibly as many as 12,000,000 species, was criticized, and the figure was later revised, with estimates of 850,000–4,000,000 species proposed. Some 70–95% of all beetle species, depending on the estimate, remain undescribed. The beetle fauna is not equally well known in all parts of the world. For example, the known beetle diversity of Australia is estimated at 23,000 species in 3265 genera and 121 families. This is slightly lower than reported for North America, a land mass of similar size with 25,160 species in 3526 genera and 129 families. While other predictions show there could be as many as 28,000 species in North America, including those currently undescribed, a realistic estimate of the little-studied Australian beetle fauna's true diversity could vary from 80,000 to 100,000. Patterns of beetle diversity can be used to illustrate factors that have led to the success of the group as a whole. Based on estimates for all 165 families, more than 358,000 species of beetles have been described and are considered valid. Most species (about 62%) are in six extremely diverse families, each with at least 20,000 described species: Curculionidae, Staphylinidae, Chrysomelidae, Carabidae, Scarabaeidae, and Cerambycidae. The smaller families account for 22% of the total species: 127 families with fewer than 1000 described species and 29 families with 1000–6000 described species. So, the success of beetles as a whole is driven not only by several extremely diverse lineages, but also by a high number of moderately successful lineages. The patterns seen today indicate that beetles went through a massive adaptive radiation early in their evolutionary history, with many of the resulting lineages flourishing through hundreds of millions of years to the present. The adaptive radiation of angiosperms helped drive the diversification of beetles, as four of the six megadiverse families of beetles are primarily angiosperm-feeders: Curculionidae, Chrysomelidae, Scarabaeidae, and Cerambycidae. However, even without the phytophagous groups, lineages of predators, scavengers, and fungivores are tremendously successful. Coleoptera are found in nearly all natural habitats, including freshwater and marine habitats, everywhere there is vegetative foliage, from trees and their bark to flowers, leaves, and underground near roots, and even inside plants in galls, in every plant tissue, including dead or decaying ones. Beetles are generally characterized by a particularly hard exoskeleton and hard forewings (elytra). The beetle's exoskeleton is made up of numerous plates called sclerites, separated by thin sutures. This design provides armored defenses while maintaining flexibility.
The general anatomy of a beetle is quite uniform, although specific organs and appendages may vary greatly in appearance and function between the many families in the order. Like all insects, beetles' bodies are divided into three sections: the head, the thorax, and the abdomen. The head, having mouthparts projecting forward or sometimes downturned, is usually heavily sclerotized and varies in size. The eyes are compound and may display remarkable adaptability, as in the case of whirligig beetles (family Gyrinidae), where they are split to allow a view both above and below the waterline. Other species also have divided eyes – some longhorn beetles (family Cerambycidae) and weevils – while many have eyes that are notched to some degree. A few beetle genera also possess ocelli, which are small, simple eyes usually situated farther back on the head (on the vertex). Beetles' antennae are primarily organs of smell, but may also be used to feel out a beetle's environment physically. They may also be used in some families during mating, or among a few beetles for defence. Antennae vary greatly in form within the Coleoptera, but are often similar within any given family. In some cases, males and females of the same species will have different antennal forms. Antennae may be clavate (flabellate and lamellate are sub-forms of clavate, or clubbed, antennae), filiform, geniculate, moniliform, pectinate, or serrate. Beetles have mouthparts similar to those of grasshoppers. Of these parts, the most commonly known are probably the mandibles, which appear as large pincers on the front of some beetles. The mandibles are a pair of hard, often tooth-like structures that move horizontally to grasp, crush, or cut food or enemies (see defence, below). Two pairs of finger-like appendages are found around the mouth in most beetles, serving to move food into the mouth. These are the maxillary and labial palpi. In many species the mandibles are sexually dimorphic, with those of the males enlarged enormously compared with those of females of the same species. The thorax is segmented into two discernible parts, the prothorax and the pterothorax. The pterothorax is the fused meso- and metathorax, which are commonly separate in other insect species, although flexibly articulated from the prothorax. When viewed from below, the thorax is the part from which all three pairs of legs and both pairs of wings arise. The abdomen is everything posterior to the thorax. When viewed from above, most beetles appear to have three clear sections, but this is deceptive: on the beetle's upper surface, the middle "section" is a hard plate called the pronotum, which is only the front part of the thorax; the back part of the thorax is concealed by the beetle's wings. This further segmentation is usually best seen on the abdomen. The multi-segmented legs end in two to five small segments called tarsi. Like many other insect orders, beetles bear claws, usually one pair, on the end of the last tarsal segment of each leg. While most beetles use their legs for walking, legs may be variously modified and adapted for other uses. Among aquatic families – Dytiscidae, Haliplidae, many species of Hydrophilidae and others – the legs, most notably the last pair, are modified for swimming and often bear rows of long hairs to aid this purpose. Other beetles have fossorial legs that are widened and often spined for digging. Species with such adaptations are found among the scarabs, ground beetles, and clown beetles (family Histeridae).
The hind legs of some beetles, such as flea beetles (within Chrysomelidae) and flea weevils (within Curculionidae), are enlarged and adapted for jumping. The elytra are connected to the pterothorax, so named because it is where the wings are attached (pteron meaning "wing" in Greek). The elytra are not used for flight, but tend to cover the hind part of the body and protect the second pair of wings (alae). They must be raised in order to move the hind flight wings. A beetle's flight wings are crossed with veins and are folded after landing, often along these veins, and stored below the elytra. In some beetles, the ability to fly has been lost. These include some ground beetles (family Carabidae) and some "true weevils" (family Curculionidae), but also desert- and cave-dwelling species of other families. Many have the two elytra fused together, forming a solid shield over the abdomen. In a few families, both the ability to fly and the elytra have been lost, the best known example being the glow-worms of the family Phengodidae, in which the females are larviform throughout their lives. The abdomen is the section behind the metathorax, made up of a series of rings, each with a hole for respiration called a spiracle, and composed of three different sclerites: the tergum, the pleura, and the sternum. The tergum in almost all species is membranous, or at least soft, and is concealed by the wings and elytra when the beetle is not in flight. The pleura (singular: pleuron) are usually small or hidden in some species, with each pleuron bearing a single spiracle. The sternum is the most widely visible part of the abdomen, being a more or less sclerotized segment. The abdomen itself does not have any appendages, although some species (for example, Mordellidae) have articulating sternal lobes. The digestive system of beetles is adapted primarily to the plants on which most species feed, with digestion carried out mostly in the anterior midgut, although in predatory species (for example, Carabidae) most digestion occurs in the crop by means of midgut enzymes. In Elateridae, the predatory larvae defecate enzymes onto their prey, so that digestion takes place extraorally. The alimentary canal consists of a short, narrow pharynx, a widened expansion (the crop), and a poorly developed gizzard. This is followed by a midgut, which varies in dimensions between species and bears numerous ceca, and a hindgut of varying length. There are typically four to six Malpighian tubules. The nervous system of beetles shows the full range of configurations found in insects, varying between species: in some, three thoracic and seven or eight abdominal ganglia can be distinguished, while in others all the thoracic and abdominal ganglia are fused into a composite structure. Oxygen is obtained via a tracheal system: air enters a series of tubes along the body through openings called spiracles and is then carried into increasingly finer tubes. Some species of diving beetles (Dytiscidae) carry a bubble of air with them whenever they dive beneath the water surface. This bubble may be held under the elytra or it may be trapped against the body by specialized hairs. The bubble usually covers one or more spiracles so the insect can breathe air from the bubble while submerged. An air bubble provides an insect with only a short-term supply of oxygen, but thanks to its physical properties, oxygen from the surrounding water diffuses into the bubble and displaces the nitrogen, a process called passive diffusion.
However, the volume of the bubble eventually diminishes and the beetle must return to the surface. Pumping movements of the body force the air through the tracheal system. Like other insects, beetles have hemolymph instead of blood; the beetle's open circulatory system is driven by a tube-like heart attached to the top inside of the thorax. Different glands are specialized for the different pheromones produced for finding mates. Pheromones of species of Rutelinae are produced from epithelial cells lining the inner surface of the apical abdominal segments, while the amino-acid-based pheromones of Melolonthinae are produced from eversible glands on the abdominal apex. Other species produce different types of pheromones: dermestids produce esters, and species of Elateridae produce fatty-acid-derived aldehydes and acetates. Also as a means of finding mates, fireflies (Lampyridae) use modified fat body cells with transparent surfaces, backed with reflective uric acid crystals, to produce light biosynthetically (bioluminescence). Light production is highly efficient: luciferin is oxidized by the enzyme luciferase in the presence of ATP (adenosine triphosphate) and oxygen, producing oxyluciferin, carbon dioxide, and light. A notable number of species have developed special glands that produce chemicals for deterring predators (see Defense and predation, below). The defensive glands of ground beetles (Carabidae), located at the posterior, produce a variety of hydrocarbons, aldehydes, phenols, quinones, esters, and acids released from an opening at the end of the abdomen. African carabid beetles (for example, Anthia and Thermophilum, the latter generally included within Anthia) employ the same chemical as ants: formic acid. Bombardier beetles, like other carabid beetles, have well-developed pygidial glands that empty from the lateral edges of the intersegmental membranes between the seventh and eighth abdominal segments. The gland consists of two chambers: the first holds hydroquinones and hydrogen peroxide, while the second holds hydrogen peroxide plus catalases. When the contents mix, the hydroquinone is oxidized to quinone and the hydrogen peroxide is broken down into water and oxygen (2 H2O2 → 2 H2O + O2); the reaction heats the mixture to around 100 °C (212 °F), and the liberated oxygen propels the explosive ejection. Tympanal organs, or hearing organs, each consisting of a membrane (tympanum) stretched across a frame backed by an air sac and associated sensory neurons, are described in two families. Several species of the genus Cicindela (Cicindelidae) have ears on the dorsal surface of the first abdominal segment beneath the wing; two tribes in the subfamily Dynastinae (Scarabaeidae) have ears just beneath the pronotal shield or neck membrane. The ears of both families are sensitive to ultrasonic frequencies, with strong evidence that they function to detect the presence of bats via their ultrasonic echolocation. Even though beetles constitute a large order and live in a variety of niches, examples of hearing are surprisingly scarce among species, though it is likely that most simply remain undiscovered.

Reproduction and development

Beetles are members of the Endopterygota and, like most other insects in that group, undergo complete metamorphosis, which consists of four main stages: the egg, the larva, the pupa, and the imago or adult. The larvae are commonly called grubs and the pupae are sometimes called cocoons. Beetles may display extremely intricate behavior when mating. Pheromone communication is likely to be important in the location of a mate.
Different species use different pheromone chemicals. Some scarab beetles (for example, Rutelinae) use pheromones derived from fatty acid synthesis, while other scarab beetles use amino acids and terpenoid compounds (for example, Melolonthinae). Another way species of Coleoptera find mates is the use of biosynthesized light, or bioluminescence. This special form of mating call is confined to fireflies (Lampyridae), which use abdominal light-producing organs. Males and females engage in a complex dialogue before mating, and different species are identified by differences in the duration, flight pattern, composition, and intensity of the signals. Before mating, males and females may engage in various forms of behavior. They may stridulate, or vibrate the object they are on. In some species (for example, Meloidae) the male climbs onto the dorsum of the female and strokes his antennae against her head, palps, and antennae. In the genus Eupompha of that family, the male draws his antennae along his longitudinal vertex. The pair may not mate at all if they do not perform the precopulatory ritual. Conflict can play a part in the mating rituals of species such as burying beetles (genus Nicrophorus), where conflicts between males and females rage until only one of each is left, thus ensuring reproduction by the strongest and fittest. Many male beetles are territorial and fiercely defend their small patch of territory from intruding males. In such species, the males often have horns on the head and/or thorax, making their overall body lengths greater than those of the females, unlike most insects. Pairing is generally short, but in some cases lasts for several hours. During pairing, sperm cells are transferred to the female to fertilize the eggs. A single female may lay from several dozen to several thousand eggs during her lifetime. Eggs are usually laid according to the substrate the larva will feed on upon hatching. Among other strategies, they can be laid loose in the substrate (for example, flour beetles), laid in clumps on leaves (for example, the Colorado potato beetle), individually attached (for example, the mungbean beetle and other seed borers), or buried in the medium (for example, the carrot weevil). Parental care varies between species, ranging from the simple laying of eggs under a leaf to certain scarab beetles that construct underground structures complete with a supply of dung to house and feed their young. Other beetles are leaf rollers, biting sections of leaves to cause them to curl inwards, then laying their eggs, thus protected, inside. The larva is usually the principal feeding stage of the beetle life cycle. Larvae tend to feed voraciously once they emerge from their eggs. Some feed externally on plants, such as those of certain leaf beetles, while others feed within their food sources; examples of internal feeders are most Buprestidae and longhorn beetles. The larvae of many beetle families are predatory like the adults (ground beetles, ladybirds, rove beetles). The larval period varies between species but can be as long as several years. Larvae vary greatly between species, but all have a well-developed, sclerotized head and distinguishable thoracic and abdominal segments (usually ten abdominal segments, though sometimes eight or nine). Beetle larvae can be differentiated from other insect larvae by their hardened, often darkened head, the presence of chewing mouthparts, and spiracles along the sides of the body.
Like adult beetles, the larvae are varied in appearance, particularly between beetle families. The larvae of ground beetles, some rove beetles, and others are somewhat flattened and highly mobile; such larvae are described as campodeiform. Some beetle larvae resemble hardened worms with dark head capsules and minute legs. These are elateriform larvae, and are found in the click beetle (Elateridae) and darkling beetle (Tenebrionidae) families. Some elateriform larvae of click beetles are known as wireworms. Beetles of the superfamily Scarabaeoidea have short, thick larvae described as scarabaeiform, but more commonly known as grubs. All beetle larvae go through several instars, which are the developmental stages between each moult. In many species the larvae simply increase in size with each successive instar as more food is consumed. In some cases, however, more dramatic changes occur. Among certain beetle families or genera, particularly those with parasitic lifestyles, the first instar (the planidium) is highly mobile in order to search out a host, while the following instars are more sedentary and remain on or within their host. This is known as hypermetamorphosis; examples include the blister beetles (family Meloidae) and some rove beetles, particularly those of the genus Aleochara. As with all endopterygotes, beetle larvae pupate, and from this pupa emerges a fully formed, sexually mature adult beetle, or imago. Adults have an extremely variable lifespan, from weeks to years, depending on the species. In some species (for example, Meloidae), the immature stages pass through several distinct forms during development, the hypermetamorphosis described above. Pupae never have mandibles (they are adecticous). In most, the appendages are not fused to the body (they are exarate), although in some groups the pupae are obtect. Aquatic beetles use several techniques for retaining air beneath the water's surface. Beetles of the family Dytiscidae hold air between the abdomen and the elytra when diving. Hydrophilidae have hairs on their under surface that retain a layer of air against their bodies. Adult crawling water beetles use both their elytra and their hind coxae (the basal segments of the back legs) in air retention, while whirligig beetles simply carry an air bubble down with them whenever they dive. The elytra allow beetles to both fly and move through confined spaces: the delicate hind wings are folded beneath the elytra when not in use and unfolded just before takeoff. The unfolding and folding of the wings is operated by muscles attached to the wing base; as long as tension on the radial and cubital veins remains, the wings remain straight. In day-flying species (for example, Buprestidae and Scarabaeidae), flight does not involve lifting the elytra far; instead the metathoracic wings are extended beneath the lateral margins of the elytra. Beetles have a variety of ways to communicate, some of which involve a sophisticated chemical language based on pheromones. The mountain pine beetle, for example, uses several forms of communication centered on its host tree. It can emit both an aggregative pheromone and an anti-aggregative pheromone: the aggregative pheromone attracts other beetles to the tree, and the anti-aggregative pheromone neutralizes the aggregative pheromone. This helps to avoid the harmful effects of having too many beetles competing for resources on one tree.
The mountain pine beetle can also communicate by stridulating, or rubbing body parts together to create sound: it has a "scraper" on its abdomen that it rubs against a grooved surface on the underside of its left wing cover to create a sound that is not audible to humans. Once female beetles have arrived on a suitable pine tree host, they begin to stridulate and produce aggregative pheromones to attract other unmated males and females. New females arrive and do the same as they land and bore into the tree. As the males arrive, they enter the galleries that the females have tunneled and begin to stridulate to let the females know they have arrived, and also to warn others that the female in that gallery is taken. At this point, the female stops producing aggregative pheromones and starts producing anti-aggregative pheromone to deter more beetles from coming. Because species of Coleoptera use environmental stimuli to communicate, they are affected by the climate. Microclimatic factors such as wind and temperature can disturb pheromone communication, since wind disperses the pheromones as they travel through the air, and stridulation can be masked when the substrate is vibrated by something else. Among insects, parental care is quite uncommon, found in only a few species, and some beetles display this social behavior. One theory states that parental care is necessary for the survival of the larvae, protecting them from adverse environmental conditions and predators. One species, the rove beetle Bledius spectabilis, illustrates both reasons for parental care: protection from physical and from biotic environmental factors. This species lives in salt marshes, so the eggs and larvae are endangered by the rising tide; the maternal beetle patrols the eggs and larvae and adjusts the burrow as needed to keep them from flooding and from asphyxiating. Another advantage is that the mother protects the eggs and larvae from the predatory carabid beetle Dicheirotrichus gustavi and from the parasitoid wasp Barycnemis blediator; up to 15% of larvae are killed by this parasitoid wasp, and their only protection is the maternal beetle guarding them in the burrow. Some species of dung beetle also display a form of parental care. Dung beetles collect animal feces, or "dung", from which their name is derived, and roll it into a ball, sometimes up to 50 times their own weight; the ball is used for breeding, though sometimes also to store food. Usually it is the male that rolls the ball, with the female hitch-hiking or simply following behind; in some cases the male and the female roll it together. When a spot with soft soil is found, they stop, bury the dung ball, and then mate underground. After mating, both or one of them prepares the brooding ball. When the ball is finished, the female lays eggs inside it, a form of mass provisioning. Some species do not leave after this stage, but remain to safeguard their offspring. Besides being abundant and varied, beetles are able to exploit the wide diversity of food sources available in their many habitats. Some are omnivores, eating both plants and animals. Other beetles are highly specialized in their diet. Many species of leaf beetles, longhorn beetles, and weevils are very host-specific, feeding on only a single species of plant. Ground beetles and rove beetles (family Staphylinidae), among others, are primarily carnivorous and catch and consume many other arthropods and small prey, such as earthworms and snails.
While most predatory beetles are generalists, a few species have more specific prey requirements or preferences. Decaying organic matter is a primary diet for many species. This can range from dung, which is consumed by coprophagous species (such as certain scarab beetles of the family Scarabaeidae), to dead animals, which are eaten by necrophagous species (such as the carrion beetles of the family Silphidae). Some of the beetles found within dung and carrion are in fact predatory, including the clown beetles, which prey on the larvae of coprophagous and necrophagous insects.

Defense and predation

Beetles and their larvae have a variety of strategies to avoid being attacked by predators or parasitoids. These include camouflage, mimicry, toxicity, and active defense. Camouflage involves the use of coloration or shape to blend into the surrounding environment. This sort of protective coloration is common and widespread among beetle families, especially those that feed on wood or vegetation, such as many of the leaf beetles (family Chrysomelidae) or weevils. In some of these species, sculpturing or variously colored scales or hairs cause the beetle to resemble bird dung or other inedible objects. Many of those that live in sandy environments blend in with the coloration of the substrate. An example is the giant African longhorn beetle (Petrognatha gigas), which resembles the moss and bark of the tree it feeds on. Another defense that often uses color or shape to deceive potential enemies is mimicry. A number of longhorn beetles (family Cerambycidae) bear a striking resemblance to wasps, which helps them avoid predation even though the beetles are in fact harmless. This defense is an example of Batesian mimicry and, together with other forms of mimicry and camouflage, occurs widely in other beetle families, such as the Scarabaeidae. Beetles may combine their color mimicry with behavioral mimicry, acting like the wasps they already closely resemble. Many beetle species, including ladybirds, blister beetles, and lycid beetles, can secrete distasteful or toxic substances to make them unpalatable or even poisonous. These same species often exhibit aposematism, in which bright or contrasting color patterns warn away potential predators, and there are, not surprisingly, a great many beetles and other insects that mimic these chemically protected species. Chemical defense is thus an important defense among species of Coleoptera and is usually advertised by bright colors; other species perform the behaviors associated with the release of noxious chemicals (for example, some Tenebrionidae). Chemical defense may serve purposes other than protection from vertebrates, such as protection against a wide range of microbes, or use as repellents. Some species, such as ground beetles (Carabidae), release chemicals from the abdomen as a spray, with surprising accuracy, to repel predators. Some species take advantage of the plants on which they feed, sequestering the chemicals that protect the plant and incorporating them into their own defense. African carabid beetles (for example, Anthia and Thermophilum) employ the same chemicals used by ants, while bombardier beetles have their own separate gland system and can spray potential predators from a considerable distance. Large ground beetles and longhorn beetles may defend themselves using strong mandibles and/or spines or horns to forcibly persuade a predator to seek out easier prey.
Many species have large protrusions on the thorax and head, such as the rhinoceros beetles, which can be used to defend themselves from predators. Many species of weevil that feed out in the open on the leaves of plants react to attack by employing a "drop-off reflex". Some combine this with thanatosis, drawing in their legs, antennae, and mandibles and using their cryptic coloration to blend in with the background; species with conspicuous, varied coloration do not do this, as they cannot camouflage themselves. Over 1000 species of beetles are known to be parasites, predators, or commensals in the nests of ants. Technically, most beetles and their larvae might be considered parasites, because they feed on plants and live inside the bark of trees and other woody plants, but such relationships are generally regarded as herbivory rather than parasitism. A few beetle species are even ectoparasitic on mammals. One such species is Platypsyllus castoris, which parasitises beavers (Castor spp.). This beetle lives as a beaver parasite both as a larva and as an adult, feeding on epidermal tissue and possibly on skin secretions and wound exudates. It is strikingly flattened dorso-ventrally, no doubt as an adaptation for slipping between the beavers' hairs, and is also wingless and eyeless, as many other ectoparasites are. Other parasitic beetles include parasites of other invertebrates, such as the small hive beetle (Aethina tumida), which infests honey bee hives. The larvae tunnel through comb towards stored honey or pollen, damaging or destroying cappings and comb in the process. Larvae defecate in the honey, which becomes discolored from the feces; this causes fermentation and frothiness, and the honey develops a characteristic odor of decaying oranges. Damage and fermentation cause honey to run out of the combs, destroying large amounts in hives and sometimes in the extracting rooms. Heavy infestations cause bees to abscond; some beekeepers have reported the rapid collapse of even strong colonies. Beetle-pollinated flowers are usually large, greenish or off-white in color, and heavily scented. Scents may be spicy, fruity, or similar to decaying organic material. Most beetle-pollinated flowers are flattened or dish shaped, with pollen easily accessible, although they may include traps to keep the beetle longer. The plant's ovaries are usually well protected from the biting mouthparts of their pollinators. Beetles may be particularly important pollinators in some parts of the world, such as the semi-arid areas of southern Africa and southern California and the montane grasslands of KwaZulu-Natal in South Africa. Mutualism is not common among most orders of insects, but there are some examples in the Coleoptera, such as the association between ambrosia beetles, the ambrosia fungus, and probably bacteria. The beetles excavate tunnels in dead trees in which they cultivate fungal gardens, their sole source of nutrition. After landing on a suitable tree, an ambrosia beetle excavates a tunnel in which it releases spores of its fungal symbiont. The fungus penetrates the plant's xylem tissue, digests it, and concentrates the nutrients on and near the surface of the beetle gallery, so both the beetles and the fungus benefit. The beetles, which may be unable to feed on the wood itself because of its toxins, use the relationship with the fungus to help overcome the host tree's defenses and to provide nutrition for their larvae. The beetle-fungal mutualism is chemically mediated by a bacterially produced polyunsaturated peroxide.
The molecule's selective toxicity toward the beetle's fungal antagonist, combined with the prevalence and localization of its bacterial source, indicates an insect-microbe association that is both mutualistic and coevolved. This unexpected finding in a well-studied system suggests that mutualistic associations between insects and antibiotic-producing bacteria are more common than currently recognized, and that identifying their small-molecule mediators can provide a powerful search strategy for therapeutically useful antimicrobial compounds. Pseudoscorpions are small arachnids with a flat, pear-shaped body and pincers that resemble those of scorpions (to which they are only distantly related), usually ranging from 2 to 8 millimetres (0.08 to 0.31 in) in length. Their small size allows them to hitch rides under the elytra of the giant harlequin beetle, dispersing over wide areas while simultaneously being protected from predators. They may also find mating partners as other individuals join them on the beetle. This would be a form of parasitism if the beetle were harmed in the process, but the beetle is, presumably, unaffected by the presence of the hitchhikers.

Phylogeny and systematics

Fossil record

A 2007 study based on the DNA of living beetles and maps of likely beetle evolution indicated that beetles may have originated during the Lower Permian, up to 299 million years ago. In 2009, a fossil beetle was described from the Pennsylvanian of Mazon Creek, Illinois, pushing the origin of the beetles back to 318 to 299 million years ago. Fossils from this time have been found in Asia and Europe, for instance in the red slate fossil beds of Niedermoschel near Mainz, Germany. Further fossils have been found in Obora, Czechia, and Tshekarda in the Ural mountains, Russia. However, there are only a few fossils from North America before the middle Permian, although both Asia and North America had been united in Euramerica. The first discoveries from North America were made in the Wellington formation of Oklahoma and were published in 2005 and 2008. As a consequence of the Permian–Triassic extinction event, the fossil record of insects, including beetles, from the Lower Triassic is sparse. There are a few exceptions, as in Eastern Europe: at the Babiy Kamen site in the Kuznetsk Basin, numerous beetle fossils were discovered, including entire specimens of the suborders Archostemata (e.g. Ademosynidae, Schizocoleidae), Adephaga (e.g. Triaplidae, Trachypachidae) and Polyphaga (e.g. Hydrophilidae, Byrrhidae, Elateroidea), in nearly perfectly preserved condition. However, species from the families Cupedidae and Schizophoroidae are not present at this site, whereas they dominate at other fossil sites from the Lower Triassic. Further records are known from Khey-Yaga, Russia, in the Korotaikha Basin. There are more than 150 important Jurassic sites with beetle fossils, the majority situated in Eastern Europe and North Asia. In North America, and especially in South America and Africa, the number of sites from that time period is smaller, and the sites have not been exhaustively investigated yet. Outstanding fossil sites include Solnhofen in Upper Bavaria, Germany; Karatau in South Kazakhstan; the Yixian formation in Liaoning, North China; and the Jiulongshan formation and further fossil sites in Mongolia.
In North America there are only a few sites with fossil records of insects from the Jurassic, namely the shell limestone deposits in the Hartford basin, the Deerfield basin and the Newark basin. A large number of important fossil sites worldwide contain beetles from the Cretaceous. Most are located in Europe and Asia and belonged to the temperate climate zone during the Cretaceous. A few of the fossil sites mentioned above for the Jurassic also shed some light on the Early Cretaceous beetle fauna (for example, the Yixian formation in Liaoning, North China). Further important sites from the Lower Cretaceous include the Crato fossil beds in the Araripe basin in Ceará, northern Brazil, as well as the overlying Santana formation; the latter was situated near the paleoequator, the position of the Earth's equator in the geologic past as defined for a specific geologic period. In Spain there are important sites near Montsec and Las Hoyas. In Australia, the Koonwarra fossil beds of the Korumburra group, South Gippsland, Victoria, are noteworthy. Important fossil sites from the Upper Cretaceous are Kzyl-Dzhar in South Kazakhstan and Arkagala in Russia. The oldest known insects that resemble species of Coleoptera date back to the Lower Permian (about 270 million years ago), though they have 13-segmented antennae, elytra with more fully developed venation and more irregular longitudinal ribbing, and an abdomen and ovipositor extending beyond the apex of the elytra. True beetles, by contrast, have features that include 11-segmented antennae, regular longitudinal ribbing on the elytra, and internal genitalia. At the end of the Permian, the biggest mass extinction in Earth's history took place, the Permian–Triassic extinction event: about 30% of all insect species became extinct. It remains the only mass extinction of insects in Earth's history to date, and it is the reason the fossil record of insects, including beetles, from the Lower Triassic (roughly 250 million years ago) is so sparse. During the Late Triassic, mycetophagous, or fungus-feeding, species (e.g. Cupedidae) appear in the fossil record. In the later stages of the Upper Triassic, algophagous, or algae-feeding, species (e.g. Triaplidae and Hydrophilidae) begin to appear, as well as predatory water beetles. The first primitive weevils appear (e.g. Obrienidae), as well as the first representatives of the rove beetles (e.g. Staphylinidae), which show no marked difference in body form compared with recent species. During the Jurassic (about 200 to 145 million years ago) there was a dramatic increase in the known diversity of family-level Coleoptera, including the development and growth of carnivorous and herbivorous species. Species of the superfamily Chrysomeloidea, which feed on a wide array of host plants ranging from cycads and conifers to angiosperms, are believed to have developed around the same time. Close to the Upper Jurassic, the proportion of the Cupedidae decreased, while at the same time the diversity of the early plant-eating, or phytophagous, species increased. Most of the recent phytophagous species of Coleoptera feed on flowering plants, or angiosperms, and it is believed that the increase in angiosperm diversity also influenced the diversity of the phytophagous beetles, which doubled during the Middle Jurassic.
However, doubts have more recently been raised, since the increase in the number of beetle families during the Cretaceous does not correlate with the increase in the number of angiosperm species. Around the same time, numerous primitive weevils (e.g. Curculionoidea) and click beetles (e.g. Elateroidea) appeared. The first jewel beetles (e.g. Buprestidae) are also present, but they remained rather rare until the Cretaceous. The first scarab beetles appeared around this time, but they were not coprophagous (feeding upon fecal matter); they presumably fed upon rotting wood with the help of fungi, an early example of a mutualistic relationship (see the discussion of mutualism above). The Cretaceous saw the initiation of the most recent round of southern landmass fragmentation, via the opening of the southern Atlantic Ocean and the isolation of New Zealand, while South America, Antarctica, and Australia grew more distant. During the Cretaceous the diversity of Cupedidae and Archostemata decreased considerably. Predatory ground beetles (Carabidae) and rove beetles (Staphylinidae) began to show different distribution patterns: whereas the Carabidae occurred predominantly in warm regions, the Staphylinidae and click beetles (Elateridae) preferred many areas with a temperate climate. Likewise, predatory species of Cleroidea and Cucujoidea hunted their prey under the bark of trees, together with the jewel beetles (Buprestidae). The diversity of jewel beetles increased rapidly during the Cretaceous, as they were the primary consumers of wood, while longhorn beetles (Cerambycidae) were rather rare, their diversity increasing only towards the end of the Upper Cretaceous. The first coprophagous beetles have been recorded from the Upper Cretaceous and are believed to have lived on the excrement of herbivorous dinosaurs, although there is still discussion about whether these beetles were always tied to mammals during their development. The first species in which both larvae and adults are adapted to an aquatic lifestyle are also found at this time. Whirligig beetles (Gyrinidae) were moderately diverse, although other early water beetles (e.g. Dytiscidae) were less so; the most widespread were the species of Coptoclavidae, which preyed on aquatic fly larvae. Today's beetles developed during the Paleogene and the Neogene, the more recent part of Earth history. During this time, the continents began to move toward their present positions. Around 5 million years ago the land bridge between South America and North America was formed, and faunal exchange between Asia and North America started. Although many recent genera and species already existed during the Miocene, their distribution differed considerably from today's. The suborders diverged in the Permian and Triassic. Their phylogenetic relationship is uncertain, with the most popular hypothesis being that Polyphaga and Myxophaga are most closely related, with Adephaga as the sister group to those two, and Archostemata as sister to the other three collectively. Although there are six other competing hypotheses, the most widely discussed alternative places Myxophaga as the sister group of all remaining beetles rather than just of Polyphaga. Evidence for a close relationship of the two suborders Polyphaga and Myxophaga includes the shared reduction in the number of larval leg articles.
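For readers who find competing tree topologies easier to compare in notation, the two hypotheses just described can be written out as Newick strings. The sketch below is a minimal illustration, not taken from the source: the clade groupings follow the text above, while the helper function and the unresolved (polytomy) arrangement of the remaining suborders in the alternative hypothesis are simplifications assumed here.

```python
# Illustrative only: the suborder relationships described in the text,
# rendered as Newick strings for side-by-side comparison.

def to_newick(node) -> str:
    """Render a nested tuple of clade names as a Newick string."""
    if isinstance(node, str):
        return node
    return "(" + ",".join(to_newick(child) for child in node) + ")"

# Favoured hypothesis: Myxophaga + Polyphaga are closest relatives,
# Adephaga is sister to that pair, Archostemata is sister to all three.
favoured = ("Archostemata", ("Adephaga", ("Myxophaga", "Polyphaga")))

# Most-discussed alternative: Myxophaga is sister to all remaining beetles
# (the arrangement of the other three suborders is left unresolved here).
alternative = ("Myxophaga", ("Archostemata", "Adephaga", "Polyphaga"))

print(to_newick(favoured) + ";")     # (Archostemata,(Adephaga,(Myxophaga,Polyphaga)));
print(to_newick(alternative) + ";")  # (Myxophaga,(Archostemata,Adephaga,Polyphaga));
```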
The Adephaga are further considered sister to Myxophaga and Polyphaga on the basis of the completely sclerotized elytra, the reduced number of crossveins in the hind wings, and the folded (as opposed to rolled) hind wings of those three suborders. Recent cladistic analysis of structural characteristics supports the Polyphaga plus Myxophaga hypothesis. The membership of the clade Coleoptera is not in dispute, with the exception of the twisted-wing parasites, Strepsiptera. These odd insects have been regarded variously as related to the beetle families Rhipiphoridae and Meloidae, with which they share first-instar larvae that are active, host-seeking triungulins and later-instar larvae that are endoparasites of other insects; as the sister group of beetles; or as more distantly related within the insects. There are about 450,000 species of beetles, representing about 40% of all known insects. Such a large number of species poses special problems for classification, with some families consisting of thousands of species and needing further division into subfamilies and tribes. This immense number of species allegedly led the evolutionary biologist J. B. S. Haldane to quip, when some theologians asked him what could be inferred about the mind of the Creator from the works of His Creation, that God displayed "an inordinate fondness for beetles".
- Polyphaga is the largest suborder, containing more than 300,000 described species in more than 170 families, including rove beetles (Staphylinidae), scarab beetles (Scarabaeidae), blister beetles (Meloidae), stag beetles (Lucanidae) and true weevils (Curculionidae). These beetles can be identified by the cervical sclerites (hardened parts of the head used as points of attachment for muscles), which are absent in the other suborders.
- Adephaga contains about 10 families of largely predatory beetles, including ground beetles (Carabidae), diving beetles (Dytiscidae) and whirligig beetles (Gyrinidae). In these beetles, the testes are tubular and the first abdominal sternum (a plate of the exoskeleton) is divided by the hind coxae (the basal joints of the beetle's legs).
- Archostemata contains four families of mainly wood-eating beetles, including the reticulated beetles (Cupedidae) and the telephone-pole beetle.
- Myxophaga contains about 100 described species in four families, mostly very small, including Hydroscaphidae and the genus Sphaerius.

Relationship to people

As pests

About three-quarters of beetle species are phytophagous in both the larval and adult stages, living in or on plants, wood, fungi, and a variety of stored products, including cereals, tobacco, and dried fruits. Because many of these plants are important for agriculture, forestry, and the household, beetles can be considered pests. Some of these species cause significant damage, such as the boll weevil, which feeds on cotton buds and flowers. The boll weevil crossed the Rio Grande near Brownsville, Texas, to enter the United States from Mexico around 1892 and had reached southeastern Alabama by 1915. By the mid-1920s it had entered all cotton-growing regions in the U.S., traveling 40 to 160 miles (60 to 260 km) per year. It remains the most destructive cotton pest in North America. Mississippi State University has estimated that since the boll weevil entered the United States it has cost U.S. cotton producers about $13 billion, and in recent times about $300 million per year. Many other species have also done extensive damage to plant populations, such as the bark beetles and the elm leaf beetle.
The bark beetle and elm leaf beetle, among other species, have been known to nest in elm trees. Bark beetles in particular carry Dutch elm disease as they move from infected breeding sites to feed on healthy elm trees. The spread of Dutch elm disease by the beetle has led to the devastation of elm trees in many parts of the Northern Hemisphere, notably in Europe and North America. The situation is worse when a species has developed resistance to pesticides, as in the case of the Colorado potato beetle, Leptinotarsa decemlineata, a notorious pest of potato plants. Crops are destroyed, and the beetle can only be treated by employing expensive pesticides, to many of which it has begun to develop resistance. As well as potatoes, suitable hosts include a number of plants from the potato family (Solanaceae), such as nightshade, tomato, eggplant and capsicum. The Colorado potato beetle has developed resistance to all major insecticide classes, although not every population is resistant to every chemical. Pests affect not only agriculture but also houses, as in the case of the deathwatch beetle. The deathwatch beetle, Xestobium rufovillosum (family Anobiidae), is of considerable importance as a pest of older wooden buildings in Great Britain. It attacks hardwoods such as oak and chestnut, always where some fungal decay has taken or is taking place. The actual introduction of the pest into buildings is thought to take place at the time of construction. Other pests include the coconut hispine beetle, Brontispa longissima, which feeds on young leaves and damages seedlings and mature coconut palms. On September 27, 2007, Metro Manila and 26 provinces of the Philippines were quarantined because of infestation by this pest, in an effort to save the $800-million Philippine coconut industry. The mountain pine beetle normally attacks mature or weakened lodgepole pine and can be the most destructive insect pest of mature pine forests. The current infestation in British Columbia is the largest Canada has ever seen.

As beneficial

Beetles are not only pests; they can also be beneficial, usually by controlling the populations of pests. One of the best known examples is the ladybugs or ladybirds (family Coccinellidae). Both the larvae and the adults are found feeding on aphid colonies. Other ladybugs feed on scale insects and mealybugs. If normal food sources are scarce, they may feed on other things, such as small caterpillars, young plant bugs, honeydew and nectar. Ground beetles (family Carabidae) are common predators of many different insects and other arthropods, including fly eggs, caterpillars, wireworms and others. Dung beetles (Coleoptera, Scarabaeidae) have been used successfully to reduce the populations of pestilent flies and parasitic worms that breed in cattle dung. The beetles make the dung unavailable to breeding pests by quickly rolling and burying it in the soil, with the added effect of improving soil fertility and nutrient cycling. The Australian Dung Beetle Project (1965 to 1985), led by Dr. George Bornemissza of the Commonwealth Scientific and Industrial Research Organisation, introduced species of dung beetle to Australia from South Africa and Europe and effectively reduced the bush fly (Musca vetustissima) population by 90%. Dung beetles play a remarkable role in agriculture: by burying and consuming dung, they improve nutrient recycling and soil structure.
They also protect livestock, such as cattle, by removing the dung, which, if left, could provide habitat for pests such as flies. Therefore, many countries have introduced the beetles for the benefit of animal husbandry. In developing countries, the beetles are especially important as an adjunct for improving standards of hygiene. The American Institute of Biological Sciences reports that dung beetles save the United States cattle industry an estimated US$380 million annually by burying above-ground livestock feces. Some beetles are also put to work doing things that people cannot do as well themselves: beetles of the family Dermestidae, for example, are often used in taxidermy and in the preparation of scientific specimens to clean bones of remaining soft tissue. The larvae are used to clean skulls because they do a thorough job and do not leave the tool marks that taxidermists' tools do. Another benefit is that, with no traces of flesh remaining and no emulsified fats in the bones, the trophy will not develop an unpleasant odor of decay. Using the beetle larvae means that all cartilage is removed along with the flesh, leaving the bones spotless.

As food

Insects are used as human food in 80% of the world's nations, and beetles are the most widely eaten insects: 344 species are known to be used as food. They are usually eaten in the larval stage. The mealworm is the most commonly eaten beetle species; the larvae of the darkling beetle and the rhinoceros beetle are also commonly eaten.

In art

Many beetles have beautiful and durable elytra that have been used as a material in the arts, beetlewing being the best known example. They are also sometimes incorporated into ritual objects for their religious significance. Whole beetles, either by themselves or encased in clear plastic, are made into everything from cheap souvenirs such as key chains to expensive fine-art jewelry. In parts of Mexico, beetles of the genus Zopherus are made into living brooches by attaching costume jewelry and golden chains, which is made possible by the incredibly hard elytra and sedentary habits of the genus.

In ancient culture

Beetles were prominent in many ancient cultures. Of these, the most prominent may be the dung beetle in Ancient Egypt. Several species of dung beetle, most notably Scarabaeus sacer (often referred to as the sacred scarab), enjoyed a sacred status among the ancient Egyptians. Popular interpretation in modern academia theorizes that the hieroglyphic image of the beetle represents a triliteral phonetic that Egyptologists transliterate as xpr or ḫpr and translate as "to come into being", "to become" or "to transform". The derivative term xprw or ḫpr(w) is variously translated as "form", "transformation", "happening", "mode of being" or "what has come into being", depending on the context. It may have existential, fictional, or ontological significance. The scarab was linked to Khepri ("he who has come into being"), the god of the rising sun. The ancients believed that the dung beetle was only male in gender and reproduced by depositing semen into a dung ball. The supposed self-creation of the beetle resembles that of Khepri, who creates himself out of nothing. Moreover, the dung ball rolled by a dung beetle resembles the sun. Plutarch also wrote about these beliefs. The ancient Egyptians believed that Khepri renewed the sun every day before rolling it above the horizon, then carried it through the other world after sunset, only to renew it, again, the next day.
Some New Kingdom royal tombs exhibit a threefold image of the sun god, with the beetle as symbol of the morning sun. The astronomical ceiling in the tomb of Ramses VI portrays the nightly "death" and "rebirth" of the sun as being swallowed by Nut, goddess of the sky, and re-emerging from her womb as Khepri. Excavations of ancient Egyptian sites have yielded images of the scarab in bone, ivory, stone, Egyptian faience, and precious metals, dating from the Sixth Dynasty up to the period of Roman rule. They are generally small and bored to allow stringing on a necklace, and the base bears a brief inscription or cartouche. Some have been used as seals. Pharaohs sometimes commissioned the manufacture of larger images with lengthy inscriptions, such as the commemorative scarab of Queen Tiye. Massive sculptures of scarabs can be seen at Luxor Temple, at the Serapeum in Alexandria (see Serapis) and elsewhere in Egypt. The scarab was of prime significance in the funerary cult of ancient Egypt. Scarabs were generally, though not always, cut from green stone and placed on the chest of the deceased. Perhaps the most famous example of such "heart scarabs" is the yellow-green pectoral scarab found among the entombed provisions of Tutankhamen, carved from a large piece of Libyan desert glass. The purpose of the "heart scarab" was to ensure that the heart would not bear witness against the deceased at judgement in the afterlife. Other possibilities are suggested by the "transformation spells" of the Coffin Texts, which affirm that the soul of the deceased may transform (xpr) into a human being, a god, or a bird and reappear in the world of the living. In contrast to funerary contexts, some of ancient Egypt's neighbors adopted the scarab motif for seals of varying types. The best known of these are the Judean LMLK seals, in which scarab beetles appear on 8 of 21 designs; they were used exclusively to stamp impressions on storage jars during the reign of Hezekiah. The scarab remains an item of popular interest thanks to modern fascination with the art and beliefs of ancient Egypt. Scarab beads in semiprecious stones or glazed ceramics can be purchased at most bead shops, while at Luxor Temple a massive ancient scarab has been roped off to discourage visitors from rubbing the base of the statue "for luck".

In modern culture

Beetles still play roles in modern culture. One example is insect fighting for entertainment and gambling, a sport that exploits the territorial behavior and mating competition of certain species of large beetles. Enthusiasts collect and raise various species of insects for fighting; among beetles, the most popular are large species of stag beetles, rhinoceros beetles (including the Japanese kabutomushi), and goliath beetles. The study of beetles is called coleopterology (from Coleoptera, see above, and Greek -λογία, -logia), and its practitioners are coleopterists. Coleopterists have formed organizations to facilitate the study of beetles. Among these is The Coleopterists Society, an international organization based in the United States. Such organizations may have both professionals and amateurs interested in beetles as members. Research in this field is often published in peer-reviewed journals specific to the field of coleopterology, though journals dealing with general entomology also publish many papers on various aspects of beetle biology.
Some of the journals specific to beetle research are:
- The Coleopterist (United Kingdom beetle fauna)
- The Coleopterists Bulletin (published by The Coleopterists Society)
- Elytron (published by the European Association of Coleopterology)

Further reading

- Poul Beckmann. Living Jewels: The Natural Design of Beetles. ISBN 3-7913-2528-0.
- J. Cooter & M. V. L. Barclay, ed. (2006). A Coleopterist's Handbook. Amateur Entomologists' Society. ISBN 0-900054-70-0.
- Beetle Larvae of the World. Entomological Society of America. ISBN 0-643-05506-1.
- David Grimaldi & Michael S. Engel. Evolution of the Insects. ISBN 0-521-82149-5.
- K. W. Harde. A Field Guide in Color to Beetles. pp. 7–24. ISBN 0-7064-1937-5.
- R. E. White (1983). Beetles. New York, NY: Houghton Mifflin Company. ISBN 0-395-91089-7.
http://tolweb.org/coleoptera. Retrieved 2011-03-18. - ^ G. Evelyn Hutchinson (1959). "Homage to Santa Rosalia or why are there so many kinds of animals?". The American Naturalist 93 (870): 145?159. doi:10.1086/282070. JSTOR 2458768. - ^ a b Mississippi State University. "History of the Boll Weevil in the United States". Economic impacts of the boll weevil. http://www.bollweevil.ext.msstate.edu/webpage_history.htm. [dead link] - ^ "Elm Leaf Beetle". University of California. May 11, 2011. http://cisr.ucr.edu/elm_leaf_beetle.html. Retrieved July 17, 2011. - ^ A. Alyokhin, M. Baker, D. Mota-Sanchez, G. Dively & E. Grafius (2008). "Colorado potato beetle resistance to insecticides". American Journal of Potato Research 85 (6): 395?413. doi:10.1007/s12230-008-9052-0. - ^ Adcock, Edward (2005). "Pests - Death watch beelte". Conservation and collective care. University of Oxford. http://www.bodley.ox.ac.uk/dept/preservation/training/pests/watch.htm. Retrieved July 17, 2011. - ^ Amy R. Remo (September 27, 2007). "Beetles infest coconuts in Manila, 26 provinces". Philippine Daily Inquirer. http://newsinfo. inquirer.net/breakingnews/nation/view_article.php?article_id=91109. - ^ "The Mountain Pine Beetle in British Columbia". Natural Resources Canada. August 19, 2008. http://mpb.cfs.nrcan.gc.ca/biology/introduction_e.html. Retrieved June 24, 2010. [dead link] - ^ "'Deadly ladybird' sighted in UK". BBC News. 5 October 2004. http://news.bbc.co.uk/1/hi/england/essex/3715120.stm. Retrieved 17 June 2010. - ^ B. Kromp (1999). "Carabid beetles in sustainable agriculture: a review on pest control efficacy, cultivation aspects and enhancement". Agriculture, Ecosystems and Environment 74 (1?3): 187?228. doi:10.1016/S0167-8809(99)00037-7. - ^ Brown, J., Scholtz, C.H., Janeau, J-L., Grellier, S. and Podwojewski, P. 2010. Dung beetles (Coleoptera: Scarabaeidae) can improve soil hydrological properties. Applied Soil Ecology 46: 9-16 - ^ Losey, John E. and Mace Vaughan (2006). "The Economic Value of Ecological Services Provided by Insects". BioScience 56(4):311-323. - ^ P., Christy. "T he Benefit of Dermestid Beetles". The Daily Puppy. http://www.dailypuppy.com/articles/the-benefit-of-dermestid-beetles/2dc663b4-4292-a1ca-9ab0-648c8378cf7d. Retrieved July 17, 2011. [dead link] - ^ Damian Carrington. "Insects could be the key to meeting food needs of growing global population", The Guardian 1 August 2010. Retrieved 27 February 2011. - ^ Ramos-Elorduy, Julieta; Menzel, Peter (1998). Creepy crawly cuisine: the gourmet guide to edible insects. Inner Traditions / Bear & Company. p. 5. ISBN ;9780892817474. http://books.google.com/?id=Q7f1LkFz11gC. - ^ Life cycle of the rounded jewel beetles, Sternocera spp. ??????????????????????????????????? 2 ?? - Siam Insect Zoo-Museum - ^ Michael A. Ivie (2002). "105. Zopheridae". In Ross H. Arnett & Michael Charles Thomas. American Beetles: Polyphaga: Scarabaeoidea through Curculionoidea. Volume 2 of American Beetles. CRC Press. pp. 457?462. ISBN 9780849309540. - ^ a b Zabludoff, Marc (2008). Beetles. Malaysia: Michelle Bison. pp. 14?17. ISBN 9780761425328. - ^ Dollinger, Andr? (January 2002). "Ancient Egyptian bestiary: Insects". http://www.reshafim.org.il/ad/egypt/bestiary/insects.htm. Retrieved July 19, 2011. - ^ "Isis and Osiris", Moralia, in volume V of the Loeb Classical Library edition, 1936, now in the public domain. Retrieved on 2007-08-02. - ^ Morales-Correa, Ben (2006). "Egyptian Symbols". All-About-Egypt. http://www.all-about-egypt.com/egyptian-symbols.html. 
External links
- Coleoptera from the Tree of Life Web Project
- List of major Beetle collections
- Scarab beetles as Religious symbols
- Beetles and coleopterologists
- (German) Käfer der Welt
- Beetles – Coleoptera
- Beetle larvae
- Beetle images
- Gallery of European beetles
- Poland beetles
- Identification keys to some British beetles
- North American Beetles
- Beetles of North America
- Texas beetle information
- The Beetle Ring
- Beetles of Africa
- Beetles of Mauritius
- Southeast Asian beetles
The Order Coleoptera is further organized into finer groupings including:
- Suborder (6): Adephaga · Archecoleoptera · Archostemata · Myxophaga · Polyphaga · Protocoleoptera
- Infraorder (4): Bostrichiformia · Cucujiformia · Elateriformia · Scarabaeiformia
- Family (231): Acanthocnemidae · Aclopidae · Aculagnathidae · Ademosynidae · Aderidae · Aegialiidae · Aglycyderidae · Agyrtidae · Alleculidae · Amphizoidae · Anobiidae · Anthicidae · Anthribidae · Aphidiidae · Aphodiidae · Apionidae · Archeocrypticidae · Artematopidae · Asiocoleidae · Attelabidae · Aulonocnemidae · Belidae · Belohinidae · Biphyllidae · Boganiidae · Bolboceratidae · Boridae · Bostrichidae · Bostrychidae · Bothrideridae · Brachinidae · Brachyceridae · Brachypsectridae · Brachypteridae · Brenthidae · Brentidae · Bruchidae · Buprestidae · Byrrhidae · Byturidae · Callirhipidae · Cantharidae · Carabidae · Catopidae · Cavognathidae · Cebrionidae · Cephaloidae · Cerambycidae · Ceratocanthidae · Cerophytidae · Cerylonidae · Cetoniidae · Chaetosomatidae · Chalcodryidae · Chelonariidae · Cholevidae · Chrysomelidae · Cicindelidae · Ciidae · Clambidae · Cleridae · Cneoglossidae · Coccinellidae · Colydiidae · Corticariidae · Corylophidae · Crowsoniellidae · Cryptolaryngidae · Cryptophagidae · Cucujidae · Cupedidae · Curculionidae · Cybocephalidae · Dascillidae · Dasyceridae · Dermestidae · Derodontidae · Diphyllostomatidae · Discolomatidae · Discolomidae · Drilidae · Dryophthoridae · Dryopidae · Dynastidae · Dytiscidae · Elateridae · Elmidae · Elminthidae · Endomychidae · Erirhinidae · Erotylidae · Euchiridae · Eucinetidae · Eucnemidae · Eulichadidae · Eurhynchidae · Georyssidae · Geotrupidae · Glaphyridae · Glaresidae · Gyrinidae · Haliplidae · Harpalidae · Helodidae · Helotidae · Heteroceridae · Histeridae · Homalisidae · Hybosoridae · Hydraenidae · Hydrophilidae · Hydroscaphidae · Hygrobiidae · Ithyceridae · Jacobsoniidae · Laemophloeidae · Lagriidae · Lamingtoniidae · Lampyridae · Languriidae · Largriidae · Lathridiidae · Latridiidae · Leiodidae · Lepiceridae · Leptinidae · Limnichidae · Limulodidae · Lucanidae · Lutrochidae · Lycidae · Lyctidae · Lymexylidae · Lymexylonidae · Malachiidae · Melandryidae · Meloidae · Melolonthidae · Melyridae · Micromalthidae · Micropeplidae · Microsporidae · Monommidae · Monotomidae · Mordellidae · Mycetophagidae · Mycteridae · Nanophyidae · Nemonychidae · Nitidulidae · Nosodendridae · Noteridae · Ochodaeidae · Oedemeridae · Omethidae · Ommatidae · Orphnidae · Oxycorynidae · Pachypodidae · Passalidae · Passandridae · Paussidae · Pedilidae · Perimylopidae · Permocupedidae ·
Perothopidae · Phalacridae · Phengodidae · Phloeostichidae · Phloiophilidae · Phycosecidae · Platypodidae · Pleocomidae · Propalticidae · Prostomidae · Proterhinidae · Protocucujidae · Pselaphidae · Psephenidae · Pterogeniidae · Ptiliidae · Ptilodactylidae · Ptinidae · Pyrochroidae · Pythidae · Raymondionymidae · Rhinorhipidae · Rhipiceridae · Rhipiphoridae · Rhizophagidae · Rhombocoleidae · Rhynchitidae · Rhynchophoridae · Rhysodidae · Ripiphoridae · Rutelidae · Salpingidae · Scaphidiidae · Scarabaeidae · Schizocoleidae · Schizophoridae · Schizopodidae · Scirtidae · Scolytidae · Scraptidae · Scraptiidae · Scydmaenidae · Serropalpidae · Silphidae · Silvanidae · Sphaeritidae · Sphaerosomatidae · Sphindidae · Staphylinidae · Stenotrachelidae · Synchroidae · Synteliidae · Taldycupedidae · Telegeusidae · Tenebrionidae · Termitotrogidae · Tetratomidae · Throscidae · Torridincolidae · Tricoleidae · Trictenotomidae · Trogidae · Trogositidae · Trogossitidae · Tshekardocoleidae · Zopheridae
- Species: ZipcodeZoo has pages for 200,044 species and subspecies in the Order Coleoptera.
Acanthocnemidae is a small family of beetles, in the suborder Polyphaga. The single species of Acanthocnemidae, Acanthocnemus nigricans, is native to Australia. [more]
Aderidae, the ant-like leaf beetles, is a family of beetles that bear some resemblance to ants. The family consists of about 1,000 species in about 50 genera, of which most are tropical, although overall distribution is worldwide. [more]
Aglycyderini are a tribe of belids, primitive weevils of the family Belidae. Like in other belids, their antennae are straight, not elbowed as in the true weevils (Curculionidae). They occur only on the Pacific Islands and in the Macaronesian region. [more]
Agyrtidae, or primitive carrion beetles, are a small family of polyphagan beetles. They are found in mostly temperate areas of the northern hemisphere and in New Zealand. They feed on decaying organic material. [more]
Amphizoa is a genus of beetles, placed in its own family, Amphizoidae. It comprises six species, three from western North America and three from China. The vernacular name "trout-stream beetle" comes from the original finding of A. insolens and A. lecontei in high mountain streams, although other species occur at lower elevation. They are notable as a possible intermediate stage between terrestrial and aquatic beetles; while living in the water, they are not good swimmers and physically resemble ground beetles more than other types of water beetle. [more]
Anobiidae is a family of beetles. The larvae of a number of species tend to bore into wood, earning them the name "woodworm" or "wood borer". A few species are pests, causing damage to wooden furniture and house structures, notably the death watch beetle, Xestobium rufovillosum, and the common furniture beetle, Anobium punctatum. [more]
Anthicidae is a family of beetles, sometimes called ant-like flower beetles or ant-like beetles, that resemble ants. The family consists of over 3,000 species in about 100 genera. [more]
Anthribidae is a family of beetles also known as fungus weevils. The antennae are not elbowed, may occasionally be longer than the body and thread-like, and can be the longest of any members of Curculionoidea. As in the Nemonychidae, the labrum appears as a separate segment to the clypeus, and the maxillary palps are long and projecting. [more]
The Attelabidae or leaf-rolling weevils are a widespread family of weevils. There are more than 2000 species.
They are included within the primitive weevils, because of their straight antennae, which are inserted near the base of the rostrum. The prothorax is much narrower than the base of the elytra on the abdomen. [more]
Belidae is a family of weevils, called belids or primitive weevils because they have straight antennae, unlike the "true weevils" or Curculionidae, which have elbowed antennae. They are sometimes known as "cycad weevils", but this properly refers to a few species from genera such as Rhopalotria. [more]
Belohina inexpectata is a polyphagan beetle and the sole member of the family Belohinidae. It is endemic to southern Madagascar. Only a few specimens of this species are known. [more]
Boganiidae is a family of beetles, in the suborder Polyphaga. [more]
The family Boridae is a small group of beetles with no vernacular common name, though recent authors have coined the name conifer bark beetles. [more]
The Bostrichidae are a family of beetles with more than 700 described species. They are commonly called auger beetles, false powderpost beetles or horned powderpost beetles. The head of most auger beetles cannot be seen from above, as it is downwardly directed and hidden by the thorax. An exception is the powderpost beetles from the subfamily Lyctinae. [more]
Bothrideridae is a family of beetles, in the suborder Polyphaga. Larvae of some species are ectoparasites of the larvae and pupae of wood-boring beetles. [more]
Brachypsectridae is a family of beetles commonly known as the Texas beetles. There is only one genus, Brachypsectra. The type species, Brachypsectra fulva (LeConte, 1874), occurs in North America. There are three other species which occur in southern India, Singapore and northwestern Australia. Two other extant and fossil species have been described from the Dominican Republic. [more]
Brentidae is a cosmopolitan family of primarily xylophagous beetles also known as straight-snouted weevils. The concept of this family has recently been expanded with the inclusion of groups formerly placed in the Curculionidae, including the subfamilies Cyladinae and Nanophyinae, as well as the Ithycerinae, previously considered a separate family. They are most diverse in the tropics, but occur throughout the temperate regions of the world. They are among the families of weevils that have non-elbowed antennae, and tend to be elongate and flattened, though there are numerous exceptions. [more]
The bean weevils or seed beetles are a subfamily (Bruchinae) of beetles, now placed in the family Chrysomelidae, though they have historically been treated as a separate family. They are granivores, and typically infest various kinds of seeds or beans, living for most of their lives inside a single seed. The family includes about 1,350 species found worldwide. [more]
Buprestidae is a family of beetles, known as jewel beetles or metallic wood-boring beetles because of their glossy iridescent colors. The family is among the largest of the beetles, with some 15,000 species known in 450 genera. In addition, almost 100 fossil species have been described. [more]
Byrrhidae, the pill beetles, is a family of beetles in the superfamily Byrrhoidea.
[more]
Byturidae, also known as fruitworms, is a family of beetles, in the suborder Polyphaga. The larvae develop in fruits. Byturus unicolor affects species of Rubus and Geum, while the larvae of the raspberry beetle develop in raspberry plants. [more]
The soldier beetles, Cantharidae, are relatively soft-bodied, straight-sided beetles, related to the Lampyridae or firefly family, but unable to produce light. They are cosmopolitan in distribution. One common British species is bright red, reminding people of the red coats of soldiers, hence the common name. A secondary common name is leatherwing, obtained from the texture of the wing covers. [more]
Ground beetles are a large, cosmopolitan family of beetles, Carabidae, with more than 40,000 species worldwide, approximately 2,000 of which are found in North America and 2,700 in Europe. [more]
Cavognathidae is a family of beetles, in the suborder Polyphaga. [more]
The longhorn beetles (Cerambycidae; also known as long-horned beetles or longicorns) are a cosmopolitan family of beetles, typically characterized by extremely long antennae, which are often as long as or longer than the beetle's body. In various members of the family, however, the antennae are quite short (e.g., Neandra brunnea) and such species can be difficult to distinguish from related beetle families such as Chrysomelidae. The family is large, with over 20,000 species described, slightly more than half from the Eastern Hemisphere. Several are serious pests, with the larvae boring into wood, where they can cause extensive damage to either living trees or untreated lumber (or, occasionally, to wood in buildings; the old-house borer, Hylotrupes bajulus, being a particular problem indoors). A number of species mimic ants, bees, and wasps, though a majority of species are cryptically colored. The rare titan beetle (Titanus giganteus) from northeastern South America is often considered the largest (though not the heaviest, and not the longest including legs) insect, with a maximum known body length of just over 16.7 centimetres (6.6 in). [more]
The Cerylonidae are a family of small to minute beetles (usually 2 mm or less), in the suborder Polyphaga, which occur most commonly in forest litter and under bark. At present, there are about 40 genera and over 300 described species known from all of the major zoogeographic regions. Crowson (1955) first recognized the Cerylonidae as an independent clavicorn family, including the cerylonines and murmidiines, as well as Euxestus and its allies, but these groups have been treated as tribes of the heteromerous family Colydiidae by both Hetschko (1930) and Arnett (1968). In their world generic revision of the family, Sen Gupta and Crowson (1973) added Anommatus Wesmael, Abromus Reitter, and Ostomopsis Scott, while transferring Eidoreus Sharp (= Eupsilobius Casey) to the Endomychidae. A revision of the 10 genera and 18 species of Cerylonidae occurring in America north of Mexico followed the classification presented by Sen Gupta and Crowson; the interrelationships among the subgroups, however, are still obscure, and the Euxestinae, Anommatinae, Metaceryloninae (not North American), Murmidiinae, Ostomopsinae, and Ceryloninae have been treated as independent subfamilies.
[more]
Flower chafers are a group of scarab beetles, comprising the subfamily Cetoniinae. Many species are diurnal and visit flowers for pollen and nectar, or to browse on the petals. Some species also feed on fruit. The group is also called fruit and flower chafers, flower beetles and flower scarabs. There are around 4,000 species, many of them still undescribed. [more]
Beetles in the family Chrysomelidae are commonly known as leaf beetles. This is a family of over 35,000 species in more than 2,500 genera, one of the largest and most commonly encountered of all beetle families. [more]
The tiger beetles are a large group of beetles known for their aggressive predatory habits and running speed. The fastest species of tiger beetle can run at a speed of 9 km/h (5.6 mph), which, relative to its body length, is about 22 times the speed of former Olympic sprinter Michael Johnson, the equivalent of a human running at 480 miles per hour (770 km/h). As of 2005, about 2,600 species and subspecies were known, with the richest diversity in the Oriental (Indo-Malayan) region, followed by the Neotropics. [more]
The minute tree-fungus beetles, family Ciidae, are a sizeable group of beetles which inhabit Polyporales bracket fungi or coarse woody debris. Most numerous in warmer regions, they are nonetheless widespread and a considerable number of species occur as far polewards as Scandinavia, for example. [more]
Cleridae are a family of beetles of the superfamily Cleroidea. They are commonly known as checkered beetles. The Cleridae family has a worldwide distribution, and a variety of habitats and feeding preferences. [more]
Cneoglossidae is a family of beetles, in the large suborder Polyphaga. It contains nine species in a single genus. [more]
Coccinellidae is a family of beetles, known variously as ladybirds (UK, Ireland, Australia, Sri Lanka, Pakistan, South Africa, New Zealand, India, Malta, some parts of Canada and the US), or ladybugs (North America). Scientists increasingly prefer the names ladybird beetles or lady beetles, as these insects are not true bugs. Lesser-used names include God's cow, ladycock, lady cow, and lady fly. [more]
Colydiinae is a subfamily of beetles, commonly known as cylindrical bark beetles. They have been treated historically as a family, but have recently been moved into the Zopheridae, where they constitute the bulk of the diversity of the new composite family, with about 140 genera worldwide. Not much is known about the biology of these beetles. Most feed on fungi; others are carnivores and feed on arthropods. [more]
Corylophidae is a family of beetles, sometimes known as the minute fungus beetles. [more]
Crowsoniellidae is a small family of beetles, in the suborder Archostemata. [more]
Cryptophagidae is a family of beetles with representatives found in all ecozones. Only around 800 species have been described, but it seems certain that many others await discovery. Members of this family are commonly called silken fungus beetles and both adults and larvae appear to feed exclusively on fungi, although in a wide variety of habitats and situations (e.g.
rotting wood, shed animal fur/feathers). These beetles are generally small to very small, usually with a basically oval body shape with a slight "waist". [more]
The Cucujidae, sometimes called flat bark beetles, are a family of distinctively flat beetles found worldwide under the bark of dead and live trees. The family consists of about 40 species in four genera. [more]
Cupedidae is a small family of beetles, notable for the square pattern of "windows" on their elytra (hard forewings), which gives the family their common name of reticulated beetles. [more]
Curculionidae is the family of the "true" weevils (or "snout beetles"). In 1998 it was recognized as the largest of any animal family, with over 40,000 species described worldwide at that time. Today, it is still one of the largest known. [more]
Dermestidae are a family of Coleoptera that are commonly referred to as skin beetles. Other common names include larder beetles, hide or leather beetles, carpet beetles, and khapra beetles. There are approximately 500 to 700 species worldwide. They can range in size from 1 to 12 mm. Key characteristics for adults are round, oval-shaped bodies covered in scales or setae. The (usually) clubbed antennae fit into deep grooves. The hind femora also fit into recesses of the coxa. Larvae are scarabaeiform and also have setae. [more]
Derodontidae is a family of beetles, in its own superfamily, Derodontoidea, sometimes known as tooth-necked fungus beetles. There are 38 species in 4 genera and 3 subfamilies. Beetles of this family are small, between 2 and 6 mm in length, with spiny margins on their pronotum (part of the thorax) that give them their name. The genus Laricobius lacks these spines. They have two ocelli on the top of their heads. [more]
The false stag beetles (Diphyllostoma) are a group of three species of rare beetles known only from California. Almost nothing is known of their life history beyond the fact that the adults are diurnal and the females are flightless; larvae have not been observed. [more]
Discolomatidae is a family of beetles, in the suborder Polyphaga. [more]
Dryophthorinae is a weevil subfamily within the family Curculionidae. [more]
The rhinoceros beetles or rhino beetles are a subfamily (Dynastinae) of the scarab beetle family (Scarabaeidae). Other common names, some for particular groups of rhino beetles, include Hercules beetles, unicorn beetles and horn beetles. There are over 300 known species of rhino beetles. [more]
Dytiscidae, based on the Greek dytikos (δυτικός), "able to dive", are the predaceous diving beetles, a family of water beetles. They are about 25 mm (one inch) long on average, though there is much variation between species. Dytiscus latissimus, the largest, can grow up to 45 mm long. Most are dark brown, blackish or dark olive in color, with golden highlights in some subfamilies. They have short but sharp mandibles, and immediately upon biting they deliver digestive enzymes. The larvae are commonly known as water tigers. The family has not been comprehensively cataloged since 1920, but is estimated to include about 4,000 species in over 160 genera. [more]
The family Elateridae is commonly called click beetles (or "typical click beetles" to distinguish them from related families such as the Cerophytidae), elaters, snapping beetles, spring beetles or "skipjacks". They are a cosmopolitan beetle family characterized by the unusual click mechanism they possess.
There are a few closely related families in which a few members have the same mechanism, but all elaterids can click. A spine on the prosternum can be snapped into a corresponding notch on the mesosternum, producing a violent "click" which can bounce the beetle into the air. Clicking is mainly used to avoid predation, although it is also useful when the beetle is on its back and needs to right itself. There are about 9300 known species worldwide, and 965 valid species in North America. [more]
Endomychidae, or handsome fungus beetles, is a family of beetles with representatives found in all ecozones. There are around 120 genera and 1300 species. As the name suggests, Endomychidae feed on fungi. [more]
Erotylidae, the pleasing fungus beetles, is a family of beetles containing over 100 genera. In the present circumscription, it includes the subfamilies Encaustinae, Erotylinae, Megalodacninae, and Tritominae, among others. In other words, the narrowly circumscribed Erotylidae correspond to the subfamily Erotylinae in the definition sensu lato. They feed on plant and fungal matter; some are important pollinators (e.g. of the ancient cycads), while a few have gained notoriety as pests of some significance. Sometimes, useful and harmful species are found in one genus, e.g. Pharaxonotha. Most pleasing fungus beetles are inoffensive animals of little significance to humans, however. [more]
Eucinetidae is a family of beetles, sometimes called plate-thigh beetles, notable for their large hind coxal plates, which cover much of the first ventrite of the abdomen. The family is small for beetles, with about 37 species in nine genera, but they are found worldwide. [more]
Georissus, also called minute mud-loving beetles, is the only genus in the beetle family Georissidae (or Georyssidae). They are tiny insects living in wet soil, often near water, and are found on every continent except Antarctica. [more]
Geotrupidae (from Greek geos, earth, and trypetes, borer) is a family of beetles in the order Coleoptera. They are commonly called dor beetles or earth-boring dung beetles. Most excavate burrows in which to lay their eggs. They are typically detritivores, provisioning their nests with leaf litter (often moldy), but are occasionally coprophagous, similar to dung beetles. The eggs are laid in or upon the provision mass and buried, and the developing larvae feed upon the provisions. The burrows of some species can exceed 2 metres in depth. [more]
Glaphyridae is a family of beetles, commonly known as the bumble bee scarab beetles. There are eight genera with about 80 species distributed worldwide. [more]
Glaresis is a genus of beetles, sometimes called "enigmatic scarab beetles", in its own family, the Glaresidae. It is closely related to scarab beetles. Although its members occur in arid and sandy areas worldwide (except Australia), only the nocturnal adults have ever been collected (typically at lights), and both the larvae and biology of Glaresis are as yet unknown. Due to their narrow habitat associations, a great number of these species occur in extremely limited geographic areas, and are accordingly imperiled by habitat destruction. [more]
The whirligig beetles are a family (Gyrinidae) of water beetles that usually swim on the surface of the water if undisturbed, though they swim actively underwater when threatened. They get their common name from their habit of swimming rapidly in circles when alarmed, and are also notable for their divided eyes, which are believed to enable them to see both above and below water.
[more]
The Haliplidae are a family of water beetles that swim using an alternating motion of the legs. They are therefore clumsy in water (compared, e.g., with the Dytiscidae or Hydrophilidae), and prefer to get around by crawling. The family consists of about 200 species in 5 genera, distributed wherever there is freshwater habitat; it is the only extant member of the superfamily Haliploidea. They are also known as crawling water beetles or haliplids. [more]
The Heteroceridae, or variegated mud-loving beetles, are a widespread and relatively common family of beetles. They occur on every continent except for Antarctica. [more]
Histeridae is a family of beetles commonly known as clown beetles or hister beetles. This very diverse group of beetles contains 3,900 species found worldwide. They can be easily identified by their shortened elytra, which leave two of the seven tergites exposed, and their elbowed antennae with clubbed ends. These predatory feeders are most active at night and will fake death if they feel threatened. This family of beetles will occupy almost any kind of niche throughout the world. Hister beetles have proved useful during forensic investigations to help in time of death estimation. Also, certain species are used in the control of livestock pests that infest dung and to control houseflies. Because they are predacious and will even eat other hister beetles, they must be isolated when collected. [more]
Hybosoridae, sometimes known as the scavenger scarab beetles, is a family of scarabaeiform beetles. The 210 species in 33 genera occur widely in the tropics, but little is known of their biology. [more]
Hydraenidae is a family of very small aquatic beetles with a worldwide distribution. These beetles are generally 1–3 mm in length (although some species reach 7 mm) with clubbed antennae. They do not swim well and are generally found crawling in marginal vegetation. Most are phytophagous, but a few saprophagous and predatory species are known. [more]
Hydrophilidae, also called water scavenger beetles, is a family of chiefly aquatic beetles. Aquatic hydrophilids are notable for their long maxillary palps, which are longer than their antennae. Several of the former subfamilies of Hydrophilidae have recently been removed and elevated to family rank: Epimetopidae, Georissidae (= Georyssinae), Helophoridae, Hydrochidae, and Spercheidae (= Sphaeridiinae). Some of these formerly included groups are primarily terrestrial or semi-aquatic. [more]
Hydroscaphidae is a small family of water beetles, consisting of 13 species in three genera, which are sometimes called skiff beetles. [more]
The New York weevil (Ithycerus noveboracensis) is a species of primitive weevil; large for weevils (12–18 mm), it is covered with fine bristles and has a regular pattern of light and dark spots. It occurs in the eastern United States and southern Canada. [more]
Jacobsoniidae is a family of beetles. The larvae and adults live under bark, in plant litter, fungi, bat guano and rotten wood. [more]
Laemophloeidae is a family of beetles, in the suborder Polyphaga. [more]
Lampyridae is a family of insects in the beetle order Coleoptera. They are winged beetles, commonly called fireflies or lightning bugs for their conspicuous crepuscular use of bioluminescence to attract mates or prey. Fireflies produce a "cold light", with no infrared or ultraviolet frequencies. This chemically produced light from the lower abdomen may be yellow, green, or pale red, with wavelengths from 510 to 670 nanometers.
[more]
Latridiidae is a family of tiny, little-known beetles commonly called minute brown scavenger beetles. The number of described species currently stands at around 1050 in 29 genera, but the true number of species is undoubtedly much higher. [more]
Leiodidae is a family of beetles with around 3800 described species found worldwide. Members of this family are commonly called round fungus beetles due to the globular shape of many species, although some are more elongated in shape. They are generally small or very small beetles (less than 10 mm in length) and many (but not all) species have clubbed antennae. [more]
Stag beetles are a group of about 1,200 species of beetle in the family Lucanidae, presently classified in four subfamilies. Some species grow up to over 12 cm (4.8 in), but most are about 5 cm (2 in). [more]
Lycidae is a family in the beetle order Coleoptera, members of which are commonly called net-winged beetles. [more]
Powderpost beetles are a group of seventy species of woodboring beetles classified in the insect subfamily Lyctinae. These beetles, along with spider beetles, death watch beetles, common furniture beetles, skin beetles, and others, make up the superfamily Bostrichoidea. While most woodborers have a large prothorax, powderpost beetles do not, making their heads more visible. In addition to this, their antennae have two-jointed clubs. They are considered pests and attack deciduous trees, over time reducing the wood to a powdery dust. The damage caused by longhorn beetles (family Cerambycidae) is often confused with that of powderpost beetles, but the two groups are unrelated. Their larvae are white and C-shaped. [more]
The Lymexylidae, or ship-timber beetles, are a family of wood-boring beetles, and the sole member of the superfamily Lymexyloidea. [more]
Melandryidae, the false darkling beetles, is a family of beetles in the large suborder Polyphaga. [more]
Blister beetles are beetles (Coleoptera) of the family Meloidae, so called for their defensive secretion of a blistering agent, cantharidin. There are approximately 7,500 known species worldwide. Many are conspicuous and some are aposematically colored, announcing their toxicity to would-be predators. [more]
Melyridae (common name: soft-wing flower beetles) are a family of beetles of the superfamily Cleroidea. The family Melyridae contains 520 species in 58 genera in North America. Most are elongate-oval, soft-bodied beetles 10 mm long or less. Many are brightly colored with brown or red and black. Some melyrids have peculiar orange structures along the sides of the abdomen, which may be everted and saclike or withdrawn into the body and inconspicuous. Some melyrids have the two basal antennomeres greatly enlarged. Most adults and larvae are predaceous, but many are common on flowers. The most common North American species belong to the genus Collops (Malachiinae); C. quadrimaculatus is reddish, with two bluish black spots on each elytron. Batrachotoxins have been found in some melyrids. [more]
The telephone-pole beetle, Micromalthus debilis, is a beetle native to the eastern United States, and the only species in the family Micromalthidae. [more]
Sphaerius is a genus of beetles, comprising 23 species, which are the only members of the family Sphaeriusidae. They are typically found along the edges of streams and rivers, where they feed on algae, occurring on all continents except Antarctica. Only 3 species occur in the United States.
[more]
Monommatinae is a subfamily (or sometimes only considered a tribe) of beetles with no vernacular common name, though recent authors have coined the name opossum beetles. They have been treated historically as a family (sometimes spelled Monommidae), but have recently been placed into the Zopheridae. There are some 15 genera in this group, commonly found in association with plants in the family Agavaceae. [more]
Mordellidae is a family of beetles commonly known as tumbling flower beetles, for the typical irregular movements they make when escaping predators, or as pintail beetles, due to their abdominal tip which aids them in performing these tumbling movements. Worldwide, there are about 1500 species. [more]
Mycetophagidae, the hairy fungus beetles, is a family of beetles, in the large suborder Polyphaga. The different species are between 1.0 and 6.5 mm in length. The larvae and adults live in decaying leaf litter, fungi and under bark. Most species feed on fungi (hence the name). Worldwide, there are about 18 genera with 200 species. [more]
Nemonychidae is a small family of weevils, placed within the primitive weevil group because they have straight rather than elbowed antennae. They are often called pine flower weevils. As in the Anthribidae, the labrum appears as a separate segment to the clypeus, and the maxillary palps are long and projecting. Nemonychidae have all ventrites free, while Anthribidae have ventrites 1–4 connate or partially fused. Nemonychidae lack lateral carinae on the pronotum, while these are usually present, though they may be short, in Anthribidae. [more]
The sap beetles are a family (Nitidulidae) of beetles. [more]
Nosodendridae is a family of beetles. [more]
Noteridae is a family of water beetles closely related to the Dytiscidae, and formerly classified with them. They are mainly distinguished by the presence of a distinctive "noterid platform" underneath, in the form of a plate between the second and third pair of legs. The family consists of about 230 species in 12 genera, and is found worldwide, more commonly in the tropics. They are sometimes referred to as burrowing water beetles. [more]
Ochodaeidae, sometimes known as the sand-loving scarab beetles, is a small but widely distributed family of scarabaeiform beetles. [more]
The family Oedemeridae is a cosmopolitan group of beetles commonly known as false blister beetles, though some recent authors have coined the name pollen-feeding beetles. There are some 100 genera and 1,500 species in the family, mostly associated with rotting wood as larvae, though adults are quite common on flowers. [more]
Oxycoryninae is a subfamily of primitive weevils of the family Belidae, but is sometimes treated as a distinct family, Oxycorynidae. Like in other belids, their antennae are straight, not elbowed as in the true weevils (Curculionidae), and their larvae feed on the wood of diseased or dying plants or on deadwood or fruits; they tend to avoid healthy plants. [more]
Passalidae is a family of beetles known variously as "bessbugs", "bess beetles", "betsy beetles" or "horned passalus beetles". Nearly all of the 500-odd species are tropical; species found in North America are notable for their size, ranging from 20–43 mm, for having a single "horn" on the head, and for a form of social behavior unusual among beetles. [more]
Fire-colored beetles are the beetles of the family Pyrochroidae, which includes the red cardinal beetles. This family contains some 150 species. Many species have comb- or antler-like antennae.
This family also now includes most former members of the defunct family Pedilidae. [more]
The Phalacridae are a family of beetles commonly called the shining flower beetles. They are often found in composite flowers. They are oval-shaped, usually tan, and about 2 mm in length. [more]
The beetle family Phengodidae is also known as glowworm beetles, whose larvae are known as glowworms. The females and larvae have bioluminescent organs. They occur throughout the New World from extreme southern Canada to Chile. The family Rhagophthalmidae, an Old World group, used to be included in the Phengodidae. [more]
Ambrosia beetles are beetles of the weevil subfamilies Scolytinae and Platypodinae (Coleoptera, Curculionidae), which live in nutritional symbiosis with ambrosia fungi and probably with bacteria. The beetles excavate tunnels in dead trees in which they cultivate fungal gardens, their sole source of nutrition. After landing on a suitable tree, an ambrosia beetle excavates a tunnel in which it releases spores of its fungal symbiont. The fungus penetrates the plant's xylem tissue, digests it, and concentrates the nutrients on and near the surface of the beetle gallery. The majority of ambrosia beetles colonize xylem (sapwood and/or heartwood) of dying or recently dead trees. Species differ in their preference for different parts of trees and different stages of deterioration, and in the shape of their tunnels ("galleries"). However, the majority of ambrosia beetles are not specialized to any taxonomic group of hosts, unlike most phytophagous organisms, including the closely related bark beetles. [more]
The rain beetles are a group of beetles found in the far west of North America. They spend most of their lives underground, emerging in response to rain or snow, hence the common name. Formerly classified in the Geotrupidae, they are currently assigned to their own family, Pleocomidae, considered the sister group to all the remaining families of Scarabaeoidea. The family contains a single extant genus, Pleocoma, and one extinct genus, Cretocoma, described in 2002 from Late Cretaceous deposits in Mongolia. [more]
Propalticidae is a family of beetles, in the suborder Polyphaga. It contains two genera, Propalticus and Slipinskogenia. [more]
Pselaphinae is a subfamily of small (usually less than 2.5 mm long) beetles. The group was originally regarded as a separate family, named Pselaphidae. Newton and Thayer (1995) placed them in the Omaliine group of the family Staphylinidae, based on shared morphological characters. [more]
Water-penny beetles are a family (Psephenidae) of aquatic beetles. The young, which live in water, resemble pennies. The larvae feed on algae, larvae, and feces. The presence of water penny larvae in a stream can be used as a test for the quality of the water: among the pollution sensitivity categories (sensitive, somewhat sensitive, and tolerant), water pennies belong to the sensitive category. They cannot live in habitats where rocks acquire a thick layer of algae, fungi, or inorganic sediment. Therefore, their presence along with other diverse phyla signifies good quality water. They are around 6 to 10 millimeters in length. [more]
Ptiliidae is a family of very tiny beetles with a worldwide distribution.
This family contains the smallest of all beetles, with a length of 0.5 mm, and even the largest members of the family do not exceed 2 mm. The weight is approximately 0.4 milligrams. [more]
Spider beetles are the approximately 500 species of beetles in the subfamily Ptininae of the family Anobiidae. They are sometimes considered a family in their own right, which is then called Ptinidae. Spider beetles have round bodies with long, slender legs, and lack wings. They are generally 1–5 mm long. Both the larvae and the adults are scavengers. They reproduce at the rate of two to three generations per year. [more]
The family Ripiphoridae (formerly spelled Rhipiphoridae) is a cosmopolitan group of beetles commonly known as wedge-shaped beetles, containing some 450 species. They are one of the most unusual beetle families, in that they are parasitoids: different groups within the family attack different hosts, but most are associated with bees or vespid wasps, while some others are associated with roaches. They often have abbreviated elytra and branched antennae. [more]
The tooth-nosed snout weevils receive this name due to the teeth on the edges of their mandibles. They are small beetles (1.5 to 6.5 mm) that are usually found on low vegetation. [more]
Rhysodidae (sometimes called wrinkled bark beetles) is a family of beetles, consisting of several hundred species in about 20 genera. [more]
Salpingidae, or narrow-waisted bark beetles, is a family of beetles, in the large suborder Polyphaga. The species are small, about 1.5–7 mm in length. The family is distributed worldwide and consists of about 45 genera and 300 species. [more]
The family Scarabaeidae, as currently defined, consists of over 30,000 species of beetles worldwide. The species in this large family are often called scarabs or scarab beetles. The classification of this family is fairly unstable, with numerous competing theories and new proposals appearing quite often. It is probable that many of the subfamilies listed here will not be recognized for very much longer, as they will likely be reduced in status below subfamily rank or elevated to family status (the latter is most likely, e.g., with the family "Melolonthidae" already appearing in some recent classifications). Other families have been removed recently, and these changes are nearly universally accepted (e.g., Pleocomidae, Glaresidae, Glaphyridae, Ochodaeidae, Geotrupidae). [more]
Scirtidae is a family of beetles (Coleoptera). [more]
A bark beetle is one of approximately 220 genera with 6,000 species of beetles in the subfamily Scolytinae. Traditionally, this was considered a distinct family, Scolytidae, but it is now understood that bark beetles are in fact very specialized members of the "true weevil" family (Curculionidae). Well-known species are members of the type genus Scolytus, namely the European elm bark beetle, S. multistriatus, and the large elm bark beetle, S. scolytus, which, like Hylurgopinus rufipes, transmit Dutch elm disease fungi (Ophiostoma). The mountain pine beetle, Dendroctonus ponderosae, the southern pine beetle, Dendroctonus frontalis, and their near relatives are major pests of conifer forests in North America.
A similarly aggressive species in Europe is the spruce bark beetle, Ips typographus. A tiny bark beetle, the coffee berry borer, Hypothenemus hampei, is a major pest of coffee around the world. [more]
Scydmaenidae is a family of small beetles, commonly called ant-like stone beetles or scydmaenids. These beetles occur worldwide, and the family includes some 4,500 species in about 80 genera. [more]
Sphaerites is a genus of beetles, the only genus in the family Sphaeritidae, sometimes called the false clown beetles. It is closely related to the clown beetles but with distinct characteristics. There are four known species, widespread in temperate areas but not commonly seen. [more]
The rove beetles are a large family (Staphylinidae) of beetles, primarily distinguished by their short elytra, which leave more than half of their abdomens exposed. With over 46,000 species in thousands of genera, the group is the second largest family of beetles after the Curculionidae (the true weevils). It is an ancient group, with fossil rove beetles known from the Triassic, 200 million years ago. [more]
Darkling beetles (also known as darkening beetles) are a family of beetles found worldwide, estimated at more than 20,000 species. Many of the beetles have black elytra, leading to their common name. Apart from the 9 subfamilies listed here, the tribe Opatrini of the Tenebrioninae is sometimes considered a distinct family, and/or the Pimeliinae are included in the Tenebrioninae as a tribe Pimeliini. [more]
The hide beetles (Trogidae) are a family of beetles with a distinctive warty or bumpy appearance. Found worldwide, the family includes about 300 species contained in three genera. [more]
Trogossitidae is a small family of beetles, in the suborder Polyphaga. Trogossitidae consists of about 600 species; 59 species are found in America and about 36 in Australia. [more]
Zopheridae is a family of beetles that has grown considerably in recent years, as the members of two other families have been included within its circumscription; these former families are the Monommatidae and the Colydiidae, which are now both considered subfamilies within the Zopheridae. There are over 100 genera in the redefined family, and hundreds of species worldwide. There is no vernacular common name for the new family, though some of the constituent subfamilies have their own, including the ironclad beetles and the cylindrical bark beetles. [more]
At least 125 species and subspecies belong to the family Zopheridae.
- Motor Action
Michael Faraday showed that passing a current through a conductor freely suspended in a fixed magnetic field creates a force which causes the conductor to move through the field. Conversely, if the conductor rather than the magnet is constrained, then the magnet creating the field will move relative to the conductor. More generally, the force created by the current, now known as the Lorentz force, acts between the current-carrying conductor and the magnetic field, or the magnet creating the field. The magnitude of the force acting on the conductor is given by:
F = BLI
where F is the force on the conductor, B is the magnetic flux density, L is the length of the conductor and I is the current flowing through the conductor.
- Generator Action
Faraday also showed that the converse is true - moving a conductor through a magnetic field, or moving the magnetic field relative to the conductor, causes a current to flow in the conductor. The magnitude of the EMF generated in this way is given by:
E = BLv
where E is the generator EMF (or back EMF in a motor) and v is the velocity of the conductor through the field.
- Alternative Motor Action (Interactive Fields)
Another form of motive power, which does not depend on the Lorentz force and the flow of an electrical current, can in principle be derived from the purely attractive (or repulsive) magnetic force which is exerted on a magnet or on magnetically susceptible materials such as iron when they are placed in the field of another magnet. The movement of a compass needle in the presence of a magnet is an example. In practice, however, at least one magnet creating the field must be an electromagnet in order to obtain the necessary control of the magnetic field to achieve sustained motion as well as practical levels of torque. Brushless DC motors and reluctance motors depend on this phenomenon, known as "reluctance torque", since no electric currents flow in the rotor. Rotary motion is obtained by sequential pulsing of the stator poles to create a rotating magnetic field which drags the moving magnet along with it. In AC induction motors the rotating field is obtained by a different method and the basic motor action depends on the Lorentz force; however, synchronous AC motors have magnetic rotor elements which are pulled around in synchronism with the rotating field just as in a brushless DC motor.
- Reluctance Torque
Torque is created due to the reaction between magnetic fields. Consider a small bar magnet in the field of another, larger magnet, such as the gap between the poles of a horseshoe magnet or one of the pole pairs of an electric motor. (See reluctance motor diagram.) When the bar magnet is aligned with the poles of the large magnet its field will be in line with the external field. This is an equilibrium position and the bar will not experience any force to move it. However, if the bar is misaligned with the poles, either rotated or displaced, it will experience a force pulling it back into line with the external field. In the case of a lateral displacement, the force diminishes as the distance increases, but in the case of a rotation, the force will increase, reaching a maximum when the bar is at right angles to the external field. In other words, the torque on the magnet is at a maximum when the fields are orthogonal and zero when the fields are aligned.
- Salient Poles
Motors which depend on reluctance torque normally have "salient poles" - poles which stick out.
This is to concentrate the flux into discrete angular sectors to maximise and focus the alignment force between the fields.
- Torque from Rotating Fields
In motors which depend on rotating fields, such as induction motors, brushless DC and reluctance motors, the instantaneous torque on the rotor depends on its angular position with respect to the angular position of the flux wave. Though the flux wave tries to pull the rotor poles in line with the flux, there will always be inertia and losses holding the rotor back. The friction, windage and other losses cause the rotor of an induction motor to turn at a slower speed than the rotating field, resulting in an angular displacement between the rotating flux wave and the rotating field associated with the rotor poles. The difference between the speed of the flux wave and the speed of the rotor is called the "slip", and the motor torque is proportional to the slip.
- Torque Angle
Even in synchronous motors, in which the rotor turns at the same speed as the flux wave, because of the losses noted above the rotor poles will never reach complete alignment with the peaks in the flux wave, and there will still be a displacement between the rotating flux wave and the rotating field. Otherwise there would be no torque. This displacement is called the "torque angle". The motor torque is zero when the torque angle is zero and is at its maximum when the torque angle is 90 degrees. If the torque angle exceeds 90 degrees the rotor will pull out of synchronism and stop.
- Electrical Machines
The majority of electrical machines (motors and generators) sold today are still based on the Lorentz force, and their principle of operation can be demonstrated by the example of a single turn coil carrying electrical current rotating in a magnetic field between the two poles of a magnet. For multiple turn coils, the effective current is NI (ampere turns), where N is the number of turns in the coil. If the coil is supplied with a current, the machine acts as a motor. If the coil is rotated mechanically, current is induced in the coil and the machine thus acts as a generator. In rotating machines the rotating element is called the rotor or armature and the fixed element is called the stator.
- Action and Reaction
In practice, both the motor and the generator effects take place at the same time. Passing the current through a conductor in the magnetic field causes the conductor to move through the field, but once the conductor starts moving it becomes a generator, creating a current through the conductor in the opposite direction to the applied current. Thus the motion of the conductor creates a "back EMF" which opposes the applied EMF. Conversely, moving the conductor through the field causes a current to flow through the conductor, which in turn creates a force on the conductor opposing the applied force. The actual current which flows in the conductor is given by:
I = (V - E) / R
where V is the applied voltage, E is the back EMF and R is the resistance of the conductor (the armature of the motor).
- The EMF Equation
From the above, the back EMF in an electric motor is equal to the applied voltage less the volt drop across the armature:
E = V - RI
This is known as the "Motor EMF Equation".
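As a rough numerical illustration of the relationships quoted above (F = BLI, E = BLv and I = (V - E) / R), the short Python sketch below evaluates them for an arbitrary conductor and DC armature. All of the numbers (flux density, conductor length, speed, applied voltage, resistance and back EMF) are assumed values chosen for the example, not figures from the text.

```python
# Illustrative values only; every number here is an assumption for the sketch.
B = 0.8    # magnetic flux density (tesla)
L = 0.2    # active length of the conductor (metres)
I = 5.0    # current through the conductor (amperes)
v = 3.0    # speed of the conductor through the field (m/s)

F = B * L * I       # Lorentz force on the conductor, F = BLI (newtons)
E_gen = B * L * v   # EMF generated by the moving conductor, E = BLv (volts)

# Back EMF and armature current for a simple DC armature, I = (V - E) / R:
V = 12.0            # applied voltage (volts)
R = 0.5             # armature resistance (ohms)
E_back = 9.0        # back EMF at the present speed (volts, assumed)
I_arm = (V - E_back) / R

print(f"F = {F:.2f} N, generated EMF = {E_gen:.2f} V")
print(f"armature current = {I_arm:.1f} A, check E = V - R*I = {V - R * I_arm:.1f} V")
```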
The volt drop across the armature, RI, is sometimes called the net voltage.
- The Power Equation
Multiplying the voltage by the armature current to get the power gives the following relationship:
P = EI = VI - I²R
It shows that the mechanical power delivered by the motor is equal to the back EMF times the armature current, or the electrical power applied to the motor less the I²R losses in the windings (disregarding frictional losses). This is known as the "Motor Power Equation".
- Operating Equilibrium Under Load
The "action and reaction" effects outlined above provide an important automatic self-regulating feedback mechanism in both DC and AC motors for adapting to changes in the applied load. As the load on the motor is increased, it tends to slow down, reducing the back EMF. This in turn allows more current to flow, generating more torque to accommodate the increased load, until a point of balance or equilibrium is reached. Thus the motor will set itself to an appropriate speed for the torque demanded. See also Power Handling below.
- Magnetic Fields
The motor's magnetic field is provided by the stator. In the example above the stator is a permanent magnet; however, in the majority of electrical machines the magnetic field is provided electromagnetically by coils wound around the stator poles. The stator windings are also called the field windings and the motor is said to be "field energised". The rotor is normally wound on an iron core to improve the efficiency of the machine's magnetic circuit.
- Magnetic Circuits
In the case of electrical machines, the magnetic circuit is the path of the magnetic flux through the stator body, across the air gap, through the rotor and back through the air gap into the stator. The length l of this path is known as the mean magnetic path length (MMPL). Magnetic circuits are designed to produce the maximum flux possible and to concentrate it in the air gap between the rotor and the stator through which the coils move. The flux Φ is measured in webers. The flux density B is measured in teslas and is defined as the magnetic flux Φ per unit area A. Thus B = Φ/A, where A is the area through which the flux passes. From the equations above it can be seen that the torque generated by the electric motor, or the EMF created by the generator, is directly proportional to the magnetic flux density B in the region surrounding the moving electrical conductors, so for efficient machines B should be as high as possible.
- MagnetoMotive Force (MMF)
The magnetic flux arising in a magnetic circuit is proportional to the magnetomotive force (MMF) creating it. For an electromagnet, the MMF is the effective current in the magnetising coil measured in ampere turns NI and, as above, this is the actual current I times the number of turns N in the coil. Thus MMF = NI = Φ × R, where R is the reluctance of the magnetic circuit. The reluctance is the inherent resistance of the material in the magnetic circuit to the setting up of the magnetic flux through it. (For iron the reluctance is very low; for air it is very high.) This equation for the flux in magnetic circuits is analogous to Ohm's law for the current in electric circuits, in which EMF = I × R, where R is the resistance of the electric circuit. Because the reluctance of the air gap between the stator and the rotor is very high, the air gap should be as small as possible to minimise the ampere turns needed to create the desired flux density.
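The magnetic circuit relationships above (B = Φ/A and MMF = NI = Φ × R) lend themselves to a quick sizing sketch. The fragment below estimates the ampere turns needed to set up a chosen flux density across an air gap, ignoring the much smaller reluctance of the iron path; the gap length, area and target flux density are assumed values used only for illustration.

```python
import math

# Assumed air-gap geometry and target flux density (illustrative only).
mu0 = 4 * math.pi * 1e-7   # permeability of free space (H/m)
B_target = 0.9             # desired flux density in the air gap (tesla)
A = 4e-4                   # gap cross-sectional area (m^2), e.g. 2 cm x 2 cm
gap = 1e-3                 # total air-gap length (metres)

flux = B_target * A          # flux = B * A (webers)
R_gap = gap / (mu0 * A)      # reluctance of the air gap, l / (mu0 * A)
ampere_turns = flux * R_gap  # MMF = flux * reluctance = N * I

print(f"flux = {flux:.2e} Wb, gap reluctance = {R_gap:.2e} A-turns/Wb")
print(f"required MMF ~ {ampere_turns:.0f} ampere turns "
      f"(for example {ampere_turns / 3:.0f} turns carrying 3 A)")
```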
- Magnetic Force (H), also called the Magnetic Field Strength
The magnetic field strength H is the MMF per unit length in a magnetic circuit. Thus:
H = MMF/l = NI/l
The magnetomotive force is the cause of the magnetic field; the magnetic force is the effect.

- Flux Density (B) and Magnetic Permeability (µ)
For uniform fields, the flux density associated with the magnetic force is proportional to the field strength and is given by:
B = µ0µrH
µ0 is known as the magnetic constant or the permeability of free space. µr is the relative permeability of the magnetic material. Unfortunately, the relationship becomes non-linear as the flux density increases and the magnetic material becomes saturated. Then the extra flux produced by increases in the magnetic field decreases and levels off, and the effective relative permeability µr falls away sharply. From the above it can be seen that increasing the MMF (Ampere turns) in a magnetic circuit increases the flux through the circuit, but there is a limit to the flux density which can be created in magnetic materials such as iron, at which point the material is said to be saturated. Above this point more and more MMF is needed to create less and less flux. In other words, the reluctance increases sharply when the material saturates. For maximum efficiency, electric machines are usually designed to work just below the onset of saturation.

- Magnetic Poles
Electric machines can have multiple pole pairs. Multiple pole machines usually provide more efficient magnetic circuits and smoother torque characteristics. The connection to the moving coil in the basic machine shown above is made via carbon brushes bearing on a pair of slip rings, one connected to each end of the coil. If the machine is used as a generator, the direction of the current generated will reverse every half cycle as the arm of the coil passes the opposite poles in succession. If a unidirectional current is required, the slip rings are split and interconnected such that, each half cycle, the current is taken from alternate arms of the coil. This simple switching mechanism is called a commutator. Similarly, when the machine is used as a DC motor, the commutator switches the DC supply voltage to alternate arms of the coil each half cycle in order to achieve unidirectional rotation. Thus in all wound rotor DC machines, both motors and generators, the current in the rotor windings is AC and it is the commutator which enables the corresponding DC input or output. There are however some notable exceptions. The world's first motors and generators invented by Faraday were unipolar or homopolar machines in which unidirectional current flowed in the conductors. Faraday's motor was a laboratory curiosity with no practical applications, but his so called "Faraday Disk" dynamo was able to generate useful current. For over 100 years, mechanical commutation was the only practical way of switching the direction of the current flow; however, since the 1970s the availability of high power semiconductors has made electronic commutation possible. In AC machines the complexities of commutation can be avoided since current can be induced in the rotor windings by transformer action with the stator windings, obviating the need for direct connections between the supply line and the rotating windings. See Induction Motors.
Because the commutator is essentially a mechanical switch, rapidly making and breaking a high current circuit, the switch is prone to sparking and the generation of Radio Frequency Interference (RFI) which can disrupt the working of other electronic circuits in the vicinity. In very large motors the propensity for sparking can be reduced by the addition of "interpoles" or "commutating poles", narrow auxiliary windings midway between the main stator poles. These are connected in series with the rotor windings and produce an MMF equal and opposite to the rotor MMF so that the effective flux in the region midway between the main poles is zero. Commutation is designed to occur at the instant when the current passes through zero between the negative and positive half cycles, and this takes place when the rotor is midway between the main poles. By neutralising the flux in this region the possibility of sparking is reduced.

The earliest electrical machines depended on permanent magnets to provide the magnetic field; however, the best magnetic materials available at the time were only capable of providing very weak fields, limiting potential machine applications to laboratory demonstrations. It was eventually realised that much stronger magnetic fields could be generated by using electromagnets powered by the applied or generated line voltage. This allowed the construction of much more powerful machines, enabling the development of practical applications. Advances in magnetic materials have now created much more powerful permanent magnets, enabling their use in practical machines and simplifying machine construction by eliminating one set of windings. At the same time many features such as encoders, tachogenerators, thermal cut outs, brakes and fans are being built into the machines. See also Controllers.

Generally speaking, the torque produced by a motor is proportional to the current it consumes and also proportional to the flux in the air gap:
T = K1 I B

- DC Motors
In DC motors the rotational speed is proportional to the applied voltage, and the normal method of speed control is by varying the input voltage:
N = K2 V
The speed is, however, also inversely proportional to the flux in the air gap. This means that the speed increases as the flux provided by the field coils decreases. Theoretically the speed could go to infinity if the current in the field coil is removed, though the motor would most likely be destroyed before this happens. In practice a limited increase in speed can be obtained by reducing the field current in a controlled way. But note from the torque equation above that reducing the field current also reduces the torque. This method of speed control is called "Field Weakening".

- AC Motors
In AC motors the speed is proportional to the frequency of the applied voltage and inversely proportional to the number of magnetic poles:
N = K3 f

- Torque - Speed Characteristic
DC motors produce their maximum torque at zero speed, or when they are stalled (when they consume maximum current), and the torque falls off linearly as the speed increases, reaching zero when the reverse voltage generated by the rotating coils in the magnetic field (the back EMF) is equal to the applied voltage. With AC motors the starting torque at zero speed may be about 70% to 90% of its maximum, rising to a peak as the speed increases then falling sharply to zero as the motor approaches synchronous speed. See note about synchronous motors.
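The field-weakening trade-off described above can be illustrated with a small sketch. It assumes only the simple proportionalities quoted in this section, torque varying with current times flux and speed varying with voltage divided by flux; the constants and operating values are arbitrary placeholders.

```python
# Assumed machine constants and operating point; only the proportionalities
# T ~ I*flux and N ~ V/flux are taken from the text above.
K1, K2 = 1.2, 50.0

def torque(current, flux):
    return K1 * current * flux

def speed(voltage, flux):
    return K2 * voltage / flux

if __name__ == "__main__":
    V, I = 24.0, 4.0
    for flux in (1.0, 0.8, 0.6):   # progressively weakening the field
        print(f"flux={flux:.1f}  speed={speed(V, flux):7.1f}  torque={torque(I, flux):5.2f}")
    # The speed rises as the flux is reduced, but the available torque falls,
    # which is exactly the limitation of field weakening noted above.
```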
(The torque - speed characteristics of electric motors are in contrast to those of an internal combustion engine, whose torque is very low at low speeds, typically stalling below 800 rpm, but increases with speed up to a peak at about 80% of its maximum speed, falling off only slightly as it reaches maximum speed.) Some motor designs are not self starting in their basic configuration, but they normally incorporate design adaptations to enable self starting so that the user may be unaware of the problem.

- Power Handling
The motor output power is directly proportional to its speed. The output power P in Watts is given by:
P = ωT
where ω is the speed in radians per second and T is the torque in Newton metres, or equivalently
P = 2πNT/60
where N is the speed in revolutions per minute (RPM).
NOTE: This relationship shows that for a given power, the speed reduces as the load or torque increases and vice versa. This is in some ways equivalent to what occurs in a mechanical gear box and is in line with the Operating Equilibrium mentioned above.

- Maximum Power
The maximum power which a motor can handle is determined by its maximum permissible temperature. Power handling capacity can be increased by utilising materials capable of withstanding higher temperatures, particularly for the insulation on the windings, or by providing forced cooling which lowers the motor temperature for a given current consumption.

- Corner Power
Corner power is an alternative way of specifying motor performance which some people find useful for comparing machines. It is simply the product of the maximum torque the motor can deliver and the maximum speed it can attain. Since the maximum torque rarely, if ever, occurs at the same time as the maximum speed, the actual delivered machine power will always be less than the corner power. In DC motors the commutation limit is set by the ability of the commutator segments and brushes to handle high voltages (speed limit) and high currents (torque limit). Note also that at high voltages and currents forced cooling may be required.

The power handling capacity of an electrical machine is limited by the maximum allowable temperature of its windings. Higher power motors require higher magnetic fields, and the current necessary to provide the higher flux density increases linearly with the motor size. The cross sectional area of the copper cable necessary to carry the current, however, increases as the square of the current. Power handling can be increased by using insulation which can withstand higher temperatures or by providing forced cooling to remove the heat from the windings. Forced cooling is not normally required for fractional horsepower machines, but larger integral horsepower motors usually incorporate a built in cooling fan to force air through the machine. Forced air cooling can be effective in machines up to 50 megawatts, but larger machines with multi megawatt power ratings, as used in the electricity generating industry, must resort to liquid cooling with the coolant being circulated through hollow conductors. The working fluid may be water, but for the largest machines hydrogen is used because of its low weight and high thermal capacity.

For a given torque, the motor power is proportional to the speed. Low speed motors will thus deliver very low power. Applications requiring high torque at low speeds will require very high currents and impractically large motors. Such applications are better served by higher speed motors with gearing mechanisms to reduce the speed and increase the torque.
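A minimal sketch of the power relationship above, P = ωT with the RPM form P = 2πNT/60. The speed and torque figures are placeholders chosen only to show that the same power can be delivered as high speed with low torque or low speed with high torque.

```python
import math

def power_watts(n_rpm, torque_nm):
    """P = omega * T, with omega = 2*pi*N/60 converting RPM to rad/s."""
    omega = 2 * math.pi * n_rpm / 60.0
    return omega * torque_nm

if __name__ == "__main__":
    print(round(power_watts(1500, 10.0), 1))   # ~1570.8 W
    print(round(power_watts(3000, 5.0), 1))    # the same ~1570.8 W: twice the speed, half the torque
```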
The size of a motor is determined by the torque it has to deliver. For similar motors with similar cooling systems the motor torque is proportional to the volume of the rotor and hence the overall motor volume. As noted above, for a given torque the motor power is proportional to the speed, whereas the electrical and windage losses tend to be roughly constant, rising relatively slowly. Thus the motor efficiency increases with speed. Efficiency is also dependent on the size of the motor, since resistive losses tend to be proportionately much higher in smaller devices than in larger machines, which can be designed with more efficient magnetic circuits.

Cogging is the jerky, non-uniform angular velocity of a machine rotor, particularly apparent at low speeds in motors with a small number of poles. It occurs because the rotor tends to speed up as it approaches the stator poles and to slow down as it leaves the poles. It is also noticeable when pulsed DC is used if the frequency of the supply waveform is too low. The problem can be reduced by using skewed rotor windings as well as by increasing the number of poles in the motor.

Losses reduce the efficiency of the machine and usually result in unwanted heat.

- Copper losses
These are the I²R heat losses resulting from the current flowing in the windings. The copper losses are variable, depending on the current and hence the load on the machine. The iron and other losses tend to be relatively constant. They arise in:
- Stator winding resistance
- Rotor winding resistance

- Iron Losses
These are losses which occur in the magnetic circuit. They include the wasteful use of energy associated with using materials at flux densities above the saturation point.

- Hysteresis loss
This is the energy needed to magnetise and demagnetise the iron in the magnetic circuit each machine cycle. Since the losses per cycle are fixed, they will increase in line with the frequency. See more information about hysteresis. Special low hysteresis steels have been developed to reduce these losses.

- Eddy current loss
These losses are due to the unwanted, circulating currents which are induced in the iron of the machine's magnetic circuit by the machine windings. They are minimised by using laminated iron in the magnetic circuits instead of solid iron. The insulating oxide layer on the laminations inhibits eddy current flow between laminations.

- Flux Leakage
In practical magnetic circuits it is not always possible to concentrate all of the magnetic flux where it is needed for optimum magnetic coupling and the maximum energy interchange between the rotor and the stator. Consequently some of the applied energy is lost.

- Windage / Friction
These are the mechanical losses resulting from the drag on the movement of the rotor.

- Power Factor
An induction motor appears to the power line as a large inductor and consequently the line current lags behind the applied voltage. The effective power of the motor will then be VAcosΦ, where V is the applied voltage, A is the current which flows and Φ is the phase angle by which the current lags the voltage. CosΦ is known as the power factor. When Φ = 0° the current is in phase with the voltage, cosΦ = 1 and there is no power loss. When Φ = 90° the current lags the voltage by 90°, cosΦ = 0 and there will be no effective power delivered to the load. The factor (1 - cosΦ) represents the extra power which the machine must consume from the source in order to deliver its nominal power.

As with motors there are many different applications of the above principles.
See some practical examples in the section on Generators.
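The power-factor relation described above can be made concrete with a hedged sketch: the real power drawn from the line is V × A × cosΦ, so as the phase angle grows the same line current delivers less useful power. The voltage and current values are assumptions for illustration only.

```python
import math

def real_power(v_rms, i_rms, phi_degrees):
    """Effective power = V * A * cos(phi), with phi the lag of current behind voltage."""
    return v_rms * i_rms * math.cos(math.radians(phi_degrees))

if __name__ == "__main__":
    V, A = 230.0, 10.0      # assumed line voltage and current
    for phi in (0, 30, 60, 90):
        pf = math.cos(math.radians(phi))
        print(f"phi={phi:2d} deg  power factor={pf:4.2f}  real power={real_power(V, A, phi):7.1f} W")
    # At phi = 90 degrees the power factor is zero and no effective power reaches the load.
```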
http://www.mpoweruk.com/machines.htm
The Physical Principles of Sound
An introductory guide to the physical properties of sound and a basic introduction to the acoustics of enclosed spaces.

To aid the understanding of any technical matters relating to sound, as is often the case with any discipline, it is crucial to understand the fundamental scientific principles of the subject and how they are commonly interpreted. This guide offers an introduction to the basic physics of sound, including the build-up of sound waves and their properties, the speed of sound, how sound is shaped in acoustical environments, and how treatment can be applied to listening rooms. This document does not, on the whole, provide advice, but presents information for reference with which to aid an overall understanding of sound in a practical working environment.

The Physics of Sound
At its most stripped back level, sound is the mechanical disturbance of a medium, either gas, liquid or solid. For example, when a piano key is struck, the movement of the string disturbs the surrounding medium, air, causing the displacement of molecules. This disturbance has a knock-on effect, causing adjacent molecules to be disturbed over a certain distance until the energy created by the initial displacement has disappeared. The energy decays to zero after being transferred from one molecule to the next, in the process losing an amount through each transferral. In a more practical context sound can be described as the transmission of pressure, from an initial sound source to a listener, through the air. Sound has three stages which affect how it is perceived by a listener.
- The initial character is shaped by the properties of the sound source (i.e. an instrument's material and shape) and its excitation, such as being hit, plucked or breathed into.
- The environment and the mediums which sound travels through to the listener. For example, a shout heard in a canyon sounds different than if heard in a small room, or through a pair of speakers (see the section Elementary Acoustics).
- The listening conditions applied by the listener. The individual's subjective perception of sound, as well as their physical condition (i.e. the shape of the outer ears, or a person's age), affects how sound is perceived (this is discussed in the section Sound and the Ear).

Sound pressure is transmitted in air as a wave-like motion. In air, sound waves have a pressure which alternately deviates from a state of equilibrium. These deviations are regions of compression and rarefaction of molecules. Imagine a set of adjoining springs which is fixed at each end. When the spring at the farthest end is pulled taut and released, this spring will then push the adjoining spring, after which it will be pulled apart back to its initial state of equilibrium. This push and pull force is then transferred along all of the adjoining springs. The region where a spring is pushed is called a compression, and the region where it is pulled apart is known as a rarefaction. Diagram 1 shows the vibrating system of a tuning fork and the impact on the surrounding molecules. This method of propagation is expressed as a longitudinal wave (see diagram 2) and is an accurate model for how sound travels through air. A tuning fork propagates sound waves longitudinally. Upon excitation the prongs expand and contract, creating a periodic disturbance. This can be seen as periods of compression and rarefaction of molecules, until the prongs return to their natural state. Sound travels through solids in a different manner.
Imagine the spring model, as discussed, where the centremost spring is pulled from side to side, as opposed to pushed and pulled. This lateral movement causes lateral disturbance along both directions of the spring alignment, known as transverse waves. Propagation of this type is found in the vibrating systems of instruments, such as strings. Although similar in shape to a sine wave, the diagram represents the displacement from equilibrium over time creating a longitudinal sound wave. The periods of compression in a wave contain the content that is audible to our ears. Within the range of human hearing, sound waves repeat at such a rate that we perceive there to be a constant tone. This is similar to a light bulb that flickers on and off so fast that our eyes perceive there to be only constant light.

Speed of sound
Sound pressure in air is a scalar quantity, which means it can be measured at certain points but has no specific direction (as sound spreads from its source). Sound travelling through solids has a speed, or velocity V, in a direction dependent on the elasticity, known as Young's modulus, and the density of the material. This is expressed in the equation:
Vsolid = √(E/ρ)
where V = speed in metres per second (ms⁻¹), E = Young's modulus (Nm⁻²) and ρ = density (kg m⁻³).
As air has no Young's modulus, an alternative model is used, based upon the velocity of sound in air being relative to its temperature, and is calculated as:
Vair = √(E/ρ)
where ρ = density (kg m⁻³) and E = γP, where γ = a constant depending on the gas and P = the pressure of the gas (Nm⁻²).
The speed of sound travelling through air is proportional to the square root of the absolute temperature. As a result the speed of sound increases by about 0.6 metres per second (ms⁻¹) for every degree Celsius increase in ambient temperature. At normal room temperature the speed of sound in air is approximately 344 metres per second (ms⁻¹). It is always advisable when using moving image and sound resources in a live or performance context to take into account the differences in the speeds of sound and light, as light travels considerably faster than sound and arrives before the sound does for an audience which is at a distance from the two sources.

Sound waves are characterised by the principles of harmonic motion and the generic properties of waves. They are graphically represented over two dimensions, plotting amplitude against time (see Diagram 2).

The Sine Wave
The most common tool for understanding sound, due to its spectral purity, is the sine wave. It is the fundamental building block of all sounds (discussed later) and is used extensively for testing and analysis of audio equipment. Diagram 3, below, shows a rotating wheel where the constantly changing angle of rotation is plotted against time. This can be thought of as viewing the wheel along the plane of rotation. If we recap to the analogy of the connected springs, the effect of connecting a rotating wheel to the end of the springs, driving a periodic push/pull motion, can also be seen below. The result is variations in pressure over time which are proportional to the sine of the angle of the wheel's rotation. The phase angle θ is derived from the angular velocity multiplied by the time t, i.e. the speed of rotation measured in radians per second multiplied by the time t. This model describes the theory of simple harmonic motion. The sine of the angle of the wheel's rotation is displayed as a function of displacement.
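A small sketch of the speed-of-sound relations above: v = √(E/ρ) for a solid and, for air, the quoted rule of thumb of roughly 0.6 ms⁻¹ per degree Celsius, added here to an assumed baseline of about 331 ms⁻¹ at 0 °C. The steel figures are approximate textbook values used purely as an example.

```python
import math

def speed_in_solid(youngs_modulus_pa, density_kg_m3):
    """v = sqrt(E / rho) for a solid."""
    return math.sqrt(youngs_modulus_pa / density_kg_m3)

def speed_in_air(temp_celsius):
    """Approximate rule of thumb: ~331.3 m/s at 0 degC plus ~0.6 m/s per degC."""
    return 331.3 + 0.6 * temp_celsius

if __name__ == "__main__":
    # Assumed values for steel: E ~ 2.0e11 N/m^2, density ~ 7850 kg/m^3.
    print(f"Steel:     {speed_in_solid(2.0e11, 7850):.0f} m/s")   # roughly 5000 m/s
    print(f"Air, 20 C: {speed_in_air(20.0):.1f} m/s")             # ~343 m/s, close to the 344 quoted above
```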
A wave cycle is the completion of the peak positive and negative displacement and the return to equilibrium. The above diagram shows one cycle of a sine wave. The period of a wave is the time taken to complete one cycle. It is worth noting that the phase angle is a continually changing variable with time. Phase shift, on the other hand, is the difference in phase between two separate waves and can be constant, as shown in diagram 4. When two waveforms have a difference in phase they are often referred to as being out of phase with one another by the degree of the phase shift. When we hear sounds, the way in which we perceive the position (relative to ourselves) of the sound source relies on the difference in time at which the sound arrives at each of our ears (which is a difference in phase). The brain uses this time difference to calculate and interpret the position of the sound source.
Two sine waves with identical amplitude and frequency have a phase difference of 180°.

There are three main properties of a sine wave that describe its audible characteristics. These properties can help us to understand the basis of all sound waves, even when the content of the wave becomes much more complex.

Amplitude is the measurement of sound level displacement above and below the equilibrium pressure. It is measured as an amount of force applied over an area and as such is related to the energy or intensity of a sound. As a result, amplitude is also related to the perceived volume of sound, although it is not a unit or a measure of volume. Diagram 2 shows the amplitude of a wave plotted against time.

As mentioned above, phase is the angle of displacement at the starting point of the wave, i.e. when t = 0. Phase is expressed as theta, or θ. Phase is measured in degrees as the offset or onset (before or after) angle from 0°. The wave in Diagram 2 above begins at point A = 0 on the vertical (y) axis.

- Frequency and Pitch
The frequency of a wave is the number of cycles (or periods) per unit of time (seconds). Frequency is measured in Hertz and is derived from the formula:
F = 1/T
where T = the period in seconds. Physically, frequency rises with the stiffness of the vibrating system (of the sound source) and falls with its inertial mass. Frequency is the physical basis for differences in musical pitch. How is pitch different from frequency? Pitch is a perceived quality regarding how 'high' or 'low' a musical note is, and we perceive it on a roughly logarithmic scale of frequency. The model for standard western pitch is known as the Twelve-tone Equal Tempered Scale. This is more commonly known as the twelve semitones from A to G on a piano, where the interval between each note is of equal value and has an identical frequency ratio. These twelve semitones make up an octave.

Periodic and Aperiodic Waves
So far, only periodic waves have been considered. Whereas a tuning fork moves the surrounding air back and forth creating periodic vibration, a cymbal crash contains no fixed period. It creates a short-lived oscillation, a transient, and is aperiodic. In fact, nearly all the sound we hear is not constant but decays or ends abruptly. Diagram 5 shows the waveform of middle C played on a grand piano viewed at three levels of resolution. From 5a, we can see that the amplitude of the note starts at a high level at the time of excitation and then decays over time. Zooming into the beginning of the impulse, diagram 5b shows that the thick wave is comprised of many spikes in the waveform.
These spikes are the compressions and rarefactions resulting from the resonating piano string. Zooming in close to the beginning of the wave in 5c shows that this wave is aperiodic. There may be similarities in some cycles but the wave is complex and each cycle is unique.
Waveforms of a piano string excitation.

Harmonic Structure and the Nature of Sound
Musical and environmental sounds are much more complex than a single sine wave. These sounds contain a wide range of frequencies that start and stop at different times within the sound, with differing levels of loudness, harshness, clarity and character (known as timbre, discussed further below). At a fundamental level, all sounds can be thought of as the build-up of multiple sine waves. Therefore, in theory, sine waves with differing amplitudes, frequencies and phases can be added together to create any sound imaginable. To clarify this concept, the deconstruction of a square wave shall be considered. A square wave (illustrated in Diagram 4d) is another wave used for analysis, not found in the natural world but attainable through electronics. Technically it is a very complex waveform with many elements, but it can be understood simply as a fundamental sine wave, or f0, with the lowest frequency present, plus the summation of a further set of sine waves of the appropriate amplitude, frequency and phase. A 500Hz square wave sounds similar to a 500Hz sine wave but has a different character, or timbre. A square wave contains multiple modes of vibration, meaning it is rich in harmonic content (in theory the content is infinite, but in practice this is impossible to recreate). The most dominant wave, the f0, is, as expected, 500Hz. For a square wave the additional sine waves are odd harmonics of the f0. Harmonics are integer multiples of the fundamental frequency f0. When f0 = 500Hz, the 2nd and 3rd harmonics are 1000Hz and 1500Hz, and so on. For a square wave, these extra harmonics have an amplitude which is 1 divided by its number within the series of harmonics. Therefore the 3rd harmonic (at 1500Hz) has 1/3 of the amplitude of f0, the 5th harmonic 1/5, and so on.
The making of a square wave.
Diagram 6a shows the fundamental wave f0 and the first three odd harmonics needed to make up a square wave: f3, f5 and f7. The waveform in diagram 6b is the summation of these four waves, which begins to take the form of a square wave. By adding further odd harmonics to the series the crests and troughs become more flattened, which can be seen in diagram 6c, which shows a wave with twenty-two harmonics added to f0. With the addition of infinitely many odd harmonics a perfect square wave is formed, which is shown in diagram 6d. In a musical environment, most sounds are formed with additional waves which are inharmonic. These are known as partial tones. If a note is described as being C, then it has a fundamental frequency of C along with partial tones which usually occur very close to the harmonics of the fundamental. These partial tones affect the quality of the note produced and define the audible character of the resonant body, or the instrument. Different instruments produce harmonics and partials at different amplitudes and it is the makeup of these which allows us to distinguish the sounds of different musical instruments and sound sources. Starting from something with as simple a build-up of frequencies as a square wave, this technique of sine wave addition can be applied mathematically to express far more complex sounds using the principles of Fourier theory.
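A brief sketch of the square-wave construction just described: a fundamental f0 plus odd harmonics at amplitudes of 1/n. The choice of 500 Hz follows the example above; the sample instant and the harmonic counts are arbitrary.

```python
import math

def square_wave_approx(t, f0, n_harmonics):
    """Sum of the first n odd harmonics of f0, each with amplitude 1/n."""
    total = 0.0
    n = 1
    for _ in range(n_harmonics):
        total += math.sin(2 * math.pi * n * f0 * t) / n
        n += 2                      # a square wave contains only odd harmonics
    return total

if __name__ == "__main__":
    f0 = 500.0          # fundamental frequency, as in the example above
    t = 0.4e-3          # an arbitrary instant within one 2 ms cycle
    for count in (1, 4, 22):
        print(f"{count:2d} harmonics: {square_wave_approx(t, f0, count):+.3f}")
    # As more odd harmonics are added, the summed waveform approaches a square wave.
```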
The collage of sine waves used to create more complex waves can be expressed through a mathematical system called the Fourier Transform. This is based on two principles.
1. Any function of time can be represented as a sum of sine waves differing in frequency.
2. Each component has a distinct frequency, amplitude and phase.
Using these principles it is then possible to think of the complex wave of a piano string in diagram 5 as the sum of many individual sine waves.

Sound has a very low energy content. The energy put into generating a sound is generally much greater than that of the sound itself. When measured, most musical instruments have an energy efficiency of approximately 1% or less.
Examples of power levels:
Bass drum (full volume): 25 Watts
Piano (full volume): 0.4 Watts
Clarinet (full volume): 0.05 Watts

The energy of sound from a sound source is measured using the Inverse Square Law. Following this law, sound at a point twice as far from the source is spread over four times the area and is consequently one fourth of the intensity. This is explained in Diagram 6 below.
The Inverse Square Law
The intensity I over the area A is derived from the following:
I = P/4πr²
where P = the power of the source sound (Watts) and 4πr² = the area of the sphere of radius r surrounding the source.
As with many scientific theories, in practice the Inverse Square Law is not always obeyed. This is due to the following reasons.
1. Few systems emit sound equally in all directions. This is certainly true for musical instruments and the human voice.
2. The effect of reflections (see the section Elementary Acoustics below).
3. The effect of absorption (see the section Elementary Acoustics below).

Level and Loudness
Sound intensity level and the decibel scale
The perceived intensity of a sound is dependent on the sound energy density at the position of the listener. The relationship between sound intensity and perceived sound is measured logarithmically (which is a method used to scale a large range of values), using a unit called the decibel or db. The decibel is a unit used in consumer and professional audio equipment, be it hi-fi amps or mixing desks, as it allows us to apply a standard measurement for the volume of sounds we listen to. This db value is measured relative to a 0db reference level, which is typically set at the threshold of human hearing. Therefore the decibel measurement is the ratio of two sounds, one being 0db, and is a relative measure. The sound intensity level is calculated using the following equation:
SIL = 10 log10 (Isource / Ireference)
where Ireference = 10⁻¹² W/m².
It is common in the majority of audio systems that the maximum volume level is 0db, and it is possible to cut and boost the volume by decibel values. An important point to note when working with decibel measurements is that, due to their logarithmic form, doubling the sound intensity at source accounts for a 3.01db increase. Similarly, halving the level of intensity reduces the decibel amount by 3.01db.

Sound Pressure Level
In practice it is difficult to measure sound intensity as two readings are required. The sound pressure level SPL is also a logarithmic measure of a source pressure relative to a reference level. It is a measure of the pressure levels arriving from a sound source relative to the local ambient pressure level. Sound pressure, which is measured in Pascals (Pa), can be measured with a microphone.
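A hedged sketch of the two relations just given, the inverse square law I = P/4πr² and the sound intensity level SIL = 10 log10(I/Iref) with Iref = 10⁻¹² W/m². The source power reuses the 0.4 Watt piano figure from the table above purely as an example; the distances are assumed.

```python
import math

I_REF = 1e-12    # reference intensity, W/m^2, as quoted above

def intensity(power_watts, distance_m):
    """Inverse square law: I = P / (4 * pi * r^2)."""
    return power_watts / (4 * math.pi * distance_m ** 2)

def sil_db(i):
    """Sound intensity level: 10 * log10(I / I_ref)."""
    return 10 * math.log10(i / I_REF)

if __name__ == "__main__":
    P = 0.4                       # watts, the piano example from the table above
    for r in (1.0, 2.0, 4.0):
        i = intensity(P, r)
        print(f"r = {r:3.1f} m   I = {i:.3e} W/m^2   SIL = {sil_db(i):5.1f} db")
    # Each doubling of distance quarters the intensity, a drop of about 6 db.
```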
SPL meters are a practical tool which display the readings of a connected microphone in real time. They are useful for measuring musical performance levels and for delivering health and safety standards, and are often used to give accurate readings of peak (maximum) and average levels of a signal.
(Table: examples of sound sources, such as a loud rock concert (short term exposure), with their typical sound pressure levels in db and the corresponding sound pressures in Pa.)
Equation for SPL:
SPL = 20 log10 (Pressuresource / Pressurereference)
where Pressurereference = 20 μPa.

It is important, especially in the context of sound as a musical form, to understand the meaning of timbre. In fact, everyone is already aware of it and uses it on a daily basis, and timbre helps us understand the science of sound as well as the music of sound.
"Timbre is that attribute of auditory sensation in terms of which a listener can judge that two sounds similarly presented and having the same loudness and pitch are dissimilar." American National Standards Institute (1960). USA Standard Acoustical Terminology (Including Mechanical Shock and Vibration) S1.1-1960 (R1976). New York: American National Standards Institute.
Essentially, timbre is the descriptive element of sound quality. It is the distinction that one sound is harsher than another, softer than another, warmer, brighter, darker, and so on. It applies to the expressive variations in sounds, and the commonalities within similar instruments. Timbre can be described as the quality that differentiates two sounds which are identical in pitch and loudness. A useful analogy is trying to apply colours as descriptive words for music: a cello sounds brown, or a guitar sounds yellow. Another method is to try to describe a colour without actually using the name of the colour. Timbre is not an acoustical attribute but a perceptual and subjective sensation that is processed in the mind. It is how we perceive and understand sound in a non-scientific form.

Acoustics is the scientific study of sound behaviour. Sound has three stages: generation, propagation and reception. This section will discuss the basic factors that can shape sound travelling in an enclosed space during propagation to a listener, and some common methods of treating sound within these spaces.

Sound in enclosed spaces
So far, the concept of wave propagation has been discussed without the consideration of boundaries. As mentioned at the beginning of this document, sound can be altered by its immediate environment and any further physical mediums it passes through after initial excitation. In order to understand this better, let us consider an example of how sound propagates in an enclosed space. At one end of an empty room a balloon is popped, which is heard by a listener at the opposite end of the room. There are three key aspects to how the sound behaves which explain how it is perceived by the listener.
1. After the excitation of the bang, the listener hears the direct sound after a short delay. This sound will have travelled the shortest distance possible from the source to the listener and contains the highest intensity.
2. Shortly after the arrival of the direct sound, waves that have been reflected (bounced) off one or more surfaces will be heard. These are known as early reflections, and they would vary in the same room depending on the positions of the source and the listener. Early reflections provide information to the listener that is used to perceive the size of the space and the location of the source. If reflections are long, i.e. have further to travel, they are perceived as an echo effect. They are separate from the direct sound and as such can add changes to the timbre of the overall sound.
3.
Finally, after the early reflections have arrived, sound from many other reflection paths in all directions reaches the listener. As there are so many possible paths, the effect is of a dense build-up of reflections, which smears the overall sound together. This effect is called reverberation and is a coveted feature in musical sound as it adds depth and space. The time taken for reverberation to occur is related to the size of the room, as a smaller room gives a shorter distance for all the reflections to travel. This time between the direct sound and the reverberation is commonly known as pre-delay, the delay prior to reverberation. Sound intensity is lost on each reflection and consequently the reverberation decays over a duration known as the reverberation time.

When a room is excited by an impulse, the surfaces reflect the waves and at each reflection absorb some of the energy, so that the sound decays exponentially. In practice this is not always the case and energy can be reflected in cyclic paths. When the distance of a wave's path (i.e. from one wall to another) is an exact multiple of half of the wavelength (the length of one wave cycle), the energy is actually reflected back to the original position of the impulse. This creates a 'standing wave', and the added energy results in an increase in volume. For example, if a wave is transmitted towards a metal surface which is one wave cycle away in distance, the reflected energy will be added to the energy from the source signal, resulting in the waves being added together. This summation is heard as an increase in volume of the wave and is called resonance. These standing waves (or resonant modes) occur in the audible lower frequencies and are undesirable in an environment used for audio work. Non-parallel walls are used to offset the effect of standing waves, and bass-reducing absorption materials can lessen the audible effects in a room.

Rooms and spaces can often have a detrimental effect on what we hear. Too many reflective surfaces can create a harsher and metallic sounding space, where higher frequencies are amplified. Similarly, too many absorbent surfaces suck the energy from the room, creating a dry and empty sounding room. When monitoring audio a neutral sounding room is desired, one which has as little effect as possible on the signal path between the monitor source and the listener. As a result it can be necessary to apply treatment to a room, and depending on the use and the desired result this will mean different types of treatment. There are three main types of acoustic treatment that can alter a sound's characteristics in a physical environment. Treatment is commonly done by the addition or subtraction of surfaces which have strong acoustic properties. The three main properties are reflection, absorption and diffusion.

Surfaces which are reflective can be made from materials such as stone or wood panelling. Most reflective surfaces accentuate higher frequencies due to the nature of the waves (low frequency waves travel through materials much more easily than high frequency waves), therefore the addition of reflective surfaces can make a room sound brighter. Reverberation and echo are often undesirable in a room used for sound analysis as they add extra frequency content which can vary at different positions in the room. To eliminate these effects, absorbent surfaces that absorb (audibly deaden) waves can be applied.
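The standing-wave behaviour described above can be sketched numerically. Between two parallel walls a distance L apart, resonances (axial room modes) occur where the wall-to-wall path is a whole number of half wavelengths, giving frequencies of n × c/2L. The room dimension below is an assumed example and c is taken as the approximate 344 ms⁻¹ quoted earlier.

```python
C = 344.0   # approximate speed of sound in air, m/s

def axial_mode_frequencies(wall_spacing_m, count=4):
    """Frequencies at which a whole number of half wavelengths fits between the walls."""
    return [n * C / (2 * wall_spacing_m) for n in range(1, count + 1)]

if __name__ == "__main__":
    for n, f in enumerate(axial_mode_frequencies(4.0), start=1):   # assumed 4 m wall spacing
        print(f"mode {n}: {f:6.1f} Hz")
    # 43.0, 86.0, 129.0 and 172.0 Hz: all in the audible lower frequencies,
    # which is why such modes are a problem in small listening rooms.
```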
Low frequency waves tend to congregate in the corners of rooms and can often cause strong audible differences in rooms with symmetrical surfaces, through standing waves. Adequately absorbent materials can be installed to counter these effects.

Diffusion is the spread of frequencies caused by a medium. Diffusion is often used in the acoustic treatment of rooms and spaces to stop sound waves from grouping, which can create inconsistencies from one point in a room to another. Diffusion helps to reduce standing waves and flutter, and can make a small room appear audibly larger by creating a sense of openness. It can be used to stop early reflections from room boundaries merging with the sound from the initial source without absorbing the frequencies, so there is no loss in energy or overall frequency content.

Most professional monitoring spaces are designed and built by experienced professionals. For everyone else it is a case of judging for yourself whether the space you are using for monitoring needs improving. Sometimes, applying treatment to a room may not be possible and in these cases it is important to optimise the equipment and layout of the space to the natural sound of the room (for further information see the Advice document: Preparing your workstation). Here are two basic exercises to help you begin to understand the effect your room may be having on what you hear.
1. Listen to a recognisable audio file through your monitoring system at a reasonable level. This could be a favourite song that you are very familiar with or a piece of music you have been working on and therefore understand well. You may instantly hear imperfections in the room that colour the sound. If the piece sounds 'muddy' or lacking in clarity, it may be the result of too many reflective surfaces and not enough absorption. You may notice an increase in frequency content, for example too much bass. This could be attenuated by your system's equaliser.
2. Apply a 'sine sweep' through your monitoring system. This is done by amplifying a sine wave test tone at a fixed amplitude at different frequencies within the audible range. By literally sweeping the tone across the frequency range, or by changing the frequency to set tones across the spectrum (the standard frequency bands being 32, 64, 125, 250, 500, 1k, 2k, 4k, 8k and 16k), you may find that some frequencies appear to be louder than others. If so, then this is likely to be the result of standing waves and low frequency attenuation caused by the shape of the room. A lot of DAWs have a test tone feature built in to the system, often as a plug-in. It is recommended that you refer to the appropriate manual for directions on how to use this.
There are a number of companies who supply acoustic treatment materials which are simple to install. If you require structural modifications to a building for acoustic improvements then it is advisable to consult a qualified acoustician.

Glossary
Decibel (db): A logarithmic unit which measures the intensity or level of a signal.
Displacement: The act of movement from a state of equilibrium.
Excitation: The act of causing the initial displacement of an object's material.
Flutter: A quick succession of reflected sounds which occurs between two parallel surfaces and is normally stimulated by a transient sound. N.B. flutter is also the result of mechanical error when working with analogue tape.
Logarithm: A mathematical term for the ratio of values expressed by the base 10 or the function e.
The manner in which an acoustic wave is propagated, as characterised by the particle motion in the wave (shear, Lamb, surface or longitudinal).
Propagation: The physical action of the spreading and movement of waves.
Velocity: Speed, defined as the distance travelled per unit of time.
Resonance: The vibration of an object or medium at a specific frequency.
Sine wave: A continuous single-frequency periodic waveform whose amplitude varies as the sine of a linear function of time. Sometimes referred to as a sinusoidal wave.
Simple Harmonic Motion: A back and forth periodic motion, which is neither driven nor damped, which repeats about a central equilibrium point.
Transient: An impulse of sound.
The act of communication through wave movement.
Angus, J. and Howard, D., Acoustics and Psychoacoustics, Focal Press, Third Edition, 2006.
http://www.jiscdigitalmedia.ac.uk/guide/the-physical-principles-of-sound/
Momentum and Collisions Review
Part A: Multiple-Multiple Choice
- Momentum is a vector quantity.
- The standard unit on momentum is the Joule.
- An object with mass will have momentum.
- An object which is moving at a constant speed has momentum.
- An object can be traveling eastward and slowing down; its momentum is westward.
- Momentum is a conserved quantity; the momentum of an object is never changed.
- The momentum of an object varies directly with the speed of the object.
- Two objects of different mass are moving at the same speed; the more massive object will have the greatest momentum.
- A less massive object can never have more momentum than a more massive object.
- Two identical objects are moving in opposite directions at the same speed. The forward moving object will have the greatest momentum.
- An object with a changing speed will have a changing momentum.

a. TRUE - Momentum is a vector quantity. Like all vector quantities, the momentum of an object is not fully described until the direction of the momentum is identified. Momentum, like other vector quantities, is subject to the rules of vector operations.
b. FALSE - The Joule is the unit of work and energy. The kg m/s is the standard unit of momentum.
c. FALSE - An object has momentum if it is moving. Having mass gives an object inertia. When that inertia is in motion, the object has momentum.
d. TRUE - This is true. However, one should be quick to note that the object does not have to have a constant speed in order to have momentum.
e. FALSE - The direction of an object's momentum vector is in the direction that the object is moving. If an object is traveling eastward, then it has an eastward momentum. If the object is slowing down, its momentum is still eastward. Only its acceleration would be westward.
f. FALSE - To say that momentum is a conserved quantity is to say that if a system of objects can be considered to be isolated from the impact of net external forces, then the total momentum of that system is conserved. In the absence of external forces, the total momentum of a system is not altered by a collision. However, the momentum of an individual object is altered as momentum is transferred between colliding objects.
g. TRUE - Momentum is calculated as the product of mass and velocity. As the speed of an object increases, so does its velocity. As a result, an increasing speed leads to an increasing momentum - a direct relationship.
h. TRUE - For the same speed (and thus velocity), a more massive object has a greater product of mass and velocity; it therefore has more momentum.
i. FALSE - A less massive object can have a greater momentum owing to a velocity which is greater than that of the more massive object. Momentum depends upon two quantities - mass and velocity. Both are equally important.
j. FALSE - When comparing the size of two momentum vectors, the direction is insignificant. The direction of any vector would never enter into a size comparison.
k. TRUE - Objects with a changing speed also have a changing velocity. As such, an object with a changing speed also has a changing momentum.

- Momentum is a form of energy.
- If an object has momentum, then it must also have mechanical energy.
- If an object does not have momentum, then it definitely does not have mechanical energy either.
- Object A has more momentum than object B. Therefore, object A will also have more kinetic energy.
- Two objects of varying mass have the same momentum.
The least massive of the two objects will have the greatest kinetic energy.

a. FALSE - No. Momentum is momentum and energy is energy. Momentum is NOT a form of energy; it is simply a quantity which proves to be useful in the analysis of situations involving forces and impulses.
b. TRUE - If an object has momentum, then it is moving. If it is moving, then it has kinetic energy. And if an object has kinetic energy, then it definitely has mechanical energy.
c. FALSE - If an object does NOT have momentum, then it definitely does NOT have kinetic energy. However, it could have some potential energy and thus have mechanical energy.
d. FALSE - Consider Object A with a mass of 10 kg and a velocity of 3 m/s. And consider Object B with a mass of 2 kg and a velocity of 10 m/s. Object A clearly has more momentum. However, Object B has the greatest kinetic energy. The kinetic energy of A is 45 J and the kinetic energy of B is 100 J.
e. TRUE - When comparing the momentum of two objects to each other, one must consider both mass and velocity; both are of equal importance when determining the momentum value of an object. When comparing the kinetic energy of two objects, the velocity of an object is of double importance. So if two objects of different mass have the same momentum, then the object with the least mass has a greater velocity. This greater velocity will tip the scales in favor of the least massive object when a kinetic energy comparison is made.

- Impulse is a force.
- Impulse is a vector quantity.
- An object which is traveling east would experience a westward directed impulse in a collision.
- Objects involved in collisions encounter impulses.
- The Newton is the unit for impulse.
- The kg•m/s is equivalent to the units on impulse.
- An object which experiences a net impulse will definitely experience a momentum change.
- In a collision, the net impulse experienced by an object is equal to its momentum change.
- A force of 100 N acting for 0.1 seconds would provide an equivalent impulse as a force of 5 N acting for 2.0 seconds.

a. FALSE - Impulse is NOT a force. Impulse is a quantity which depends upon both force and time to change the momentum of an object. Impulse is a force acting over time.
b. TRUE - Impulse is a vector quantity. Like momentum, impulse is not fully described unless a direction is associated with it.
c. FALSE - An object which is traveling east could encounter a collision from the side, from behind (by a faster-moving object) or from the front. The direction of the impulse is dependent upon the direction of the force exerted upon the object. In each of these scenarios, the direction of the force would be different.
d. TRUE - In a collision, there is a collision force which endures for some amount of time. The combination of force and time is what is referred to as an impulse.
e. FALSE - The Newton is the unit of force. The standard metric unit of impulse is the N•s.
f. TRUE - The N•s is the unit of impulse. The Newton can be written as a kg•m/s². When substituted into the N•s expression, the result is the kg•m/s, the unit of momentum.
g. TRUE - In a collision, there is a collision force which endures for some amount of time to cause an impulse. This impulse acts upon the object to change its velocity and thus its momentum.
h. TRUE - Yes!!! This is the impulse-momentum change theorem. The impulse encountered by an object in a collision causes and is equal to the momentum change experienced by that object.
i. TRUE - A force of 100 N for 0.10 s results in an impulse of 10 N•s.
This 10 N•s impulse is equivalent to the impulse created by a force of 5 N for 2.0 seconds.
Momentum and Impulse Connection

- Two colliding objects will exert equal forces upon each other even if their mass is significantly different.
- During a collision, an object always encounters an impulse and a change in momentum.
- During a collision, the impulse which an object experiences is equal to its velocity change.
- The velocity change of two respective objects involved in a collision will always be equal.
- While individual objects may change their velocity during a collision, the overall or total velocity of the colliding objects is conserved.
- In a collision, the two colliding objects could have different acceleration values.
- In a collision between two objects of identical mass, the acceleration values could be different.
- Total momentum is always conserved between any two objects involved in a collision.
- When a moving object collides with a stationary object of identical mass, the stationary object encounters the greater collision force.
- When a moving object collides with a stationary object of identical mass, the stationary object encounters the greater momentum change.
- A moving object collides with a stationary object; the stationary object has significantly less mass. The stationary object encounters the greater collision force.
- A moving object collides with a stationary object; the stationary object has significantly less mass. The stationary object encounters the greater momentum change.

a. TRUE - In any collision between two objects, the colliding objects exert equal and opposite forces upon each other. This is simply Newton's law of action-reaction.
b. TRUE - In a collision, there is a collision force which endures for some amount of time to cause an impulse. This impulse acts upon the object to change its momentum.
c. FALSE - The impulse encountered by an object is equal to mass multiplied by velocity change - that is, momentum change.
d. FALSE - Two colliding objects will only experience the same velocity change if they have the same mass and the collision occurs in an isolated system. However, their momentum changes will be equal if the system is isolated from external forces.
e. FALSE - This statement is mistaking the term velocity for momentum. It is momentum which is conserved by an isolated system of two or more objects.
f. TRUE - Two colliding objects will exert equal forces upon each other. If the objects have different masses, then these equal forces will produce different accelerations.
g. FALSE - If the colliding objects have identical masses, the equal forces which they exert upon each other will lead to equal acceleration values for the two objects.
h. FALSE - Total momentum is conserved only if the collision can be considered isolated from the influence of net external forces.
i. FALSE - In any collision, the colliding objects exert equal and opposite forces upon each other as the result of the collision interaction. There are no exceptions to this rule.
j. FALSE - In any collision, the colliding objects will experience equal (and opposite) momentum changes, provided that the collision occurs in an isolated system.
k. FALSE - In any collision, the colliding objects exert equal and opposite forces upon each other as the result of the collision interaction. There are no exceptions to this rule.
l.
FALSE - In any collision, the colliding objects will experience equal (and opposite) momentum changes, provided that the collision occurs in an isolated system.
Momentum and Impulse Connection | The Law of Action-Reaction (Revisited)

- Perfectly elastic and perfectly inelastic collisions are the two opposite extremes along a continuum; where a particular collision lies along the continuum is dependent upon the amount of kinetic energy which is conserved by the two objects.
- Most collisions tend to be partially to completely elastic.
- Momentum is conserved in an elastic collision but not in an inelastic collision.
- The kinetic energy of an object remains constant during an elastic collision.
- Elastic collisions occur when the collision force is a non-contact force.
- Most collisions are not inelastic because the collision forces cause energy of motion to be transformed into sound, light and thermal energy (to name a few).
- A ball is dropped from rest and collides with the ground. The higher that the ball rises upon collision with the ground, the more elastic that the collision is.
- A moving air track glider collides with a second stationary glider of identical mass. The first glider loses all of its kinetic energy during the collision as the second glider is set in motion with the same original speed as the first glider. Since the first glider lost all of its kinetic energy, this is a perfectly inelastic collision.
- The collision between a tennis ball and a tennis racket tends to be more elastic in nature than a collision between a halfback and linebacker in football.

a. TRUE - A perfectly elastic collision is a collision in which the total kinetic energy of the system of colliding objects is conserved. Such collisions are typically characterized by bouncing or repelling from a distance. In a perfectly inelastic collision (as it is sometimes called), the two colliding objects stick together and move as a single unit after the collision. Such collisions are characterized by large losses in the kinetic energy of the system.
b. FALSE - Few collisions are completely elastic. A completely elastic collision occurs only when the collision force is a non-contact force. Most collisions are either perfectly inelastic or partially inelastic.
c. FALSE - Momentum can be conserved in both elastic and inelastic collisions provided that the system of colliding objects is isolated from the influence of net external forces. It is kinetic energy that is conserved in a perfectly elastic collision.
d. FALSE - In a perfectly elastic collision, an individual object may gain or lose kinetic energy. It is the system of colliding objects which conserves kinetic energy.
e. TRUE - Kinetic energy is lost from a system of colliding objects because the collision transforms kinetic energy into other forms of energy - sound, heat and light energy. When the colliding objects don't really collide in the usual sense (that is, when the collision force is a non-contact force), the system of colliding objects does not lose its kinetic energy. Sound is only produced when atoms of one object make contact with atoms of another object. And objects only warm up (converting mechanical energy into thermal energy) when their surfaces meet and atoms at those surfaces are set into vibrational motion or some kind of motion.
f. TRUE - See the above statement.
g. TRUE - If large amounts of kinetic energy are conserved when a ball collides with the ground, then the post-collision velocity is high compared to the pre-collision velocity.
The ball will thus rise to a height which is nearer to its initial height.
h. FALSE - This is a perfectly elastic collision. Before the collision, all the kinetic energy is in the first glider. After the collision, the first glider has no kinetic energy; yet the second glider has the same mass as the first glider and moves with the first glider's original velocity. As such, the second glider has the kinetic energy which the first glider once had.
i. TRUE - There is significant bounce in the collision between a tennis racket and tennis ball. There is typically little bounce in the collision between a halfback and a linebacker (though there are certainly exceptions to this one). Thus, the ball-racket collision tends to be more elastic.
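A worked-number sketch of a few of the comparisons made in this review: the momentum versus kinetic energy example for Object A and Object B, the two equivalent impulses, and the equal-mass air-track gliders in a perfectly elastic collision (contrasted with a perfectly inelastic one). The glider masses and speeds are invented; the other figures are quoted above.

```python
def momentum(m, v):
    return m * v

def kinetic_energy(m, v):
    return 0.5 * m * v ** 2

def elastic_final_velocities(m1, u1, m2, u2):
    """1-D perfectly elastic collision: momentum and kinetic energy both conserved."""
    v1 = ((m1 - m2) * u1 + 2 * m2 * u2) / (m1 + m2)
    v2 = ((m2 - m1) * u2 + 2 * m1 * u1) / (m1 + m2)
    return v1, v2

def inelastic_final_velocity(m1, u1, m2, u2):
    """1-D perfectly inelastic collision: the objects stick together."""
    return (m1 * u1 + m2 * u2) / (m1 + m2)

if __name__ == "__main__":
    # Object A (10 kg at 3 m/s) versus Object B (2 kg at 10 m/s).
    print(momentum(10, 3), kinetic_energy(10, 3))     # 30 kg*m/s and 45 J
    print(momentum(2, 10), kinetic_energy(2, 10))     # 20 kg*m/s and 100 J

    # 100 N for 0.1 s and 5 N for 2.0 s deliver the same impulse.
    print(100 * 0.1, 5 * 2.0)                         # 10.0 N*s each

    # Equal-mass gliders (assumed 0.2 kg, 0.5 m/s), with the second one at rest:
    print(elastic_final_velocities(0.2, 0.5, 0.2, 0.0))   # (0.0, 0.5): the first stops, the second moves on
    print(inelastic_final_velocity(0.2, 0.5, 0.2, 0.0))   # 0.25 m/s if they had stuck together instead
```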
http://www.physicsclassroom.com/reviews/momentum/momans1.cfm
There are two well-known features visible in Saturn's A-Ring, one called the Encke Minima, and the other called the Encke Division. The Encke Minima is a broad, low contrast feature that is located about halfway out in the middle of the A-Ring. The Encke Division is a narrow, high contrast feature located near the outer edge of the A-Ring, and unlike the Encke Minima is an actual division within the ring. It has been generally accepted that Johann Franz Encke first reported seeing what is now called the Encke Minima in 1837, while James Keeler first reported seeing what is now called the Encke Division in 1888. However, there is evidence that other observers saw both the Encke Minima and the Encke Division before this. This article will discuss these observations in greater detail. It will also discuss some of the important factors involved in seeing these two features, including how wide the rings were as seen from Earth, Saturn's altitude, its diameter, seeing conditions, the aperture of the telescope used, and magnification. In addition, the Encke Division is sometimes referred to as the Keeler Gap. These are actually two different features in Saturn's A-Ring, and this article will discuss the differences between them. Drawing by Johann Franz Encke in May of 1837 using a 9.6" refractor showing the broad, low contrast feature on the northern face, in the middle of the A-Ring, that is now called the Encke Minima. Saturn was at opposition that month, but it was relatively low in the sky in the constellation of Libra, with a declination of -13° and an altitude of 27°. Saturn's equatorial diameter was 18.64", and its polar diameter was 16.63". He observed the minima during April and May of 1837 and noted: "In the very clear night of April 25th I tried a new achromatic eyepiece made by the local skillful mechanic Duwe for the refractor on Saturn, I noticed something what I never before observed. Saturn's ring was, at first, divided in an outer and inner ring by the known separation split" (the split he was referring to is the Cassini Division). "In addition to this I saw perfectly clear the outer, less wider ring" (the less wider ring he was referring to is the A-Ring) "being separated by a stripe in two equal parts. The stripe appeared similar as the main separation is seen in less magnifying telescopes. It could still be followed a little from the outer ends towards Saturn's sphere." He noted that his observation of the minima was very similar to an observation made by Henry Kater twelve years before, on December 25th, 1825: "I fancied that I saw the outer ring separated by numerous dark divisions extremely close, one stronger than the rest dividing the ring about equally". Encke concluded that because the stripe had been seen in 1825, when the ring system was viewed from its southern side, as well as in 1837, when it was viewed from the northern side, the phenomenon should actually be a separation. We know today that the Encke Minima is not an actual division in the ring but rather a broad, low contrast feature. Drawing of Johann Franz Encke. Drawing by James Keeler made in January 1888 using the Lick 36" refractor showing the high contrast feature at the edge of the A-Ring, now called the Encke Division, on the northern face of Saturn's A-Ring. Saturn was at opposition that month, and was relatively high in the sky in the constellation of Cancer, with a declination of +19° and an altitude of 69°. Saturn's equatorial diameter was 20.32", and its polar diameter was 18.13". Photograph of James Keeler.
One of the earliest recorded observations of what is now called the Encke Minima and Encke Division was by Henry Kater. He observed Saturn often to test two reflecting telescopes that he owned. One was manufactured by Watson and had an aperture of 6-1/4" and a focal length of 40 inches, giving it a focal ratio of approximately f/6.4. Of this telescope he felt that "Unfortunately, the mirror of this telescope is too thin, and its performance, in consequence perhaps of this, is subject to much uncertainty; but under favorable circumstances of temperature, &c., the precise nature of which I have not been able to ascertain, I have never seen a telescope which gave a more perfect image." The second telescope was manufactured by Dollond, had an aperture of 6-3/4", and a focal length of 68", giving it a focal ratio of approximately f/10. He felt this telescope was a very good instrument. Portrait of Henry Kater. In December 1825 Kater observed Saturn and noted that the seeing conditions were very good, as there was no wind and a slight fog (ground fog usually indicates a very stable atmosphere). He was using his 6-1/4" aperture reflector with his best eyepiece at a relatively high magnification of approximately 280x, which was 45x per inch of aperture. He noted that there appeared to be three divisions around both ansae on the southern face of the A-Ring, with the central one being the darkest and widest; this central division is now known as the Encke Minima. The outermost division matches the location of the Encke Division shown in Keeler's drawing. Kater remarked "I have little doubt, from a most careful examination of some hours, that that which has been considered as the outermost ring of Saturn consists of several rings." He was unable to see this detail in the larger 6-3/4" reflector. Saturn was at opposition earlier that month, and about this time the southern face of the rings was near its maximum presentation towards the Earth. Also, Saturn was high in the sky in the constellation of Taurus, with an altitude of almost 60° and a declination of over +21°; its equatorial diameter was 20.54", and its polar diameter was 18.32". Kater made a drawing of Saturn, and requested that a friend who was observing with him examine the ring and make a drawing of it. To his friend the A-Ring appeared to be composed of a number of divisions, six in total, that seemed similar to a coarse line engraving. Another friend who observed Saturn that night, but was not much accustomed to telescopic observations, was able to see the central division, but not the two minor ones. Kater felt that this was because his friend was extremely shortsighted, and that the focus had been left set for Kater's eyes. While it is true that the total number of divisions seen was different for each observer, all recorded seeing at least one division in the A-Ring. Kater was the most experienced observer of the three, and his observations match those of later observers including William Dawes, William Lassell, and Phillip Sidney Coolidge. Drawing by Henry Kater on December 17, 1825 showing several ring features on the southern face of Saturn's A-Ring, including the Encke Division at the outer edge of the A-Ring, the Encke Minima near the middle, and a low contrast ring feature outside of the Cassini Division. In January 1826 Kater examined Saturn on two more occasions with the larger 6-3/4" reflector.
On the first night he thought that the A-Ring was composed of several divisions, but they were not as distinctly seen as they had been in December with the smaller 6-1/4" reflector. On the second night the ring appeared to be made up of several rings, but he wasn't positive. In January 1828 Kater once again examined Saturn with the 6-3/4" reflector, but did not see a trace of any divisions in the A-Ring. He therefore was no longer sure that these divisions were permanent, but due to ill health he was unable to observe Saturn very often. However, he found that two other observers had noted detail in the A-Ring that was similar to what he had seen. One was James Short, who was a well-known mirror maker. He reported in the mid-1700's, with one of his large reflecting telescopes (either a 12" or 18" aperture mirror with a focal length of 144", or 12 feet), that Saturn's rings were divided into three concentric rings. Here is a sketch of James Short's observation showing Saturn and its ring system: Observation by James Short showing several ring divisions in Saturn's rings. In this sketch, the ring outside of the letter A represents the Cassini Division, and he recorded two ring features outside of the Cassini Division, which appear to be the Encke Minima and Encke Division. He reported that the rings became more distinct as they approached letter B, and became points at letters C and E. Another person was Lambert Adolphe Quetelet, who told Kater that he had seen the southern face of the A-Ring divided into two concentric rings in December 1823 when observing Saturn with a 10" refractor. Saturn had been at opposition in November that year. It appears Quetelet saw the same feature in 1823 that Encke reported in 1837, and which is now called the Encke Minima. Since other observers had seen this detail, Kater finally presented a paper to the Royal Astronomical Society on May 14, 1830 on his observations of the divisions he had seen in Saturn's A-Ring. It should be noted that when Kater was observing Saturn in January 1828 he mentioned "The quintuple belt was also distinctly seen, and the shading, or deeper yellow, of the inside edge of the inner ring". This description of a "quintuple belt" is similar to an observation that William Herschel made on November 11, 1793, which is now believed to be an early observation of the C-Ring or Crepe Ring. It is possible that Kater saw the Crepe Ring without realizing it, as Herschel had done 35 years earlier. The discovery of the C-Ring is generally credited to William Cranch Bond and George P. Bond, as well as William Dawes, in 1850, but other observers including Herschel and Kater apparently saw it without understanding that they were seeing a separate ring. In September 1843 William Lassell and William Dawes, using Lassell's 9" f/12 equatorially mounted reflector, reported seeing what is now called the Encke Minima near the center of the A-Ring, and another division near the outside of the ring, now called the Encke Division, on the northern face of the A-Ring. It had been a very warm day with a temperature of 76 degrees, and the seeing conditions that night appeared to have been very good, with the sky being hazy. In regards to the Encke Minima, Dawes remarked "Having obtained a fine adjustment of the focus, I presently perceived the outer ring to be divided into two. This coincided with the impression Mr. Lassell had previously received."
For the Encke Division he noted "With 400 the secondary division was perceptible during occasional best views of the planet." The magnification of 400x that it took to see the Encke Division that night with their 9" reflector works out to about the same magnification per inch of aperture that Kater was using, around 44x per inch. Photograph of William Dawes. Photograph of William Lassell. Although the northern face of the rings was relatively wide open during this time, Saturn was much lower in the sky in the constellation of Sagittarius, and its altitude was only about 14°. Saturn's declination was -22°, its equatorial diameter was 17.57", and its polar diameter was 15.68". Also, Saturn was almost two months past opposition. Dawes mentioned that although he had heard reports of divisions in Saturn's A-Ring by Short, Quetelet, and Kater (he apparently was unaware of Encke's observation), he had been somewhat incredulous of any existing subdivisions. Now that he and Lassell had seen similar detail, he felt more inclined to report their observation. He also noted how the detail that he and Lassell saw that night was very similar to Kater's drawing from 1825, and how he wished that the planet had had an altitude of 60°, as it did when Kater had observed it, rather than an altitude of 14° on the night he and Lassell saw it. It does not appear that Lassell and Dawes made a drawing of Saturn during their observation on September 7, 1843, but this diagram made from SkyMap Pro Version 5.0 gives an idea of how wide the rings were that night. In November and December 1850 Lassell and Dawes reported seeing the Encke Division again. At this time the southern face of the rings was not as wide open as it had been previously, but Saturn was higher in the sky in the constellation of Pisces, with an altitude of almost 41°. Saturn's declination was +3°, its equatorial diameter was 19.09", and its polar diameter was 17.04". Saturn was at opposition in October of that year. On November 21, 1850, Lassell was observing Saturn with his 24" f/10 equatorially mounted reflector with magnifications of 219x, 567x, and 614x and noted "I several times suspected a second division of the outer ring at both ansa, but could not absolutely verify it. The appearance I saw, or suspected, was a line one-third of the breadth of the outer ring from its outer edge." Drawing of Saturn by William Lassell in December 1850 showing the Encke Division and Crepe Ring. William Dawes was able to see the Encke Division using his 6-1/3" f/16 Merz & Mahler refractor. On November 23, 1850, he noted that with a magnification of 425x he "Sometimes suspected that the outer ring had a short and narrow line upon it near its extremity." He wondered if perhaps the division in the outer ring was becoming visible again, and made a note to ask William Lassell to look for it. The next day he received a letter from Lassell indicating he had seen the division again. On November 25 Dawes noted that with a magnification of 282x "I was satisfied that, in the finest moments, a very narrow and short line was visible on the outer ring near its extremities; which was confirmed with power 425x." On November 29 he saw the division again, occasionally at 323x, but far more certainly with 460x. The magnification he used during these observing sessions works out to be between 45x and 73x per inch of aperture. It was also during these observing sessions in November and December 1850 that Dawes independently discovered Saturn's C-Ring or Crepe Ring.
Lassell visited Dawes in early December and verified Dawes' observation. The Bonds had discovered the Crepe Ring a couple of weeks earlier, but word did not reach Dawes and Lassell until after they had seen the ring themselves. So both the Bonds and Dawes are given credit for its discovery. Drawing of Saturn by William Dawes in December 1850 showing the Encke Division and Crepe Ring. Note that for both of these drawings the rings are much more closed up compared to Kater's observation in December 1825, or Lassell and Dawes' observation in 1843. Another observer who saw the Encke Division was Phillip Sidney Coolidge, in December 1854 and January 1855, using a Merz & Mahler 15" f/16 refractor. On December 26th he was observing Saturn and noted that the seeing seemed very good, as there was a thick haze. He was using magnifications of 141x, 316x, and 401x, and noted that "There is certainly one division in the outer half of ring A, and I cannot be positive that there is not a second one. If so, it is outside of the first division." On December 27th, using a magnification of 401x, he noted "There are two (and at times I suspect three) divisions in ring A." Photograph of Major Phillip Sidney Coolidge. Drawing by Phillip Sidney Coolidge on December 27, 1854 showing several ring features within Saturn's A-Ring, including the Encke Division at the outer edge of the A-Ring, the Encke Minima near the middle, and a low contrast ring feature outside of the Cassini Division. On January 9th, 1855, he was observing Saturn again with magnifications of 360x and 401x and was able to see these features more clearly defined: Drawing by Phillip Sidney Coolidge on January 9, 1855 showing the Encke Division at the outer edge of the A-Ring, the Encke Minima near the middle, and a low contrast ring feature outside of the Cassini Division. As when Kater had observed the Encke Minima and Encke Division in December 1825, Saturn was at opposition that month and was high in the sky in the constellation of Taurus, with an altitude of almost 68°. Saturn's declination was over +20°, its equatorial diameter was 20.34", and its polar diameter was 18.15". The drawings he made on December 27th and January 9th of the three ring features in the A-Ring are very similar to the drawing that Kater had made 29 years earlier. From the currently available evidence, it appears that the broad, low contrast feature located about halfway out in the middle of the A-Ring, now called the Encke Minima, that Johann Franz Encke was given credit for first seeing in 1837, was seen previously by James Short in the mid-1700's, by Lambert Adolphe Quetelet in 1823, and by Henry Kater in 1825. Also, the high contrast feature located near the outer edge of the A-Ring, now called the Encke Division, that James Keeler was given credit for first seeing in 1888, was seen previously by James Short in the mid-1700's, by Henry Kater in 1825, by William Lassell and William Dawes in both 1843 and 1850, and by Phillip Sidney Coolidge in 1854 and 1855. It should be noted that the narrow, high contrast feature located near the outer edge of the A-Ring, now known officially as the Encke Division by the International Astronomical Union, was never actually observed by Johann Franz Encke. The feature he did observe, now known as the Encke Minima, does not have an official designation from the International Astronomical Union.
To confuse matters somewhat further, the International Astronomical Union named a division that is even narrower than the Encke Division, and is located at the very edge of the A-Ring, the Keeler Gap after James Keeler, even though he never saw this division because it is not visible with ground based telescopes. The conventional wisdom seems to be that when using telescopes of less than 10" aperture, a magnification of 400x or more is needed to see the Encke Division. A better guide may be to consider the factors that worked in Kater's, Lassell's, and Dawes' favor when they observed the Encke Division. These included very good to excellent seeing conditions, a magnification of ~45x per inch of aperture or higher, and the rings being relatively wide open. If the rings are not as wide open, or Saturn is lower in the sky, then it may indeed require higher magnification and/or larger aperture to see the Encke Division, in addition to very good to excellent seeing conditions. Article © 2000 - 2013, Eric Jamison, All rights reserved. May not be used without written permission of the author.
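As a rough illustration of the ~45x-per-inch guideline above, the short Python sketch below turns it into arithmetic. It is not part of the article; the aperture, telescope focal length, and eyepiece focal lengths are invented example values, and the only formula used is the standard magnification = telescope focal length / eyepiece focal length.

```python
# Illustrative arithmetic only (not from the article): check whether a telescope/eyepiece
# pair reaches the ~45x-per-inch-of-aperture guideline. All values are hypothetical.

APERTURE_IN = 8.0                       # telescope aperture in inches
FOCAL_LENGTH_MM = 2032.0                # telescope focal length in mm (a hypothetical 8" f/10)
EYEPIECES_MM = [25.0, 10.0, 6.0, 4.0]   # eyepiece focal lengths to try, in mm

target_mag = 45.0 * APERTURE_IN         # magnification implied by the guideline (360x here)

for ep in EYEPIECES_MM:
    mag = FOCAL_LENGTH_MM / ep          # magnification = telescope focal length / eyepiece focal length
    per_inch = mag / APERTURE_IN
    status = "meets the ~45x/inch guideline" if mag >= target_mag else "below the guideline"
    print(f"{ep:4.0f} mm eyepiece: {mag:5.0f}x ({per_inch:4.1f}x per inch) - {status}")
```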
http://ejamison.net/encke.html
All the New Zealand CensusAtSchool activities have been developed using the investigative cycle: Problem, Plan, Data, Analysis, Conclusions. Statisticians use this cycle and we think that it is important that students should begin to use it as well. Using real data means that real investigations can be carried out. Because the data is real, there is probably more than one story that can be told by the data. Exploring stories in real data helps to make the process more meaningful and relevant for children.
- Formulating and defining a statistical question is important as it tells students what to investigate and how to investigate it.
- Most investigations begin with a wondering: ‘I wonder if boys are more technologically literate than girls?’ From this general question a statistical question needs to be developed so that a meaningful investigation can be carried out. All the terms in the question need to be defined and understood by the students.
- Activities have been written to allow for both collecting data from the class and obtaining it from CensusAtSchool. While the suggestion is that students survey students in their class, you could also use a sample of data from CensusAtSchool.
- Lead the students through a series of questions to help them think about the problem and to develop a statistical question of their own.
- The variables and terms in the question need to be understood and defined by the students so they interpret the question correctly.
- Students learn more effectively if they are encouraged to make predictions and then to test them and reflect on the difference between their prediction and the result.
- Level 3-4: Suggest the sample size and discuss sampling methods students could use. Students need to be able to justify the sampling and data collection methods.
- Level 5/6/7: Students should select their own sample size and method and provide justification.
- The first question to ask is: How would you answer the question now, before you gather the data? Remember to justify your answer.
- Further questions:
- how will we gather this data?
- what data will we gather?
- what measurement system will we use?
- how are we going to record this information?
- At every opportunity ask students to predict. This encourages them to think about the data and reveals their misconceptions. Later, dissonance is created between their prediction and the results, and so they may drop their misconceptions.
- Students may need to manipulate the data, for example, to allow for the thickness of clothing. Students may record their data in any format as long as it is clear and easily manipulated. A table is usually the best format. Tables are the most common organisational tool that statisticians use. The standard entry on the students' worksheet is shown below. Sometimes the table columns are filled in; sometimes they are left for the students to fill in. Each column will usually represent one variable. Each row usually represents a person from the CensusAtSchool databank or from your class.

| Students | Variable 1 | Variable 2 | Variable 3 | Variable 4 |
|----------|------------|------------|------------|------------|

- When students look at the data table they should notice features like the largest or smallest measurements, and modes. This will help them to select the correct scales for their graphs.
- The first few questions should help the students to look at the data in the table.
- A row stands for the measurements of one person.
- You could also draw students' attention to their own data so they have a reference point to reason with other data.
- Students should be encouraged to make another prediction now they have looked at the data in the table.
- Students should be encouraged to create their own graphs rather than being told which graph to use, so that they have ownership of the data detective and discovery process. It doesn't matter which graphs they use to plot the data, as long as they are investigating the stories in it and the graph is suitable for the type of data.
- One of the key aims of statistics is to deal with the variation in data and to say whether it is natural or random or whether it is caused by something else. You might like to ask students to think about what the graph would look like if …
- Students are asked to summarise their analysis using two sentence starters:
- I noticed that …
- I wondered if …
- Graph/ Data/ plotted data: The graph is the whole image of the plotted data, its title, and the axes. It is not just the plotted data, so to ask ‘what is the shape of the graph?’ doesn't make sense. It is more correct to ask: ‘What is the shape of the plotted data?’ or ‘What is the shape of the distribution?’
- Developing understanding about graphs and creating them. Recent research shows that younger children can create and reason with their own graphs much better than with standard graphs. This means that they should be encouraged to create their own graphs to explore the stories in the data. It is acceptable for children up to level 5 to be creating their own graphs. This means they may choose to draw two graphs side by side, pictograms, or even put all the data on one graph but have several keys. The aim is to encourage statistical thinking rather than perfect graphs. Teach graphing conventions, such as giving the graph a title and labelling the axes, as a way of making it easier for students to communicate their findings with others, rather than as a separate skill lesson. This demonstrates the purpose of conventions: to aid communication.
- Students find determining which scales to use difficult, as it depends on the data set. Use different sizes of data to give them experience in considering scales.
- Encourage students to create many different graphs. Statisticians use multiple graphs to explore the data, as each may describe a different story in the data. They also look for the best few graphs to present their stories. Students should be encouraged to do the same. For lower level students the worksheets are more guided in this aspect.
- The transition from ungrouped to grouped data is difficult. To help lower level students, use post-it notes or paper squares to construct graphs so that students can still see their individual records. Intermediate students often still need to be able to identify individual data points so that they can understand what the graph means. When introducing box plots in level five, keep the data points behind the plot so students can see how the box plot is related to the data. Ask questions to prompt students to think of the data in context, e.g. what does this data point mean? Where would a short person with big feet be on the graph?
- Students also find the transition from discrete to continuous data difficult. The transition from using frequencies to relative frequencies also requires a jump in their thinking, as relative frequencies require proportional thinking. Relative frequencies are critical for comparing unequal sized data sets, which is required in level five of the curriculum (see the short sketch after this list).
- Graphical sense and behaviours to encourage
- Recognise components of graphs, e.g. what is the mode? Where does most of the data lie? Where is the median?
- Using graphical language, e.g. spread, skew, variability, mean, mode, spikes
- Understanding relationships between tables, data, and graphs. Being able to convert between formats.
- Reading the graph objectively rather than adding their personal opinions
- Interpreting information in a graph and answering questions about it
- Recognising which graphs are appropriate for the data and the context.
- Looking for possible causes of variation
- Discovering relationships between variables. For example, as a person's height increases, foot size also increases.
- Developing questions for graphs: it is good to have questions from all of these levels and to ask them roughly in this order, as the earlier levels help students to look more closely at the graph.
- Reading the data: taking information directly off the graph. For example, what is the largest foot size? What is the mode? Who is the shortest student in this sample?
- Reading between the data: interpreting the graph; the answer will take one step to solve. For example, how many children would be able to ride a roller coaster that had a minimum height restriction of 1.30m?
- Reading beyond the data: extending, predicting or inferring. For example, based on this data, what do you think the height of Bigfoot was if he was a human and had feet that were 50cm long? If another student came into the class, how many texts do you think they will send in one day?
- Reading behind the data: connecting the data to the context. For example: If you measured another class's feet would you get a similar distribution? If you measured your left foot would you get a similar result? Why do you think there is a sudden increase in boys' heights when they are 15 years old?
- Averages and distributions:
- The measure of average is one way to describe and summarise a data set. It is also used to compare one data set to another. Because it is used to describe a data set, it should be slowly developed alongside other ways of describing data.
- Students need to develop a picture in their minds about how the data may look when it is graphed. To help them develop the picture, ask them to predict the shape of the distribution and then, after they have plotted the data, ask them to compare their prediction with the graph.
- Always ask students to describe the distribution of the data in a plot. Younger students will describe the shape as a bump, clump or even an object that is familiar to them such as a rabbit or a worm. This is the beginning of describing the central tendency and distribution of the data. Use a mixture of student language, e.g. ‘Where is the bump? How many bumps are there? What does that mean?’, while slowly changing the language to more sophisticated statistical language: ‘Where does most of the data lie?’
- Students' conclusions should relate back to their original question.
- They should also mention any features they had noticed or wondered about and investigated.
- A list of statistical language has been provided to help students construct a conclusion.
- Remind students to give reasons based on what they have found out in their investigation.
- Encourage students to use some statistical language in their conclusion.
Here are some phrases that might be useful: - For histograms: normal/skewed distribution, middle range - For scatterplots: outlier, slope of the graph, trend - For all analyses: these data suggest, probably, most, spread, shape, relative proportions, ratios, middle range. - Get students to think about who would be interested in their conclusions and why?
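Because the relative-frequency point above (comparing unequal sized data sets) is where many students stumble, here is a minimal Python sketch of that comparison. It is only an illustration, not part of the CensusAtSchool materials; the class sizes and counts are invented.

```python
# Minimal illustration (invented numbers) of why relative frequencies, not raw counts,
# are needed when comparing groups of unequal size.

boys  = {"n": 18, "own_phone": 12}   # hypothetical class data
girls = {"n": 27, "own_phone": 15}

for label, group in [("boys", boys), ("girls", girls)]:
    proportion = group["own_phone"] / group["n"]
    print(f"{label}: {group['own_phone']} of {group['n']} own a phone ({proportion:.0%})")

# Raw counts suggest girls (15) outnumber boys (12), but the relative frequencies
# (67% of boys vs 56% of girls) support the opposite comparison.
```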
http://new.censusatschool.org.nz/resource/how-kids-learn-the-statistical-enquiry-cycle/
One of the first steps in deciding which statistical test to use is determining what kinds of variables you have. When you know what the relevant variables are, what kind of variables they are, and what your null and alternative hypotheses are, it's usually pretty easy to figure out which test you should use. For our purposes, it's important to classify variables into three types: measurement variables, nominal variables, and ranked variables. Similar experiments, with similar null and alternative hypotheses, will be analyzed completely differently depending on which of these three variable types are involved. For example, let's say you've measured variable X in a sample of 56 male and 67 female isopods (Armadillidium vulgare, commonly known as pillbugs or roly-polies), and your null hypothesis is "Male and female A. vulgare have the same values of variable X." If variable X is width of the head in millimeters, it's a measurement variable, and you'd analyze it with a t-test or a Model I one-way analysis of variance (anova). If variable X is a genotype (such as AA, Aa, or aa), it's a nominal variable, and you'd compare the genotype frequencies with a Fisher's exact test, chi-square test or G-test of independence. If you shake the isopods until they roll up into little balls, then record which is the first isopod to unroll, the second to unroll, etc., it's a ranked variable and you'd analyze it with a Kruskal–Wallis test. Measurement variables are, as the name implies, things you can measure. An individual observation of a measurement variable is always a number. Examples include length, weight, pH, and bone density. The mathematical theories underlying statistical tests involving measurement variables assume that they could have an infinite number of possible values. In practice, the number of possible values of a measurement variable is limited by the precision of the measuring device. For example, if you measure isopod head widths using an ocular micrometer that has a precision of 0.01 mm, the possible values for adult isopods whose heads range from 3 to 5 mm wide would be 3.00, 3.01, 3.02, 3.03... 5.00 mm, or only 201 different values. As long as there are a large number of possible values of the variable, it doesn't matter that there aren't really an infinite number. However, if the number of possible values of a variable is small, this violation of the assumption could be important. For example, if you measured isopod heads using a ruler with a precision of 1 mm, the possible values could be 3, 4 or 5 mm, and it might not be a good idea to use the statistical tests designed for continuous measurement variables on this data set. Variables that require counting a number of objects, such as the number of bacteria colonies on a plate or the number of vertebrae on an eel, are known as meristic variables. They are considered measurement variables and are analyzed with the same statistics as continuous measurement variables. Be careful, however; when you count something, it is sometimes a nominal variable. For example, the number of bacteria colonies on a plate is a measurement variable; you count the number of colonies, and there are 87 colonies on one plate, 92 on another plate, etc. Each plate would have one data point, the number of colonies; that's a number, so it's a measurement variable. However, if the plate has red and white bacteria colonies and you count the number of each, it is a nominal variable. 
Each colony is a separate data point with one of two values of the variable, "red" or "white"; because that's a word, not a number, it's a nominal variable. In this case, you might summarize the nominal data with a number (the percentage of colonies that are red), but the underlying data are still nominal. Something that could be measured is a measurement variable, even when the values are controlled by the experimenter. For example, if you grow bacteria on one plate with medium containing 10 mM mannose, another plate with 20 mM mannose, etc. up to 100 mM mannose, the different mannose concentrations are a measurement variable, even though you made the media and set the mannose concentration yourself. These variables, also called "attribute variables" or "categorical variables," classify observations into a small number of categories. A good rule of thumb is that an individual observation of a nominal variable is usually a word, not a number. Examples of nominal variables include sex (the possible values are male or female), genotype (values are AA, Aa, or aa), or ankle condition (values are normal, sprained, torn ligament, or broken). Nominal variables are often used to divide individuals up into classes, so that other variables may be compared among the classes. In the comparison of head width in male vs. female isopods, the isopods are classified by sex, a nominal variable, and the measurement variable head width is compared between the sexes. Nominal variables are often summarized as proportions or percentages. For example, if I count the number of male and female A. vulgare in a sample from Newark and a sample from Baltimore, I might say that 52.3 percent of the isopods in Newark and 62.1 percent of the isopods in Baltimore are female. These percentages may look like a measurement variable, but they really represent a nominal variable, sex. I determined the value of the nominal variable (male or female) on 65 isopods from Newark, of which 34 were female and 31 were male. I might plot 52.3 percent on a graph as a simple way of summarizing the data, but I would use the 34 female and 31 male numbers in all statistical tests. It may help to understand the difference between measurement and nominal variables if you imagine recording each observation in a lab notebook. If you are measuring head widths of isopods, an individual observation might be "3.41 mm." That is clearly a measurement variable. An individual observation of sex might be "female," which clearly is a nominal variable. Even if you don't record the sex of each isopod individually, but just counted the number of males and females and wrote those two numbers down, the underlying variable is a series of observations of "male" and "female." It is possible to convert a measurement variable to a nominal variable, dividing individuals up into a small number of classes based on ranges of the variable. For example, if you are studying the relationship between levels of HDL (the "good cholesterol") and blood pressure, you could measure the HDL level, then divide people into two groups, "low HDL" (less than 40 mg/dl) and "normal HDL" (40 or more mg/dl) and compare the mean blood pressures of the two groups, using a nice simple t-test. Converting measurement variables to nominal variables ("categorizing") is common in epidemiology and some other fields. It is a way of avoiding some statistical problems when constructing complicated regression models involving lots of variables. 
I think it's better for most biological experiments if you don't do this. One problem with categorizing measurement variables is that you'd be discarding a lot of information; in our example, you'd be lumping together everyone with HDL from 0 to 39 mg/dl into one group, which could decrease your chances of finding a relationship between the two variables if there really is one. Another problem is that it would be easy to consciously or subconsciously choose the dividing line between low and normal HDL that gave an "interesting" result. For example, if you did the experiment thinking that low HDL caused high blood pressure, and a couple of people with HDL between 40 and 45 happened to have high blood pressure, you might put the dividing line between low and normal at 45 mg/dl. This would be cheating, because it would increase the chance of getting a "significant" difference if there really isn't one. If you are going to categorize variables, you should decide on the categories by some objective means; either use categories that other people have used previously, or have some predetermined rule such as dividing the observations into equally-sized groups. Ranked variables, also called ordinal variables, are those for which the individual observations can be put in order from smallest to largest, even though the exact values are unknown. If you shake a bunch of A. vulgare up, they roll into balls, then after a little while start to unroll and walk around. If you wanted to know whether males and females unrolled at the same average time, you could pick up the first isopod to unroll and put it in a vial marked "first," pick up the second to unroll and put it in a vial marked "second," and so on, then sex the isopods after they've all unrolled. You wouldn't have the exact time that each isopod stayed rolled up (that would be a measurement variable), but you would have the isopods in order from first to unroll to last to unroll, which is a ranked variable. While a nominal variable is recorded as a word (such as "male") and a measurement variable is recorded as a number (such as "4.53"), a ranked variable can be recorded as a rank (such as "seventh"). You could do a lifetime of biology and never use a true ranked variable. The reason they're important is that the statistical tests designed for ranked variables (called "non-parametric tests," for reasons you'll learn later) make fewer assumptions about the data than the statistical tests designed for measurement variables. Thus the most common use of ranked variables involves converting a measurement variable to ranks, then analyzing it using a non-parametric test. For example, let's say you recorded the time that each isopod stayed rolled up, and that most of them unrolled after one or two minutes. Two isopods, who happened to be male, stayed rolled up for 30 minutes. If you analyzed the data using a test designed for a measurement variable, those two sleepy isopods would cause the average time for males to be much greater than for females, and the difference might look statistically significant. When converted to ranks and analyzed using a non-parametric test, the last and next-to-last isopods would have much less influence on the overall result, and you would be less likely to get a misleadingly "significant" result if there really isn't a difference between males and females. Some variables are impossible to measure objectively with instruments, so people are asked to give a subjective rating. 
For example, pain is often measured by asking a person to put a mark on a 10-cm scale, where 0 cm is "no pain" and 10 cm is "worst possible pain." This is a measurement variable, even though the "measuring" is done by the person's brain. For the purpose of statistics, the important thing is that it is measured on an "interval scale"; ideally, the difference between pain rated 2 and 3 is the same as the difference between pain rated 7 and 8. Pain would be a ranked variable if the pains at different times were compared with each other; for example, if someone kept a pain diary and then at the end of the week said "Tuesday was the worst pain, Thursday was second worst, Wednesday was third, etc...." These rankings are not an interval scale; the difference between Tuesday and Thursday may be much bigger, or much smaller, than the difference between Thursday and Wednesday. A special kind of measurement variable is a circular variable. These have the property that the highest value and the lowest value are right next to each other; often, the zero point is completely arbitrary. The most common circular variables in biology are time of day, time of year, and compass direction. If you measure time of year in days, Day 1 could be January 1, or the spring equinox, or your birthday; whichever day you pick, Day 1 is adjacent to Day 2 on one side and Day 365 on the other. If you are only considering part of the circle, a circular variable becomes a regular measurement variable. For example, if you're doing a regression of the number of geese in a corn field vs. time of year, you might treat Day 1 to be March 28, the day you planted the corn; the fact that the year circles around to March 27 would be irrelevant, since you would chop the corn down in September. If your variable really is circular, there are special, very obscure statistical tests designed just for circular data; see chapters 26 and 27 in Zar. When you have a measurement variable with a small number of values, it may not be clear whether it should be considered a measurement or a nominal variable. For example, if you compare bacterial growth in two media, one with 0 mM mannose and one with 20 mM mannose, and you have several measurements of bacterial growth at each concentration, you should consider mannose to be a nominal variable (with the values "mannose absent" or "mannose present") and analyze the data using a t-test or a one-way anova. If there are 10 different mannose concentrations, you should consider mannose concentration to be a measurement variable and analyze the data using linear regression (or perhaps polynomial regression). But what if you have three concentrations of mannose, or five, or seven? There is no rigid rule, and how you treat the variable will depend in part on your null and alternative hypotheses. If your alternative hypothesis is "different values of mannose have different rates of bacterial growth," you could treat mannose concentration as a nominal variable. Even if there's some weird pattern of high growth on zero mannose, low growth on small amounts, high growth on intermediate amounts, and low growth on high amounts of mannose, the one-way anova can give a significant result. If your alternative hypothesis is "bacteria grow faster with more mannose," it would be better to treat mannose concentration as a measurement variable, so you can do a regression. 
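As a concrete (though hypothetical) illustration of the mannose example, the Python sketch below runs the two analyses described above with scipy: a one-way anova when concentration is treated as a nominal variable, and a linear regression when it is treated as a measurement variable. The growth numbers are made up for the sketch and are not data from the handbook.

```python
# Made-up bacterial growth data at several mannose concentrations, used only to
# illustrate the two analyses described above (not data from the handbook).
from scipy import stats

growth = {            # mM mannose -> replicate growth measurements (arbitrary units)
    0:   [1.2, 1.4, 1.1],
    20:  [2.0, 2.3, 1.9],
    50:  [2.8, 3.1, 2.9],
    100: [3.6, 3.4, 3.8],
}

# Treat concentration as a NOMINAL variable: one-way anova across the groups.
f_stat, p_anova = stats.f_oneway(*growth.values())
print(f"one-way anova: F = {f_stat:.2f}, P = {p_anova:.4f}")

# Treat concentration as a MEASUREMENT variable: regression of growth on mM mannose.
x = [conc for conc, reps in growth.items() for _ in reps]
y = [g for reps in growth.values() for g in reps]
result = stats.linregress(x, y)
print(f"regression: slope = {result.slope:.3f}, r^2 = {result.rvalue**2:.2f}, "
      f"P = {result.pvalue:.4f}")
```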
In my class, we use the following rule of thumb:
- a measurement variable with only two values should be treated as a nominal variable;
- a measurement variable with six or more values should be treated as a measurement variable;
- a measurement variable with three, four or five values does not exist.
Of course, in the real world there are experiments with three, four or five values of a measurement variable. Your decision about how to treat this variable will depend in part on your biological question. You can avoid the ambiguity when you design the experiment: if you want to know whether a dependent variable is related to an independent variable that could be measurement, it's a good idea to have at least six values of the independent variable. The same rules apply to ranked variables. If you put 10 different bandages on a person's arm, rip them off, then have the person rank them from most painful to least painful, that is a ranked variable. You could do Spearman's rank correlation to see if the pain rank is correlated with the amount of adhesive on the bandage. If you do the same experiment with just two bandages and ask "Which hurts worse, bandage A or bandage B?", that's a nominal variable; it just has two possible values (A or B), or three if you allow ties. Some biological variables are ratios of two measurement variables. If the denominator in the ratio has no biological variation and a small amount of measurement error, such as heartbeats per minute or white blood cells per ml of blood, you can treat the ratio as a regular measurement variable. However, if both numerator and denominator in the ratio have biological variation, it is better, if possible, to use a statistical test that keeps the two variables separate. For example, if you want to know whether male isopods have relatively bigger heads than female isopods, you might want to divide head width by body length and compare this head/body ratio in males vs. females, using a t-test or a one-way anova. This wouldn't be terribly wrong, but it could be better to keep the variables separate and compare the regression line of head width on body length in males to that in females using an analysis of covariance. Sometimes treating two measurement variables separately makes the statistical test a lot more complicated. In that case, you might want to use the ratio and sacrifice a little statistical rigor in the interest of comprehensibility. For example, if you wanted to know whether there was a relationship between obesity and high-density lipoprotein (HDL) levels in blood, you could do multiple regression with height and weight as the two X variables and HDL level as the Y variable. However, multiple regression is a complicated, advanced statistical technique, and if you found a significant relationship, it could be difficult to explain to your fellow biologists and very difficult to explain to members of the public who are concerned about their HDL levels. In this case it might be better to calculate the body mass index (BMI), the ratio of weight over squared height, and do a simple linear regression of HDL level and BMI. Sokal and Rohlf, pp. 10-13. Zar, pp. 2-5 (measurement, nominal and ranked variables); pp. 592-595 (circular variables). Place, A.J., and C.I. Abramson. 2008. Habituation of the rattle response in western diamondback rattlesnakes, Crotalus atrox. Copeia 2008: 835-843. This page was last revised August 20, 2009. Its address is http://udel.edu/~mcdonald/statvartypes.html. It may be cited as pp. 7-12 in: McDonald, J.H.
2009. Handbook of Biological Statistics (2nd ed.). Sparky House Publishing, Baltimore, Maryland. ©2009 by John H. McDonald. You can probably do what you want with this content; see the permissions page for details.
http://udel.edu/~mcdonald/statvartypes.html
To continue as a living species, a higher plant must succeed in community competition at the most precarious point of its life history, which for land plants, at least, is usually the period of reproduction. A plant's reproductive success requires the following exacting sequence during this period: 1. The production of sufficient viable seeds. The responses of plants to environmental factors vary greatly with the genetic make-up of the species; the more exacting these requirements and their timing, the more tenuous the plants' reproductive success. The giant sequoia is a classic example of reproductive fragility in the plant world, eloquently expressed in its restricted and much interrupted natural range, the relative stability of grove boundaries, and the diverse age-class structure within individual groves. Many of the groves have produced little or no progeny within the relatively short period since the beginning of western civilization in California. The much more extensive range of sequoian ancestors has led several persons to believe that the species may be nearing extinction, much like the dinosaurs of the past. Here rests an interesting parallel: the sequoia and the dinosaur represent the largest developments of a land organism within its kingdom in the earth's history. One of the most misunderstood facets in the sequoia life cycle is that of its reproductive requirements and the sequence of events which lead to the successful perpetuation of its kind. Until now the story, fragmented and erroneous, has been passed without question from one generation to the next by amateur and scientist alike. Only recently, following an extensive series of objective studies, have the ecological relationships of the sequoia life cycle begun to be well understood. The reproductive sequence generally begins in mid- to late winter, when the tiny staminate (male) cones literally cover the outer branchlets of the crown. At the height of pollen dispersal, golden clouds of this pollen, or male reproductive cells, may be seen drifting about on the slightest breeze or staining the snow with their yellowish tint. This great abundance of pollen simply ensures reproduction, other conditions permitting. Such safety in numbers is typical of organisms faced with low reproductive success. At the time of pollination, the female cone is only about the size of a grain of wheat and is hard, with a pearly gray tint (Fry and White 1930). In the first summer after fertilization, the developing cones begin to produce chlorophyll, which colors them bright green. At the end of the first growing season, the cones are usually more than three-quarters their full size and the cone scales are very soft and fleshy. From the flattened apices of the scales project slender hair-like bracts, which identify the cones as immature. The seeds, including the area occupied by the embryo, are now a light straw color, and generally cannot germinate, although, in an experiment at San Jose State University, a single first-year seed successfully germinated on wetted filter paper in a petri dish. Because first-year seeds have little opportunity to be disseminated onto the ground, their role in regeneration is very much limited. At the end of their second growing season, the cones attain maturity and generally produce viable seeds. By late summer of the second growing season, the cones approximate their full size.
The cone scales are somewhat woodier in texture, the hair-like bracts will have now fallen away, and the cone is usually a dark forest green. Maturity of the seeds is clearly indicated by the dark brown, longitudinal stripe which runs through the flat oval wing and marks the place where the embryo lies. With maturity, sequoia cones behave differently from those of most other conifers, neither turning brown nor in any way commencing to disseminate their seeds. Rather, the cones remain attached to the stems in a green, active photosynthetic state, and they increase slightly in size each year so that the cone scales, becoming bulbous, give the older cones a rather knobby appearance. There is good evidence that the vascular connections between the cones and the seeds remain intact as long as the cones remain green on the trees. Cones remain in this state rather commonly for 8-12 years, and at least one green cone was determined to be 22 years of age by Buchholz (1938). After 4 or 5 years, cones may begin to support a growth of foliose lichens, which sometimes completely cover them (Fig. 22). We determine the age of sequoia cones just as we determine the age of a tree's trunk, namely, by counting the annual growth rings in the peduncle or stem of the cone. A clean cross-sectional cut with a razor blade or knife will suffice to expose the rings for counting. The first 2 years' rings are large enough to be seen with only slight magnification, but the later ones are often extremely narrow and difficult to distinguish (Fig. 31). The phenomenon of green cone retention poses some interesting questions. Perhaps foremost is the situation in which mature, viable seeds are retained in a moist cone where summer temperatures are surely within the range required for germination. What, then, prohibits germination of the seeds in these circumstances? Experiments indicate that it may well be the function of the reddish, crystalline-like substance found between the cone scales. This substance, often referred to as red cone pigment, is in a liquid form at the time the cones are green. It is an amorphous, water-soluble compound, like the blackened exudate found on scarred parts of sequoia trunks, and makes up about half of the seeds' weight. The function of this pigment has been much debated over the years, in part because green cone retention was not understood. In the early part of the century, analyses indicated that the pigment was very high in tannin, and therefore many assumed that its role was to prevent insect and fungus attack. Fry and White (1930) felt that the pigment helped maintain the seeds' viability over the years. Beetham (1962), after soaking the seeds in a solution of cone pigment for 2 months, found no apparent advantage or disadvantage to seed viability or to the resulting seedling. She did find that planted seeds which had been treated with an 85% solution for 2 months were very slow to germinate, but that the total final germination was very near that of the controls. Martin (1957-58) obtained a rather different result from similar experiments in northern Germany. Although his studies concur with Beetham's in showing that seed germination time increased with increased concentration of the cone pigment extract, he found that in concentrations of 30% and more the seeds failed to sprout at all. Thus, he reasoned that the pigment's role was to arrest germination of the seeds within the cones through the establishment of a reverse osmotic gradient.
He further reasoned that when the cones opened, the liberation of the seeds occurred only when rainfall had dissolved and washed away the cone pigment. Martin reports identification of the pigment chemical as tannin glucoside (C21H20O10). In fungus cultures treated with the extract, however, not even its highest concentrations inhibited their growth. Such results do not necessarily negate the possible role of the pigment as a fungal preventive. Recent studies also indicate that at least two species of insects feed regularly within sequoia cones, the pigment apparently not deterring them in the slightest. Clearly, further investigation is necessary to clarify the exact role of the pigmenting chemical. Because the pigment is soluble in water, naturalist John Muir made ink of it and wrote numerous letters with the fluid. Letters written 60-70 years ago are still clearly legible today. The Forest Products Laboratory in Madison, Wis., analyzed this substance for its potential as commercial ink, but samples submitted to the Carter Ink Co. clogged pens readily because of gummy accumulations that attended the evaporation of the water (Kressman 1911); thus nothing ever came of it. The other major question in cone retention is just what finally triggers the browning of the cones to permit seed dispersal. Ring counts on the peduncles of browned cones removed from living trees show that the drying process usually commences after the 5th or 6th year of the cone's life on the tree, although it may often start only several years later. This great variation among individual trees largely eliminates genetic time-switches and climatic variations as causative factors. Stecker (1969) discovered that cone drying resulted from the activities of a small cerambycid beetle, Phymatodes nitidus, which feeds upon the cone flesh. This subject will be considered in greater detail under "Seed Dispersal and Cone Fall." Estimates of the age at which sequoias begin to produce cones also vary greatly in the literature, perhaps because authors have used different standards. While some have given the species' average age at which cones begin to appear, others have been specific for individual trees, and some give the age of abundant cone production. Still others cite the time at which cones bearing viable seeds are produced. For instance, Harwell (1947) sets an age of 200 years for the beginning of cone production. Clarke (1954) gives 175-200 years; the U.S. Department of Agriculture (1948), 125 years; Cook (1955), 70 years; Metcalf (1948), 50 years; Wulff et al. (1911), 24 years; and Anon. (1960), 11 months (Fig. 32). This represents a ratio of estimates of 218:1. This great reported variation is interesting because we have a long record of sequoias that have borne cones at an early age. The first such reports came out of England, where proud gardeners probably watched their precious newly imported specimens with intense interest. The Gardener's Chronicle (London) (Anon. 1859) reported a 4.5-ft specimen of Wellingtonia bearing cones, and Muggleton (1859) records a cone on a 36-inch specimen. Numerous other reports of cones borne at an early age came from throughout northern Europe in the following decade. None of these earliest cones bore viable seeds, however, which raised considerable curiosity until it was discovered that sequoias begin to produce ovulate cones several years before staminate cones, so that pollen was not at first available for fertilization in Europe.
When staminate cones eventually did begin to develop, so did viable seed. Brown (1868) records fertile seed produced for the first time in England on a specimen that he felt could not have been more than 10 years old. Undoubtedly, some of the cones removed for seed viability tests were first-year cones which bore only immature seeds. We have experimented with seeds taken from trees estimated to be within the 10- to 14-year range that had been planted by the U.S. Forest Service on the McGee Burn in the Converse Basin of Sequoia National Forest. Cones removed from near the tops of several 12- to 15-ft specimens yielded a total of 220 seeds. These were placed in petri dishes for germination. A total of 64 seeds, or 29%, germinated, a viability percentage which compares favorably to that of seeds taken from trees several hundred years old (Hartesveldt et al. 1967). The specimen recorded by Anon. (1960) as producing a cone at 11 months was perhaps influenced by unnatural growth conditions at the Argonne National Laboratory in Illinois. It was subjected to an abnormally long daily photoperiod of 16 hours for its entire 11 months, plus a controlled temperature regime. The seedling was only 16 inches tall when its leader shoot began to swell and formed a cone which eventually arrested the growth of the terminal shoot. A lateral branch took over the leader shoot role, and the cone and its shoot became lateral. While this specimen was unusually young and small for cone production, such cones are not uncommon in nature on trees 6-10 ft tall and perhaps 8-10 years of age. The cone's tip usually bears the continuation of the leader shoot. Although tests have not been made, it has been generally assumed that these cones bear no viable seeds. Sequoias of all sizes and presumably of all ages are prolific bearers of cones and viable seeds. We know of no age beyond which the tree's reproductive capacity stops or even diminishes. Even severely damaged trees continue to bear abundantly. Because the crown is so high and the foliage so dense, it has been difficult at best to obtain reliable estimates of the total number of cones on mature trees. Recently, with an elevator rigged in a 290-foot specimen now known as the Castro Tree in the Redwood Mountain Grove of Kings Canyon National Park, Stecker has made in-crown studies which may now permit more accurate estimates than those made from the ground by Shellhammer. These studies indicate that previous estimates of cone-load and number of seeds dispersed yearly have been greatly in error. Almost certainly, earlier authors have also misunderstood the life span of living green cones and the ratio of green cones with viable seeds to browned cones with none. According to these in-crown studies, a large sequoia tree might be expected to contain at any given time about 11,000 cones, of which perhaps 7000 would be closed, fleshy, and photosynthetically active. The remaining 4000 cones would be opened, brown, and largely seedless, although Fry and White (1930) record viable seeds remaining in cones 16 years after the latter have turned brown. An average of 1500-2000 new cones is produced each year. Occasionally, optimal weather conditions greatly increase the production of new cones. For instance, in 1970, the Castro study tree produced 20,697 new cones. Each year somewhat fewer older cones, both green and brown, are felled by wind, rain, and snow than in the previous year.
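The figures just given reduce to simple arithmetic, and the short Python sketch below merely restates them (220 seeds sown with 64 germinating, and a standing crop of roughly 11,000 cones of which about 7000 are green). It is offered only as an illustrative check on the quoted numbers, not as part of the original studies.

    # Germination test on seeds from the young Converse Basin plantings
    seeds_sown = 220
    seeds_germinated = 64
    print(f"Germination: {seeds_germinated / seeds_sown:.0%}")   # about 29%

    # In-crown cone-load figures for a large sequoia, as quoted in the text
    total_cones = 11000
    green_cones = 7000                       # closed, fleshy, photosynthetically active
    brown_cones = total_cones - green_cones  # opened, brown, and largely seedless
    new_cones_per_year = (1500, 2000)        # average annual production
    print(green_cones, brown_cones, new_cones_per_year)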
Variations in total cone-load are very probably genetically controlled, and certainly controlled by site location; very large specimens growing on favorable sites may bear more than 40,000 cones at one time, while those on the poorer sites may have as few as 6000. The upper part of the crown of any mature tree invariably produces a greater abundance of cones than its lower portions (Fig. 33). A correlation apparently exists between the number of new cones produced each year and the quantity of available soil moisture during the winter and spring months. The cones produced in wet years are more numerous and yield seeds of greater viability than those produced in dry years. While Fry and White (1930) record the average viability of sequoia seeds as only 15% of the total crop, our experience shows that 35% is a more representative figure. The actual percentage of viable seeds produced varies with such factors as the tree's topographic site, the cone's age, its specific location in the crown, and the seed's position within the cone. Large seeds usually are more viable than small ones. Initial studies show that rocky slopes and ridges are definitely more advantageous for seed viability than flattish bottom lands with deeper soils. Our experimental plots in the Redwood Mountain Grove have trees on the rockier slopes yielding seeds with 54% average viability, whereas in the flats along Redwood Creek the figure is 32%. In preliminary studies with cones of various ages, it was found that seeds which retain their vascular connections with the cone continue to grow. This is partially borne out by the viability statistics in Table 2, derived from both snap tests and actual germination tests in petri dishes. The snap test involves breaking the seed across the embryo and noting the embryo's color. If the embryo is an off-white color and completely fills the embryo case, the seed is most likely viable. However, if it is snow-white and shriveled, or brownish, the seed is not viable (Clark 1907). On the basis of a previous assumption that the seeds of the living cones grow and perhaps increase in size and viability, the decrease in viability following the fifth year appears inconsistent. However, the beetle-feeding activities mentioned earlier provide a plausible explanation.
TABLE 2. Germination of sequoia seeds of various ages.
Within individual cones, the seeds' germination was 26% in the basal portion (nearest the peduncle), 59% in the central portion, and 36% in the apical region. Perhaps this variation is due to the effectiveness of the vascular connections between the seeds and the cone scales, which in turn may also control the size of the seeds produced. This, however, is yet to be verified. Studies indicate that cones produced near the tops of the trees tend to be smaller than those lower in the crown, and that the larger cones produce a higher percentage of viable seeds. Metcalf (1948) records large cones with seeds showing a 75% viability, a figure which Stecker has verified in cones from the crown of the Castro study tree. The older cones cluster along the branches' main axes, back from the growing tips; the newest cones are in groups of 2-19 at the very ends of the main branches and are often surrounded by dense new foliage; the mature cones of various age classes are distributed proportionately between these two extremes. Cones from the last 3- or 4-year age groups, that is, ages 1 through 3 or 4, are about equally represented on a branch.
About 65% of the living cones are up to 5 years old, 25% are 6-10 years old, and up to about 10% are 11-20 years old. Over the years, writers have quoted others who said that a mature sequoia tree sheds close to a million seeds a year, a figure derived by unknown methods and which the in-crown studies show to be a gross overestimate. Ground-level calculations, however, were undoubtedly difficult because of the dense foliage obscuring many of the inner cones. Certain assumptions are necessary to obtain a more realistic figure of annual production. We can assume that the average number of cones opened or otherwise lost per year probably equals the average number produced. Then, according to the given average of 1500-2000 new cones per year and an equal loss due to browning and falling, and if we assume 200 seeds per cone (see page 88), the mature sequoia would disperse between 300,000 and 400,000 seeds per year. This means a potential seed dispersal of from 200,000 seeds per acre per year to perhaps as many as 2 million. Dispersal takes two different forms: seeds may be distributed from tree-top level as the cones either open upon browning or are eaten by chickarees on the limbs of the crown; or, the cones may be cut or otherwise fall to the ground, where they dry out and spill their seeds upon a relatively small area of the soil or leaf litter surface. Each has advantages and disadvantages. A very important point here is that a high percentage of sequoia seed is dispersed by the activities of animals. The chickaree (Fig. 34) inhabits much of the Sierran forest region. Outside sequoia groves, the chickaree feeds commonly on the seeds of the sugar pine, white fir, red fir, ponderosa pine, incense-cedar, etc. But in sequoia groves where sequoias displace some of the other tree species, this handsome little squirrel also feeds on the fleshy green scales of the younger sequoia cones, much as people do in eating the flesh from artichoke bracts. In the process, the seeds, too small to have much food value for the squirrel, are dislodged and, if the cone is eaten on a high limb, they are scattered over the ground. Dispersal from this height has the potential advantage that wind drift may extend the species' range. In circumstances not fully understood, chickarees will cut innumerable cones from individual trees, dropping them immediately to the ground. They may cut cones from individual trees for 6 or 7 years in a row, thus changing the cone-load factor for a considerable period. Shellhammer (1966) observed a lone chickaree which cut down 539 green sequoia cones in 31 minutes. There are many records of large cone cuttings in a single day or during a season, one of the most amazing being that of Fry and White (1930). In 1905 they recorded a single chickaree cutting cones which, when gathered up, filled 38 barley sacks and yielded 26 lb of seeds. Calculating 91,000 seeds to the pound and 200 seeds per cone, we obtain a figure of nearly 12,000 cones. Once the cones are cut, the chickaree caches them away as future food. Because the cones are edible only when green and tender, the storing must be such as to maintain this condition over the longest possible time. Many chickarees will bury the cones individually in the leaf litter and duff, or in caches of six or seven cones where space permits. In places, often in the bed of an intermittent stream where the soil is moist or bog-like, they will store hundreds or even thousands of cones in impressive piles.
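Before continuing with cone storage, the seed-output arithmetic quoted above can be restated compactly. The following Python sketch uses only the numbers already given in the text (1500-2000 new cones per year, an assumed 200 seeds per cone, 91,000 seeds to the pound, and the 26 lb of seeds recovered by Fry and White); the implied mass per seed in the last line is a derived illustration rather than a figure stated by the original authors.

    # Annual seed dispersal: new cones per year times an assumed 200 seeds per cone
    seeds_per_cone = 200                       # assumed average (see page 88)
    for new_cones in (1500, 2000):
        print(new_cones * seeds_per_cone)      # 300,000 and 400,000 seeds per year

    # Fry and White's 1905 record: one chickaree's cuttings yielded 26 lb of seeds
    seeds_per_pound = 91000
    pounds_of_seed = 26
    cones_cut = pounds_of_seed * seeds_per_pound / seeds_per_cone
    print(round(cones_cut))                    # about 11,800 cones, i.e., "nearly 12,000"

    # The same seeds-per-pound figure implies an average seed mass of roughly 5 mg
    print(round(453.6 / seeds_per_pound * 1000, 1), "mg per seed")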
Sometimes the space between a fallen sequoia log and the ground is used, or the base of a hollow tree may be packed tightly with cones. The chickaree, returning in fall and winter to eat, usually spills the seeds onto the ground in the relatively small space where it feeds. This wastage certainly contributes to the sequoia's potential regeneration, although without the advantage of wind dispersal or of shallow burial in soft, friable soil following a fire. However, seeds properly placed during this feeding process are in good cold storage throughout the winter and so are ready for spring germination. The many cones not eaten by chickarees eventually dry out and spill their seed contents too. In late summer and autumn, especially on warm days with a breeze, sequoia seeds flutter to the ground in an almost constant rain. Their small oval wings help insure that the slightest movement of air will carry them away from the parent tree, perhaps as much as 600 ft. The already mentioned feeding activities of the beetle, Phymatodes nitidus (Fig. 35), causing the cones to turn brown, probably bring about much of this seed fall. Stecker (1969) discovered this beetle high in the crown of the Castro Tree during the summer of 1968. In virtually all the browned and dried cones, tiny insect emergence-holes were found. These insignificant indications of insect activity, having somehow eluded discovery by science until then, gave the first evidence that browning was not simply a natural aging of the giant sequoia's cone tissues. Dissection of thousands of these cones showed that the agent is a very small cerambycid or long-horned wood-boring beetle larva (Phymatodes nitidus LeC) and, furthermore, that it plays an important role in the giant sequoia's regeneration. The larvae's length is only 3-5 mm (1/8 to 1/5 inch). Oviposition by the female beetle takes place at cone-scale junctures and occasionally along the cone stem or peduncle. Upon hatching, the larval borer chews its way into the cone's interior, obtaining nourishment from the carbohydrate-laden tissues of the cone scales. The mines are about one-third the diameter of a lead pencil and are packed with chewing and digestive waste, or frass, which in this case resembles a fine salt-and-pepper mixture. The vascular channelways or veins (two layers of them in each cone scale of giant sequoia) are often severed during this feeding, which diminishes water conduction to the ends of the cone scales. At first one scale will brown, and then others, as they dry out, following the direction taken by the feeding larvae. As the mining turns the green, fleshy tissues into a sun- and air-dried cone, the flesh shrinks, creating gaps between the cone scales. And so, the cone's hold upon the seeds is relaxed and dispersal follows. Seeds thus released from the tree's top, which also has the greatest cone load, can be carried to considerable distances by even relatively light winds. Normally, Phymatodes does not eat the seeds although it may damage some of them in its feeding pathway. As mentioned earlier, browned cones do not necessarily drop their seeds immediately upon opening, but rather drop them slowly over the years as the vascular connections are severed. The beetle larvae average 1.4 individuals per cone in the cones attacked, and Stecker has occasionally noted as many as eight in a single cone. About one-quarter of the 39,508 cones in the Castro Tree are "browns," or cones on which Phymatodes has fed. 
If this is more or less representative of mature sequoias, this tiny insect deserves considerable credit for spreading the sequoia's seeds. It seems that each species' well-being depends on some reciprocal arrangement of services. Chickarees, interestingly, seem to prefer cone flesh when the cones are between 2 and 5 years old, cutting 1-year cones only infrequently and, for some reason, seldom eating them. As Phymatodes is apparently most prevalent in cones 4 years and older, there is really little competition between these two animals. This further insures that seeds of all age classes are shed. The sequoia, the chickaree, and the cone beetle may someday reveal a fascinating coevolutionary story because without the two animal organisms, as well as frequent fires, the giant sequoia might not exist today. We know that the chickaree prefers the larger seeds of other conifers, and that it feeds more heavily upon the cone flesh of sequoias in years when the other food is in poor supply. If a fire destroys a large percentage of the other tree species, chickarees provide added assurance by eating sequoia cones and dispersing the seeds at a time when the seed bed is a soft, friable mineral soil. Fires also speed the rate at which cones will dry out and scatter their seeds to the ground. In essence, this is a back-up mechanism insuring seed release and rather heavy seeding of sequoias after a fire. While animal activities cause most of the seed dispersal, cones do fall during wind and ice storms and because of heavy loads of wet snow. The proportion of this kind of seed-fall has never been measured. Certainly it is locally heavy, but occurs so irregularly that its overall role in regeneration may be slight in comparison to animal-influenced release. Some cones find their way into streams and are carried considerable distances, extending the range of the species. (See details in sections discussing present distribution.) What happens to a cone and its seeds if not cut by a chickaree or opened by the feeding of Phymatodes is a minor question. Note that, although the oldest known cone in a green, growing condition was 22 years of age, the longer the cones persist on the tree, the less viable their seeds; hence, increased length of cone retention does not benefit the species. Furthermore, the older cones frequently become so encrusted with foliose lichens that the seeds are less able to fall free, even when the cones finally open. Whereas seed viability in green mature cones may be as high as 75%, snap tests of seeds taken from the surface of the soil indicate an average viability of between 1 and 2%! This markedly reduced reproductive potential seems due mostly to the sun's direct radiation and to desiccation resulting when radiant energy is converted into heat at the soil surface. When kept dry and out of sunlight, sequoia seeds are known to maintain their viability for many years. In tests, a viability of 40% dropped to 10% when the seeds were exposed to sunlight for 10 consecutive days, and to 0% by the end of 20 days. Although this reduction in viability is proportionate to the size of the seeds, the average sampling of seeds tested shows clearly how vulnerable this stage of the sequoia's life cycle is. Thus man, by changing the tree's environment, can either aid or retard its reproduction rate. Contrary to some earlier reports, animals do not greatly disturb sequoia seeds on the ground.
Insect damage to the embryo is probably the most serious effect noted to date, but we do not know whether this occurs before or after loss of viability by radiation and desiccation. Vertebrate animals apparently rarely eat the seeds on the ground. Repeated tests by Howard Shellhammer and by Beetham (1962) both in and out of sequoia groves have shown that animals favor sequoia seeds least among the seeds of sugar pine, ponderosa pine, Jeffrey pine, red fir, white fir, and incense-cedar. Usually, if sequoia seeds are placed on the ground in piles, squirrels, chipmunks, mice, or shrews merely kick them about and leave them uneaten. For germination to occur, the giant sequoia requires not only the necessary physical conditions but also the proper sequence of events. Perhaps even more uncertain is the survival of the resulting seedlings. Muir (1878) was very probably the original source of the idea that bare mineral soil is essential for the germination of sequoia seeds. This was a convincing story, which virtually all sequoia literature repeats without question. Yet, in some sequoia groves, there are extensive areas of almost bare ground within the seeding range of mature sequoias on which we find no sequoia seedlings year after year. In other instances, mineral soil plays absolutely no role in germination, and small sequoia seedlings may be found growing from rotting stumps and other masses of organic debris such as thick leaf litter. Normally, such debris holds too little moisture to permit continued growth; if the litter is too thick, the seeds' small energy package does not allow the seed roots to penetrate to mineral soil and, if germination does take place, the young trees often die aborning. Mineral soil, then, is only one of several influences upon germination of sequoia seeds. Statistically, it is the most important substrate for germination, but is not an absolute requirement. The condition of the bare mineral soil may well be at least as critical for the survival of seedlings which do actually get started as it is for germination. To better insure germination, the soil must be loosened by some sort of disturbance before the seeds are distributed. The tiny seeds lack sufficient weight to become buried upon falling, except where the soil is very soft and friable. This perhaps explains the absence of seedlings on soil surfaces where the seeds may be unable to penetrate. When lying on the exposed soil surface, the seeds not only quickly lose their viability but also seem to germinate poorly even when the soil is moist. Fry and White (1930) claim that seeds pressed against the soil surface by heavy snow germinate well, but recent experimentation indicates that germination is greatest when the seed is completely surrounded by moist soil, as when buried. That seeds germinate on the surface is well documented; but again, survival is extremely low. Disturbances of the soil may come about in any of several ways. Falling trees leave exposed and loosened soil in their root pits, and the skidding trunks on sloping terrain may "plow" up the soil and even bury some seeds. Avalanches of snow and floods of water contribute to the burial which favors germination. But fire is perhaps the most influential and certainly the most widespread of all the natural factors. In fact, there is a high probability that, without fire, the giant sequoia would not today be an extant species.
Fire burns the organic content of the uppermost horizon of the soil, leaving temporary voids between the soil particles, so that the tiny seed falling from the crown is often buried to a depth sufficient to insure its full contact with the soil and to eliminate potential radiation damage. Usually, the soft, friable surface condition of the soil after fire is short-lived; rainfall, wind, and gravitational settling of the soil particles result in compaction, which in turn reduces successful germination. We have also observed that the heat of fires speeds the cone's drying and that seed-fall during the days immediately following fires is greatly increased. In recent years, of course, man's mechanical disturbances to the soil, as along roadsides and in areas of building construction and logging activity, have likewise created a receptive medium for seed burial, so that seedling growth is often extremely heavy and the degree of survival is high. Other factors that influence germination, for better or for worse, are air temperature, soil moisture, light, mineral content of the soil, soil pH, soil type (texture), and depth of the seed in the soil. These, and other lesser influences, have been thoroughly investigated by Stark (1968) in the Sierra Nevada. Stark's field experiments indicate that although germination actually occurs over a very wide range of temperatures, namely, from 30° to 92°F (-1.6°C to 34°C), optimum temperatures are most common during the months of April, May, September, and October. Soil moisture conditions and seedling survival are generally better in the spring than during any other season. High summer temperatures and the resulting desiccation of the soil greatly reduce germination. Soil moisture content varies with soil texture and the amount and nature of the soil's organic matter. At field capacity, which is the maximum moisture that can be held against the pull of gravity, native Sierran soils, which are mostly sandy, probably store no more than 18-22% soil moisture by weight. This appears wholly adequate for the germination of sequoia seeds, and the high sand content insures good aeration in the soil, which is another requisite for successful germination. In flooded soils there is little oxygen available for the embryo's respiration and growth. Some seeding occurs along streams, but germination occurs only when floods deposit seeds high enough on the banks to escape the flooded soil conditions. Dry soils also reduce germination to the zero point, although the critical minimum level of moisture for this process is unknown. Experiments indicate that sequoia seeds will germinate in full sunlight and also in the dark, but that optimal germination occurs during the growing season when the light is approximately one-half the full strength of the sun. Stronger light is converted into excessive heat energy and thus dries the soil. Again, fire plays an important role in preparing the seedbed by reducing the amount of shade. Under field conditions, sequoia seeds require between 40 and 60 days to germinate. Whether natural chemical inhibitors affect the germination of sequoia seeds on the ground is not yet known. In preliminary studies at San Jose State University, germination tests subjected sequoia seeds to various solutions of chemicals normally found in soils and to solutions extracted from leaf litter (Morris 1967).
There were no significant differences from the controls, possibly indicating that, in the natural state, physical factors influence germination more than chemical factors. Closely allied is the pH, or degree of acidity or basicity, of the soil. Stark (1968) found that a slightly acid soil (pH 6-7) produced the highest germination percentage at a temperature of 68°F and concluded that pH was not a limiting factor in natural sequoia habitats. She found that strongly basic soils (pH 9) stunted seedling growth but did not retard germination; they did, however, alter the color of the foliage to an intense blue-green. Variations in soil texture are actually not very important in the sequoia's native Sierra Nevada where sand percentages are high and clay percentages characteristically low, a combination admirably suited to successful germination. Beetham's (1962) experiments indicate that poor germination in clay, limestone, and peat soils is related to insufficient moisture. Once established, seedlings transplanted to many other soil conditions grow well throughout the world. Sequoias have never re-seeded themselves naturally on these other soils, indicating this tree's special needs for successful reproduction. We have already mentioned that seeds on the surface generally do not germinate because insufficient moisture is transmitted to the embryo. Burial of the seed is important, but the seed must not be buried too deeply. While seeds placed deeper than 1 inch may germinate, the developing shoot will seldom reach the surface and survive. The optimum depth, which seeds rarely exceed in normal circumstances, is about 0.25 inch (Beetham 1962). Seeds will often become wedged in a small crack in the soil alongside a partially buried rock or piece of wood, which provides the necessary protection against radiation and proper soil moisture conditions. This is also an advantage in seedling survival, the next most delicate stage in the sequoia life cycle (Hartesveldt and Harvey 1967). In sequoia reproduction, seedling survival is much more critical than germination (Beetham 1962). Muir alluded to this in 1878, and numerous writers since have repeated it. Sequoia seeds can remain viable under a rather wide range of conditions, but once the seedlings commence their growth, they cannot survive outside a rigid set of environmental conditions. It is now suspected that the microenvironmental climate is the major restriction squeezing the species into the present isolated groves, and that it has limited the groves' expansion to relatively minor boundary extensions during the last century or so. Some believe that sequoias would doubtless flourish in many locations in the Sierra Nevada outside the existing groves if stock were only introduced. According to this hypothesis, extension of the range is limited by a subtle environmental barrier, one that was poorly understood until the recent outstanding work of Rundel (1969). Because of the small amount of food stored in sequoia seeds, the newly germinated seedlings must rapidly become self-sufficient. Fry and White (1930) state that the earliest stage of germination (extension of the radicle or primary root) takes place beneath the snow, and that the seed roots are as much as 1 or 2 inches long before the snow melts. This may affect the survival of the emerging cotyledons, or seed leaves.
As soon as the protective seed coat is shed from the new leafy shoot, a root system must be functional to supply the cotyledons with the necessities for photosynthetic activity, and because sequoias apparently produce few, if any, root hairs, root length becomes all the more essential. Considering the many hazards offsetting this species' great reproductive potential, we may surmise that survival of seedlings is tenuous. Muir (1878) records that not one seed in a million germinates, and that not one seedling in 10,000 attains maturity. These figures, widely repeated, may be figurative rather than literal, but the frailty of the species during this stage is no myth. The newly germinated seedlings are, like those of all plants, tender and soft because of their as yet small deposits of cellulose and lignin, and are susceptible to a variety of decimating factors even before the unfurling crown is pushed above the soil surface. Some seedlings seem more predisposed to survival than others. For those that die, the cause of death is not always obvious, especially if the roots are affected. Furthermore, the decimating factors may be subtly and confusingly interrelated. Seedlings derived from the larger seeds may have the advantage of a larger and more rapidly growing root system from the very beginning and, therefore, a better chance for survival. However, our studies indicate that even the heartiest of seedlings may die in areas where others of apparent lesser vigor will survive. Probably the most extensive cause of sequoia seedling death is soil desiccation downward to the full depth of the root system. Harvey suggests that other possible causes are damping-off, intolerance to shade, flooding, heat canker, root fungi, soil ectocrines, burial by leaf- and branch-fall, insect depredations, gray mould blight, being eaten by birds and mammals, trampling, and various other disturbances by animal life, including man. Many of these are probably more serious threats where there is deep duff and litter. Fire, on the other hand, in removing the duff and litter, reduces some of the danger. Desiccation of the soil is generally more prevalent in disturbed open forests, e.g., following a fire. It is related to the length of time the sun strikes the mineral soil surface during the day, to air temperature, to the relative humidity of the air, and to some extent to the color of the soil surface. Fire has been the most common natural agent influencing the above factors and, in some cases, it also darkens the surface with its char. The increase in sunlight, so necessary to the survival of sequoia seedlings, is also the factor which may dry the soil to the permanent wilting point and bring about the seedlings' death. And while the litter layer is an insulation against soil moisture depletion, it undoubtedly harbors the damping-off fungi and other pathogens (Martin 1957-58). Obviously, sequoia reproductive success lies somewhere between the two extremes. But nature is generally not much given to continuous provision of optimal conditions for survival of any one species, and where sequoias fail, other plants with adaptations more suitable for the immediate set of conditions may succeed. Beetham (1962) has amply demonstrated that seedlings grow best in full sunlight where the soil is protected by at least a light layer of leaf litter. Where litter was absent, she found soil temperatures 25-35°F hotter and an increased death rate of seedlings from heat canker. Hartesveldt et al.
(1967) recorded surface temperatures in July up to 157°F on char-darkened soil in the Redwood Mountain Grove at 1:45 p.m. by means of a tiny thermistor, which records the temperature of a literally paper-thin layer at the soil surface. Since the threshold temperature for the death of most protoplasm is less than 150°F, it is not surprising that several seedlings were found dead with the blanched and sunken symptoms of heat canker on the stem just above the mineral soil level. Despite the very high temperature of the soil surface in this open situation, many seedlings in the same vicinity were not killed by the heat, a testimony to the effectiveness of litter in moderating temperature extremes. By far the greatest mortality occurs where soils dry out to below the seedlings' average rooting depth, which is rather common during prolonged periods of high, though not necessarily excessive, temperatures and low relative humidity. In experimental manipulations, Hartesveldt et al. (1967) found that more than 90% of the seedling mortality occurred under these conditions. Death began within a few weeks after germination, continuing at a much reduced rate in the following years. At the end of 3 years, surviving seedlings will usually have root systems that penetrate the soil to beneath the level of midsummer dryness, or about 14 inches. Beetham (1962) reported that optimal growth of sequoia seedlings occurred in soils with moisture contents at or near field capacity, or about 20% for most Sierran soils. She further determined that the soil moisture content at the time of seedling death by desiccation is about 5.2%. Seedling survival was found to be critical in the low range of soil moisture in the studies of Hartesveldt et al. (1967), in which desiccation was the most frequent form of seedling death. However, those same studies revealed that relatively slight variations in soil moisture in the lower range may make the difference between survival and death. It was found that seedlings growing next to partially buried rocks, limbs, etc., had a definite advantage, being taller, more branched, and better able to survive than their nearby counterparts whose stems were surrounded only by soil. This may be a response to better soil moisture conditions beneath the partially buried objects, perhaps due to the deflection of drying winds and the interception of direct solar radiation at the soil surface. Together, these factors reduce the rate of evaporative moisture loss from that portion of the soil surface where the objects occur. We should point out that, although Sierran soils are notably poor by agricultural standards, limitations to seedling growth in the field apparently do not relate to low nutritional levels (Beetham 1962). One of the more difficult forms of seedling death to assess is starvation from reduced light brought about by canopy shading. Although Baker (1949) lists sequoia as having intermediate tolerance to shade, Beetham (1962) indicates clearly that it is very sensitive to low light intensity. This is supported by the fact that sequoia seedlings are seldom found in areas densely populated with taller vegetation. A striking example of death influenced by shading was found by Hartesveldt (1963) in Yosemite's Mariposa Grove. At the end of a 25-year period, of the several thousand seedlings established there and recorded on a park map dated 1934, only 13.8% remained alive in a 1959 resurvey of the same area.
Hundreds of dead saplings, twisted and contorted in dense shade, demonstrated the effect of the heavy overtopping crown canopy composed largely of white fir (Fig. 10). Soil moisture appeared adequate in the areas in which the young sequoias had died. Undoubtedly, excessive shading may be coupled with another agent such as root fungi or poor soil-moisture conditions. In marginal circumstances, young specimens barely maintain life and grow so sparingly that measurement of yearly increment is difficult. In fact, Metcalf (1948) records a 25-year-old specimen as having a stem just 0.50 inch in diameter. Excessive moisture is a factor which limits gas exchange at the root surface because it usurps the pore space normally occupied by gases. Low soil oxygen content reduces root respiration, which in turn reduces water intake and photosynthesis, eventually to the point of cessation. This is probably a common cause of seedling death along the edges of meadows where seeds of sequoias are often scattered abundantly, but where seedlings seldom survive. Although the situation has not been thoroughly studied, the general lack of trees within the wetter meadows reflects the sensitivity of most tree species to wet soil. Moreover, dead sequoia snags are occasionally found in wet meadows that formed after the trees had become established in what was then a more mesic situation. Large sequoias often fall across drainageways, forming a dam which impedes water drainage and creates wet meadows. Meinecke (1927) recorded the death of large sequoia trees by this means in the Giant Forest. Where sequoias do become established in moist areas, the degree of moisture definitely affects their growth, and even large specimens have died where the soil moisture has become excessive. Soil dampness promotes the incidence of damping-off, a disease which fells the seedlings by attacking the stem at the soil level. Any one of several soil fungi can be the causative agent in this disease. Damping-off has long existed in forest nurseries among virtually all types of trees, and the giant sequoia is no exception. Another disease affecting juvenile sequoias is a root rot caused by the fungus Sclerotium bataticola Taub., found in damp, dark areas. Gray mould blight was recorded by Martin (1957-58) on sequoia seedlings in Germany, and we later observed it in the Giant Forest and in the Redwood Mountain Grove. The affected leaves appear "cemented" together by the fungus, which completely destroys their photosynthetic capabilities. Long burial under wet snow in winter and spring is the probable cause. For overall seedling survival, however, it is only a minor problem. The depredations of insects and vertebrates on sequoia seedlings have been exaggerated because of the statements by Fry and White (1930), which we have not been able to verify. Their observations in the park nursery at Ash Mountain, outside the tree's natural habitat, possibly led to the contention that no other conifer is attacked in its infancy by so many destructive agencies. They further state: Insect depredations on seedlings do occur, but they appear minor and are almost wholly limited to first-year seedlings' tender tissues. Damage may also be accentuated when sequoia seedlings predominate among foods available, as they may after a fire. In weekly post-manipulation examinations of hundreds of dead seedlings, from 3.4 to 17.5% per year showed signs of insect feeding.
The tender epidermis on the stem was commonly eaten away from the crown downward to the ground level, or else the stem was girdled. In a report of ours (Hartesveldt et al. 1968), Stecker identified this damage as the work of a camel cricket (Pristocauthophilus pacificus Thomas), a nocturnal feeder, in its first and second instar stages. Occasionally, the larvae of geometrid moths (Sabulodes caberata Gn. and Pero behrensarius Pack.) were found feeding upon the seedlings' leaves, but the few resultant deaths would qualify them as relatively unimportant. There is some indication that tops of new seedlings are eaten off by birds or mammals, but the number is certainly insignificant, as is the number of those uprooted by rodents or by deer hooves. Compared to desiccation losses, animal losses are negligible. Sequoia seedlings probably seldom die by winter freezing within their native range. Temperatures here rarely reach 0°F and, at times when such lows are most likely, seedlings are generally well insulated by snow and are not much affected. Beetham (1962) lost seedlings planted above 9600 ft in the Sierra where low winter temperatures prevail. Low temperatures are more a problem in nurseries (Wulff et al. 1911) and where seedlings have been introduced into cold climates in other parts of the world. The density and rates of growth of young sequoias vary considerably with the circumstances of seed distribution and seedling survival. Where fire has created optimal conditions for seedling establishment, there may be as many as 25-50 seedlings, and reportedly even more, per square foot. Such densities, however, are usually limited to relatively small areas because of an irregular seed dispersal pattern, spotty soil receptivity, or both. In areas where fires have heated the soil surface, the high temperatures seem to favor both the seedlings' survival and increased rates of growth. DeBano and Krammes (1966) have discussed water-repellent soils and their relationship to fire; more specifically, Donaghey (1969), experimenting with soils from sequoia groves, has demonstrated that incineration of soils increases both their wettability and soil moisture retention. This helps explain sequoia seedlings' survival and growth in burned areas. On recent experimental and prescription burns in Kings Canyon National Park, seedlings have been especially abundant in soils severely burned by the combustion of dry downed logs, and their resulting elongated pattern conforms to the position of the log (Fig. 36). This high seedling survival under burned-out logs possibly explains the remarkably straight rows of even some mature sequoia trees. Competition, of course, increases with density, and mortality thins out the trees continuously as their crowns expand and compete for light. Being very intolerant of shade, the young trees without sufficient light are killed, or at least the lower portions of their foliage die away. The juvenile crown is commonly a narrow conical spire where site conditions are optimal and where growth is therefore rapid (Fig. 11). When sequoias are virtually the sole species established in a given area, the slender crowns are well-fitted for maintaining dense stands for many years since the conical form assures that the sun will reach part of each crown. The 10-year-olds portrayed in Fig. 37 at Cherry Gap, Sequoia National Forest, are from 6 to 15 ft tall, and such stands are much too dense for a man to walk through with comfort.
Although the supply of subsurface drainage water is highly dependable, this density obviously cannot long continue because the light factor will soon become critical for the smaller trees. Those which now dominate the others in height will almost assuredly be the eventual survivors. Of the 126 saplings in this small grouping, probably no more than four or five will survive to mature size. In areas of high site quality, vertical growth of young sequoias is commonly as much as 1-2 ft per year, and rapid growth continues until the trees are 100 or more feet high. The trunk grows rapidly in thickness at the same time, the growth layers of vigorous specimens being 0.50 inch or more thick. Both radial growth and growth in height reflect photosynthetic success. Radial growth decreases slowly with size and age although total additions of wood, spreading over a greater circumference, may remain essentially the same. At this stage, cone production is usually well advanced and the vigorous spire-topped trees may be heavily laden with cones of several years' production. As the spreading bases of the crowns begin to compete for light, not only does the foliage begin to die back, but within several years' time the limbs are self-pruned through shading and give the tree a more mature appearance. In its native habitat, topographic and soil conditions rarely restrict nearby competing vegetation to such an extent that a sequoia maintains its branches down to the ground level. Specimen trees in municipal parks and other open situations show this characteristic much more commonly than do those in their native range (Fig. 30). As vertical growth slows, elongation of the lateral limbs continues at an increased proportional rate so that the crown's sharp spire gradually gives way to the rounded form of the mature tree (Fig. 16). If the tree is more than 200 ft high, it has, in all likelihood, solved its light problem and, barring the catastrophes of intense fire, lightning, snow-loading on the crown, and blow-down, now stands an excellent chance of surviving to old age. No other tree in its native Sierra Nevada overtops the crown of the taller sequoias. Thus, the upper foliage, which is still intolerant of shade, has a dependable, uninterrupted supply of sunlight for its photosynthetic needs. By this stage in the sequoian life cycle, the total number of large, mature specimens has been considerably reduced by fires and other decimating factors. In the Redwood Mountain Grove, which we studied, the mature trees varied from 5 to 8 per acre. Elsewhere, in localized situations such as the Senate Group in the Giant Forest and the Sugar Bowl group in the Redwood Mountain Grove, the density may approximate 15-20 per acre. Nowhere has the sequoia ever been found as a pure stand. The characteristic form of the older trees is one of irregular, craggy, and sometimes grotesque crowns which often reveal the dead upper part of the trunk, or "snag-top" (Fig. 38). Although lightning has been suggested as the reason for these dead tops, it now seems more probable that they result from basal fire scars, which diminish the translocation of water upward from the roots. The few large, old specimens still maintaining symmetrically rounded crowns invariably show little evidence of fire, usually only a superficial char on the bark. A study of 100 snag-top sequoias in the Giant Forest revealed that each of them had one or more basal fire scars burned through the bark into the wood tissue (Hartesveldt 1965).
The severance of the effective connections between the root system and the crown may well have deprived the crown of a portion of its requisite supply of moisture and nutrients (Rundel 1971), and it seems logical that the portion with the longest supply route would be the first to suffer. Scars at the bases of snag-tops cover from 15 to 96% of the living specimens' total circumference, and there is a general correlation between the size of the scar and the size of the dead snag at the top. It is very probably impossible to find a sequoia in the mature or later stages that does not bear the black char marks of fire. Various authors have strongly suggested that fire has been a normal factor in almost all Sierran environments since forests first existed there. The degree to which a tree is burned or spared probably depends on its topographic location with respect to other vegetation, its relationship to fuel accumulations, and other factors which would similarly influence the intensity of the often-repeated fires. The burn scars of individual trees are probably due more often to the cumulative effects of many fires, not all necessarily intense, than to single fires. Prior to the Western innovation of fire prevention and suppression, both the incidence and size of forest fires were greater than now. The intensity of fires then, however, was probably less. With repetitions of fires at rather regular intervals, fuel could not accumulate to proportions that would support today's dreaded crown fire holocausts. Each fire cleaned up the previous accumulations of fuel on the ground, and although other trees' crowns may have been badly damaged, sequoias, because of their size and fire-resistant bark, had a substantial survival advantage. Individual standing dead sequoia snags are common and show clear evidence of having been burned to death. But such snags are seldom numerous in any given locality, and nowhere cover a large area, suggesting that intense crown fires were less frequent in the past. In the snag-top study referred to earlier, fire scars were found in just about any conceivable location on the trunks observed. They varied from surface scorching, to deep "cat-face" scars confined to the base, to longitudinal scars often running the full length of the trunk, and to hollowed "telescope" trees. Of the basal scars, analysis showed that approximately 90% were decidedly on the trunk's up-hill side. In the field, the reason becomes quite obvious. Cones, limbs, and other combustible debris falling to the ground are generally carried downhill by the pull of gravity and come to rest against the barriers formed by the trees' trunks. Thus, even light ground fires smolder longer in these collections of fuel and sear slowly through the bark and into the wood tissues. The resulting scars on the up-hill side become somewhat concave just above the ground level and the cavity presents a greater storage space for new accumulations of falling fuels. Therefore, not only do later fires have a greater fuel supply, but the concave surfaces also tend to accentuate the effects of subsequent fires as they reflect the heat like an oven. And the larger the scar, the more pronounced the effects of each successive fire. Despite their cragginess and "imperfections" in old age, the trees remain mystically beautiful and enchanting. They are by now the largest members of their community and they dominate the tree-top horizon in such a manner that the species can be identified easily by an observer many miles away.
These are the trees that prompted the establishment of the parks that now preserve them, and they are the trees of greatest interest to the visiting public. Contrary to popular belief, these old veterans have not stopped growing, nor have they necessarily even slowed their rate of growth. Green foliage proclaims that photosynthesis is still ongoing, and this, in turn, implies continuing yearly increments to the trunk diameter, no matter how large the specimen. Although the annual rings are narrower, the circumference over which they are laid down becomes continually greater, so that the amount of wood added each year remains more or less the same as when the tree was younger. The ring width at this stage may vary from 0.50 mm per year to 1 mm for trees in well-watered sites. This means that during each 24 years the world's two largest trees, the General Sherman and the General Grant, will add about 24 mm (1 inch) of radial growth, or 48 mm (2 inches) of diameter. Even the fragmentary trunk of the Black Chamber in the Giant Forest is adding 1 mm of radial growth yearly, despite the destruction by fire of 96% of its basal circumference and of most of the trunk (Fig. 39). Here is the quintessence of the tenacity of life for which this species is so well known. Unlike other plants and animals, the sequoia does not seem to lose its reproductive ability with old age. The very largest specimens (not necessarily the oldest) have crowns that are loaded with cones containing viable seeds. Just as it is difficult to determine the age of living trees without considerable effort, it is likewise difficult to assess any slowing of cone production due to aging. But, over our years of observation, we have found no indication of such slowing. The five largest specimens of sequoias in the world all bear moderate to heavy cone-loads and germinable seeds. Likewise, the remnant crown of that hopelessly "crippled" Black Chamber is loaded with green cones. That lightning affects the tops of tall sequoia trees is well documented by observers fortunate enough, from our point of view at least, to have been near one of these natural lightning rods when it was struck. Fry and White (1930) describe a dramatic horizontal bolt of lightning striking a trunk with a 16-ft diameter in the Garfield Grove. Surely it was the sight of a lifetime, the bolt knocking out a 20-ft segment from the middle of the trunk and dropping the crown, which, oddly, lodged in the split of the trunk below. They further report that the trunk sent out new growth and may be presumed alive today. The Giant Forest seems to have a higher incidence of lightning strikes on sequoias than most of the other groves, perhaps because it is so close to the edge of the great "plateau" on which it is located. In several trees broken off near their middles, the lower branches have taken over the function of the crown (Fig. 40). Recovery for these stricken trees appears reasonably well assured. Former National Park Service employee Ralph Anderson recalls vividly one of his earlier days in the service when he was stationed in Yosemite's Mariposa Grove. During a summer thunderstorm, a bolt of lightning struck the dead top of the Grizzly Giant, dislodging several hundred pounds of wood and dropping them literally at his feet.
Although this incident may have strengthened the assumption that snag-tops are created by this means, it must be recognized that where bolts of lightning have hit the living parts of trees, the entire upper portion was often broken away, perhaps because of the great heat affecting the moisture within the stem. The Grizzly Giant's dry top, dead many years or even centuries, was perhaps a poor conductor of electricity, resulting in a relatively small portion of wood being torn loose. Fry and White (1930), perhaps the longest continuous observers of the giant sequoia, record only two specimens actually killed by lightning. Differing from most other tree species, the sequoia seems to have no known age of senescence. There is no record of mature sequoias ever having died from disease or insect depredations, afflictions common to other species of trees in their old age. If the sequoia could be kept from falling over or being burned to death, we can imagine that our descendants, a millennium hence, may see a General Sherman Tree approaching 40 ft dbh. With advancing old age, the thick, resin-free bark of this tree is doubtless a definite asset to its longevity and survival. Fires that kill or severely damage other species and younger sequoias may be of far less consequence to the relict larger sequoias which have survived many successional sequences of these associated species. Occasionally, however, a fire has been severe enough to kill all the foliage on more than one specimen in a given locality. Stricken, charred trunks remain witness of these fires for many centuries before they disintegrate. Seldom are more than two or three such snags found in close proximity, which suggests that crown fires were less prevalent before the advent of Western Civilization. Sometimes lightning strikes will set fire to the dead wood of a snag-top specimen. The wood, which may have been drying for a thousand warm Sierran summers, holds the fire very well, contrary to popular opinion and despite the limbs' size. We watched such a fire in the top of a large sequoia in Redwood Canyon in the summer of 1966. Charles Castro of the Sequoia National Park Forestry Division had climbed into its crown via a nearby fir tree to spray water onto the burning portion, unreachable from the ground. From our vantage point, and with Castro as a reference, the burning limb seemed about 3 ft in diameter, hardly a good prospect for a continued blaze. Just such a fire in the Giant Forest Lodge cabin area burned away the entire crown and upper part of another large sequoia in 1959. Again, the fire was burning well above the reach of water streams, and there were no adjacent trees suitable for access into its crown even if Castro had already developed his unique fire-fighting method. The tree, which burned for 2 weeks and disconcertingly dropped firebrands amongst the cabins, is now but a dead snag to about one-third its original height. Old sequoias most commonly die by toppling. Because of the wood's brittleness, this virtually assures death, although an occasional specimen has retained green leaves for several years after falling, which perhaps indicates that the fall did not sever all its vascular connections. One unique tree in the Atwell Grove that fell many years ago is still growing vigorously, with seven lateral branches giving the appearance of a candelabrum. Fire is probably the greatest contributor to death by falling because it creates gaping fire scars at ground level which weaken the tree's mechanical support. 
The extreme weight of the big trees, coupled with their shallow roots, increases the effects of this weakening, especially in leaning trees. Other causative factors are water-softened soils, undercutting by streams, snow-load on the crown, uneven reloading with moisture, wind, heart-rot, nearby falling trees, perhaps carpenter ants cutting galleries in the bark and dead wood near the trunks' bases, and, of course, various combinations of the above. We mentioned the finding that 90% of the fire scars in trees growing on slopes were on their up-slope sides. It is not surprising, then, that 90% of all fallen trees seen during that survey were found to have fallen up-hill, toward the side with the least support. Likewise, sequoias on the edge of wet meadows tend strongly to fall into the meadow because of the greater weakness associated with the softened, saturated soil, and, in addition, because of the heavier foliage on the more illuminated side. Trees in these soft substrates also may come to lean because of wind pressure or snow-loading on the crown. If they survive, their displaced center of gravity still subjects them continually to ever greater strains. Yet many of them, such as the immense Grizzly Giant leaning about 17° from the vertical, marvelously remain standing for centuries. The large, tenacious roots obviously provide sufficient anchorage to prevent their fall. The lean of the famous Tunnel Tree in Yosemite National Park was surely part of its undoing, to say nothing of the tunnel cut through it in 1881, which weakened the tree much as a fire scar would. Unfortunately, this tree was chosen because of its large fire scar, which lessened the work required to carve the tunnel. The Washburn Brothers were paid only $75 for the task (Russell 1957). Less wood had to be removed on the up-hill side, and the tree had a decided lean in that same direction. Examination of its remains revealed a failure of the wood on the leaning side, which was also the one with the least wood support. The tree probably collapsed between February and May 1969, when, very fortunately, park visitors were not lined up bumper to bumper awaiting their turn to park in the tunnel and take the traditional photograph found in almost all geography books for nearly a century (Fig. 41). Because of the unusually heavy snow during that winter, the crown may well have borne 1-2 tons of additional weight which the wood could not support. No evidence supports the idea that its collapse was due to excessive trampling by people, the possible effects of which had earlier caused considerable concern (Hartesveldt 1963). Of course, the cutting of the tunnel is looked upon today as a sort of vandalism, and the tree very likely would still be standing had the tunnel never been cut. Curiously, although averse to vandalism of our irreplaceable natural objects, many people held the Tunnel Tree in near reverence. In both Yosemite and Sequoia National Parks, visitors have most often asked, "Where is the tree I can drive through?" Despite this question's humorous association for rangers in Sequoia National Park, the announcement of the tree's falling was greeted with great sorrow. Its fall marked the end of an era in Yosemite that many will remember fondly. Letters have often inquired when and where the next "tunnel tree" would be cut. Of course, it will not be, at least not in the national parks and forests or other public lands, because laws hold federal agencies responsible for the preservation of these near-immortal trees.
Not far north of the Grizzly Giant in Yosemite's Mariposa Grove is the tunneled California Tree, which in many older photographs bears the name "Wawona" over its portal. This was a useful ruse perpetrated by stage drivers in the spring of the year, when the upper part of the grove was still deep in snow and the real Wawona Tree was inaccessible. Delighted passengers failed to notice that the two trees were really quite different. To assure better preservation, this tree has long been accessible only on foot. One rather frequent form of toppling difficult to explain occurs during the warm summertime when the air is still. Records on individual trees are few, and so cannot tell us whether all or most of the fallen trees were leaning ones. If they were leaning, why had they not gone down at some earlier time? Just what finally gives gravity the advantage after many centuries and brings the trees down? The late Willis Wagener, formerly of the U.S. Forest and Range Experiment Station at Berkeley, proposed some initial hypotheses and submitted them to us a few weeks before his death. To follow up these hypotheses, projected studies in the Giant Forest will consider leaning trees, excessive loss of water by transpiration on warm, dry days, and the shock of recharge weights as more humid conditions return. We hope these studies will bring determinations of and predictions for "accident-prone" specimens, and prevent such tragedies as that of August 1969 in the Hazelwood Picnic Grounds, Sequoia National Park, when a woman was hit and killed by the bizarre falling of two trees on a windless day. The chapter on the characteristics of old age may never be finished, at least for many generations to come. Continual protection and surveillance, improved age-dating for living trees, and extensive studies can determine whether there really is an age of senescence or a time when sequoias become more susceptible to debilitating diseases and insect depredations. Man's interest does not necessarily end when a tree lies prostrate on the ground. To many, a downed tree appears even larger than when standing; witness the countless visitors scrambling over the remains in any sequoia park, and the photographs of cavalry troops, horse-drawn carriages, and automobiles standing atop the fallen logs for comparison. Long after death, these trees command admiration. For science, the giant sequoias' slow rate of decay has raised several perplexing and still unanswered questions. The heartwood is particularly resistant to fungal attack, and even the sapwood on some specimens is slow to decay. Many fallen trees appear today essentially as they did when photographed as much as a century or more ago. The heartwood's unusually high tannin content, discouraging both fungi and insects, was first advanced as an explanation during the 1800s, and the Forest Service later gave it credence (Sudworth 1908). The hypothesis is still generally accepted (Schubert 1962). Detailed chemical analyses of substances from the coast redwood indicate that other organic substances deposited in the heartwood may also act as decay retardants in that species (Anderson et al. 1968; Balogh and Anderson 1965, 1966). Similar studies have not been made on the giant sequoia but, considering the relationship of the two trees, we can reasonably assume that related repellents are likewise present in its wood. Perhaps the most interesting remaining question is how long the fallen trunks remain undecayed.
Muir (1878) postulated that the trunks would last about 10,000 years and that their charred remains should be found in areas outside the existing groves. Such remnants, if they occur, would be a clue to the earlier distribution pattern of the species, as would the trenches in the ground made by the great impact of the trunks in falling. At this writing, no such remains to substantiate Muir's hopes of nearly a century ago have been found. Perhaps the pits or trenches have been eliminated during the millennia that erosive forces have been at work, and the log remnants consumed by repeated fires. Contrary to some earlier opinions (Clark 1937; Schwarz 1904; Blick 1963), sequoia wood burns readily, and individual dry logs have been observed to burn completely within a week. Furthermore, if the range were once continuous, fires must have burned repeatedly and erased every last vestige of their presence. Some indication of the wood's durability is given by the carbon-dating of the three wood specimens mentioned on page 56. There is one species of tree known to be more ancient than the giant sequoia in its living state: the bristlecone pine. This five-needled pine ranges over six southwestern states, including southeastern California, although it is not found in the Sierra Nevada. Specimens growing in the White Mountain Range have been analyzed, and one specimen is recorded at 4900 years of age by ring-count determination (Fritts 1969). By cross-dating tree rings from these ancient pines and carefully accounting for missing or duplicate rings, an absolute chronology of nearly 8200 years has been assembled. This chronology has been useful in correcting carbon-14 determinations, since the rate at which carbon-14 is produced in the atmosphere has apparently varied within past ages (Renfrew 1971).
http://www.nps.gov/history/history/online_books/science/hartesveldt/chap5.htm
RELATIVE LENGTH & RELATIVE MASS

When something is moving past extremely fast, its appearance changes: a full analysis shows the object as rotated and foreshortened. The foreshortening is easily shown and is called the RELATIVISTIC LENGTH, as opposed to the proper length. Let us go back to our alien, who is now using light and a stopwatch to measure the speed v of his ship relative to Earth using signals from Earth. He knows the proper length l0 of his ship, but he will be using a relative (dilated) time interval ΔT, as he will not be judging the passage of his own ship; he will be relying on flares from us. WE, on Earth, will measure the same speed v as the spaceship passes and signal with flares when we judge the front and the rear to be exactly in front of us. We will get a proper time interval ΔT0, as we are judging the event, but any length of the ship arising will be relativistic!

v = l / ΔT0 = l0 / ΔT, where v = speed of ship relative to Earth and l = relativistic length.

Now use the earlier result relating relativistic time to proper time, ΔT = ΔT0 / (1 - v^2/c^2)^1/2, and we get

l = l0 (ΔT0 / ΔT) = l0 (1 - v^2/c^2)^1/2, i.e. RELATIVISTIC LENGTH < PROPER LENGTH.

Special Relativity assumes both Conservation of Momentum and Conservation of Energy. These are much more fundamental assumptions than Newton's Laws, which can be derived from these conservation principles at low speeds. If a rocket has a motor going flat out for a very long time, relative to Earth, it will acquire a lot of kinetic energy, but not an infinite kinetic energy, as the rocket will NEVER exceed the speed of light relative to Earth. (Light will always pass it at the speed of light!) If we now collide this very fast rocket with something to measure its momentum and energy - and assume these laws are conserved - we find the MASS is much greater than its proper or "rest" mass!

Suppose our alien chucks a ball from his fast-moving ship which happens to be identical to a ball (mass m0) we throw. We watch the subsequent collision which, by pure chance, takes place after both balls move a distance d at right angles (transverse) to the ship's motion as measured by us. The experiment is such that we believe the two transverse momenta cancel - i.e. the balls' momenta are exactly equal and opposite. We see a PROPER measurement of our ball's momentum, p0 = m0 d / ΔT0, and a RELATIVISTIC measurement of the alien ball's transverse momentum, p = m d / ΔT, where m is the relativistic mass. But the collision has it that the TOTAL momentum is zero in this direction, so m0 d / ΔT0 = m d / ΔT. Immediately m0 / ΔT0 = m / ΔT, so

m = m0 (ΔT / ΔT0) = m0 / (1 - v^2/c^2)^1/2.

As the relative speed increases, the mass appears to increase: the RELATIVISTIC MASS increases. The proper mass is also called the REST MASS. This is the mass you measure on an ordinary balance. After all, measuring mass IS an experiment you have carried out in your frame of reference - your lab!

Example: An electron has a rest mass of 9.11 x 10^-31 kg. What is its relativistic mass when travelling at 0.9c in an old TV tube?

Soln. m = m0 / (1 - v^2/c^2)^1/2 = 9.11 x 10^-31 / (1 - 0.81)^1/2 = 2.1 x 10^-30 kg

(In actual fact, most old TV tubes operate at about 28 kV, giving a speed closer to 0.33c - what will the relativistic mass be in that situation?) When the electron collides with the screen, it has this mass for the purposes of calculating the collision.
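If you like checking these results with a short program, here is a minimal sketch in Haskell. The helper names are ours, not part of these notes, and speeds are entered as fractions of c.

-- A minimal sketch (hypothetical helper names); beta is the speed as a fraction of c.
gamma :: Double -> Double
gamma beta = 1 / sqrt (1 - beta * beta)        -- the factor relating proper and relative time

contractedLength :: Double -> Double -> Double
contractedLength l0 beta = l0 / gamma beta     -- relativistic length < proper length

relativisticMass :: Double -> Double -> Double
relativisticMass m0 beta = m0 * gamma beta     -- relativistic mass > rest mass

-- ghci> relativisticMass 9.11e-31 0.9   gives about 2.09e-30 kg, matching the TV-tube example above
-- ghci> contractedLength 1.0 0.99       gives about 0.14 m, as in question 1 a) below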
If nothing can go infinitely fast but is limited always to be below the speed of light, what happens when we add energy to our moving object? The relativistic mass increases - the added energy becomes relativistic MASS-ENERGY. Indeed, the rest mass is also energy, the rest energy. Mass and energy become interchangeable in the most famous of Einstein's equations: E = mc^2, where E is the TOTAL energy of the particle and m is the relativistic mass. Kinetic energy is now the difference between the total energy and the rest mass energy: Ek = mc^2 - m0c^2.

The implications are profound. In ALL chemical reactions in which energy is released or absorbed, MASS CHANGES. This is not what chemists would have you believe. Suppose a power station burning coal produces 200 MW. Then 200 MJ a second corresponds to 200 x 10^6 / (3 x 10^8)^2 kg a second, which over a day amounts to 1.92 x 10^-4 kg. If we stored all the products, measured their mass very precisely and compared it with an equally precise measurement of the reactants, we would find a deficit of 1.92 x 10^-4 kg each day. It is tiny - chemists quite rightly ignore it - but it is as real as in nuclear reactions. A 200 MW nuclear reactor will show EXACTLY the same total mass loss each day.

Light has mass-energy! Photons have energy given by E = hf, Planck's equation. Compton equated this with E = mc^2 and gave photons a momentum of p = mc = hf / c = h / λ, so a "mass" is assigned: m = hf / c^2. This makes light susceptible to gravity, the beginning of General Relativity. eg: A photon of wavelength 470 nm will have a momentum of 1.41 x 10^-27 kg m s^-1 and a mass-energy of 4.7 x 10^-36 kg. Obviously light has no rest mass! (Note that momentum for light can be described through Maxwell's Equations as well. This value is that given by a very large number of photons.)

This takes us right back to the beginning - how do the aliens passing each other at 0.9c (according to us) see each other? They see us pass at 0.9c - no problems. But what if I am in one of the alien spaceships? The formula is quite messy. For one direction ONLY - no fancy 2D stuff of the opening page (if we work in 2D, it gets worse: you need a second, different set of equations for the y axis) - the relative velocity is

v(B rel A) = (vB - vA) / (1 - vA vB / c^2).

Notice the extra term in the denominator, which guarantees the relative speed will be less than the speed of light. For low speeds this term is clearly close to zero. Substituting 0.9c for each, AND noting that they move in opposite directions, gives

v(B rel A) = [0.9c - (-0.9c)] / [1 - (-0.9c)(0.9c)/c^2] = 1.8c / 1.81 = 0.994c, NOT 1.8c.

Advanced animations showing relativistic effects created at Australian National University.
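The velocity addition rule is easy to check numerically. Another minimal Haskell sketch (again, the helper name is ours, and all speeds are expressed as fractions of c):

-- A minimal sketch (hypothetical helper name); u and v are fractions of c.
addVelocities :: Double -> Double -> Double
addVelocities u v = (u + v) / (1 + u * v)   -- the result always stays below 1, i.e. below c

-- ghci> addVelocities 0.9 0.9   gives 0.9944..., the 0.994c quoted above, NOT 1.8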
1. a) If an alien passes us at 0.99c and measures a metre ruler sitting on the ground next to us, what length will it get? (0.14 m) b) What time will a light pulse take to travel the length of the ruler according to us? (3.33 x 10^-9 s) c) What time will the light take to travel the length of the metre ruler according to the alien? (4.67 x 10^-10 s)

2. You are to travel at a constant velocity of 0.97c from Earth to Alpha Centauri, 4.26 light years away. a) What is the time to travel the distance from Earth's perspective? What is your time for the same trip? (4.39 y, 1.07 y) b) In what sense do these results make interstellar travel highly unlikely? c) Recalculate the spaceship time in a) for a velocity of 0.999c. (0.19 y) If you are to travel to Alpha Centauri, you must accelerate and then decelerate; the calculations above do not take this into account, and the effects of acceleration on time are quite complex. There would be MANY other hazards in trying this.

3. What is the rest mass energy of a proton? (1.5 x 10^-10 J or 9.4 x 10^8 eV)

4. A proton is now accelerated to 0.99c relative to us. What is its relativistic mass? What is its mass energy in eV? Where might protons acquire such mass energy? (1.18 x 10^-26 kg, 6.66 GeV)

5. Cosmic rays are mixtures of charged particles and gamma rays. Extremely energetic gamma rays occasionally occur which have the energy of a thrown ball - even over 10^20 eV! Current physics cannot readily explain them. a) What is 10^20 eV in J? What speed would a 0.1 kg ball be travelling at to attain this energy? (16 J, 17.9 m s^-1) b) What photonic frequency does this correspond to? And what photonic mass-energy in kg does this correspond to? (2.7 x 10^34 Hz, 1.78 x 10^-16 kg) c) If a proton has a mass energy of 10^20 eV, with what fraction of the speed of light is it moving? (0.999999999999999999999956c)

6. CERN is currently building the Large Hadron Collider of energy 14 TeV, for protons. What fraction of the speed of light will these particles be moving at if each is at half this value? (The LHC takes particles travelling in opposite directions with equal energy to give the total energy.) (7 TeV protons are travelling at 0.999999991c)

7. For a PET scanner, an artificial radioactive source such as C-11 emits an antielectron. The antielectron annihilates an electron, and two gamma rays travelling in opposite directions are produced. From the mass of the electron (9.11 x 10^-31 kg) and the identical mass for the antielectron, calculate the energies of each of the gamma rays in MeV. What frequency does this correspond to? (0.511 MeV, 1.2 x 10^20 Hz)

8. What energy must a proton acquire to create an electron? (Other factors also enter, making this a naive calculation.) (0.511 MeV)

9. a) You, on Earth, observe Superman passing at 0.7c but Superwoman passes from the opposite direction at 0.8c. What speed does Superman see Superwoman pass at? (0.968c) b) What if Superwoman is trying to overtake Superman with the same speeds as above but in the same direction? What will Superman now see? (0.23c)

PS - my thanks to Craig Savage of Australian National University for his comments and corrections of some of these relativity pages where I got it wrong!
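For the proton questions above (6 especially), the working reduces to converting between energy, the factor relating total and rest energy, and the speed as a fraction of c. A rough Haskell sketch of that arithmetic (the helper names are ours; the two energies just need to share units):

-- A minimal sketch (hypothetical helper names); e.g. totalE and restE both in MeV.
gammaFromTotalE :: Double -> Double -> Double
gammaFromTotalE totalE restE = totalE / restE

betaFromGamma :: Double -> Double
betaFromGamma g = sqrt (1 - 1 / (g * g))

-- ghci> betaFromGamma (gammaFromTotalE 7.0e6 938.3)   -- a 7 TeV proton, as in question 6
-- gives about 0.999999991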
http://www.launc.tased.edu.au/online/sciences/physics/LENMAS.HTM
Density and Specific Gravity - Sample Problems

You can download the questions (Acrobat (PDF) 25kB Jul24 09) if you would like to work them on a separate sheet of paper.

Calculating densities of rocks and minerals

Problem 1: You have a rock with a volume of 15 cm^3 and a mass of 45 g. What is its density? Density is mass divided by volume, so the density is 45 g divided by 15 cm^3, which is 3.0 g/cm^3.

Problem 2: You have a different rock with a volume of 30 cm^3 and a mass of 60 g. What is its density? Density is mass divided by volume, so the density is 60 g divided by 30 cm^3, which is 2.0 g/cm^3.

Problem 3: In the above two examples, which rock is heavier? Which is lighter? The question is asking about heavier and lighter, which refers to mass or weight. Therefore, all you care about is the mass in grams, and so the 60 g rock in the second problem is heavier and the 45 g rock (in the first question) is lighter.

Problem 4: In the above two examples, which rock is more dense? Which is less dense? The question is asking about density, and that is the ratio of mass to volume. Therefore, the first rock is denser (density = 3.0) and the second rock is less dense even though it weighs more, because its density is only 2.0. This example shows why it is important to be careful not to use the words heavier/lighter when you mean more or less dense.

Problem 5: You decide you want to carry a boulder home from the beach. It is 30 centimeters on each side, and so has a volume of 27,000 cm^3. It is made of granite, which has a typical density of 2.8 g/cm^3. How much will this boulder weigh? In this case, you are asked for a mass, not the density. You will need to rearrange the density equation so that you get mass. By multiplying both sides by volume, mass will be left alone: mass = density x volume. Substituting in the values from the problem, mass = 2.8 g/cm^3 x 27,000 cm^3 = 75,600 grams. That is over 165 pounds!

Problem 6: Rocks are sometimes used along coasts to prevent erosion. If a rock needs to weigh 2,000 kilograms (about 2 tons) in order not to be shifted by waves, how big (what volume) does it need to be? You are using basalt, which has a typical density of 3,200 kg/m^3. In this problem you need a volume, so you will need to rearrange the density equation to get volume. By multiplying both sides by volume, we can get volume out of the denominator (the bottom). You can then divide both sides by density to get volume alone: volume = mass / density. By substituting in the values listed above, volume = 2,000 kg / 3,200 kg/m^3, so the volume will be 0.625 m^3. Note that the above problem shows that densities can be in units other than grams and cubic centimeters. To avoid the potential problems of different units, many geologists use specific gravity (SG), explored in problems 8 and 9, below.

Problem 7: A golden-colored cube is handed to you. The person wants you to buy it for $100, saying that it is a gold nugget. You pull out your old geology text and look up gold in the mineral table, and read that its density is 19.3 g/cm^3. You measure the cube and find that it is 2 cm on each side, and weighs 40 g. What is its density? Is it gold? Should you buy it? To determine the density you need the volume and the mass, since density = mass / volume. You know the mass (40 g), but the volume is not given. To find the volume, use the formula for the volume of a box: volume = length x width x height. The volume of the cube is 2 cm x 2 cm x 2 cm = 8 cm^3. The density then is the mass divided by the volume: density = 40 g / 8 cm^3 = 5.0 g/cm^3. Thus the cube is NOT gold, since its density (5.0 g/cm^3) is not the same as that of gold (19.3 g/cm^3). You tell the seller to take a hike. You might even notice that the density of pyrite (a.k.a. fool's gold) is 5.0 g/cm^3. Luckily you are no fool and know about density!

Calculating Specific Gravity of Rocks and Minerals

Problem 8: You have a sample of granite with density 2.8 g/cm^3. The density of water is 1.0 g/cm^3. What is the specific gravity of your granite? Specific gravity is the density of the substance divided by the density of water, so SG = 2.8 g/cm^3 / 1.0 g/cm^3 = 2.8. Note that the units cancel, so this answer has no units. We say "the number is unitless."

Problem 9: You have a sample of granite with density 174.8 lbs/ft^3. The density of water is 62.4 lbs/ft^3. What is the specific gravity of the granite now? Again, the specific gravity is the density of the substance divided by the density of water, so SG = 174.8 lbs/ft^3 / 62.4 lbs/ft^3 = 2.8. This shows that the specific gravity does not change when measurements are made in different units, so long as the density of the object and the density of water are in the same units.
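All of these problems use the same formula rearranged three ways. If you want to check them with a few lines of code, here is a minimal sketch in Haskell (the helper names are ours; the only assumption is that mass, volume, and density are kept in consistent units):

-- A minimal sketch (hypothetical helper names); keep the units consistent.
density :: Double -> Double -> Double
density mass volume = mass / volume            -- problem 1: density 45 15 = 3.0 g/cm^3

massFrom :: Double -> Double -> Double
massFrom rho volume = rho * volume             -- problem 5: massFrom 2.8 27000 = 75600 g

volumeFrom :: Double -> Double -> Double
volumeFrom mass rho = mass / rho               -- problem 6: volumeFrom 2000 3200 = 0.625 m^3

specificGravity :: Double -> Double -> Double
specificGravity rho rhoWater = rho / rhoWater  -- problem 8: specificGravity 2.8 1.0 = 2.8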
http://serc.carleton.edu/mathyouneed/density/densitysp.html
Massachusetts Math Standards - Grades 7-8

MathScore aligns to the Massachusetts Math Standards for Grades 7-8. The standards appear below along with the MathScore topics that match. If you click on a topic name, you will see sample problems at varying degrees of difficulty that MathScore generated. When students use our program, the difficulty of the problems will automatically adapt based on individual performance, resulting in not only true differentiated instruction, but a challenging game-like experience.

Number Sense and Operations
8.N.1 Compare, order, estimate, and translate among integers, fractions and mixed numbers (i.e., rational numbers), decimals, and percents. (Compare Mixed Values 2, Number Line, Fractions to Decimals, Decimals To Fractions, Compare Mixed Values, Positive Number Line, Percentages)
8.N.2 Define, compare, order, and apply frequently used irrational numbers, such as √2 and π.
8.N.3 Use ratios and proportions in the solution of problems, in particular, problems involving unit rates, scale factors, and rate of change. (Unit Cost, Area And Volume Proportions, Proportions 2, Distance, Rate, and Time)
8.N.4 Represent numbers in scientific notation, and use them in calculations and problem situations. (Scientific Notation 2, Scientific Notation)
8.N.5 Apply number theory concepts, including prime factorization and relatively prime numbers, to the solution of problems. (Prime Factoring, Prime Factoring 2)
8.N.6 Demonstrate an understanding of absolute value, e.g., |-3| = |3| = 3. (Absolute Value 1)
8.N.7 Apply the rules of powers and roots to the solution of problems. Extend the Order of Operations to include positive integer exponents and square roots. (Order Of Operations)
8.N.8 Demonstrate an understanding of the properties of arithmetic operations on rational numbers. Use the associative, commutative, and distributive properties; properties of the identity and inverse elements (e.g., -7 + 7 = 0; 3/4 x 4/3 = 1); and the notion of closure of a subset of the rational numbers under an operation (e.g., the set of odd integers is closed under multiplication but not under addition). (Distributive Property, Distributive Property 2)
8.N.9 Use the inverse relationships of addition and subtraction, multiplication and division, and squaring and finding square roots to simplify computations and solve problems, e.g. multiplying by 1/2 or 0.5 is the same as dividing by 2. (Single Variable Equations, Single Variable Equations 2, Single Variable Equations 3, Estimating Square Roots, Perfect Squares)
8.N.10 Estimate and compute with fractions (including simplification of fractions), integers, decimals, and percents (including those greater than 100 and less than 1). (Fraction Simplification, Fraction Addition, Fraction Subtraction, Fraction Multiplication, Fraction Division, Decimal Addition, Decimal Subtraction, Decimal Multiplication, Decimal Division, Percentage Change, Percent of Quantity, Integer Addition, Integer Subtraction, Positive Integer Subtraction, Integer Multiplication, Integer Division, Integer Equivalence)
8.N.11 Determine when an estimate rather than an exact answer is appropriate and apply in problem situations.
8.N.12 Select and use appropriate operations - addition, subtraction, multiplication, division, and positive integer exponents - to solve problems with rational numbers (including negatives). (Fraction Word Problems, Fraction Word Problems 2, Integers In Word Problems, Exponent Rules For Fractions)

Patterns, Relations, and Algebra
8.P.1 Extend, represent, analyze, and generalize a variety of patterns with tables, graphs, words, and, when possible, symbolic expressions. Include arithmetic and geometric progressions, e.g., compounding. (Patterns: Numbers, Function Tables, Function Tables 2)
8.P.2 Evaluate simple algebraic expressions for given variable values, e.g., 3a^2 - b for a = 3 and b = 7. (Variable Substitution, Variable Substitution 2)
8.P.3 Demonstrate an understanding of the identity (-x)(-y) = xy. Use this identity to simplify algebraic expressions, e.g., (-2)(-x+2) = 2x - 4. (Integer Multiplication, Simplifying Algebraic Expressions 2)
8.P.4 Create and use symbolic expressions and relate them to verbal, tabular, and graphical representations. (Phrases to Algebraic Expressions, Algebraic Sentences 2, Algebraic Sentences)
8.P.5 Identify the slope of a line as a measure of its steepness and as a constant rate of change from its table of values, equation, or graph. Apply the concept of slope to the solution of problems. (Determining Slope)
8.P.6 Identify the roles of variables within an equation, e.g., y = mx + b, expressing y as a function of x with parameters m and b. (Graphs to Linear Equations)
8.P.7 Set up and solve linear equations and inequalities with one or two variables, using algebraic methods, models, and/or graphs. (Linear Equations, Single Variable Equations, Single Variable Equations 2, Single Variable Equations 3, Single Variable Inequalities)
8.P.8 Explain and analyze - both quantitatively and qualitatively, using pictures, graphs, charts, or equations - how a change in one variable results in a change in another variable in functional relationships, e.g., C = πd, A = πr^2 (A as a function of r), A_rectangle = lw (A_rectangle as a function of l and w). (Function Tables, Function Tables 2)
8.P.9 Use linear equations to model and analyze problems involving proportional relationships. Use technology as appropriate. (Distance, Rate, and Time, Train Problems, Mixture Word Problems, Work Word Problems)
8.P.10 Use tables and graphs to represent and compare linear growth patterns. In particular, compare rates of change and x- and y-intercepts of different linear patterns.

Geometry
8.G.1 Analyze, apply, and explain the relationship between the number of sides and the sums of the interior and exterior angle measures of polygons.
8.G.2 Classify figures in terms of congruence and similarity, and apply these relationships to the solution of problems. (Congruent And Similar Triangles, Proportions 2)
8.G.3 Demonstrate an understanding of the relationships of angles formed by intersecting lines, including parallel lines cut by a transversal. (Identifying Angles, Angle Measurements 2)
8.G.4 Demonstrate an understanding of the Pythagorean theorem. Apply the theorem to the solution of problems. (Pythagorean Theorem)
8.G.5 Use a straightedge, compass, or other tools to formulate and test conjectures, and to draw geometric figures. (Requires outside materials)
8.G.6 Predict the results of transformations on unmarked or coordinate planes and draw the transformed figure, e.g., predict how tessellations transform under translations, reflections, and rotations.
8.G.7 Identify three-dimensional figures (e.g., prisms, pyramids) by their physical appearance, distinguishing attributes, and spatial relationships such as parallel faces.
8.G.8 Recognize and draw two-dimensional representations of three-dimensional objects, e.g., nets, projections, and perspective drawings.

Measurement
8.M.1 Select, convert (within the same system of measurement), and use appropriate units of measurement or scale. (Distance Conversion, Time Conversion, Volume Conversion, Weight Conversion, Area and Volume Conversions)
8.M.2 Given the formulas, convert from one system of measurement to another. Use technology as appropriate. (Distance Conversion, Time Conversion, Volume Conversion, Weight Conversion, Temperature Conversion)
8.M.3 Demonstrate an understanding of the concepts and apply formulas and procedures for determining measures, including those of area and perimeter/circumference of parallelograms, trapezoids, and circles. Given the formulas, determine the surface area and volume of rectangular prisms, cylinders, and spheres. Use technology as appropriate. (Triangle Area, Triangle Area 2, Parallelogram Area, Perimeter, Rectangular Solids, Rectangular Solids 2, Circle Area, Circle Circumference, Cylinders, Trapezoids)
8.M.4 Use ratio and proportion (including scale factors) in the solution of problems, including problems involving similar plane figures and indirect measurement. (Proportions 2)
8.M.5 Use models, graphs, and formulas to solve simple problems involving rates, e.g., velocity and density. (Distance, Rate, and Time, Train Problems)

Data Analysis, Statistics, and Probability
8.D.1 Describe the characteristics and limitations of a data sample. Identify different ways of selecting a sample, e.g., convenience sampling, responses to a survey, random sampling.
8.D.2 Select, create, interpret, and utilize various tabular and graphical representations of data, e.g., circle graphs, Venn diagrams, scatterplots, stem-and-leaf plots, box-and-whisker plots, histograms, tables, and charts. Differentiate between continuous and discrete data and ways to represent them. (Stem And Leaf Plots)
8.D.3 Find, describe, and interpret appropriate measures of central tendency (mean, median, and mode) and spread (range) that represent a set of data. Use these notions to compare different sets of data. (Mean, Median, Mode)
8.D.4 Use tree diagrams, tables, organized lists, basic combinatorics ("fundamental counting principle"), and area models to compute probabilities for simple compound events, e.g., multiple coin tosses or rolls of dice. (Probability 2)

Learn more about our online math practice software.
http://www.mathscore.com/math/standards/Massachusetts/Grades%207-8/
Electricity and magnetism: Cross Product 2. A little more intuition on the cross product. - Let's see if we can get a little bit more practice and - intuition of what cross products are all about. - So in the last example, we took a cross b. - Let's see what happens when we take b cross a. - So let me erase some of this. - I don't want to erase all of it because it might be useful - to give us some intuition to compare. - I'm going to keep that. - Actually, I can erase this, I think. - So the things I have drawn here, this was a cross b. - Let me cordon it off so you don't get confused. - So that was me using the right hand rule when I tried to do a - cross b, and then we saw that the magnitude of this was 25, - and n, the direction, pointed downwards. - Or when I drew it here, it would point into the page. - So let's see what happens with b cross a, so I'm just - switching the order. - b cross a. - Well, the magnitude is going to be the same thing, right? - Because I'm still going to take the magnitude of b times - the magnitude of a times the sine of the angle between - them, which was pi over 6 radians and then times some - unit vector n. - But this is going to be the same. - When I multiply scalar quantities, it doesn't matter - what order I multiply them in, right? - So this is still going to be 25, whatever my units might - have been, times some vector n. - And we still know that that vector n has to be - perpendicular to both a and b, and now we have to figure out, - well, is it, in being perpendicular, it can either - kind of point into the page here or it could pop out of - the page, or point out of the page. - So which one is it? - And then we take our right hand out, and we try it again. - So what we do is we take our right hand. - I'm actually using my right hand right now, although you - can't see it, just to make sure I draw the right thing. - So in this example, if I take my right hand, I take the - index finger in the direction of b. - I take my middle finger in the direction of a, so my middle - finger is going to look something like that, right? - And then I have two leftover fingers there. - Then the thumb goes in the direction of the cross - product, right? - Because your thumb has a right angle right there. - That's the right angle of your thumb. - So in this example, that's the direction of a, this is the - direction of b, and we're doing b cross a. - That's why b gets your index finger. - The index finger gets the first term, your middle finger - gets the second term, and the thumb gets the direction of - the cross product. - So in this example, the direction of the cross product - is upwards. - Or when we're drawing it in two dimensions right here, the - cross product would actually pop out of the - page for b cross a. - So I'll draw it over. - It would be the circle with the dot. - Or if I were to draw it analogous to this, so this - right here, that was a cross b. - And then b cross a is the exact same magnitude, but it - goes in the other direction. - That's b cross a. - It just flips in the opposite direction. - And that's why you have to use your right hand, because you - might know that, oh, something's going to pop in or - out of the page, et cetera, et cetera, but you need to know - your right hand to know whether it goes in - or out of the page.
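(A side note not from the video: if it helps to see the "b cross a points the other way" claim with actual numbers, here is a minimal sketch of the 3-D cross product in components, written in Haskell; the formula is the standard component form, and the function name is our own.)

-- A minimal sketch (hypothetical helper name) of the 3-D cross product in components.
cross :: (Double, Double, Double) -> (Double, Double, Double) -> (Double, Double, Double)
cross (ax, ay, az) (bx, by, bz) =
  ( ay*bz - az*by
  , az*bx - ax*bz
  , ax*by - ay*bx )

-- ghci> cross (1,0,0) (0,1,0)   gives (0.0,0.0,1.0)
-- ghci> cross (0,1,0) (1,0,0)   gives (0.0,0.0,-1.0), i.e. b cross a = -(a cross b)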
- Anyway, let's see if we can get a little bit more - intuition of what this is all about because this is all - about intuition. - And frankly, I'll tell you, the cross product comes into - use in a lot of concepts that frankly we don't have a lot of - real-life intuition, with electrons flying through a - magnetic field or magnetic fields through a coil. - A lot of things in our everyday life experience, - maybe if we were metal filings living in a magnetic field-- - well, we do live in a magnetic field. - In a strong magnetic field, maybe we would get an - intuition, but it's hard to have as deep of an intuition - as we do for, say, falling objects, or friction, or - forces, or fluid dynamics even, because we've all played - with water. - But anyway, let's get a little bit more intuition. - And let's think about why is there that sine of theta? - Why not just multiply the magnitudes times each other - and use the right hand rule and figure out a direction? - What is that sine of theta all about? - I think I need to clear this up a little bit just so this - could be useful. - So why is that sine of theta there? - Let me redraw some vectors. - I'll draw them a little fatter. - So let's say that's a, that's a, this is b. - b doesn't always have to be longer than a. - So this is a and this is b. - Now, we can think of it a little bit. - We could say, well, this is the same thing as a sine theta - times b, or we could say this is b sine theta times a. - I hope I'm not confusing-- all I'm saying is you could - interpret this as-- because these are - just magnitudes, right? - So it doesn't matter what order you multiply them in. - You could say this is a sine theta times the magnitude of - b, all of that in the direction of the normal - vector, or you could put the sine theta the other way. - But let's think about what this would mean. - a sine theta, if this is theta. - What is a sine theta? - Sine is opposite over hypotenuse, right? - So opposite over hypotenuse. - So this would be the magnitude of a. - Let me draw something. - Let me draw a line here and make it a real line. - Let me draw a line there, so I have a right angle. - So what's a sine theta? - This is the opposite side. - So a sine theta is a, and sine of theta is opposite over - The hypotenuse is the magnitude of a, right? - So sine of theta is equal to this side, which I call o for - opposite, over the magnitude of a. - So it's opposite over the magnitude of a. - So this term a sine theta is actually just the magnitude of - this line right here. - Another way you could-- let me redraw it. - It doesn't matter where the vectors start from. - All you care about is this magnitude and direction, so - you could shift vectors around. - So this vector right here, and you could call it this - opposite vector, that's the same thing as this vector. - That's the same thing as this. - I just shifted it away. - And so another way to think about it is, it is the - component of vector a, right? - We're used to taking a vector and splitting it up into x- - and y-components, but now we're taking a vector a, and - we're splitting it up into-- you can think of it as a - component that's parallel to vector b and a component that - is perpendicular to vector b. - So a sine theta is the magnitude of the component of - vector a that is perpendicular to b. 
- So when you're taking the cross product of two numbers, - you're saying, well, I don't care about the entire - magnitude of vector a in this example, I care about the - magnitude of vector a that is perpendicular to vector b, and - those are the two numbers that I want to multiply and then - give it that direction as specified by - the right hand rule. - And I'll show you some applications. - This is especially important-- well, we'll use it in torque - and we'll also use it in magnetic fields, but it's - important in both of those applications to figure out the - components of the vector that are perpendicular to either a - force or a radius in question. - So that's why this cross product has the sine theta - because we're taking-- so in this, if you view it as - magnitude of a sine theta times b, this is kind of - saying this is the magnitude of the component of a - perpendicular to b, or you could interpret - it the other way. - You could interpret it as a times b sine theta, right? - Put a parentheses here. - And then you could view it the other way. - You could say, well, b sine theta is the component of b - that is perpendicular to a. - Let me draw that, just to hit the point home. - So that's my a, that's my b. - This is a, this is b. - So b has some component of it that is perpendicular to a, - and that is going to look something like-- well, I've - run out of space. - Let me draw it here. - If that's a, that's b, the component of b that is - perpendicular to a is going to look like this. - It's going to be perpendicular to a, and it's going to go - that far, right? - And then you could go back to SOH CAH TOA and you could - prove to yourself that the magnitude of this vector is b - sine theta. - So that is where the sine theta comes from. - It makes sure that we're not just multiplying the vectors. - It makes sure we're multiplying the components of - the vectors that are perpendicular to each other to - get a third vector that is perpendicular to both of them. - And then the people who invented the cross product - said, well, it's still ambiguous because it doesn't - tell us-- there's always two vectors that are perpendicular - to these two. - One goes in, one goes out. - They're in opposite directions. - And that's where the right hand rule comes in. - They'll say, OK, well, we're just going to say a convention - that you use your right hand, point it like a gun, make all - your fingers perpendicular, and then you know what - direction that vector points in. - Anyway, hopefully, you're not confused. - Now I want you to watch the next video. - This is actually going to be some physics on electricity, - magnetism and torque, and that's essentially the - applications of the cross product, and it'll give you a - little bit more intuition of how to use it. - See you soon.
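(Another side note not from the video: the "component of a perpendicular to b" idea can be checked numerically. A small Haskell sketch with our own helper names; it verifies that this perpendicular component has magnitude |a| sin theta, which is exactly what the sine in the cross product picks out.)

-- A minimal sketch (hypothetical helper names) of the perpendicular-component idea.
dot :: (Double, Double, Double) -> (Double, Double, Double) -> Double
dot (ax, ay, az) (bx, by, bz) = ax*bx + ay*by + az*bz

norm :: (Double, Double, Double) -> Double
norm v = sqrt (dot v v)

-- the component of a that is perpendicular to b: subtract a's projection onto b
perpTo :: (Double, Double, Double) -> (Double, Double, Double) -> (Double, Double, Double)
perpTo a@(ax, ay, az) b@(bx, by, bz) =
  let k = dot a b / dot b b
  in (ax - k*bx, ay - k*by, az - k*bz)

-- ghci> norm ((3,4,0) `perpTo` (1,0,0))   gives 4.0
-- Here |a| = 5 and sin theta = 4/5, so |a| sin theta = 4, and the magnitude of
-- a cross b is |b| times that: 1 * 4 = 4.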
http://www.khanacademy.org/science/physics/electricity-and-magnetism/v/cross-product-2
Theory of Flight

Flight is a phenomenon that has long been a part of the natural world. Birds fly not only by flapping their wings, but by gliding with their wings outstretched for long distances. Smoke, which is composed of tiny particles, can rise thousands of feet into the air. Both these types of flight are possible because of the principles of physical science. Likewise, man-made aircraft rely on these principles to overcome the force of gravity and achieve flight.

Lighter-than-air craft, such as the hot air balloon, work on a buoyancy principle. They float on air much like rafts float on water. The density of a raft is less than that of water, so it floats. Although the density of water is constant, the density of air decreases with altitude. The density of hot air inside a balloon is less than that of the air at sea level, so the balloon rises. It will continue to rise until the air outside of the balloon is of the same density as the air inside. Smoke particles rise on a plume of hot air generated by a fire. When the air cools, the particles fall back down.

Heavier-than-air flight is made possible by a careful balance of four physical forces: lift, drag, weight, and thrust. For flight, an aircraft's lift must balance its weight, and its thrust must exceed its drag. A plane uses its wings for lift and its engines for thrust. Drag is reduced by a plane's smooth shape, and its weight is controlled by the materials it is constructed of.

In order for an aircraft to rise into the air, a force must be created that equals or exceeds the force of gravity. This force is called lift. In heavier-than-air craft, lift is created by the flow of air over an airfoil. The shape of an airfoil causes air to flow faster on top than on bottom. The fast-flowing air decreases the surrounding air pressure. Because the air pressure is greater below the airfoil than above, a resulting lift force is created. To further understand how an airfoil creates lift, it is necessary to use two important equations of physical science.

The pressure variations of flowing air are best represented by Bernoulli's equation. It was derived by Daniel Bernoulli, a Swiss mathematician, to explain the variation in pressure exerted by flowing streams of water. For steady, incompressible flow along a streamline, the Bernoulli equation can be written as: P + (1/2) rho V^2 = constant, where P = pressure (force exerted divided by the area it is exerted on), rho = density of the fluid, and V = velocity of the moving object or fluid.

To understand the Bernoulli equation, one must first understand another important principle of physical science, the continuity equation. It simply states that in any given flow, the density (rho) times the cross-sectional area (A) of the flow, times the velocity (V), is constant. The continuity equation can be written as: rho x A x V = constant, where rho = density of the fluid, A = cross-sectional area of the flow, and V = velocity.

Using the Bernoulli equation and the continuity equation, it can be shown how air flowing over an airfoil creates lift. Imagine air flowing over a stationary airfoil, such as an aircraft wing. Far ahead of the airfoil, the air travels at a uniform velocity. To flow past the airfoil, however, it must "split" in two, part of the flow traveling on top and part traveling on the bottom. The shape of a typical airfoil is asymmetrical - its surface area is greater on the top than on the bottom. As the air flows over the airfoil, it is displaced more by the top surface than the bottom. According to the continuity law, this displacement, or loss of flow area, must lead to an increase in velocity. Consider an airfoil in a pipe with flowing water.
Water will flow faster in a narrow section of the pipe. The large area of the top surface of the airfoil narrows the pipe more than the bottom surface does. Thus, water will flow faster on top than on bottom. The flow velocity is increased some by the bottom airfoil surface, but considerably less than the flow on top. The Bernoulli equation states that an increase in velocity leads to a decrease in pressure. Thus the higher the velocity of the flow, the lower the pressure. Air flowing over an airfoil will decrease in pressure. The pressure loss over the top surface is greater than that of the bottom surface. The result is a net pressure force in the upward (positive) direction. This pressure force is lift.

There is no predetermined shape for a wing airfoil; it is designed based on the function of the aircraft it will be used for. To aid the design process, engineers use the lift coefficient to measure the amount of lift obtained from a particular airfoil shape. Lift is proportional to dynamic pressure and wing area. The lift equation can be written as: L = CL x (1/2 rho V^2) x S, where S is the wing area, CL is the lift coefficient, and the quantity in parentheses is the dynamic pressure. In designing an aircraft wing, it is usually advantageous to get the lift coefficient as high as possible.

Every physical body that is propelled through the air will experience resistance to the air flow. This resistance is called drag. Drag is the result of a number of physical phenomena. Pressure drag is that which you feel when running on a windy day. The pressure of the wind in front of you is greater than the pressure of the wake behind you. Skin friction, or viscous drag, is that which swimmers may experience. The flow of water along a swimmer's body creates a frictional force that slows the swimmer down. A rough surface will induce more frictional drag than a smooth surface. To reduce viscous drag, swimmers attempt to make contact surfaces as smooth as possible by wearing swim caps and shaving their legs. Likewise, an aircraft's wing is designed to be smooth to reduce drag. Like lift, drag is proportional to dynamic pressure and the area on which it acts. The drag coefficient, analogous to the lift coefficient, is a measure of how much of the dynamic pressure gets converted into drag. Unlike the lift coefficient, however, engineers usually design the drag coefficient to be as low as possible. Low drag coefficients are desirable because an aircraft's efficiency increases as drag decreases.

The weight of an aircraft is a limiting factor in aircraft design. A heavy plane, or a plane meant to carry heavy payloads, requires more lift than a light plane. It may also require more thrust to accelerate on the ground. On small aircraft the location of weight is also important. A small plane must be appropriately "balanced" for flight, for too much weight in the back or front can render the plane unstable. Weight can be calculated using a form of Newton's second law: W = mg, where W is weight, m is mass, and g is the acceleration due to gravity on Earth.

Propulsion involves a number of principles of physical science. Thermodynamics, aerodynamics, fluid mechanics, and physics all play a role. Thrust itself is a force that can best be described by Newton's second law. The basic form of this law is F = ma, which states that force (F) is equal to mass (m) times acceleration (a). Acceleration is the rate of change of velocity over time. Thrust (T) is therefore produced by accelerating a mass of air.
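Before the questions below, here is a minimal numerical sketch in Haskell of the relations just described. The helper names, the value g = 9.81 m/s^2, and the sample numbers are our own illustrative assumptions, not data from this article.

-- A minimal sketch (hypothetical helper names); SI units throughout.
-- Continuity: rho * A * V is constant, so a smaller flow area means a faster flow.
speedFromContinuity :: Double -> Double -> Double -> Double
speedFromContinuity v1 a1 a2 = v1 * a1 / a2

-- Bernoulli (incompressible): P + 0.5 * rho * V^2 is constant, so faster flow means lower pressure.
pressureDrop :: Double -> Double -> Double -> Double
pressureDrop rho v1 v2 = 0.5 * rho * (v2*v2 - v1*v1)

-- Lift equation: L = CL * (0.5 * rho * V^2) * S, and weight W = m * g.
lift :: Double -> Double -> Double -> Double -> Double
lift cl rho v s = cl * 0.5 * rho * v * v * s

weight :: Double -> Double
weight m = m * 9.81

-- ghci> lift 0.5 1.2 60 16   gives 17280 N, roughly the weight of a 1,760 kg aircraft
-- (the sample CL, air density, speed, and wing area are illustrative only)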
- Would more lift be provided by a fluid with a greater density?
- How do aircraft designers determine the correct shape for a wing?
- Explain how a propeller provides thrust in the same way a wing provides lift.
- An equation for lift was supplied previously. What would be the two forces involved on a propeller?
- Would a propeller work better in a fluid with a greater density?
- Do you think different planes need differently shaped airfoils?
- During the design phase, how is a wing's theoretical shape determined?
- How are the wings of a small plane, like a Cessna, different from a large one, like a passenger jet?
- How are the propulsion systems of a biplane different than that of a fighter jet?
- What kind of propulsion does a Lear jet use? The Concorde?
- Make a list of the differences between fixed wing aircraft and helicopters. How does each generate lift? How fast can each go? What are the advantages and disadvantages of each?
- Some planes have more than one engine to propel the craft. Are the multiple engines necessary or a safety precaution?
- Build paper airplanes and demonstrate the effects of lift, drag, thrust, and weight.
- Take a trip to your local airport or an airshow. Visit the control tower and the aircraft hangars.
- Determine the wing area of a large aircraft. Describe what kind of plane it is.
- What kind of propulsion system does the space shuttle use, as opposed to an airplane?
- Who are the leading manufacturers of aircraft engines?
- Derive the basic equation for lift (Eqn 3) from Bernoulli's Equation (Eqn 1). Note any assumptions that you make.
- What is the density of air? Does it differ from high altitudes to low altitudes?
- Draw a free-body diagram of an aircraft.
http://web.mit.edu/16.00/www/aec/flight.html
Higher order functions Haskell functions can take functions as parameters and return functions as return values. A function that does either of those is called a higher order function. Higher order functions aren't just a part of the Haskell experience, they pretty much are the Haskell experience. It turns out that if you want to define computations by defining what stuff is instead of defining steps that change some state and maybe looping them, higher order functions are indispensable. They're a really powerful way of solving problems and thinking about programs. Every function in Haskell officially only takes one parameter. So how is it possible that we defined and used several functions that take more than one parameter so far? Well, it's a clever trick! All the functions that accepted several parameters so far have been curried functions. What does that mean? You'll understand it best on an example. Let's take our good friend, the max function. It looks like it takes two parameters and returns the one that's bigger. Doing max 4 5 first creates a function that takes a parameter and returns either 4 or that parameter, depending on which is bigger. Then, 5 is applied to that function and that function produces our desired result. That sounds like a mouthful but it's actually a really cool concept. The following two calls are equivalent: ghci> max 4 5 5 ghci> (max 4) 5 5 Putting a space between two things is simply function application. The space is sort of like an operator and it has the highest precedence. Let's examine the type of max. It's max :: (Ord a) => a -> a -> a. That can also be written as max :: (Ord a) => a -> (a -> a). That could be read as: max takes an a and returns (that's the ->) a function that takes an a and returns an a. That's why the return type and the parameters of functions are all simply separated with arrows. So how is that beneficial to us? Simply speaking, if we call a function with too few parameters, we get back a partially applied function, meaning a function that takes as many parameters as we left out. Using partial application (calling functions with too few parameters, if you will) is a neat way to create functions on the fly so we can pass them to another function or to seed them with some data. Take a look at this offensively simple function: multThree :: (Num a) => a -> a -> a -> a multThree x y z = x * y * z What really happens when we do multThree 3 5 9 or ((multThree 3) 5) 9? First, 3 is applied to multThree, because they're separated by a space. That creates a function that takes one parameter and returns a function. So then 5 is applied to that, which creates a function that will take a parameter and multiply it by 15. 9 is applied to that function and the result is 135 or something. Remember that this function's type could also be written as multThree :: (Num a) => a -> (a -> (a -> a)). The thing before the -> is the parameter that a function takes and the thing after it is what it returns. So our function takes an a and returns a function of type (Num a) => a -> (a -> a). Similarly, this function takes an a and returns a function of type (Num a) => a -> a. And this function, finally, just takes an a and returns an a. Take a look at this: ghci> let multTwoWithNine = multThree 9 ghci> multTwoWithNine 2 3 54 ghci> let multWithEighteen = multTwoWithNine 2 ghci> multWithEighteen 10 180 By calling functions with too few parameters, so to speak, we're creating new functions on the fly. 
What if we wanted to create a function that takes a number and compares it to 100? We could do something like this: compareWithHundred :: (Num a, Ord a) => a -> Ordering compareWithHundred x = compare 100 x If we call it with 99, it returns a GT. Simple stuff. Notice that the x is on the right hand side on both sides of the equation. Now let's think about what compare 100 returns. It returns a function that takes a number and compares it with 100. Wow! Isn't that the function we wanted? We can rewrite this as: compareWithHundred :: (Num a, Ord a) => a -> Ordering compareWithHundred = compare 100 The type declaration stays the same, because compare 100 returns a function. Compare has a type of (Ord a) => a -> (a -> Ordering) and calling it with 100 returns a (Num a, Ord a) => a -> Ordering. The additional class constraint sneaks up there because 100 is also part of the Num typeclass. Infix functions can also be partially applied by using sections. To section an infix function, simply surround it with parentheses and only supply a parameter on one side. That creates a function that takes one parameter and then applies it to the side that's missing an operand. An insultingly trivial function: divideByTen :: (Floating a) => a -> a divideByTen = (/10) Calling, say, divideByTen 200 is equivalent to doing 200 / 10, as is doing (/10) 200. A function that checks if a character supplied to it is an uppercase letter: isUpperAlphanum :: Char -> Bool isUpperAlphanum = (`elem` ['A'..'Z']) The only special thing about sections is using -. From the definition of sections, (-4) would result in a function that takes a number and subtracts 4 from it. However, for convenience, (-4) means minus four. So if you want to make a function that subtracts 4 from the number it gets as a parameter, partially apply the subtract function like so: (subtract 4). What happens if we try to just do multThree 3 4 in GHCI instead of binding it to a name with a let or passing it to another function? ghci> multThree 3 4 <interactive>:1:0: No instance for (Show (t -> t)) arising from a use of `print' at <interactive>:1:0-12 Possible fix: add an instance declaration for (Show (t -> t)) In the expression: print it In a 'do' expression: print it GHCI is telling us that the expression produced a function of type a -> a but it doesn't know how to print it to the screen. Functions aren't instances of the Show typeclass, so we can't get a neat string representation of a function. When we do, say, 1 + 1 at the GHCI prompt, it first calculates that to 2 and then calls show on 2 to get a textual representation of that number. And the textual representation of 2 is just the string "2", which then gets printed to our screen. Some higher-orderism is in order Functions can take functions as parameters and also return functions. To illustrate this, we're going to make a function that takes a function and then applies it twice to something! applyTwice :: (a -> a) -> a -> a applyTwice f x = f (f x) First of all, notice the type declaration. Before, we didn't need parentheses because -> is naturally right-associative. However, here, they're mandatory. They indicate that the first parameter is a function that takes something and returns that same thing. The second parameter is something of that type also and the return value is also of the same type. We could read this type declaration in the curried way, but to save ourselves a headache, we'll just say that this function takes two parameters and returns one thing. 
The first parameter is a function (of type a -> a) and the second is that same a. The function can also be Int -> Int or String -> String or whatever. But then, the second parameter also has to be of that type. The body of the function is pretty simple. We just use the parameter f as a function, applying x to it by separating them with a space and then applying the result to f again. Anyway, playing around with the function: ghci> applyTwice (+3) 10 16 ghci> applyTwice (++ " HAHA") "HEY" "HEY HAHA HAHA" ghci> applyTwice ("HAHA " ++) "HEY" "HAHA HAHA HEY" ghci> applyTwice (multThree 2 2) 9 144 ghci> applyTwice (3:) [1] [3,3,1] The awesomeness and usefulness of partial application is evident. If our function requires us to pass it a function that takes only one parameter, we can just partially apply a function to the point where it takes only one parameter and then pass it. Now we're going to use higher order programming to implement a really useful function that's in the standard library. It's called zipWith. It takes a function and two lists as parameters and then joins the two lists by applying the function between corresponding elements. Here's how we'll implement it: zipWith' :: (a -> b -> c) -> [a] -> [b] -> [c] zipWith' _ [] _ = [] zipWith' _ _ [] = [] zipWith' f (x:xs) (y:ys) = f x y : zipWith' f xs ys Look at the type declaration. The first parameter is a function that takes two things and produces a third thing. They don't have to be of the same type, but they can. The second and third parameter are lists. The result is also a list. The first has to be a list of a's, because the joining function takes a's as its first argument. The second has to be a list of b's, because the second parameter of the joining function is of type b. The result is a list of c's. If the type declaration of a function says it accepts an a -> b -> c function as a parameter, it will also accept an a -> a -> a function, but not the other way around! Remember that when you're making functions, especially higher order ones, and you're unsure of the type, you can just try omitting the type declaration and then checking what Haskell infers it to be by using :t. The action in the function is pretty similar to the normal zip. The edge conditions are the same, only there's an extra argument, the joining function, but that argument doesn't matter in the edge conditions, so we just use a _ for it. And the function body at the last pattern is also similar to zip, only it doesn't do (x,y), but f x y. A single higher order function can be used for a multitude of different tasks if it's general enough. Here's a little demonstration of all the different things our zipWith' function can do: ghci> zipWith' (+) [4,2,5,6] [2,6,2,3] [6,8,7,9] ghci> zipWith' max [6,3,2,1] [7,3,1,5] [7,3,2,5] ghci> zipWith' (++) ["foo ", "bar ", "baz "] ["fighters", "hoppers", "aldrin"] ["foo fighters","bar hoppers","baz aldrin"] ghci> zipWith' (*) (replicate 5 2) [1..] [2,4,6,8,10] ghci> zipWith' (zipWith' (*)) [[1,2,3],[3,5,6],[2,3,4]] [[3,2,2],[3,4,5],[5,4,3]] [[3,4,6],[9,20,30],[10,12,12]] As you can see, a single higher order function can be used in very versatile ways. Imperative programming usually uses stuff like for loops, while loops, setting something to a variable, checking its state, etc. to achieve some behavior and then wrap it around an interface, like a function.
Functional programming uses higher order functions to abstract away common patterns, like examining two lists in pairs and doing something with those pairs or getting a set of solutions and eliminating the ones you don't need. We'll implement another function that's already in the standard library, called flip. Flip simply takes a function and returns a function that is like our original function, only the first two arguments are flipped. We can implement it like so: flip' :: (a -> b -> c) -> (b -> a -> c) flip' f = g where g x y = f y x Reading the type declaration, we say that it takes a function that takes an a and a b and returns a function that takes a b and an a. But because functions are curried by default, the second pair of parentheses is really unnecessary, because -> is right associative by default. (a -> b -> c) -> (b -> a -> c) is the same as (a -> b -> c) -> (b -> (a -> c)), which is the same as (a -> b -> c) -> b -> a -> c. We wrote that g x y = f y x. If that's true, then f y x = g x y must also hold, right? Keeping that in mind, we can define this function in an even simpler manner. flip' :: (a -> b -> c) -> b -> a -> c flip' f y x = f x y Here, we take advantage of the fact that functions are curried. When we call flip' f without the parameters y and x, it will return an f that takes those two parameters but calls them flipped. Even though flipped functions are usually passed to other functions, we can take advantage of currying when making higher-order functions by thinking ahead and writing what their end result would be if they were called fully applied. ghci> flip' zip [1,2,3,4,5] "hello" [('h',1),('e',2),('l',3),('l',4),('o',5)] ghci> zipWith (flip' div) [2,2..] [10,8,6,4,2] [5,4,3,2,1] Maps and filters map takes a function and a list and applies that function to every element in the list, producing a new list. Let's see what its type signature is and how it's defined. map :: (a -> b) -> [a] -> [b] map _ [] = [] map f (x:xs) = f x : map f xs The type signature says that it takes a function that takes an a and returns a b, a list of a's and returns a list of b's. It's interesting that just by looking at a function's type signature, you can sometimes tell what it does. map is one of those really versatile higher-order functions that can be used in millions of different ways. Here it is in action: ghci> map (+3) [1,5,3,1,6] [4,8,6,4,9] ghci> map (++ "!") ["BIFF", "BANG", "POW"] ["BIFF!","BANG!","POW!"] ghci> map (replicate 3) [3..6] [[3,3,3],[4,4,4],[5,5,5],[6,6,6]] ghci> map (map (^2)) [[1,2],[3,4,5,6],[7,8]] [[1,4],[9,16,25,36],[49,64]] ghci> map fst [(1,2),(3,5),(6,3),(2,6),(2,5)] [1,3,6,2,2] You've probably noticed that each of these could be achieved with a list comprehension. map (+3) [1,5,3,1,6] is the same as writing [x+3 | x <- [1,5,3,1,6]]. However, using map is much more readable for cases where you only apply some function to the elements of a list, especially once you're dealing with maps of maps and then the whole thing with a lot of brackets can get a bit messy. filter is a function that takes a predicate (a predicate is a function that tells whether something is true or not, so in our case, a function that returns a boolean value) and a list and then returns the list of elements that satisfy the predicate. The type signature and implementation go like this: filter :: (a -> Bool) -> [a] -> [a] filter _ [] = [] filter p (x:xs) | p x = x : filter p xs | otherwise = filter p xs Pretty simple stuff. If p x evaluates to True, the element gets included in the new list.
If it doesn't, it stays out. Some usage examples: ghci> filter (>3) [1,5,3,2,1,6,4,3,2,1] [5,6,4] ghci> filter (==3) [1,2,3,4,5] [3] ghci> filter even [1..10] [2,4,6,8,10] ghci> let notNull x = not (null x) in filter notNull [[1,2,3],[],[3,4,5],[2,2],[],[],[]] [[1,2,3],[3,4,5],[2,2]] ghci> filter (`elem` ['a'..'z']) "u LaUgH aT mE BeCaUsE I aM diFfeRent" "uagameasadifeent" ghci> filter (`elem` ['A'..'Z']) "i lauGh At You BecAuse u r aLL the Same" "GAYBALLS" All of this could also be achieved with list comprehensions by the use of predicates. There's no set rule for when to use map and filter versus using list comprehension, you just have to decide what's more readable depending on the code and the context. The filter equivalent of applying several predicates in a list comprehension is either filtering something several times or joining the predicates with the logical && function. Remember our quicksort function from the previous chapter? We used list comprehensions to filter out the list elements that are smaller than (or equal to) and larger than the pivot. We can achieve the same functionality in a more readable way by using filter: quicksort :: (Ord a) => [a] -> [a] quicksort [] = [] quicksort (x:xs) = let smallerSorted = quicksort (filter (<=x) xs) biggerSorted = quicksort (filter (>x) xs) in smallerSorted ++ [x] ++ biggerSorted Mapping and filtering is the bread and butter of every functional programmer's toolbox. Uh. It doesn't matter if you do it with the map and filter functions or list comprehensions. Recall how we solved the problem of finding right triangles with a certain circumference. With imperative programming, we would have solved it by nesting three loops and then testing if the current combination satisfies a right triangle and if it has the right perimeter. If that's the case, we would have printed it out to the screen or something. In functional programming, that pattern is achieved with mapping and filtering. You make a function that takes a value and produces some result. We map that function over a list of values and then we filter the resulting list out for the results that satisfy our search. Thanks to Haskell's laziness, even if you map something over a list several times and filter it several times, it will only pass over the list once. Let's find the largest number under 100,000 that's divisible by 3829. To do that, we'll just filter a set of possibilities in which we know the solution lies. largestDivisible :: (Integral a) => a largestDivisible = head (filter p [100000,99999..]) where p x = x `mod` 3829 == 0 We first make a list of all numbers lower than 100,000, descending. Then we filter it by our predicate and because the numbers are sorted in a descending manner, the largest number that satisfies our predicate is the first element of the filtered list. We didn't even need to use a finite list for our starting set. That's laziness in action again. Because we only end up using the head of the filtered list, it doesn't matter if the filtered list is finite or infinite. The evaluation stops when the first adequate solution is found. Next up, we're going to find the sum of all odd squares that are smaller than 10,000. But first, because we'll be using it in our solution, we're going to introduce the takeWhile function. It takes a predicate and a list and then goes from the beginning of the list and returns its elements while the predicate holds true. Once an element is found for which the predicate doesn't hold, it stops.
If we wanted to get the first word of the string "elephants know how to party", we could do takeWhile (/=' ') "elephants know how to party" and it would return "elephants". Okay. The sum of all odd squares that are smaller than 10,000. First, we'll begin by mapping the (^2) function to the infinite list [1..]. Then we filter them so we only get the odd ones. And then, we'll take elements from that list while they are smaller than 10,000. Finally, we'll get the sum of that list. We don't even have to define a function for that, we can do it in one line in GHCI: ghci> sum (takeWhile (<10000) (filter odd (map (^2) [1..]))) 166650 Awesome! We start with some initial data (the infinite list of all natural numbers) and then we map over it, filter it and cut it until it suits our needs and then we just sum it up. We could have also written this using list comprehensions: ghci> sum (takeWhile (<10000) [n^2 | n <- [1..], odd (n^2)]) 166650 It's a matter of taste as to which one you find prettier. Again, Haskell's property of laziness is what makes this possible. We can map over and filter an infinite list, because it won't actually map and filter it right away, it'll delay those actions. Only when we force Haskell to show us the sum does the sum function say to the takeWhile that it needs those numbers. takeWhile forces the filtering and mapping to occur, but only until a number greater than or equal to 10,000 is encountered. For our next problem, we'll be dealing with Collatz sequences. We take a natural number. If that number is even, we divide it by two. If it's odd, we multiply it by 3 and then add 1 to that. We take the resulting number and apply the same thing to it, which produces a new number and so on. In essence, we get a chain of numbers. It is thought that for all starting numbers, the chains finish at the number 1. So if we take the starting number 13, we get this sequence: 13, 40, 20, 10, 5, 16, 8, 4, 2, 1. 13*3 + 1 equals 40. 40 divided by 2 is 20, etc. We see that the chain has 10 terms. Now what we want to know is this: for all starting numbers between 1 and 100, how many chains have a length greater than 15? First off, we'll write a function that produces a chain: chain :: (Integral a) => a -> [a] chain 1 = [1] chain n | even n = n:chain (n `div` 2) | odd n = n:chain (n*3 + 1) Because the chains end at 1, that's the edge case. This is a pretty standard recursive function. ghci> chain 10 [10,5,16,8,4,2,1] ghci> chain 1 [1] ghci> chain 30 [30,15,46,23,70,35,106,53,160,80,40,20,10,5,16,8,4,2,1] Yay! It seems to be working correctly. And now, the function that tells us the answer to our question: numLongChains :: Int numLongChains = length (filter isLong (map chain [1..100])) where isLong xs = length xs > 15 We map the chain function to [1..100] to get a list of chains, which are themselves represented as lists. Then, we filter them by a predicate that just checks whether a list's length is longer than 15. Once we've done the filtering, we see how many chains are left in the resulting list. Using map, we can also do stuff like map (*) [0..], if not for any other reason than to illustrate how currying works and how (partially applied) functions are real values that you can pass around to other functions or put into lists (you just can't turn them to strings). So far, we've only mapped functions that take one parameter over lists, like map (*2) [0..] to get a list of type (Num a) => [a], but we can also do map (*) [0..] without a problem.
What happens here is that the number in the list is applied to the function *, which has a type of (Num a) => a -> a -> a. Applying only one parameter to a function that takes two parameters returns a function that takes one parameter. If we map * over the list [0..], we get back a list of functions that only take one parameter, so (Num a) => [a -> a]. map (*) [0..] produces a list like the one we'd get by writing [(0*),(1*),(2*),(3*),(4*),(5*)... ghci> let listOfFuns = map (*) [0..] ghci> (listOfFuns !! 4) 5 20 Getting the element with the index 4 from our list returns a function that's equivalent to (4*). And then, we just apply 5 to that function. So that's like writing (4*) 5 or just 4 * 5. Lambdas are basically anonymous functions that are used because we need some functions only once. Normally, we make a lambda with the sole purpose of passing it to a higher-order function. To make a lambda, we write a \ (because it kind of looks like the greek letter lambda if you squint hard enough) and then we write the parameters, separated by spaces. After that comes a -> and then the function body. We usually surround them by parentheses, because otherwise they extend all the way to the right. If you look about 5 inches up, you'll see that we used a where binding in our numLongChains function to make the isLong function for the sole purpose of passing it to filter. Well, instead of doing that, we can use a lambda: numLongChains :: Int numLongChains = length (filter (\xs -> length xs > 15) (map chain [1..100])) Lambdas are expressions, that's why we can just pass them like that. The expression (\xs -> length xs > 15) returns a function that tells us whether the length of the list passed to it is greater than 15. People who are not well acquainted with how currying and partial application works often use lambdas where they don't need to. For instance, the expressions map (+3) [1,6,3,2] and map (\x -> x + 3) [1,6,3,2] are equivalent since both (+3) and (\x -> x + 3) are functions that take a number and add 3 to it. Needless to say, making a lambda in this case is stupid since using partial application is much more readable. Like normal functions, lambdas can take any number of parameters: ghci> zipWith (\a b -> (a * 30 + 3) / b) [5,4,3,2,1] [1,2,3,4,5] [153.0,61.5,31.0,15.75,6.6] And like normal functions, you can pattern match in lambdas. The only difference is that you can't define several patterns for one parameter, like making a [] and a (x:xs) pattern for the same parameter and then having values fall through. If a pattern match fails in a lambda, a runtime error occurs, so be careful when pattern matching in lambdas! ghci> map (\(a,b) -> a + b) [(1,2),(3,5),(6,3),(2,6),(2,5)] [3,8,9,8,7] Lambdas are normally surrounded by parentheses unless we mean for them to extend all the way to the right. Here's something interesting: due to the way functions are curried by default, these two are equivalent: addThree :: (Num a) => a -> a -> a -> a addThree x y z = x + y + z addThree :: (Num a) => a -> a -> a -> a addThree = \x -> \y -> \z -> x + y + z If we define a function like this, it's obvious why the type declaration is what it is. There are three ->'s in both the type declaration and the equation. But of course, the first way to write functions is far more readable, the second one is pretty much a gimmick to illustrate currying. However, there are times when using this notation is cool.
I think that the flip function is the most readable when defined like so: flip' :: (a -> b -> c) -> b -> a -> c flip' f = \x y -> f y x Even though that's the same as writing flip' f x y = f y x, we make it obvious that this will be used for producing a new function most of the time. The most common use case with flip is calling it with just the function parameter and then passing the resulting function on to a map or a filter. So use lambdas in this way when you want to make it explicit that your function is mainly meant to be partially applied and passed on to a function as a parameter. Only folds and horses Back when we were dealing with recursion, we noticed a theme throughout many of the recursive functions that operated on lists. Usually, we'd have an edge case for the empty list. We'd introduce the x:xs pattern and then we'd do some action that involves a single element and the rest of the list. It turns out this is a very common pattern, so a couple of very useful functions were introduced to encapsulate it. These functions are called folds. They're sort of like the map function, only they reduce the list to some single value. A fold takes a binary function, a starting value (I like to call it the accumulator) and a list to fold up. The binary function itself takes two parameters. The binary function is called with the accumulator and the first (or last) element and produces a new accumulator. Then, the binary function is called again with the new accumulator and the now new first (or last) element, and so on. Once we've walked over the whole list, only the accumulator remains, which is what we've reduced the list to. First let's take a look at the foldl function, also called the left fold. It folds the list up from the left side. The binary function is applied between the starting value and the head of the list. That produces a new accumulator value and the binary function is called with that value and the next element, etc. Let's implement sum again, only this time, we'll use a fold instead of explicit recursion. sum' :: (Num a) => [a] -> a sum' xs = foldl (\acc x -> acc + x) 0 xs Testing, one two three: ghci> sum' [3,5,2,1] 11 Let's take an in-depth look into how this fold happens. \acc x -> acc + x is the binary function. 0 is the starting value and xs is the list to be folded up. Now first, 0 is used as the acc parameter to the binary function and 3 is used as the x (or the current element) parameter. 0 + 3 produces a 3 and it becomes the new accumulator value, so to speak. Next up, 3 is used as the accumulator value and 5 as the current element and 8 becomes the new accumulator value. Moving forward, 8 is the accumulator value, 2 is the current element, the new accumulator value is 10. Finally, that 10 is used as the accumulator value and 1 as the current element, producing an 11. Congratulations, you've done a fold! This professional diagram on the left illustrates how a fold happens, step by step (day by day!). The greenish brown number is the accumulator value. You can see how the list is sort of consumed up from the left side by the accumulator. Om nom nom nom! If we take into account that functions are curried, we can write this implementation ever more succinctly, like so: sum' :: (Num a) => [a] -> a sum' = foldl (+) 0 The lambda function (\acc x -> acc + x) is the same as (+). We can omit the xs as the parameter because calling foldl (+) 0 will return a function that takes a list. 
Generally, if you have a function like foo a = bar b a, you can rewrite it as foo = bar b, because of currying. Anyhoo, let's implement another function with a left fold before moving on to right folds. I'm sure you all know that elem checks whether a value is part of a list so I won't go into that again (whoops, just did!). Let's implement it with a left fold. elem' :: (Eq a) => a -> [a] -> Bool elem' y ys = foldl (\acc x -> if x == y then True else acc) False ys Well, well, well, what do we have here? The starting value and accumulator here is a boolean value. The type of the accumulator value and the end result is always the same when dealing with folds. Remember that if you ever don't know what to use as a starting value, it'll give you some idea. We start off with False. It makes sense to use False as a starting value. We assume it isn't there. Also, if we call a fold on an empty list, the result will just be the starting value. Then we check if the current element is the element we're looking for. If it is, we set the accumulator to True. If it's not, we just leave the accumulator unchanged. If it was False before, it stays that way because this current element is not it. If it was True, we leave it at that. The right fold, foldr works in a similar way to the left fold, only the accumulator eats up the values from the right. Also, the left fold's binary function has the accumulator as the first parameter and the current value as the second one (so \acc x -> ...), the right fold's binary function has the current value as the first parameter and the accumulator as the second one (so \x acc -> ...). It kind of makes sense that the right fold has the accumulator on the right, because it folds from the right side. The accumulator value (and hence, the result) of a fold can be of any type. It can be a number, a boolean or even a new list. We'll be implementing the map function with a right fold. The accumulator will be a list, we'll be accumulating the mapped list element by element. From that, it's obvious that the starting element will be an empty list. map' :: (a -> b) -> [a] -> [b] map' f xs = foldr (\x acc -> f x : acc) [] xs If we're mapping (+3) to [1,2,3], we approach the list from the right side. We take the last element, which is 3 and apply the function to it, which ends up being 6. Then, we prepend it to the accumulator, which is []. 6:[] is [6] and that's now the accumulator. We apply (+3) to 2, that's 5 and we prepend (:) it to the accumulator, so the accumulator is now [5,6]. We apply (+3) to 1 and prepend that to the accumulator and so the end value is [4,5,6]. Of course, we could have implemented this function with a left fold too. It would be map' f xs = foldl (\acc x -> acc ++ [f x]) [] xs, but the thing is that the ++ function is much more expensive than :, so we usually use right folds when we're building up new lists from a list. If you reverse a list, you can do a right fold on it just like you would have done a left fold and vice versa. Sometimes you don't even have to do that. The sum function can be implemented pretty much the same with a left and right fold. One big difference is that right folds work on infinite lists, whereas left ones don't! To put it plainly, if you take an infinite list at some point and you fold it up from the right, you'll eventually reach the beginning of the list. However, if you take an infinite list at a point and you try to fold it up from the left, you'll never reach an end!
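To make that last difference concrete, here is a minimal sketch (my own illustration, not part of the chapter's standard library tour; the name any' is just a hypothetical label): a version of any written with a right fold can decide the answer as soon as the binary function sees a match, without ever looking at the rest of the accumulator, so it even finishes on an infinite list, provided a matching element eventually turns up.

any' :: (a -> Bool) -> [a] -> Bool
any' p = foldr (\x acc -> p x || acc) False   -- || short-circuits, so acc is only forced when needed

ghci> any' (>10) [1..]
True

The same definition written with foldl would never finish on [1..], which is exactly the left-versus-right difference described above.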
Folds can be used to implement any function where you traverse a list once, element by element, and then return something based on that. Whenever you want to traverse a list to return something, chances are you want a fold. That's why folds are, along with maps and filters, one of the most useful types of functions in functional programming. The foldl1 and foldr1 functions work much like foldl and foldr, only you don't need to provide them with an explicit starting value. They assume the first (or last) element of the list to be the starting value and then start the fold with the element next to it. With that in mind, the sum function can be implemented like so: sum = foldl1 (+). Because they depend on the lists they fold up having at least one element, they cause runtime errors if called with empty lists. foldl and foldr, on the other hand, work fine with empty lists. When making a fold, think about how it acts on an empty list. If the function doesn't make sense when given an empty list, you can probably use a foldl1 or foldr1 to implement it. Just to show you how powerful folds are, we're going to implement a bunch of standard library functions by using folds: maximum' :: (Ord a) => [a] -> a maximum' = foldr1 (\x acc -> if x > acc then x else acc) reverse' :: [a] -> [a] reverse' = foldl (\acc x -> x : acc) [] product' :: (Num a) => [a] -> a product' = foldr1 (*) filter' :: (a -> Bool) -> [a] -> [a] filter' p = foldr (\x acc -> if p x then x : acc else acc) [] head' :: [a] -> a head' = foldr1 (\x _ -> x) last' :: [a] -> a last' = foldl1 (\_ x -> x) head is better implemented by pattern matching, but this just goes to show, you can still achieve it by using folds. Our reverse' definition is pretty clever, I think. We take a starting value of an empty list and then approach our list from the left and just prepend to our accumulator. In the end, we build up a reversed list. \acc x -> x : acc kind of looks like the : function, only the parameters are flipped. That's why we could have also written our reverse as foldl (flip (:)) []. Another way to picture right and left folds is like this: say we have a right fold and the binary function is f and the starting value is z. If we're right folding over the list [3,4,5,6], we're essentially doing this: f 3 (f 4 (f 5 (f 6 z))). f is called with the last element in the list and the accumulator, that value is given as the accumulator to the next to last value and so on. If we take f to be + and the starting accumulator value to be 0, that's 3 + (4 + (5 + (6 + 0))). Or if we write + as a prefix function, that's (+) 3 ((+) 4 ((+) 5 ((+) 6 0))). Similarly, doing a left fold over that list with g as the binary function and z as the accumulator is the equivalent of g (g (g (g z 3) 4) 5) 6. If we use flip (:) as the binary function and [] as the accumulator (so we're reversing the list), then that's the equivalent of flip (:) (flip (:) (flip (:) (flip (:) [] 3) 4) 5) 6. And sure enough, if you evaluate that expression, you get [6,5,4,3]. scanl and scanr are like foldl and foldr, only they report all the intermediate accumulator states in the form of a list. There are also scanl1 and scanr1, which are analogous to foldl1 and foldr1.
ghci> scanl (+) 0 [3,5,2,1] [0,3,8,10,11] ghci> scanr (+) 0 [3,5,2,1] [11,8,3,1,0] ghci> scanl1 (\acc x -> if x > acc then x else acc) [3,4,5,3,7,9,2,1] [3,4,5,5,7,9,9,9] ghci> scanl (flip (:)) [] [3,2,1] [[],[3],[2,3],[1,2,3]] When using a scanl, the final result will be in the last element of the resulting list while a scanr will place the result in the head. Scans are used to monitor the progression of a function that can be implemented as a fold. Let's answer us this question: How many elements does it take for the sum of the roots of all natural numbers to exceed 1000? To get the roots of all natural numbers, we just do map sqrt [1..]. Now, to get the sum, we could do a fold, but because we're interested in how the sum progresses, we're going to do a scan. Once we've done the scan, we just see how many sums are under 1000. The first sum in the scanlist will be 1, normally. The second will be 1 plus the square root of 2. The third will be that plus the square root of 3. If there are X sums under 1000, then it takes X+1 elements for the sum to exceed 1000. sqrtSums :: Int sqrtSums = length (takeWhile (<1000) (scanl1 (+) (map sqrt [1..]))) + 1 ghci> sqrtSums 131 ghci> sum (map sqrt [1..131]) 1005.0942035344083 ghci> sum (map sqrt [1..130]) 993.6486803921487 We use takeWhile here instead of filter because filter doesn't work on infinite lists. Even though we know the list is ascending, filter doesn't, so we use takeWhile to cut the scanlist off at the first occurrence of a sum greater than 1000. Function application with $ Alright, next up, we'll take a look at the $ function, also called function application. First of all, let's check out how it's defined: ($) :: (a -> b) -> a -> b f $ x = f x What the heck? What is this useless operator? It's just function application! Well, almost, but not quite! Whereas normal function application (putting a space between two things) has a really high precedence, the $ function has the lowest precedence. Function application with a space is left-associative (so f a b c is the same as ((f a) b) c), function application with $ is right-associative. That's all very well, but how does this help us? Most of the time, it's a convenience function so that we don't have to write so many parentheses. Consider the expression sum (map sqrt [1..130]). Because $ has such a low precedence, we can rewrite that expression as sum $ map sqrt [1..130], saving ourselves precious keystrokes! When a $ is encountered, the expression on its right is applied as the parameter to the function on its left. How about sqrt 3 + 4 + 9? This adds together 9, 4 and the square root of 3. If we want to get the square root of 3 + 4 + 9, we'd have to write sqrt (3 + 4 + 9) or if we use $ we can write it as sqrt $ 3 + 4 + 9 because $ has the lowest precedence of any operator. That's why you can imagine a $ being sort of the equivalent of writing an opening parenthesis and then writing a closing one on the far right side of the expression. How about sum (filter (> 10) (map (*2) [2..10]))? Well, because $ is right-associative, f (g (z x)) is equal to f $ g $ z x. And so, we can rewrite sum (filter (> 10) (map (*2) [2..10])) as sum $ filter (> 10) $ map (*2) [2..10]. But apart from getting rid of parentheses, $ means that function application can be treated just like another function. That way, we can, for instance, map function application over a list of functions.
ghci> map ($ 3) [(4+), (10*), (^2), sqrt] [7.0,30.0,9.0,1.7320508075688772] In mathematics, function composition is defined like this: (f ∘ g)(x) = f(g(x)), meaning that composing two functions produces a new function that, when called with a parameter, say, x, is the equivalent of calling g with the parameter x and then calling f with that result. In Haskell, function composition is pretty much the same thing. We do function composition with the . function, which is defined like so: (.) :: (b -> c) -> (a -> b) -> a -> c f . g = \x -> f (g x) Mind the type declaration. f must take as its parameter a value that has the same type as g's return value. So the resulting function takes a parameter of the same type that g takes and returns a value of the same type that f returns. The expression negate . (* 3) returns a function that takes a number, multiplies it by 3 and then negates it. One of the uses for function composition is making functions on the fly to pass to other functions. Sure, we can use lambdas for that, but many times, function composition is clearer and more concise. Say we have a list of numbers and we want to turn them all into negative numbers. One way to do that would be to get each number's absolute value and then negate it, like so: ghci> map (\x -> negate (abs x)) [5,-3,-6,7,-3,2,-19,24] [-5,-3,-6,-7,-3,-2,-19,-24] Notice the lambda and how it looks like the result of function composition. Using function composition, we can rewrite that as: ghci> map (negate . abs) [5,-3,-6,7,-3,2,-19,24] [-5,-3,-6,-7,-3,-2,-19,-24] Fabulous! Function composition is right-associative, so we can compose many functions at a time. The expression f (g (z x)) is equivalent to (f . g . z) x. With that in mind, we can turn ghci> map (\xs -> negate (sum (tail xs))) [[1..5],[3..6],[1..7]] [-14,-15,-27] into ghci> map (negate . sum . tail) [[1..5],[3..6],[1..7]] [-14,-15,-27] But what about functions that take several parameters? Well, if we want to use them in function composition, we usually have to partially apply them just so much that each function takes just one parameter. sum (replicate 5 (max 6.7 8.9)) can be rewritten as (sum . replicate 5 . max 6.7) 8.9 or as sum . replicate 5 . max 6.7 $ 8.9. What goes on in here is this: a function that takes what max 6.7 takes and applies replicate 5 to it is created. Then, a function that takes the result of that and does a sum of it is created. Finally, that function is called with 8.9. But normally, you just read that as: apply 8.9 to max 6.7, then apply replicate 5 to that and then apply sum to that. If you want to rewrite an expression with a lot of parentheses by using function composition, you can start by putting the last parameter of the innermost function after a $ and then just composing all the other function calls, writing them without their last parameter and putting dots between them. If you have replicate 100 (product (map (*3) (zipWith max [1,2,3,4,5] [4,5,6,7,8]))), you can write it as replicate 100 . product . map (*3) . zipWith max [1,2,3,4,5] $ [4,5,6,7,8]. If the expression ends with three parentheses, chances are that if you translate it into function composition, it'll have three composition operators. Another common use of function composition is defining functions in the so-called point free style (also called the pointless style). Take for example this function that we wrote earlier: sum' :: (Num a) => [a] -> a sum' xs = foldl (+) 0 xs The xs is exposed on both right sides.
Because of currying, we can omit the xs on both sides, because calling foldl (+) 0 creates a function that takes a list. Writing the function as sum' = foldl (+) 0 is called writing it in point free style. How would we write this in point free style? fn x = ceiling (negate (tan (cos (max 50 x)))) We can't just get rid of the x on both right sides. The x in the function body has parentheses after it. cos (max 50) wouldn't make sense. You can't get the cosine of a function. What we can do is express fn as a composition of functions. fn = ceiling . negate . tan . cos . max 50 Excellent! Many times, a point free style is more readable and concise, because it makes you think about functions and what kind of functions composing them results in instead of thinking about data and how it's shuffled around. You can take simple functions and use composition as glue to form more complex functions. However, many times, writing a function in point free style can be less readable if a function is too complex. That's why making long chains of function composition is discouraged, although I plead guilty of sometimes being too composition-happy. The preferred style is to use let bindings to give labels to intermediary results or split the problem into sub-problems and then put it together so that the function makes sense to someone reading it instead of just making a huge composition chain. In the section about maps and filters, we solved a problem of finding the sum of all odd squares that are smaller than 10,000. Here's what the solution looks like when put into a function. oddSquareSum :: Integer oddSquareSum = sum (takeWhile (<10000) (filter odd (map (^2) [1..]))) Being such a fan of function composition, I would have probably written that like this: oddSquareSum :: Integer oddSquareSum = sum . takeWhile (<10000) . filter odd . map (^2) $ [1..] However, if there was a chance of someone else reading that code, I would have written it like this: oddSquareSum :: Integer oddSquareSum = let oddSquares = filter odd $ map (^2) [1..] belowLimit = takeWhile (<10000) oddSquares in sum belowLimit It wouldn't win any code golf competition, but someone reading the function will probably find it easier to read than a composition chain.
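As one closing sketch (again my own example, not part of the original chapter, and assuming the chain function defined earlier is in scope; numLongChains' is just a hypothetical name), the numLongChains function from the lambdas section can also be squeezed into the composition-and-$ style discussed here; whether that reads better than the let-binding approach is, as said, a matter of taste.

numLongChains' :: Int
numLongChains' = length . filter ((>15) . length) . map chain $ [1..100]  -- same result as the earlier where/lambda versions

It behaves the same as the earlier versions, it just spells the whole pipeline out as one composition applied with $.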
http://learnyouahaskell.com/higher-order-functions/
Standard: Data Analysis & Probability Read the NCTM Data Analysis & Probability Standard: Instructional programs from prekindergarten through grade 12 should enable all students to: - Formulate questions that can be addressed with data and collect, organize, and display relevant data to answer them - Select and use appropriate statistical methods to analyze data - Develop and evaluate inferences and predictions that are based on data - Understand and apply basic concepts of probability Featured Topic: Probability of Dice Tosses Students need many opportunities to informally experience the outcomes of tossing one die and then two dice.   Students should play games, gather data, collate class data and analyze those results to develop a conceptual understanding of the probability of different outcomes. - See Data Analysis: One-Die Toss Activities for more information and activities on the probability of the outcomes of tossing one die. - See Data Analysis: Two-Dice Toss Activities for more information and activities on the probability of the outcomes of tossing two dice. The activities found on these Mathwire.com webpages support students as they answer questions about the world around them. Learning to organize and analyze data are important life skills that students will use in both their professional and personal lives. - Glyph Activities - See pictures of Student Halloween Glyphs. - See pictures of Student Turkey Glyphs. - See pictures of Student Winter Glyphs. Data Collection & Probability Activities - Data Analysis Investigations encourage students to collect real, meaningful data, organize that data and analyze the data to draw conclusions and explain what they have learned. These investigations encourage students to apply mathematical analysis to real-life data and/or applications in order to investigate problems or issues. - See Data Analysis 2 for more suggested data collection activities. - See Data Analysis: Two-Dice Toss Activities for activities that develop student understanding of the probability of tossing two dice. This page includes many games that help students collect and analyze data. - See Data Analysis: One-Die Toss Activities for activities that develop student understanding of the probability of tossing one-die. This page includes many one-die games that help students collect and analyze data. - See Coin Flipping Activities for activities that develop student understanding of the probability of coin toss events. - See Winter Data Collection for seasonal data collection and graphing ideas. - Sampling Activities include background on this important statistical technique and the Cereal Toy Investigation that encourages students to use a simulation to analyze how many boxes of cereal they would have to buy to get all six different toys in a cereal toy promotion.   A Cereal Toy java applet is included so that students can collect data quickly to increase the sample size and compare the results to their initial small sample. - Online lesson plan for Will There Be a White Christmas This Year? encourages students to use statistical data to investigate weather patterns across the USA and construct a contour map of the probability of snow cover on Christmas Day. Additional Mathwire.com Resources - See more Data Analysis & Probability Math Templates: insert in sheet protectors for student use with dry erase markers or for teacher use as overhead transparencies. - See Problem Solving Resources for open-ended assessments that involve data analysis and probability. 
- See all Data Analysis & Probability Games. - See Literature Connections for data analysis and probability. - See more Data Analysis & Probability Links. - See all Enrichment Activities for data analysis and probability.
http://mathwire.com/standards/dataprob.html
Basics in Geology due to Plate Tectonic Movement Mountain building and valley formation are results of plate tectonic movement, of stretching and subduction of the crust on planet Earth, but where does the current textbook version of geology explain land formations, and where does it part from the new alternate views? The Earth's crust and its various geological formations are created differently in 3 primary zones: plate expansion, compression, and tears within plates or along their border fault lines due to shearing or rotational torque from the Earth's core, which applies a force to the underside of land and seafloor. Tectonic plate expansion or stretching is the principal factor in the creation of valleys and of the various resultant forms of water reservoirs, from marshes, small ponds and lakes to the Great Lakes and, on the seafloor, the deep trenches. The crust of the Earth, with its variable thickness, is stretched when certain dynamics fall into place. As force is delivered by core rotational torque along a similar latitude, the transfer of this force through frictional contact to the underside of the Earth's crust creates stress points determined by the mass per specific area of contact along the underside. With the distribution of mass uneven between the continents and the seafloor, the force applied to a lighter mass creates a greater acceleration, so the crust eventually detaches at random weak points or along established fracture lines. As the support features along the fault lines that are still connected break and crumble, now unable to support the weight of the plate, there is a corresponding drop in elevation along the break lines, the gap edges, in relation to the surrounding area, the size of the drop depending on the separation gap. The greater the opening breach or evacuation of substrata below the surface, the greater the depth due to gravitational acceleration, and the sharper the angle toward vertical of the valley cliffs, as they back fill quickly before equalizing. Once a portion of a plate separates or fractures, there is a key feature to consider. Does the breakage occur on one side, or does it encompass both sides along the entire perimeter, roughly 90 degrees to the direction of movement? Examining the single-sided perimeter break: the weak side develops along one edge of the fault or break line, and instead of the fractured plate separating along 2 somewhat parallel lines, the process creates a landscape with an abrupt increase of elevation or sheer cliff just off the break point. The result, if both sides are involved in the perimeter break, is the creation of a valley floor or a potential path for a river or stream. When the elevation of the land rises again beyond the depression of free flow, the water pools and a lake forms. If we examine this process, let's provide a more detailed description of what occurs. As support on all sides is lost due to tectonic movement involving the stretching of a plate, the future valley floor, the fractured plate, drops due to gravity and the evacuation of mass as the plate stretches below the drop point. When a plate is stretched, a differential of force applied to points on the underside of the crust, along with thinning and pocketing of the support structure, exploits weak points between distinct lines of stress in the crust created by density differences and fracturing; as the plate falls, this causes a piston action, a downward thrust against the adjacent land mass within the collapsing gap.
In order for the area under compression to equalize the applied force, that force is transferred to the surrounding edges, which are locally forced upwards as soft crust expansion is released along the edges. Once the edges achieve equalization and solidification or stabilization eventually takes place, the valley takes form. Its general dimensions are determined as a function of the mass of the hardened plate or surface of the locally affected area in relation to the viscosity and temperature of the underlying magma or fluid surface matter, and the final depth reached is set by gravitational equilibrium relative to the lift of the cliffs along the border areas. It is this concept that geologists can use to predict tectonic movements or the land features of pliable surrounding land masses subject to expansion. The key is approximating the fracture points along the edges of the plates where they stretch, the velocity of separation, the thickness of the plate, and the texture of the plate mass, which can vary from hardened metamorphic rock to fragile sandstone. This is a start toward establishing a new basis for geological plate expansion. Where the land is hardened, the valley floor falls into the area vacated as the land stretched. Now you're contemplating: what force would cause a plate to stretch? The core of the Earth in its fluid form internally rotates due to attractions from gravitational and magnetic sources within the local area of the Milky Way galaxy. Within this fluid mass of the Earth's core, pockets of iron-nickel or heavy elements are attracted. The straight-line attraction is bent by core gravity; with momentum conserved, a pocket overshoots the optimum point of attraction and slows from frictional contact until attracted again. This motion is reinforced over time, causing a circular spin about the axis, and through drag these pockets transfer their momentum to the entire fluid mass of the Earth's core. Core rotation achieves a balanced rotational torque through the frictional coefficient between its outer surface and the underlying edge of the lower crust. It is this core rotational torque, balanced against the resistance of the mass of the crust, that maintains the present approximately 24 hour surface spin about Earth's axis as a process of equalization. The rotation period remains approximately 24 hours of established flow of time, as determined by mankind's records and standards, until factors outside of current theories change events. There is a secondary process that needs to be considered during the overall expansion of plates, a progression of events with points of give and take. This course of action within the plate forces a state of compression or expansion, oscillating between the extremes until equilibrium is achieved. The action translates into a variable force applied to the surface crust or the seafloor. It crumples as obstructed force builds up, then collapses or creates a diminishing reflected force that moves back and forth until the energy diffuses away from the pressure below, creating elevated areas of land. The resultant pressure points, dispersed about the surrounding plate, take 2 forms. First, when the crust is squeezed, the land reacts to relieve the pressure by creating bumps, forcing the land upwards into a series of rolling hills. Second, if the force pushes against a natural break line and the crust separates, the land falls parallel to the primary line of separation.
Cliffs form as magma quickly rushes in to fill the void created by the piston action; the upward surge adds to the pressure, and sheer cliffs rise sharply above the adjacent landscape. Elevation is a direct result of plate compression, which provides the pop, reinforced by a stable upward pressure from underlying layers. Where the pliability of the land mass decreases, the sharp cliffs morph into a slow rise. Where lateral forces are pulling and stretching the plate across a local geographic area and the process continues to the extreme, the supporting plate is stretched beyond support; the edges crumble and break, and the plate drops until upward pressure from the substrata supports the overall mass of the descending plate and equalization, a valley base, is established. If the land has dipped below the local average water table, streams, rivers, ponds and lakes take shape, dependent upon flow about and around obstructions, aided by gravity. In unique situations, where a large expanse of land is stretched in one direction and several parallel breaks occur, the geography of most rivers and major lakes initially lines up 90 degrees to the direction of the separation force. An example of this process can be seen in central New York State and surrounding areas with the creation of the Finger Lakes, Lake Champlain and parallel rivers like the Genesee, Hudson and Connecticut in the northeast region of the United States. When there is a soft pull, it tends to produce a series of vents or channels, which permeate the surface as pressurized magma exploits weak spots that open. Once pressure expands within a fissure, moving crust or magma away from a central point, it equalizes over time in the form of a tube. The same process has been observed on an extreme level with volcanic tubes. When these vents are followed by a sharp compression, the heated land that was in the vents pops upward, and the compression force is directly responsible for the elevation of the resulting pillars, as seen in Southeast Asia. In areas where the force stretching the tectonic plate tends to be diffused or pulled from many directions, the substrata develop weak points as the land expands faster than back flows can fill, creating many subsurface pockets. When structural surface support is lost, a group or series of depressions, shaped by the localized forces stretching the land surface, is left permeating the surface. Where the mean depth of the depressions falls several feet below the local water table, lakes and ponds emerge. The fracture point [reference: the central focus where an extreme force is applied to a localized plate along a weak point or line, where the connective support structure or crust either collapses or tears] is key in conjunction with the force of stretch: the depression elongates along the line of force stretching away from the central point or line of maximum stress, its intensity decreasing in proportion to the average force of separation about this fracture point, as compression or equalization causes a natural end to the breach. This is why lakes are deepest in their centers instead of having a somewhat uniform depth. So how do some current theories explain the creation of the Great Lakes? Most scientists back the idea that ice sheets of glacial origin gouged out the Great Lakes. Where numbers of supporters seem to validate a particular theory, this in turn creates an agreeable atmosphere among peers, but does that validate the theory?
Examining the overall creep of the ice sheet in the direction away from a static North Pole, how do you explain lakes created at its extremes? Superior to the west and Huron in the middle are stretched at a 90 degree angle to Michigan, which is the one that aligns with the southerly expansion of glaciers. Secondly, as the landscape is pulverized, any deviation of glacial flow that would excavate below the mean level of the surface would deliver an exponentially decreasing force with depth in the direction of flow. This is due to the shear within a solid fluid mass: as a function of distance below the mean low point of unobstructed flow, the ice slides across the frictional surface and translates significantly less force to carve and create a surface depression. So how can we consider the Great Lakes remnants of past glaciers, when the direction of flow does not match their elongation and their depth cannot be supported by the force a glacier can apply at depths below its free flow, except for Erie, which is the shallowest of the Great Lakes? What could account for the 1300 ft depression of Lake Superior below the mean elevation of the surrounding landscape? Where are the large moraine fields with debris left from an excavation of the magnitude needed to create the Great Lakes? Then there are the contradictions: "Lake Superior has its origins in the North American Mid-Continent Rift of 1.1 to 1.2 billion years ago, which produced a huge plume of hot mantle where the present lake sits. The crust tore apart, leaving an arc-shaped scar stretching from Kansas through Minnesota, then down to Michigan." Yet the single supercontinent Pangea existed 250 million years ago. None of these questions had been addressed until recently, when some of these same theories were discussed on the program Universe, a broadcast originally aired on the History Channel during early 2010. It was explained that Erie, Huron and Michigan were carved out of an ancient bowl and that the remaining 2 Great Lakes did not follow the accepted theory. Thus the adjustment: the deeper Ontario and Superior were created by an expanding rift, then gouged out by the glaciers of previous ice ages. This recent modification is closer to the truth, but it is not what occurred to create the Great Lakes. Weak points and rifts within the area of the North American plate where the lakes were created initially formed the ancient river beds, hidden by geological modification and the accumulation of sediments over time, as the crust ripped towards what presently is the southeast. The depressions in the landscape expanded due to the rotational torque off the eastern edge of North America, which pulls down and away from the continent towards the equator. So how is the new concept of rotational torque applied to the North American plate? It is the differential of force of core rotation on the underside of the crust, which loses efficiency as the driving force transferred decreases the farther a point on the surface of the globe moves away from the equator, measured on Earth as latitude. Even though core rotation drags the entire surface crust as one, it is the variable force dragging the land mass on the leading edges that initiates tears. This is what created the wide expanse of Hudson Bay in North America and, to a lesser degree, the Saint Lawrence Seaway. As the surface stretched, vast areas sank due to loss of connectivity, creating depressions in the crust as the surrounding land popped up.
Over time these geological formations expanded as the rip tore open and widened Hudson Bay, the Saint Lawrence River valley and the exit flows. For Erie it is the Niagara River and falls, for Ontario it is the Saint Lawrence, and in the mature area of the earliest Great Lakes the ancient river expanded and ripped into Lake Huron. During the last ice age the glaciers that formed only modified what was already there, smoothing and expanding some of the features already present in the surface depressions. That was the case for all of the Great Lakes: when the ice melted, the depressions filled, creating what is seen today. Currently, how does this theory correlate with the recent drops in water levels in the Great Lakes and the rise in height of the surrounding land? The rift that created the Great Lakes is again active and still separating at an increasing rate, so the depth of the lakes increases and the water level drops to fill in the newly created voids, unseen at the bottoms of the lakes. As connectivity is lost in some areas along the edges of the lakes, pressure is released and the surrounding land pops up. Common sense prevails: if levels drop suddenly, so does outflow, and the water does not simply seep into the surrounding surface; how could it, without an exit flow? But if the plate stretched and a new fissure opened, the void would be filled, hence the sudden drop in lake surface elevation. The Appearance of Sink Holes The recent increase in the frequency of sinkholes is in some cases being attributed to underwater erosion of the surface land, but does this explanation match what is observed at the scene of devastation? First, is there evidence of an underground streambed and, more important, signs of water flow? If you consider the scene, one must look at whether the shape of the depression is drawn out in the direction of water flow. Where did all the dirt and rubble get redeposited? None of these questions are or will be answered presently. Tectonic plate movements are presently on the increase, and various pockets of voids and compression are created as the areas about the surface equalize. The common irregular sinkhole forms as a void created when forces within the substrata evacuate mass at a critical point; this action undermines support at the surface of the Earth. At that point a pothole or sinkhole forms, but when this same event occurs in an area that was geologically active, the sinkhole takes on a more uniform appearance, and at the extreme a lake forms. Let's examine why a sinkhole may take on a uniform perimeter. Various points on Earth's heated surface, once exposed to its thin atmosphere, started to cool after the surface formed. There was strong venting of heat, as natural channels were needed to release the extreme pressures and temperatures through what was once a pliable surface. On the intense end, these channels once created discharged not only heat and pressure but magma, resulting in the formation of past and present day volcanoes. More often the venting created what seem to be seamless, circular, tube-like channels that pocketed the landscape. The heat was dissipated into the surface layers, allowing fusion between the channel walls and the surrounding land mass, thus a new plug. If there was a rise above the landscape due to a pressure bulge, erosion over time hid its presence. Let's consider the process as the Earth's crust formed. The heated gases, now trapped but lighter than the surrounding mass, rise through the column.
Once the crust hardens, percolation adapts to what is normal for the surface of the Earth. Thus large pockets of gas form below the plugs, creating various voids and pockets, as the crust above hardens and the gas escapes by seepage through the soil. Presently, with an increase in tectonic compression and expansion, the cohesive bonds holding the perimeter edges of the ancient plugs to the surrounding landscape break down, and once the supporting tendons are broken the plug falls back into the void. Thus mankind glimpses the process of equalization, where introduced pressure receives a bounce back and the compromise is a vast, smooth, circular hole of unusual depth that none of the present textbook explanations of geology fits. Tectonic Plate Compression Compression of the tectonic plates on Earth is a process that forces the leading edges of the plates into subduction zones as a path of least resistance. When subduction is not an option, the plate edges oppose each other, creating a zone of compression; at first, to relieve pressure, the land surface rumples with soft rises, and as intensity increases there is a pop upwards as hills are created. Where current theories part from what does happen is related to the Earth's core, which is driving tectonic movement, instead of the established idea of a slow continental creep that initiates from the Mid Atlantic Rift. The creep of plate expansion, and the subsequent compression and in some cases subduction along the leading edges, could not build mountain ranges faster than the process of erosion by weathering. Some valleys are created in a similarly spectacular fashion in areas of subduction and mountain building, as sectional portions of plates fracture, lose connectivity along their edges and fall great distances. So let's examine what happened at Yosemite National Park: were the park's spectacular landmarks formed by the passage of glaciers? In rare cases during mountain building, plate movement after a geological upheaval event starts to subside. The subducting plate faces a constant resistance during its advance, with mountain building not being a uniform process but a series of breaks, lifts or thrusts to relieve pressure. It is at the end of one of these Earth changes that the effects creating a spectacular valley become permanent. In the case of Yosemite, the plate being forced under the mountain range to the east allowed a thrust upward in the geographic area, raising the elevation within the general area as mass gathered beneath, built up, and released energy and mass upwards in the direction of least resistance. With subsiding plate movement beneath Yosemite, because the surface of the Earth is no longer locked at the Mid Atlantic Rift during an upheaval, rotation returns, reducing the rate of plate subduction and stretching. This is the primary reason for the activity of plate tectonic movement to subside. With decreasing compression moving from west to east, the result is a pull on the local plate as a ramp up of frictional force from the core accelerates the rotational spin of the Earth's crust until equalization balances forward motion and resistance. Along weak spots or fracture lines within the surface crust, core rotation stretches the fractured plate to where the back end, the westward edge, loses connectivity to the primary plate structure by ripping at the seam. The plate then snaps on the forward end due to mass unsupported on the backside.
The fractured plate falls, accelerating the pop of the surrounding land mass through compression of the magma and the kinetic energy imparted by the falling plate mass, moderated by the viscosity of the underlying magma, which depends on its heat and on the composition of the surrounding land masses. The result in the Yosemite area was sheer cliffs that hardened and solidified quickly. The valley floor then settled at an elevation that is a compromise between the mass of the valley plate and the upward pressure produced by the downward piston force of the surrounding cliffs acting on the valley floor. It is only because the tectonic plate movements were in the closing phase of movement and modification that this valley, with its unique features, became possible. If tectonic plate subduction had been constant, the valley would have fallen only slightly or, in most cases, a backfill would have occurred due to the active compression of the moving plates. The resulting collision with the gap would back fill, then bounce back against the forward-moving plate thrust, and that force, once absorbed, would cause rumpling of the plate at the weak points, creating hills cascading down and sloping away from the pressure. This would have been just another phase in the continuing process of mountain building and the creation of its normal foothills. So what part did glacier movement have in sculpting the landscape? Contrary to what is written in the geology textbooks, glaciers, erosion, and falling rock did not carve out this valley, but they did play a part in smoothing the cliff walls, rounding the peaks, and creating some shallow lakes with moraines. The Yosemite Valley, when created, had sharper land features and greater altitudes for the mountain tops. Glaciers and weathering have softened these features over time to what is seen today.

The Combination: The Creation of Niagara Falls

Examining another unique area of the country, let's consider what forces and resultant actions of the plates in the local area were responsible for the creation of Niagara Falls in western New York State. We have been given an alternate view in which the glaciers were not responsible for the creation of the elongated, bowl-shaped Great Lakes; thus they could not have carved the sheer drop-offs and gorge of the Niagara River, which drops several hundred feet in elevation between the lakes, a gorge formed in a tight geographical area counter to the surrounding terrain. Glacial movement would tend to smooth out features as opposed to creating sharp breaks between elevations; snow and ice fall as a blanket, applying a somewhat even pressure to the local landscape as they compress over time. Is there an alternate explanation? Yes. So what happened? Niagara Falls was created in its present form during the last major Earth upheaval by a complex set of encounters: a smaller plate obstruction, rotational torque, and the pop release at the base of the western edge of the Appalachian Mountain range, coupled with the plate expansion flow assaulting the North American East Coast as it emerged from the Mid Atlantic Rift to the east. Planetary rotational torque applied to the North American continent ripped open Lakes Erie and Ontario along fracture points, the flow lines of the ancient Saint Lawrence River, but their separation was established by an uplift in the crust, separating the two lakes and creating a termination for Lake Erie, the Niagara River.
The fracturing process continued as New York and the New England states were pulled to the southeast, and what was a narrow river channel ripped apart into a narrow gorge expanding up toward Lake Erie as the Earth changes subsided. The sheer drops were created as connective support in the narrow valley collapsed in sections, the gorge backing up the river until it reached its present position. The mounds of debris that fell to the established valley floor and were not carried away now reside at the foot of the falls. In the final concept, tears due to torque within tectonic plates, a model first introduced in ZetaTalk, is a new approach that will provide some answers to fundamental geological questions. So how do rips in the crust due to torque pull land masses away from the primary plate, creating great rift valleys, vast lakes, and access to the ocean or sea via large bays and gulfs? And can we examine plate tectonics where this process has occurred? Additional information related to this tectonic movement is available in the supplemental paper Continental Drift.

Let's address how a rip occurs on the surface due to rotational torque. You have been introduced to the concept that the rotation of the Earth's surface is driven by its internal core rotation, rather than by conservation of momentum inherited as the planet cooled and its radius was reduced. This is the driving force propelling the magma within the mantle beneath our crust. The efficiency of this core rotational force, as it translates to the crust, diminishes as a function of the increasing angle moving away from the Earth's equator toward its geographical pole. Secondarily, there are variable factors between the mantle and the crust, such as drag due to underside crevasses, viscosity, and the frictional coefficient between the driving core and the dependent surface shell, all of which dictate Earth's period of rotation. Where there is a fracture in the plate along its leading edge, an unequal force is applied to the crust that decreases as one moves toward the pole. It is this action that creates a rotational torque forcing surface areas to spin slowly clockwise toward the southeast in the northern hemisphere, and counterclockwise toward the equator in the southern hemisphere. It is along the established weak points that the breaks set the direction of the rip, expanding either from the north or from the south.

In our first example, we shall examine what forces were at play during the creation of Hudson Bay in Canada, with the differential in the force applied to the edge of the North American plate increasing as one moves north with the land masses in their present configuration. The initial rip occurred at a weak fracture line in the tectonic plate along its northern end and opened up as the force ripped toward the southeast. After many adjustments the fractured plate now supports the seabed of the bay; look at the similar contours of the west and east shores and how in some places they could fit together above the lower bay area, which mimics Lake Michigan. The lower bay area was in the early process of ripping and expansion, thus taking on the look of one of the Great Lakes to its present south, which went through a similar process. So what force causes the differential along continental edges? The surface land mass is held in place by a periodic lock, the Mid Atlantic Rift being attracted to a magnetic anomaly, while the Earth's core continues to rotate.
Weak points on the edges of continental land masses, permeated by fault lines, break. The efficiency of the force that drags the crust, and thus Earth's rotation about its axis, decreases as one approaches the poles. This creates a rotational torque, or pull toward the equator, to compensate as the stress within the Earth's surface crust tries to equalize. As the rip continued, the bay widened into the shape seen today.

Moving to the southwestern edge of North America: as the fault lines slid past one another, a break occurred as the seabed subducted and what is now Baja California was uplifted, hence the formation of the mountain chain presently running the length of Baja. With a fracture line to the present east of the chain, there was a reduction in forward motion, because the subduction of the lighter material that forced up the range left little translational force driving the plate to the east of the range. This area to the east of the Baja chain then underwent stretching as the land ahead was pulled away from the point of subduction; the fracture line ripped, support structures failed, and the land sank, opening a gap. This eventually separated Baja from Mexico as the support strata below the crust were evacuated in pockets, causing what had been land above sea level to fall and form the present sea floor of the Gulf of California. On the westward side of the rip, the downward piston action of the falling seabed plate sends a wall of magma in two directions. It forces up the advancing plate bordering the depression of the gulf, reinforcing mountain building along the current Baja peninsula. The continuing compression raises the surrounding land, and as the initial ricochet subsides, the westward side of Mexico slopes off from an elevated interior land mass back toward the Gulf of California, a process continuing today. This is the result of frictional force from core rotation continuing on the east side of the fault line where stretching has occurred. As subduction continues in a diffused state on the east side of the rip along the fault line, compression of similar land mass, coupled with a reduced flow of the lighter surface material of the crust, at first causes a rippling of the landscape with a slight increase in elevation. There is a bounce back of force, not enough to thrust up a mountain chain that obstructs its movement, but enough to pop foothills on the west edge of Mexico. This eventually leads to the mountain chain running north to south along Mexico as subduction returns.

In California, we can examine the process that created one of its primary features, the vast fertile plain between its mountain ranges. So how do we address the formation of the large fertile valley plain between two mountain ranges in the central part of California, where the wide expanse at both ends of the valley plain cannot be explained by a ripping motion of plate tectonics? To start, some parts of California were at one time part of the sea floor, lifted by the impact that created the present-day Pacific Ocean. The ocean plate forced out most of the heavier mass, leaving a lighter, thinner crust to support the ocean floor. When forced against the western edge of the North American plate, the seabed subducted. Compression occurred, and the path of least resistance for the surface mass was upward, causing the plates supporting the Sierra Nevada range to rise above the landscape.
With forward motion due to core rotation diverted into upward thrusts, a stretch of the surface crust began breaking along natural fault lines. The valley spread over a vast expanse. Once a shallow sea, the eons of decomposing sea life littering its bottom now account for the generally flat areas of fertile farmland, now dry surface land, in central California. So the original plate that spread as the foundation of the valley was once the sea floor. As the plate is subducted, the lighter crust descends at the barrier of the mountain range along the western coast while the heavier mass is uplifted by the subduction. The heavier layer floating on top expands westward and thickens. Eventually, as the light plate forms and expands, the overall pressure that once forced up the mountain range shifts westward and lifts the newly created light plate from the buildup of backflow of subducted mass. Pressure that built up at a point tends to equalize and retract away from the source of greatest resistance, the mountain range, in proportion to the backflow. As the plate lifts, flow returns to a foundation that has cooled; pressure builds and the mountain range again pops under great force. A vacuum ensues, sucking the lighter plate forward and breaking its backside support, while at the same time the central point of the plate also becomes a stress point. Thinned, it becomes a low point, initiating a random path in which a river may flow if rain exceeds what the land can absorb. The light plate drops again, forcing up the mountain range on the forward end and causing a lateral line of rumpled crust to be forced upward on the rear edge of the lighter plate; the valley floor rises and the flow is slowly subducted under the original mountain range. This tectonic split is what you see today: the Cascade Mountains to the north and the Sierra Nevada Mountains to the south along the eastern side. The valley comprises the Sacramento Valley to the north and the San Joaquin Valley to the south. The rumpled area is known as the Coastal Mountains.

This shallow sea was originally thrust upward during tectonic movements along the west coast, but how do we address the below-sea-level elevations in Death Valley? One has to consider that what happens on land also occurs under our deep oceans and shallow sea floors, with all their crevasses and trenches. When thrust upward as dry land as part of the tectonic movement process, the plate is exposed to air but its deepest parts are still intact; some geological parts of the localized plate may still fall below mean sea level. This is the case for Death Valley. This depression, the deepest point of dry land on the North American continent, is visible only because the water table is so low that it does not hide it. This is why the deepest points of dry land on Earth are in arid environments such as deserts; the alternative is the deepest points in lakes.

The Difference between an Abrupt Process of Mountain Building and an Earthquake

As Earth changes progress, there will be unexplained observations that do not fit the classic textbook definitions of land shifting resulting from an earthquake. The destruction will be unusually severe, although in some cases it will be downplayed to some degree depending on the motives of the few promoted as "Official Sources"; but this is not what will capture the public eye. Yes, there will be the usual rubble and destroyed buildings, but something will be amiss. Land will disappear, and what is left above sea level will be riddled with sinkholes.
Where established subduction zones exist, there will be new land formations jutting up or elevated out of the earth where no change existed before the quake. Scientists, baffled by their old theories, will not be able to explain how a process that is supposed to take place over eons is now responsible for changes on the surface of the Earth within minutes. This is what you can expect.

All Rights Reserved: © Copyright 2009, 2010. Some basic ideas in this paper were inspired by: http://www.zetatalk.com/poleshft/p106.htm
http://www.grantchronicles.com/astro40.htm
Internet media type: text/html
Uniform Type Identifier: public.html
Developed by: World Wide Web Consortium & WHATWG
Type of format: Markup language

HTML is written in the form of HTML elements consisting of tags enclosed in angle brackets (like <html>), within the web page content. HTML tags most commonly come in pairs like <h1> and </h1>, although some tags, known as empty elements, are unpaired, for example <img>. The first tag in a pair is the start tag, and the second tag is the end tag (they are also called opening tags and closing tags). In between these tags web designers can add text, tags, comments and other types of text-based content. The purpose of a web browser is to read HTML documents and compose them into visible or audible web pages. The browser does not display the HTML tags, but uses the tags to interpret the content of the page. Web browsers can also refer to Cascading Style Sheets (CSS) to define the appearance and layout of text and other material. The W3C, maintainer of both the HTML and the CSS standards, encourages the use of CSS over explicit presentational HTML markup.

In 1980, physicist Tim Berners-Lee, who was a contractor at CERN, proposed and prototyped ENQUIRE, a system for CERN researchers to use and share documents. In 1989, Berners-Lee wrote a memo proposing an Internet-based hypertext system. Berners-Lee specified HTML and wrote the browser and server software in the last part of 1990. In that year, Berners-Lee and CERN data systems engineer Robert Cailliau collaborated on a joint request for funding, but the project was not formally adopted by CERN. In his personal notes from 1990 he listed "some of the many areas in which hypertext is used" and put an encyclopedia first. The first publicly available description of HTML was a document called "HTML Tags", first mentioned on the Internet by Berners-Lee in late 1991. It describes 18 elements comprising the initial, relatively simple design of HTML. Except for the hyperlink tag, these were strongly influenced by SGMLguid, an in-house SGML-based documentation format at CERN. Eleven of these elements still exist in HTML 4.

HyperText Markup Language is a markup language that web browsers use to interpret and compose text, images and other material into visual or audible web pages. Default characteristics for every item of HTML markup are defined in the browser, and these characteristics can be altered or enhanced by the web page designer's additional use of CSS. Many of the text elements are found in the 1988 ISO technical report TR 9537 Techniques for using SGML, which in turn covers the features of early text formatting languages such as that used by the RUNOFF command developed in the early 1960s for the CTSS (Compatible Time-Sharing System) operating system: these formatting commands were derived from the commands used by typesetters to manually format documents. However, the SGML concept of generalized markup is based on elements (nested annotated ranges with attributes) rather than merely print effects, with also the separation of structure and processing; HTML has been progressively moved in this direction with CSS. Berners-Lee considered HTML to be an application of SGML. It was formally defined as such by the Internet Engineering Task Force (IETF) with the mid-1993 publication of the first proposal for an HTML specification: the "Hypertext Markup Language (HTML)" Internet-Draft by Berners-Lee and Dan Connolly, which included an SGML Document Type Definition to define the grammar.
The draft expired after six months, but was notable for its acknowledgment of the NCSA Mosaic browser's custom tag for embedding in-line images, reflecting the IETF's philosophy of basing standards on successful prototypes. Similarly, Dave Raggett's competing Internet-Draft, "HTML+ (Hypertext Markup Format)", from late 1993, suggested standardizing already-implemented features like tables and fill-out forms. After the HTML and HTML+ drafts expired in early 1994, the IETF created an HTML Working Group, which in 1995 completed "HTML 2.0", the first HTML specification intended to be treated as a standard against which future implementations should be based. Further development under the auspices of the IETF was stalled by competing interests. Since 1996, the HTML specifications have been maintained, with input from commercial software vendors, by the World Wide Web Consortium (W3C). However, in 2000, HTML also became an international standard (ISO/IEC 15445:2000). HTML 4.01 was published in late 1999, with further errata published through 2001. In 2004 development began on HTML5 in the Web Hypertext Application Technology Working Group (WHATWG), which became a joint deliverable with the W3C in 2008. XHTML is a separate language that began as a reformulation of HTML 4.01 using XML 1.0, and it continues to be developed.

HTML markup consists of several key components, including elements (and their attributes), character-based data types, character references and entity references. Another important component is the document type declaration, which triggers standards mode rendering. The following is an example of the classic Hello world program, a common test employed for comparing programming languages, scripting languages and markup languages. This example is made using 9 lines of code:

    <!DOCTYPE html>
    <html>
      <head>
        <title>Hello HTML</title>
      </head>
      <body>
        <p>Hello world!</p>
      </body>
    </html>

(The text between <html> and </html> describes the web page, and the text between <body> and </body> is the visible page content. The markup text <title>Hello HTML</title> defines the browser page title.) This Document Type Declaration is for HTML5. If the <!DOCTYPE html> declaration is not included, various browsers will revert to "quirks mode" for rendering.

HTML documents are composed entirely of HTML elements that, in their most general form, have three components: a pair of tags, a "start tag" and an "end tag"; some attributes within the start tag; and finally, any textual and graphical content between the start and end tags, perhaps including other nested elements. The HTML element is everything between and including the start and end tags. Each tag is enclosed in angle brackets. The general form of an HTML element is therefore: <tag attribute1="value1" attribute2="value2">content</tag>. Some HTML elements are defined as empty elements and take the form <tag attribute1="value1" attribute2="value2" >. Empty elements may enclose no content, for instance, the BR tag or the inline IMG tag. The name of an HTML element is the name used in the tags. Note that the end tag's name is preceded by a slash character, "/", and that in empty elements the end tag is neither required nor allowed. If attributes are not mentioned, default values are used in each case. Header of the HTML document: <head>...</head>.
Usually the title should be included in the head, for example:

    <head>
      <title>The Title</title>
    </head>

Headings: HTML headings are defined with the <h1> to <h6> tags:

    <h1>Heading1</h1>
    <h2>Heading2</h2>
    <h3>Heading3</h3>
    <h4>Heading4</h4>
    <h5>Heading5</h5>
    <h6>Heading6</h6>

Paragraphs:

    <p>Paragraph 1</p> <p>Paragraph 2</p>

Line breaks: <br />. The difference between <br /> and <p> is that 'br' breaks a line without altering the semantic structure of the page, whereas 'p' sections the page into paragraphs. Note also that 'br' is an empty element in that, while it may have attributes, it can take no content and it may not have an end tag.

    <p>This <br /> is a paragraph <br /> with <br /> line breaks</p>

Comments:

    <!-- This is a comment -->

Comments can help in the understanding of the markup and do not display in the webpage.

There are several types of markup elements used in HTML. Structural markup describes the purpose of text; for example, <h2>Golf</h2> establishes "Golf" as a second-level heading. Structural markup does not denote any specific rendering, but most web browsers have default styles for element formatting. Content may be further styled using Cascading Style Sheets (CSS). Presentational markup indicates the appearance of text regardless of its purpose; for example, <b>boldface</b> indicates that visual output devices should render "boldface" in bold text, but gives little indication what devices that are unable to do this (such as aural devices that read the text aloud) should do. In the case of both <b>bold</b> and <i>italic</i>, there are other elements that may have equivalent visual renderings but which are more semantic in nature, such as <strong>strong emphasis</strong> and <em>emphasised text</em> respectively. It is easier to see how an aural user agent should interpret the latter two elements. However, they are not equivalent to their presentational counterparts: it would be undesirable for a screen reader to emphasize the name of a book, for instance, but on a screen such a name would be italicized. Most presentational markup elements have become deprecated under the HTML 4.0 specification in favor of using CSS for styling. Hypertext markup makes parts of a document into links to other documents: an anchor element creates a hyperlink, and its href attribute sets the link's target URL. For example, the HTML markup <a href="http://www.google.com/">Wikipedia</a> will render the word "Wikipedia" as a hyperlink. To render an image as a hyperlink, an 'img' element is inserted as content into the 'a' element. Like 'br', 'img' is an empty element with attributes but no content or closing tag.

    <a href="http://example.org"><img src="image.gif" alt="descriptive text" width="50" height="50" border="0"></a>

Most of the attributes of an element are name-value pairs, separated by "=" and written within the start tag of an element after the element's name. The value may be enclosed in single or double quotes, although values consisting of certain characters can be left unquoted in HTML (but not XHTML). Leaving attribute values unquoted is considered unsafe. In contrast with name-value pair attributes, there are some attributes that affect the element simply by their presence in the start tag of the element, like the ismap attribute for the img element. There are several common attributes that may appear in many elements. The id attribute provides a document-wide unique identifier for an element. This is used to identify the element so that stylesheets can alter its presentational properties, and scripts may alter, animate or delete its contents or presentation. Appended to the URL of the page, it provides a globally unique identifier for the element, typically a sub-section of the page. The class attribute provides a way of classifying similar elements. This can be used for semantic or presentation purposes.
For example, an HTML document might semantically use the designation class="notation" to indicate that all elements with this class value are subordinate to the main text of the document. In presentation, such elements might be gathered together and presented as footnotes on a page instead of appearing in the place where they occur in the HTML source. Class attributes are used semantically in microformats. Multiple class values may be specified; for example class="notation important" puts the element into both the 'notation' and the 'important' classes. The style attribute assigns presentational properties to a particular element. It is considered better practice to use an element's id or class attributes to select the element from within a stylesheet, though sometimes this can be too cumbersome for a simple, specific, or ad hoc styling. The title attribute is used to attach a subtextual explanation to an element; in most browsers this attribute is displayed as a tooltip. The lang attribute identifies the natural language of the element's contents, which may be different from that of the rest of the document. For example, in an English-language document:

    <p>Oh well, <span lang="fr">c'est la vie</span>, as they say in France.</p>

The abbreviation element, abbr, can be used to demonstrate some of these attributes:

    <abbr id="anId" class="jargon" style="color:purple;" title="Hypertext Markup Language">HTML</abbr>

This example displays as HTML; in most browsers, pointing the cursor at the abbreviation should display the title text "Hypertext Markup Language."

As of version 4.0, HTML defines a set of 252 character entity references and a set of 1,114,050 numeric character references, both of which allow individual characters to be written via simple markup, rather than literally. A literal character and its markup counterpart are considered equivalent and are rendered identically. The ability to "escape" characters in this way allows for the characters < and & (when written as &lt; and &amp;, respectively) to be interpreted as character data, rather than markup. For example, a literal < normally indicates the start of a tag, and & normally indicates the start of a character entity reference or numeric character reference; writing it as &amp; allows & to be included in the content of an element or in the value of an attribute. The double-quote character ("), when not used to quote an attribute value, must also be escaped as &quot; when it appears within the attribute value itself. Equivalently, the single-quote character ('), when not used to quote an attribute value, must also be escaped as &#39; (not as &apos; except in XHTML documents) when it appears within the attribute value itself. If document authors overlook the need to escape such characters, some browsers can be very forgiving and try to use context to guess their intent. The result is still invalid markup, which makes the document less accessible to other browsers and to other user agents that may try to parse the document for search and indexing purposes, for example. Escaping also allows for characters that are not easily typed, or that are not available in the document's character encoding, to be represented within element and attribute content. For example, the acute-accented e (é), a character typically found only on Western European keyboards, can be written in any HTML document as the entity reference &eacute; or as the numeric references &#233; or &#xE9;, using characters that are available on all keyboards and are supported in all character encodings.
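To make these escaping rules concrete, here is a small illustrative fragment (the wording and the use of the abbr element are invented for this example; they are not drawn from the specification text above):

    <p>
      A literal less-than sign is written as &lt; and an ampersand as &amp;.
      A quotation mark inside a quoted attribute value can be escaped, as in
      <abbr title="the &quot;escaped&quot; form">this abbreviation</abbr>,
      and an accented character such as &eacute; can also be written
      numerically as &#233; or &#xE9;.
    </p>

A browser renders the references themselves, so the paragraph displays the characters <, &, é and the quoted title text rather than the markup used to produce them.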
Unicode character encodings such as UTF-8 are compatible with all modern browsers and allow direct access to almost all the characters of the world's writing systems. HTML defines several data types for element content, such as script data and stylesheet data, and a plethora of types for attribute values, including IDs, names, URIs, numbers, units of length, languages, media descriptors, colors, character encodings, dates and times, and so on. All of these data types are specializations of character data.

The original purpose of the doctype was to enable parsing and validation of HTML documents by SGML tools based on the Document Type Definition (DTD). The DTD to which the DOCTYPE refers contains a machine-readable grammar specifying the permitted and prohibited content for a document conforming to such a DTD. Browsers, on the other hand, do not implement HTML as an application of SGML and by consequence do not read the DTD. HTML5 does not define a DTD; therefore, in HTML5 the doctype declaration is simpler and shorter: <!DOCTYPE html>. An example of an HTML 4 doctype is <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd"> This declaration references the DTD for the 'strict' version of HTML 4.01. SGML-based validators read the DTD in order to properly parse the document and to perform validation. In modern browsers, a valid doctype activates standards mode as opposed to quirks mode. In addition, HTML 4.01 provides Transitional and Frameset DTDs, as explained below.

Semantic HTML is a way of writing HTML that emphasizes the meaning of the encoded information over its presentation (look). HTML has included semantic markup from its inception, but has also included presentational markup such as <center> tags. There are also the semantically neutral span and div tags. Since the late 1990s, when Cascading Style Sheets were beginning to work in most browsers, web authors have been encouraged to avoid the use of presentational HTML markup with a view to the separation of presentation and content. In a 2001 discussion of the Semantic Web, Tim Berners-Lee and others gave examples of ways in which intelligent software 'agents' may one day automatically crawl the web and find, filter and correlate previously unrelated, published facts for the benefit of human users. Such agents are not commonplace even now, but some of the ideas of Web 2.0, mashups and price comparison websites may be coming close. The main difference between these web application hybrids and Berners-Lee's semantic agents lies in the fact that the current aggregation and hybridization of information is usually designed in by web developers, who already know the web locations and the API semantics of the specific data they wish to mash, compare and combine. An important type of web agent that does crawl and read web pages automatically, without prior knowledge of what it might find, is the web crawler or search-engine spider. These software agents are dependent on the semantic clarity of the web pages they find, as they use various techniques and algorithms to read and index millions of web pages a day and provide web users with search facilities without which the World Wide Web's usefulness would be greatly reduced.
In order for search-engine spiders to be able to rate the significance of pieces of text they find in HTML documents, and also for those creating mashups and other hybrids as well as for more automated agents as they are developed, the semantic structures that exist in HTML need to be widely and uniformly applied to bring out the meaning of published text. Presentational markup tags are deprecated in current HTML and XHTML recommendations and are illegal in HTML5. Good semantic HTML also improves the accessibility of web documents (see also Web Content Accessibility Guidelines). For example, when a screen reader or audio browser can correctly ascertain the structure of a correctly marked-up document, it will not waste the visually impaired user's time by reading out repeated or irrelevant information.

The World Wide Web is composed primarily of HTML documents transmitted from web servers to web browsers using the Hypertext Transfer Protocol (HTTP). However, HTTP is used to serve images, sound, and other content, in addition to HTML. To allow the web browser to know how to handle each document it receives, other information is transmitted along with the document. This meta data usually includes the MIME type (e.g. text/html or application/xhtml+xml) and the character encoding (see Character encoding in HTML). In modern browsers, the MIME type that is sent with the HTML document may affect how the document is initially interpreted. A document sent with the XHTML MIME type is expected to be well-formed XML; syntax errors may cause the browser to fail to render it. The same document sent with the HTML MIME type might be displayed successfully, since some browsers are more lenient with HTML. The W3C recommendations state that XHTML 1.0 documents that follow guidelines set forth in the recommendation's Appendix C may be labeled with either MIME type. XHTML 1.1 also states that XHTML 1.1 documents should be labeled with either MIME type.

Most graphical email clients allow the use of a subset of HTML (often ill-defined) to provide formatting and semantic markup not available with plain text. This may include typographic information like coloured headings, emphasized and quoted text, inline images and diagrams. Many such clients include both a GUI editor for composing HTML e-mail messages and a rendering engine for displaying them. Use of HTML in e-mail is controversial because of compatibility issues, because it can help disguise phishing attacks, because of accessibility issues for blind or visually impaired people, because it can confuse spam filters and because the message size is larger than plain text.

The most common filename extension for files containing HTML is .html. A common abbreviation of this is .htm, which originated because some early operating systems and file systems, such as DOS with its FAT data structure, limited file extensions to three letters.

An HTML Application (HTA; file extension ".hta") is a Microsoft Windows application that uses HTML and Dynamic HTML in a browser to provide the application's graphical interface. A regular HTML file is confined to the security model of the web browser, communicating only with web servers and manipulating only webpage objects and site cookies. An HTA runs as a fully trusted application and therefore has more privileges, like creation/editing/removal of files and Windows Registry entries.
Because they operate outside the browser's security model, HTAs cannot be executed via HTTP, but must be downloaded (just like an EXE file) and executed from the local file system.

Since its inception, HTML and its associated protocols gained acceptance relatively quickly. However, no clear standards existed in the early years of the language. Though its creators originally conceived of HTML as a semantic language devoid of presentation details, practical uses pushed many presentational elements and attributes into the language, driven largely by the various browser vendors. The latest standards surrounding HTML reflect efforts to overcome the sometimes chaotic development of the language and to create a rational foundation for building both meaningful and well-presented documents. To return HTML to its role as a semantic language, the W3C has developed style languages such as CSS and XSL to shoulder the burden of presentation. In conjunction, the HTML specification has slowly reined in the presentational elements.

There are two axes differentiating the various variations of HTML as currently specified: SGML-based HTML versus XML-based HTML (referred to as XHTML) on one axis, and strict versus transitional (loose) versus frameset on the other axis. One difference in the latest HTML specifications lies in the distinction between the SGML-based specification and the XML-based specification. The XML-based specification is usually called XHTML to distinguish it clearly from the more traditional definition. However, the root element name continues to be 'html' even in the XHTML-specified HTML. The W3C intended XHTML 1.0 to be identical to HTML 4.01 except where limitations of XML over the more complex SGML require workarounds. Because XHTML and HTML are closely related, they are sometimes documented in parallel. In such circumstances, some authors conflate the two names as (X)HTML or X(HTML). Like HTML 4.01, XHTML 1.0 has three sub-specifications: strict, transitional and frameset. Aside from the different opening declarations for a document, the differences between an HTML 4.01 and XHTML 1.0 document, in each of the corresponding DTDs, are largely syntactic. The underlying syntax of HTML allows many shortcuts that XHTML does not, such as elements with optional opening or closing tags, and even EMPTY elements which must not have an end tag. By contrast, XHTML requires all elements to have an opening tag and a closing tag. XHTML, however, also introduces a new shortcut: an XHTML tag may be opened and closed within the same tag, by including a slash before the end of the tag like this: <br/>. The introduction of this shorthand, which is not used in the SGML declaration for HTML 4.01, may confuse earlier software unfamiliar with this new convention. A fix for this is to include a space before closing the tag, as such: <br />.

To understand the subtle differences between HTML and XHTML, consider the transformation of a valid and well-formed XHTML 1.0 document that adheres to Appendix C (see below) into a valid HTML 4.01 document. Making this translation requires the following steps:

- Specify the language of an element with a lang attribute rather than the XHTML xml:lang attribute, which uses XML's built-in language-defining functionality.
- Remove the XML namespace (xmlns=URI); HTML has no facilities for namespaces.
- If present, remove the XML declaration (typically <?xml version="1.0" encoding="utf-8"?>).
- Ensure that the document's MIME type is set to text/html; for both HTML and XHTML, this comes from the HTTP Content-Type header sent by the server.

Those are the main changes necessary to translate a document from XHTML 1.0 to HTML 4.01.
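As a rough sketch of what such a translation involves (the fragment below is invented purely for illustration and is not an official example from either specification), the same abbreviated document might look like this in the two syntaxes:

    <!-- XHTML 1.0 style: XML namespace, xml:lang plus lang, explicit empty-element syntax -->
    <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
      <body>
        <p>First line<br />Second line</p>
      </body>
    </html>

    <!-- HTML 4.01 style: no namespace, lang only, bare empty element -->
    <html lang="en">
      <body>
        <p>First line<br>Second line</p>
      </body>
    </html>

Served with the text/html MIME type and written with the extra space in <br />, the first form also follows the compatibility conventions discussed below.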
To translate from HTML to XHTML would also require the addition of any omitted opening or closing tags. Whether coding in HTML or XHTML, it may just be best to always include the optional tags within an HTML document rather than remembering which tags can be omitted. A well-formed XHTML document adheres to all the syntax requirements of XML. A valid document adheres to the content specification for XHTML, which describes the document structure.

The W3C recommends several conventions to ensure an easy migration between HTML and XHTML (see HTML Compatibility Guidelines). The following steps can be applied to XHTML 1.0 documents only:

- Include both xml:lang and lang attributes on any elements assigning language.
- Include an extra space in empty-element tags: for example, <br /> instead of <br>.

By carefully following the W3C's compatibility guidelines, a user agent should be able to interpret the document equally as HTML or XHTML. For documents that are XHTML 1.0 and have been made compatible in this way, the W3C permits them to be served either as HTML (with a text/html MIME type), or as XHTML (with an application/xml MIME type). When delivered as XHTML, browsers should use an XML parser, which adheres strictly to the XML specifications for parsing the document's contents.

HTML 4 defined three different versions of the language: Strict, Transitional (once called Loose) and Frameset. The Strict version is intended for new documents and is considered best practice, while the Transitional and Frameset versions were developed to make it easier to transition documents that conformed to older HTML specifications, or didn't conform to any specification, to a version of HTML 4. The Transitional and Frameset versions allow for presentational markup, which is omitted in the Strict version. Instead, cascading style sheets are encouraged to improve the presentation of HTML documents. Because XHTML 1 only defines an XML syntax for the language defined by HTML 4, the same differences apply to XHTML 1 as well. The Transitional version allows the following parts of the vocabulary, which are not included in the Strict version:

- Presentational elements: underline (u) (deprecated; it can confuse a visitor with a hyperlink), and center, font and basefont (all deprecated; use CSS instead).
- Presentational attributes: the background and bgcolor attributes for the body element (a required element according to the W3C); the align attribute on form, paragraph (p) and heading elements; the align, noshade, size and width attributes on hr; the align attribute on img and object elements (the object element is noted as only supported in Internet Explorer among the major browsers); the align and bgcolor attributes on table; the bgcolor attribute on table rows and cells; the clear attribute (obsolete) on br; and the type, compact and start attributes on list elements (all deprecated; use CSS instead).
- Additional elements: the menu list and the dir list (deprecated; no substitute, though an unordered list is recommended); isindex (deprecated; the element requires server-side support and is typically added to documents server-side, and input elements can be used as a substitute); and applet (deprecated; use the object element instead).
- The language attribute (obsolete) on the script element (redundant with the type attribute).
- The target attribute (deprecated) on a, client-side image-map (map), link, form and base elements.

The Frameset version includes everything in the Transitional version, as well as the frameset element (used instead of body) and the frame element. In addition to the above transitional differences, the frameset specifications (whether XHTML 1.0 or HTML 4.01) specify a different content model, with frameset replacing body and containing either frame elements or, optionally, noframes with a body.

As this list demonstrates, the loose versions of the specification are maintained for legacy support. However, contrary to popular misconceptions, the move to XHTML does not imply a removal of this legacy support. Rather, the X in XML stands for extensible, and the W3C is modularizing the entire specification and opening it up to independent extensions. The primary achievement in the move from XHTML 1.0 to XHTML 1.1 is the modularization of the entire specification. The strict version of HTML is deployed in XHTML 1.1 through a set of modular extensions to the base XHTML 1.1 specification. Likewise, someone looking for the loose (transitional) or frameset specifications will find similar extended XHTML 1.1 support (much of it is contained in the legacy or frame modules). The modularization also allows for separate features to develop on their own timetable. So, for example, XHTML 1.1 will allow quicker migration to emerging XML standards such as MathML (a presentational and semantic math language based on XML) and XForms, a new, highly advanced web-form technology to replace the existing HTML forms.

In summary, the HTML 4.01 specification primarily reined in all the various HTML implementations into a single, clearly written specification based on SGML. XHTML 1.0 ported this specification, as is, to the new XML-defined specification. Next, XHTML 1.1 takes advantage of the extensible nature of XML and modularizes the whole specification. XHTML 2.0 was intended to be the first step in adding new features to the specification in a standards-body-based approach.

The WHATWG considers its work a living standard of HTML, reflecting the state of the art in major browser implementations by Apple (Safari), Google (Chrome), Microsoft (IE), Mozilla (Firefox), Opera (Opera), and others. HTML5 is specified by the HTML Working Group of the W3C following the W3C process. As of 2013, both specifications are similar and mostly derived from each other; that is, the work on HTML5 started with an older WHATWG draft, and the WHATWG living standard was later based on HTML5 drafts in 2011.

HTML lacks some of the features found in earlier hypertext systems, such as typed links, source tracking, fat links and others. Even some hypertext features that were in early versions of HTML have been ignored by most popular web browsers until recently, such as the link element and in-browser Web page editing. There are some WYSIWYG editors (What You See Is What You Get), in which the user lays out everything as it is to appear in the HTML document using a graphical user interface (GUI), often similar to word processors. The editor renders the document rather than showing the code, so authors do not need extensive knowledge of HTML. The WYSIWYG editing model has been criticized, primarily because of the low quality of the generated code; there are voices advocating a change to the WYSIWYM model (What You See Is What You Mean).
WYSIWYG editors remain a controversial topic because of their perceived flaws.
A famous urban legend states that a penny dropped from the top of the Empire State Building will punch a hole in the sidewalk below. Given the height of the building and the hardness of the penny, that seems like a reasonable possibility. Whether it's true or not is a matter that can be determined scientifically. Before we do that, though, let's get some background.

Falling rocks can be dangerous and, the farther they fall, the more dangerous they become. Falling raindrops, snowflakes, and leaves, however, are harmless no matter how far they fall. The distinction between those two possibilities has nothing to do with gravity, which causes all falling objects to accelerate downward at the same rate. The difference is entirely due to air resistance. Air resistance, technically known as drag, is the downwind force an object experiences as air moves past it. Whenever an object moves through the air, the two invariably push on one another and they exchange momentum. The object acts to drag the air along with it and the air acts to drag the object along with it, action and reaction. Those two aerodynamic forces affect the motions of the object and air, and are what distinguish falling snowflakes from falling rocks.

Two types of drag force affect falling objects: viscous drag and pressure drag. Viscous drag is the friction-like effect of having the air rub across the surface of the object. Though important to smoke and dust particles in the air, viscous drag is too weak to affect larger objects significantly. In contrast, pressure drag strongly affects most large objects moving through the air. It occurs when the airflow traveling around the object breaks away from the object's surface before reaching the back of the object. That separated airflow leaves a turbulent wake behind the object, a pocket of air that the object is effectively dragging along with it. The wider this turbulent wake, the more air the object is dragging and the more severe the pressure drag force.

The airflow separation occurs as the airflow is attempting to travel from the sides of the object to the back of the object. At the sides, the pressure in the airflow is especially low as it bends to arc around the sides. Bernoulli's equation is frequently invoked to help explain the low air pressure near the sides of the object. As this low-pressure air continues toward the back of the object, where the pressure is much greater, the airflow is moving into rising pressure and is pushed backward. It is decelerating. Because of inertia, the airflow could be expected to reach the back of the object anyway. However, the air nearest the object's surface, the boundary layer air, rubs on that surface and slows down. This boundary layer doesn't quite make it to the back of the object. Instead, it stops moving and consequently forms a wedge that shaves much of the airflow off of the back of the object. A turbulent wake forms and the object begins to drag that wake along with it. The airflow and object are then pushing on one another with the forces of pressure drag. Those pressure drag forces depend on the amount of air in the wake and the speed at which the object is dragging the wake through the passing air. In general, the drag force on the object is proportional to the cross sectional area of its wake and the square of its speed through the air. The broader its wake and the faster it moves, the bigger the drag force it experiences. We're ready to drop the penny.
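Before we do, it helps to put that proportionality in its usual form (this is the standard quadratic-drag expression from aerodynamics, not something derived in the text above):

    F_{\mathrm{drag}} \approx \tfrac{1}{2}\,\rho\,C_{d}\,A\,v^{2}

Here \rho is the density of the air, A is roughly the cross-sectional area of the turbulent wake, v is the speed through the air, and C_{d} is a drag coefficient of order one for a blunt object.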
When we first release it at the top of the Empire State Building, it begins to accelerate downward at 9.8 meters per second squared, the acceleration due to gravity, and starts to move downward. If no other force appeared, the penny would move according to the equations of motion for constant downward acceleration, taught in most introductory physics classes. It would continue to accelerate downward at 9.8 meters per second squared, meaning that its downward velocity would increase steadily until the moment it hit the sidewalk. At that point, it would be traveling downward at approximately 209 mph (336 km/h) and it would do some damage to the sidewalk.

That analysis, however, ignores pressure drag. Once the penny is moving downward through the air, it experiences an upward pressure drag force that affects its motion. Instead of accelerating downward in response to its weight alone, the penny now accelerates in response to the sum of two forces: its downward weight and the upward drag force. The faster the penny descends through the air, the stronger the drag force becomes and the more that upward force cancels the penny's downward weight. At a certain downward velocity, the upward drag force on the penny exactly cancels the penny's weight and the penny no longer accelerates. Instead, it descends steadily at a constant velocity, its terminal velocity, no matter how much farther it drops. The penny's terminal velocity depends primarily on two things: its weight and the cross sectional area of its wake. A heavy object that leaves a narrow wake will have a large terminal velocity, while a light object that leaves a broad wake will have a small terminal velocity. Big rocks are in the first category; raindrops, snowflakes, and leaves are in the second. Where does a penny belong? It turns out that a penny is more like a leaf than a rock. The penny tumbles as it falls and produces a broad turbulent wake. For its weight, it drags an awful lot of air behind it. As a result, it reaches terminal velocity at only about 25 mph (40 km/h).

To prove that, I studied pennies fluttering about in a small vertical wind tunnel. Whether the penny descends through stationary air or the penny hovers in rising air, the physics is the same. Of course, it's much more convenient in the laboratory to observe the hovering penny interacting with rising air. Using a fan and plastic pipe, I created a rising stream of air and inserted a penny into that airflow. At low air speeds, the penny experienced too little upward drag force to cancel its weight. The penny therefore accelerated downward and dropped to the bottom of the wind tunnel. At high air speeds, the penny experienced such a strong upward drag force that it blew out of the wind tunnel. When the air speed was just right, the penny hovered in the wind tunnel. The air speed was then approximately 25 mph (40 km/h). That is the terminal velocity of a penny.

The penny tumbles in the rising air. It is aerodynamically unstable, meaning that it cannot maintain a fixed orientation in the passing airstream. Because the aerodynamic forces act mostly on the upstream side of the penny, they tend to twist that side of the penny downstream. Whichever side of the penny is upstream at one moment soon becomes the downstream side, and the penny tumbles. As a result of this tumbling, the penny disturbs a wide swath of air and leaves a broad turbulent wake. It experiences severe pressure drag and has a low terminal velocity.
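As a rough consistency check on that 25 mph figure (the numbers here are assumptions chosen for illustration: a penny mass of about 2.5 g, a face area of about 2.8 x 10^-4 m^2, a drag coefficient near 1 for a tumbling coin, and an air density of 1.2 kg/m^3), setting the penny's weight equal to the drag force gives

    m g = \tfrac{1}{2}\,\rho\,C_{d}\,A\,v_{t}^{2}
    \quad\Longrightarrow\quad
    v_{t} = \sqrt{\frac{2 m g}{\rho\,C_{d}\,A}}
          \approx \sqrt{\frac{2(0.0025)(9.8)}{(1.2)(1.0)(2.8\times 10^{-4})}}
          \approx 12\ \mathrm{m/s} \approx 27\ \mathrm{mph}

The agreement is only approximate, since a tumbling penny drags a wake somewhat broader than its own face, but it shows why the measured value comes out in the tens, not hundreds, of miles per hour.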
The penny is an example of an aerodynamically blunt object: one in which the low-pressure air arcing around its sides runs into the rapidly increasing pressure behind it and separates catastrophically to form a vast wake. The opposite possibility is an aerodynamically streamlined object: one in which the increasing pressure beyond the object's sides is so gradual that the airflow never separates and no turbulent wake forms. A penny isn't streamlined, but a ballpoint pen could be. Almost any ballpoint pen is less blunt than a penny, and some pens are approximately streamlined. Moreover, pens weigh more than pennies, and that fact alone favors a higher terminal velocity. With a larger downward force (weight) and a smaller upward force (drag), the pen accelerates to a much greater terminal velocity than the penny. If it is so streamlined that it leaves virtually no wake, like the aerofoil shapes typical of airplane components, it will have an extraordinarily large terminal velocity, perhaps several hundred miles per hour.

Some pens tumble, however, and that spoils their ability to slice through the air. To avoid tumbling, a pen must "weathervane": it must experience most of its aerodynamic forces on its downstream side, behind its center of mass. Arrows and small rockets have fletching or fins to ensure that they travel point first through the air. A ballpoint pen can achieve that same point-first flight if its shape and center of mass are properly arranged. Almost any ballpoint pen dropped into my wind tunnel plummeted to the bottom. I was unable to make the air rise fast enough to observe hovering behavior in those pens. Whether they would tend to tumble in the open air was difficult to determine because of the tunnel's narrowness. Nonetheless, it's clear that a heavy, streamlined, and properly weighted pen dropped from the Empire State Building would still be accelerating downward when it reached the sidewalk. Its speed would be close to 209 mph at that point and it would indeed damage the sidewalk.

As a final test of the penny's low terminal velocity, I built a radio-controlled penny dropper and floated it several hundred feet in the air with a helium-filled weather balloon. On command, the dropper released penny after penny and I tried to catch them as they fluttered to the ground. Alas, I never managed to catch one properly in my hands. It was a somewhat windy day and the ground at the local park was uneven, but that's hardly an excuse; I'm simply not good at catching things in my hands. Several of the pennies did bounce off my hands and one even bounced off my head. It was fun, and I was more in danger of twisting my ankle than of getting pierced by a penny. The pennies descended so slowly that they didn't hurt at all. Tourists below the Empire State Building have nothing to fear from falling pennies. Watch out, however, for some of the more streamlined objects that might make that descent.

If by smart meters you mean the devices that monitor power usage and possibly adjust power consumption periodically, then I don't see how they can affect health. Their communications with the smart grid are of no consequence to human health, and having the power adjusted on household devices is unlikely to be a health issue (unless they cut off your power during a blizzard or a deadly heat wave). The radiated power from all of these wireless communications devices is so small that we have yet to find mechanisms whereby they could cause significant or lasting injury to human tissue.
If there is any such mechanism, the effects are so weak that the risks associated with it are dwarfed by much more significant risks of wireless communication: the damage to traditional community, the decline of ordinary human interaction, and the surge in distracted driving.

The Japanese did stop the chain reactions in the Fukushima Daiichi reactors, even before the tsunami struck the plant. The problem that they're having now is not the continued fissioning of uranium, but rather the intense radioactivity of the uranium daughter nuclei that were created while the chain reactions were underway. Those radioactive fission fragments are spontaneously decaying now and there is nothing that can stop that natural decay. All they can do now is try to contain those radioactive nuclei, keep them from overheating, and wait for them to decay into stable pieces.

The uranium atom has the largest naturally occurring nucleus. It contains 92 protons, each of which is positively charged, and those 92 like charges repel one another ferociously. Although the nuclear force acts to bind protons together when they touch, the repulsion of 92 protons alone would be too much for the nuclear force—the protons would fly apart in almost no time. To dilute the electrostatic repulsion of those protons, each uranium nucleus contains a large number of uncharged neutrons. Like protons, neutrons experience the attractive nuclear force. But unlike protons, neutrons don't experience the repulsive electrostatic force. Two neutron-rich combinations of protons and neutrons form extremely long-lived uranium nuclei: uranium-235 (92 protons, 143 neutrons) and uranium-238 (92 protons, 146 neutrons). Each uranium nucleus attracts an entourage of 92 electrons to form a stable atom and, since the electrons are responsible for the chemistry of an atom, uranium-235 and uranium-238 are chemically indistinguishable.

When the thermal fission reactors of the Fukushima Daiichi plant were in operation, fission chain reactions were shattering the uranium-235 nuclei into fragments. Uranium-238 is more difficult to shatter and doesn't participate much in the reactor's operation. On occasion, however, a uranium-238 nucleus captures a neutron in the reactor and transforms sequentially into neptunium-239 and then plutonium-239. The presence of plutonium-239 in the used fuel rods is one of the problems following the accident.

The main problem, however, is that the shattered fission fragment nuclei in the used reactor fuel are overly neutron-rich, a feature inherited from the neutron-rich uranium-235 nuclei themselves. Midsize nuclei, such as iodine (with 53 protons), cesium (with 55 protons), and strontium (with 38 protons), don't need as many neutrons to dilute the repulsions between their protons. While fission of uranium-235 can produce daughter nuclei with 53 protons, 55 protons, or 38 protons, those fission-fragment versions of iodine, cesium, and strontium nuclei have too many neutrons and are therefore unstable—they undergo radioactive decay. Their eventual decay has nothing to do with chain reactions and it cannot be prevented. How quickly these radioactive fission fragment nuclei decay depends on exactly how many protons and neutrons they have. Three of the most common and dangerous nuclei present in the used fuel rods are iodine-131 (8-day half-life), cesium-137 (30-year half-life), and strontium-90 (29-year half-life). Plutonium-239 (24,200-year half-life) is also present in those rods.
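Since half-lives govern how quickly those fission fragments fade away, here is a minimal Python sketch of the standard exponential-decay relationship, using the half-lives quoted above. The chosen elapsed times are arbitrary illustrations.

```python
# Remaining fraction after time t for a nucleus with a given half-life:
#   fraction = 0.5 ** (t / half_life)

half_lives_years = {
    "iodine-131": 8 / 365.25,       # 8 days, expressed in years
    "cesium-137": 30.0,
    "strontium-90": 29.0,
    "plutonium-239": 24200.0,
}

for years in (1, 10, 100):
    print(f"After {years} year(s):")
    for nucleus, half_life in half_lives_years.items():
        remaining = 0.5 ** (years / half_life)
        print(f"  {nucleus}: {remaining:.3g} of the original amount remains")
```

The contrast is striking: the iodine-131 is essentially gone within a year, while the cesium, strontium, and plutonium linger for decades to millennia.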
When these radioactive nuclei are absorbed into the body and then undergo spontaneous radioactive decay, they damage molecules and therefore pose a cancer risk. Our bodies can't distinguish the radioactive versions of these chemical elements from the nonradioactive ones, so all we can do to minimize our risk is to avoid exposure to them or to encourage our bodies to excrete them by saturating our bodies with stable versions.

By asking me to "neglect possible discharges," you're asking me to neglect what actually happens. There will be a discharge, specifically a phenomenon known as "field emission". If you neglect that discharge, then yes, the sphere can in principle store an unlimited amount of charge. But en route to infinity, I will have had to ignore several other exotic discharges and then the formation of a black hole. What will really happen is a field emission discharge. The repulsion between like charges will eventually become so strong that those charges will push one another out of the metal and into the vacuum, so that charges will begin to stream outward from the metal sphere.

Another way to describe that growing repulsion between like charges involves fields. An electric charge is surrounded by a structure in space known as an electric field. An electric field exerts forces on electric charges, so one electric charge pushes on other electric charges by way of its electric field. As more and more like charges accumulate on the sphere, their electric fields overlap and add so that the overall electric field around the sphere becomes stronger and stronger. The charges on the sphere feel that electric field, but they are bound to the metal sphere by chemical forces and it takes energy to pluck one of them away from the metal. Eventually, the electric field becomes so strong that it can provide the energy needed to detach a charge from the metal surface. The work done by the field as it pushes the charge away from the sphere supplies the necessary energy and the charge leaves the sphere and heads out into the vacuum. The actual detachment process involves a quantum physics phenomenon known as tunneling, but that's another story.

The amount of charge the sphere can store before field emission begins depends on the radius of the sphere and on whether the charge is positive or negative. The smaller that radius, the faster the electric field increases and the sooner field emission starts. It's also easier to field-emit negative charges (as electrons) than it is to field-emit positive charges (as ions), so a given sphere will be able to hold more positive charge than negative charge.

Modern brushless DC motors are amazing devices that can handle torque reversals instantly. In fact, they can even generate electricity during those reversals! Instant reversals of direction, however, aren't physically possible (because of inertia) and aren't actually what your friend wants anyway. I'll say more about the distinction between torque reversals and direction reversals in a minute. In general, a motor has a spinning component called the rotor that is surrounded by a stationary component called the stator. The simplest brushless DC motor has a rotor that contains permanent magnets and a stator that consists of electromagnets. The magnetic poles on the stator and rotor can attract or repel one another, depending on whether they are like or opposite poles—like poles repel; opposite poles attract.
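As a rough illustration of the charged-sphere discussion above, the following Python sketch estimates how much charge an isolated conducting sphere could hold before its surface field reaches a typical field-emission scale. Both the 1-centimeter radius and the threshold field of about 3 billion volts per meter are assumed, representative values rather than figures from the original answer.

```python
import math

EPSILON_0 = 8.854e-12      # vacuum permittivity, F/m
K = 1 / (4 * math.pi * EPSILON_0)

def surface_field(charge, radius):
    """Electric field at the surface of an isolated conducting sphere (V/m)."""
    return K * charge / radius**2

def max_charge(radius, threshold_field=3e9):
    """Charge at which the surface field reaches the assumed emission threshold."""
    return threshold_field * radius**2 / K

radius = 0.01              # assumed 1 cm sphere
q_max = max_charge(radius)
print(f"Approximate charge before field emission: {q_max:.2e} coulombs")
print(f"Check: surface field at that charge is {surface_field(q_max, radius):.2e} V/m")
```

Because the surface field scales as 1/radius squared for a fixed charge, halving the radius cuts the storable charge by a factor of four, consistent with the observation that smaller spheres begin field-emitting sooner.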
Since the electronics powering the stator's electromagnets can choose which of the stator's poles are north and which are south, those electronics determine the forces acting on the rotor's poles and therefore the direction of torque on the rotor. To twist the rotor forward, the electronics make sure that the stator's poles are always acting to pull or push the rotor's poles in the forward direction so that the rotor experiences forward torque. To twist the rotor backward, the electronics reverse all those forces.

Just because you reverse the direction of torque on the rotor doesn't mean that the rotor will instantly reverse its direction of rotation. The rotor (along with the rider of the scooter) has inertia and it takes time for the rotor to slow to a stop and then pick up speed in the opposite direction. More specifically, a torque causes angular acceleration; it doesn't cause angular velocity. During that reversal process, the rotor is turning in one direction while it is being twisted in the other direction. The rotor is slowing down and it is losing energy, so where is that energy going? It's actually going into the electronics, which can use that electricity to recharge the batteries. The "motor" is acting as a "generator" during the slowing half of the reversal!

That brushless DC motors are actually motor/generators makes them fabulous for electric vehicles of all types. They consume electric power while they are making a vehicle speed up, but they generate electric power while they are slowing a vehicle down. That's the principle behind regenerative braking—the vehicle's kinetic energy is used to recharge the batteries during braking. With suitable electronics, your friend's electric scooter can take advantage of the elegant interplay between electric power and mechanical power that brushless DC motors make possible. Those motors can handle torque reversals easily and they can even save energy in the process. There are limits, however, to the suddenness of some of the processes because huge flows of energy necessitate large voltages and powers in the motor/generators and their electronics. The peak power and voltage ratings of all the devices come into play during the most abrupt and strenuous changes in the motion of the scooter. If your friend wants to be able to go from 0 to 60 or from 60 to 0 in the blink of an eye, the motor/generators and their electronics will have to handle big voltages and powers.

Although that sounds like a simple question, it has a complicated answer. Gravity does affect light, but it doesn't affect light's speed. In empty space, light is always observed to travel at "The Speed of Light." But that remark hides a remarkable result: although two different observers will agree on how fast light is traveling, they may disagree in their perceptions of space and time. When those observers are in motion relative to one another, they'll certainly disagree about the time and distance separating two events (say, two firecrackers exploding at separate locations). For modest relative velocities, their disagreement will be too small to notice. But as their relative motion increases, that disagreement will become substantial. That is one of the key insights of Einstein's special theory of relativity. But even when two observers are not moving relative to one another, gravity can cause them to disagree about the time and distance separating two events. When those observers are in different gravitational circumstances, they'll perceive space and time differently.
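To make the distinction between a torque reversal and a direction reversal in the motor discussion above concrete, here is a minimal Python sketch of a rotor that is spinning forward while a constant backward torque acts on it. All of the numbers (moment of inertia, initial speed, torque) are made-up illustrative values, and the energy figure is simply the rotor's kinetic energy, which suitable electronics could in principle return to the batteries.

```python
# Constant reversed torque on a spinning rotor: the torque sets the angular
# acceleration (alpha = torque / inertia), not the angular velocity itself.
inertia = 0.02          # assumed rotor + wheel moment of inertia, kg*m^2
omega = 150.0           # assumed initial angular speed, rad/s (spinning forward)
torque = -1.5           # reversed (backward) torque, N*m
dt = 0.01               # time step, s

kinetic_energy_start = 0.5 * inertia * omega**2
t = 0.0
while omega > 0:        # the rotor keeps turning forward while it decelerates
    omega += (torque / inertia) * dt
    t += dt

print(f"Time for the rotor to coast to a stop under reversed torque: {t:.2f} s")
print(f"Kinetic energy available to the electronics during that time: {kinetic_energy_start:.0f} J")
```

The torque flips instantly, but the rotation direction takes a finite time to reverse, and the kinetic energy shed during that interval is exactly what regenerative braking harvests.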
That effect is one of the key insights of Einstein's general theory of relativity. Here is a case in point: suppose two observers are in separate spacecraft, hovering motionless relative to the sun, and one observer is much closer to the sun than the other. The closer observer has a laser pointer that emits a green beam toward the farther observer. Both observers will see the light pass by and measure its speed. They'll agree that the light is traveling at "The Speed of Light". But they will not agree on the exact frequency of the light. The farther observer will see the light as slightly lower in frequency (redder) than the closer observer. Similarly, if the farther observer sends a laser pointer beam toward the closer observer, the closer observer will see the light as slightly higher in frequency (bluer) than the farther observer.

How can these two observers agree on the speed of the beams but disagree on their frequencies (and colors)? They perceive space and time differently! Time is actually passing more slowly for the closer observer than for the farther observer. If they look carefully at each other's watches, the farther observer will see the closer observer's watch running slow and the closer observer will see the farther observer's watch running fast. The closer observer is actually aging slightly more slowly than the farther observer. These effects are usually very subtle and difficult to measure, but they're real. The global positioning system relies on ultra-precise clocks that are carried around the earth in satellites. Those satellites move very fast relative to us and they are farther from the earth's center and its gravity than we are. Both differences affect how time passes for those satellites and the engineers who designed and operate the global positioning system have to make corrections for the space-time effects of special and general relativity.

Liquid water can evaporate to form gaseous water (i.e., steam) at any temperature, not just at its boiling temperature of 212 °F. The difference between normal evaporation and boiling is that, below water's boiling temperature, evaporation occurs primarily at the surface of the liquid water, whereas at or above water's boiling temperature, bubbles of pure steam become stable within the liquid and water can evaporate especially rapidly into those bubbles. So boiling is just a rapid form of evaporation.

What you are actually seeing when raindrops land on warm surfaces is tiny water droplets in the air, a mist of condensation. Those droplets form in a few steps. First, the surface warms a raindrop and speeds up its evaporation. Second, a small portion of warm, especially moist air rises upward from the evaporating raindrop. Third, that portion of warm moist air cools as it encounters air well above the warmed surface. The sudden drop in temperature causes the moist air to become supersaturated with moisture—it now contains more water vapor than it can retain at equilibrium. The excess moisture condenses to form tiny water droplets that you see as a mist. This effect is particularly noticeable when it's raining because the humidity in the air is already very near 100%. The extra humidity added when the warmed raindrops evaporate is able to remain gaseous only in warmed air. Once that air cools back to the ambient temperature, the moisture must condense back out of it, producing the mist.

Solid ice is less dense than liquid water, meaning that a liter of ice has less mass (and weighs less) than a liter of water.
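To give a sense of the size of the GPS corrections mentioned above, here is a minimal Python sketch that estimates the two competing relativistic rate shifts for a GPS satellite clock relative to a clock on the ground. It uses textbook approximations (weak-field gravitational time dilation and the low-speed limit of special relativity) and standard round numbers for the orbit, and it ignores the earth's rotation and the orbit's slight eccentricity.

```python
import math

GM = 3.986e14          # earth's gravitational parameter, m^3/s^2
c = 2.998e8            # speed of light, m/s
r_earth = 6.371e6      # mean radius of the earth, m
r_gps = 2.657e7        # GPS orbital radius (about 20,200 km altitude), m

# Gravitational effect: the higher satellite clock runs fast relative to the ground.
grav_rate = (GM / c**2) * (1 / r_earth - 1 / r_gps)

# Velocity effect: the moving satellite clock runs slow (low-speed approximation).
v_orbit = math.sqrt(GM / r_gps)
vel_rate = -v_orbit**2 / (2 * c**2)

seconds_per_day = 86400
print(f"Gravitational gain: {grav_rate * seconds_per_day * 1e6:+.1f} microseconds/day")
print(f"Velocity loss:      {vel_rate * seconds_per_day * 1e6:+.1f} microseconds/day")
print(f"Net drift:          {(grav_rate + vel_rate) * seconds_per_day * 1e6:+.1f} microseconds/day")
```

The net drift of a few tens of microseconds per day may sound tiny, but uncorrected it would translate into positioning errors of many kilometers within a day.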
Any object that is less dense than water will float at the surface of water, so ice floats. That lower-density objects float on water is a consequence of Archimedes' principle: when an object displaces a fluid, it experiences an upward buoyant force equal in amount to the weight of the displaced fluid. If you submerge a piece of ice completely in water, that piece of ice will experience an upward buoyant force that exceeds the ice's weight because the water it displaces weighs more than the ice itself. The ice then experiences two forces: its downward weight and the upward buoyant force from the water. Since the upward force is stronger than the downward force, the ice accelerates upward. It rises to the surface of the water, bobs up and down a couple of times, and then settles at equilibrium. At that equilibrium, the ice is displacing a mixture of water and air. Amazingly enough, that mixture weighs exactly as much as the ice itself, so the ice now experiences zero net force. That's why it's at equilibrium and why it can remain stationary. It has settled at just the right height to displace its weight in water and air.

As for why ice is less dense than water, that has to do with the crystal structure of solid ice and the more complicated structure of liquid water. Ice's crystal structure is unusually spacious and it gives the ice crystals their surprisingly low density. Water's structure is more compact and dense. This arrangement, with solid water less dense than liquid water, is almost unique in nature. Most solids are denser than their liquids, so they sink in their liquids.

The electric circuit that powers your lamp extends only as far as a nearby transformer. That transformer is located somewhere near your house, probably as a cylindrical object on a telephone pole down the street or as a green box on a side lawn a few houses away. A transformer conveys electric power from one electric circuit to another. It performs this feat using several electromagnetic effects associated with changing electric currents—changes present in the alternating current of our power grid. In this case, the transformer is moving power from a high-voltage neighborhood circuit to a low-voltage household circuit.

For safety, household electric power uses relatively low voltages, typically 120 volts in the US. But to deliver significant amounts of power at such low voltages, you need large currents. It's analogous to delivering hydraulic power at low pressures; low pressures are nice and safe, but you need large amounts of hydraulic fluid to carry much power. There is a problem, however, with sending low-voltage electric power long distances: it's inefficient because wires waste power as heat in proportion to the square of the electric current they carry. Using our analogy again, sending hydraulic power long distances as a large flow of hydraulic fluid at low pressure is wasteful; the fluid will rub against the pipes and waste power as heat. To send electric power long distances, you do better to use high voltages and small currents (think high pressure and small flows of hydraulic fluid). That requires being careful with the wires because high voltages are dangerous, but it is exactly how electric power travels cross-country in the power grid: very high voltages on transmission lines that are safely out of reach. Finally, to move power from the long-distance high-voltage transmission wires to the short-distance low-voltage household wires, they use transformers.
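Because the wires' heating loss grows as the square of the current, raising the voltage pays off dramatically. The Python sketch below compares the resistive loss for delivering the same amount of power through the same wire at two different voltages; the power level, the voltages, and the wire resistance are all made-up illustrative numbers.

```python
def line_loss(power_delivered, voltage, wire_resistance):
    """Resistive heating in the wires: current = P / V, loss = I^2 * R."""
    current = power_delivered / voltage
    return current**2 * wire_resistance

power = 100_000.0        # 100 kW to deliver (illustrative)
resistance = 0.5         # total wire resistance in ohms (illustrative)

for volts in (120, 12_000):
    loss = line_loss(power, volts, resistance)
    print(f"At {volts:>6} volts: current = {power / volts:7.1f} A, "
          f"wire loss = {loss:,.0f} W")
```

Raising the voltage by a factor of 100 cuts the current by the same factor and therefore cuts the heating loss by a factor of 10,000, which is exactly why long-distance transmission lines run at very high voltages.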
The long-distance circuit that carries power to your neighborhood closes on one side of the transformer and the short-distance circuit that carries power to your lamp closes on the other side of the transformer. No electric charges pass between those two circuits; they are electrically insulated from one another inside the transformer. The electric charges that are flowing through your lamp go round and round that little local circuit, shuttling from the transformer to your lamp and back again.

The f-number of a lens measures the brightness of the image that lens casts onto the camera's image sensor. Smaller f-numbers produce brighter images, but they also yield smaller depths of focus. The f-number is actually the ratio of the lens's focal length to its effective diameter (the diameter of the light beam it collects and uses for its image). Your zoom lens has a focal length that can vary from 70 to 300 mm and a minimum f-number of 5.6. That means that when it is acting as a 300 mm telephoto lens, its effective light-gathering surface is about 53 mm in diameter (300 mm divided by 5.6 gives a diameter of 53 mm). If you examine the lens, I think that you'll find that the front optical element is about 53 mm in diameter; the lens is using that entire surface to collect light when it is acting as a 300 mm lens at f-5.6. But when you zoom to lower focal lengths (less extreme telephoto), the lens uses less of the light entering its front surface. Similarly, when you dial a higher f-number, you are closing a mechanical diaphragm that is strategically located inside the lens and causing the lens to use less light. It's easy for the lens to increase its f-number by throwing away light arriving near the edges of its front optical element, but the lens can't decrease its f-number below 5.6; it can't create additional light-gathering surface. Very low f-number lenses, particularly telephoto lenses with their long focal lengths, need very large diameter front optical elements. They tend to be big, expensive, and heavy.

Smaller f-numbers produce brighter images, but there is a cost to that brightness. With more light rays entering the lens and focusing onto the image sensor, the need for careful focusing becomes greater. The lower the f-number, the more different directions those rays travel and the harder it is to get them all to converge properly on the image sensor. At low f-numbers, only rays from a specific distance converge to sharp focus on the image sensor; rays from objects that are too close or too far from the lens don't form sharp images and appear blurry. If you want to take a photograph in which everything, near and far, is essentially in perfect focus, you need to use a large f-number. The lens will form a dim image and you'll need to take a relatively long exposure, but you'll get a uniformly sharp picture. But if you're taking a portrait of a person and you want to blur the background so that it doesn't detract from the person's face, you'll want a small f-number. The preferred portrait lenses are moderately telephoto—they allow you to back up enough that the person's face doesn't bulge out at you in the photograph—and they have very low f-numbers—their large front optical elements gather lots of light and yield a very shallow depth of focus.

Both the fork and the food are almost certainly safe. While the microwave oven is operating, electric current will flow through the fork and electric charge will accumulate momentarily on the tips of the fork's tines.
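Since the f-number in the lens discussion above is just the ratio of focal length to effective aperture diameter, the relationship is easy to tabulate. The Python sketch below does that for a 70-300 mm zoom at a couple of f-numbers; the f/11 setting is simply an extra illustrative value.

```python
def aperture_diameter(focal_length_mm, f_number):
    """Effective light-gathering diameter of a lens: focal length / f-number."""
    return focal_length_mm / f_number

for focal_length in (70, 135, 300):
    for f_number in (5.6, 11):
        d = aperture_diameter(focal_length, f_number)
        print(f"{focal_length:3d} mm lens at f/{f_number:<4}: "
              f"effective aperture about {d:.0f} mm across")
```

At 300 mm and f/5.6 the effective aperture comes out near 53 mm, matching the front-element size mentioned above, while stopping down to f/11 uses an aperture only about half that diameter.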
However, most forks are thick enough to handle the current without becoming noticeably hot and have tines that are dull enough to accumulate the charge without sparking. The end result is that the fork doesn't do much while the oven is operating; it reflects the microwaves and therefore alters the cooking pattern slightly, but you probably won't be able to tell. Once the cooking is over, the fork is just as it was before you put it in the oven and the food is basically just microwaved food. If a fork has particularly sharp tines, however, then you should be careful not to put it in the microwave oven. Sharp metal objects can and do spark in the microwave oven. Those sparks are probably more of a fire hazard than a food safety hazard—they can ignite the food or its container and start a fire.

Yes. My solution is to fill the well hole with objects that are dense enough and hydrodynamically streamlined enough to descend by gravity alone through the upward flow of oil. As they accumulate in the 3+ mile deep well hole, those objects will impede the flow until it becomes a trickle. Large steel balls (e.g., cannonballs) should do the trick. If they are large enough, they will have a downward terminal velocity, even as they move through the upward flowing oil. Because they descend, they will eventually accumulate at the bottom of the well hole and form a coarse "packed powder." That powder will use its enormous weight and its resistance to flow to stop the leak. Most importantly, building the powder doesn't require any seals or pressurization at the top of the well hole, so it should be easy to do.

The packed powder will exert downward drag forces on the upward flow of oil and gas, slowing its progress and decreasing its pressure faster than gravity alone. With 3+ miles of hole to fill, the dense steel objects should impede the flow severely. As the flow rate diminishes, the diameters of the metal spheres can be reduced until they are eventually only inches or even centimeters in diameter. The oil and gas will then be forced to flow through fine channels in the "powder," allowing viscous drag and pressure drag to extract all of the pressure potential energy from the flow and convert that energy into thermal energy. The flow will, in effect, be attempting to lift thousands of pounds of metal particles and it will fail. It will ooze through the "packed powder" at an insignificant rate.

Another way to think about my technique is that it gradually increases the average density of the fluid in the well hole until that column of fluid is so heavy that the high pressure at the bottom of the hole is unable to lift it. The liquid starts out as a light mixture of oil and gas, but it gradually transforms into a dense mixture of oil, gas, and iron. Viscous forces and drag forces effectively couple the material phases together to form a single fluid. Once that fluid is about 50% iron by volume, its average density will be so high (4 times the density of water) that it will stop flowing upward. If iron isn't dense enough (7.8 times water), you could use silver cannonballs (10.5 times water). Then you could say that "silver bullets" stopped the leak!

The failed "top kill" concept also intended to fill the well hole with a dense fluid: heavy mud. But it required pushing the oil and gas down the well hole to make room for the mud. That displacement process proved to be impossible because it required unobtainable pressures and pumping power.
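As a quick check of the average-density claim a few paragraphs above, here is a minimal Python sketch that mixes iron into an oil-and-gas fluid by volume. The density assumed for the oil-and-gas mixture (roughly 0.7 times water) is an illustrative guess; the iron and water densities are standard values.

```python
WATER = 1000.0                 # kg/m^3
IRON = 7800.0                  # kg/m^3, about 7.8 times water
OIL_GAS_MIX = 700.0            # kg/m^3, assumed light oil-and-gas mixture

def column_density(iron_volume_fraction):
    """Average density of the well fluid for a given volume fraction of iron."""
    return (iron_volume_fraction * IRON
            + (1 - iron_volume_fraction) * OIL_GAS_MIX)

for fraction in (0.0, 0.25, 0.5):
    rho = column_density(fraction)
    print(f"{fraction:>4.0%} iron by volume: {rho:7.0f} kg/m^3 "
          f"({rho / WATER:.1f} times the density of water)")
```

At 50% iron by volume the mixture comes out a little above 4 times the density of water, consistent with the rough figure quoted above.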
My approach requires no pressurization or pumping at all because it doesn't actively displace the oil and gas. Including deformable lead spheres in the mixture will further plug the upward flow. The lead will deform under the weight of metal overhead and will fill voids and narrow channels. Another refinement of this dense-fill concept would be to drop bead chains down the well hole. The first large ball in such a chain would be a "tug boat" that is capable of descending against the upward flow all by itself. It would be followed by progressively smaller balls that need to draft (travel in the wake of) the balls ahead of them in order to descend into the well. Held together by steel or Kevlar cord, those bead chains would accumulate at the bottom of the well and impede the flow more effectively than individual large balls. Especially streamlined (non-spherical) objects such as steel javelins, darts, rods, and rebar could also be dropped into the well at the start of the filling process. In fact, sturdy sacks filled with junk steel objects—nuts and bolts—might even work. Anything that descends into the well hole is good and smaller particles are better. The point is not to form a seal, since the enormous pressure that will develop beneath any seal will blow it upward. The point is always to form narrow channels through which the oil and gas will struggle to flow.

A video of this idea appears at: http://www.youtube.com/watch?v=8H29H_1vTHo and a manuscript detailing this idea appears on the Physics ArXiv: http://arxiv.org/abs/1006.0656. I'm trying to find a home for it in the scientific literature, but so far Applied Physics Letters, Physical Review E (which includes the physics of fluids), and PLoS (Public Library of Science) One have turned it down—they want articles with new physics in them, not articles applying old physics to new contexts, no matter how important those contexts. It's no wonder that the public views science as arcane and irrelevant.

As the candle burns, its wax melts into a liquid, that liquid "wicks" up the wick (like water flowing up into a paper towel), and then the extreme heat of the flame vaporizes the wax (it becomes gaseous wax). Once the wax is a gas, it burns in much the same way that natural gas burns — it reacts with oxygen in the air to become water and carbon dioxide. That reaction releases chemical potential energy as thermal energy. One important difference between a candle flame and a natural gas flame: whereas the flame of a well-adjusted natural gas burner emits very little light (a dim blue glow), the flame of a candle is quite visible. That's because the wax vapor in a candle flame isn't mixed well with air before it begins to burn. Instead of burning quickly and completely, as natural gas does in a burner that premixes the gas with air, the wax vapor in a candle flame burns gradually as it continues to mix with air. The partially burned wax forms tiny carbon particles. Those carbon particles are so hot that they glow yellow-hot — they emit thermal radiation. In other words, they are "incandescent". It's those glowing carbon particles that produce the candle's yellowish light. Eventually the carbon particles burn away to carbon dioxide.

Our eyes sense color by measuring the relative brightnesses of the red, green, and blue portions of the light spectrum. When all three portions of the spectrum are present in the proper amounts, we perceive white.
The color-sensing cells in our eyes are known as cone cells and they can detect only three different bands of color. One type of cone cell is sensitive to light in the red portion of the spectrum, the second type is sensitive to the green portion of the spectrum, and the third type is sensitive to the blue portion of the spectrum. Their sensitivities overlap somewhat, so light in the yellow and orange portions of the spectrum simultaneously affects both the red-sensitive cone cells and the green-sensitive ones. Our brains interpret color according to which of the three types of cone cells are being stimulated and to what extent. When both our red sensors and our green sensors are being stimulated, we perceive yellow or orange.

That scheme for sensing color is simple and elegant, and it allows us to appreciate many of the subtle color variations in our world. But it means that we can't distinguish between certain groups of lights. For example, we can't distinguish between (1) true yellow light and (2) a carefully adjusted mixture of true red plus true green. Both stimulate our red and green sensors just enough to make us perceive yellow. Those groups of lights look exactly the same to us. Similarly, we can't distinguish between (3) the full spectrum of sunlight and (4) a carefully adjusted mixture of true red, true green, and true blue. Those two groups stimulate all three types of cone cells and make us perceive white. They look identical to us.

That the primary colors of light are red, green, and blue is the result of our human physiology and the fact that our eyes divide the spectrum of light into those three color regions. If our eyes were different, the primary colors of light would be different, too. Many things in our technological world exploit mixtures of those three primary colors to make us see every possible color. Computer monitors, televisions, photographs, and color printing all make us see what they want us to see without actually reproducing the full light spectrum of the original. For example, if you used a light spectrum analyzer to study a flower and a photograph of that flower, you'd discover that their light spectra are different. Those spectra stimulate our eyes the same way, but the details of the spectra are different. We can't tell them apart.

Most four-tube fluorescent fixtures are effectively two separate two-tube units. They share the same ballast, but otherwise each pair of tubes is independent of the other. Removing one of those pairs from the fixture will save nearly half the energy and expense, and is a good idea if you don't need the extra illumination. The two tubes within a pair operate in series: current flowing as a discharge through the gas in one tube also flows through the gas in the other tube. That's why they both go out simultaneously. Only one of them is actually dead, but since the dead one has lost its ability to sustain a discharge, it can't pass any current on to its partner. Replacing the dead tube is usually enough to get the pair working again, at least for a while.

Leaving dead tubes in a fixture isn't the same as removing unnecessary tubes. Tubes often die slow, lingering deaths during which they sustain weak or flickering discharges that consume some energy without providing much light. Also, most fluorescent fixtures heat the electrodes at the ends of the tubes to start the discharge.
During startup, the ballast runs an electric current through each electrode (hence the two metal contacts at each end of the tube) and the heated electrodes introduce electric charges into the gas so the discharge can start. That heating current is only necessary during starting, but if the discharge never starts then the ballast may continue to heat the electrodes for days, weeks, or years. If you look at the ends of a tube that fails to start, you may see the electrodes glowing red hot. Because of that heater current, leaving a failed fluorescent tube in a fixture can be a waste of energy and money. Be careful removing those tubes from the fixture—although they produce no light, they can still be hot at their ends.

Polymers are simply giant molecules that were formed by sticking together a great many small molecules. The properties of a given polymer depend on which small molecules it contains and how those molecules were assembled. To help your students visualize this idea, I'd go right to two familiar models: snap-together beads ("pop beads") and spaghetti.

Snap-together beads are a perfect model for many polymers. As individual beads, you can pour them like a liquid and move your hand through them easily. But once you begin snapping them together into long chains, they develop new properties that weren't present in the beads themselves. For example, they get tangled together and don't flow so easily any more. That emergence of new properties is exactly what happens in many polymers. For example, ethylene is a simple gas molecule, but if you stick ethylene molecules together to form enormous chains, you get polyethylene (more specifically, high-density polyethylene, recycling number 2, milk-jug plastic). Ethylene molecules are called "monomers" and the giant chains that are made from them are called "polymers". Polyethylene retains some of the chemical properties of its monomer units, namely that it doesn't react with most other chemicals and almost nothing sticks to it. But polyethylene also has properties that the monomer units didn't have: polyethylene is a sturdy, flexible solid. You can stretch it without breaking it. That happens because you can make its polymer molecules slide across one another, but you can't untangle the tangles.

To get an idea of what it's like to work with molecules that can slide through each other but may not be able to untangle themselves, shift over to cooked and drained spaghetti. If you dice the spaghetti up into tiny pieces, it's like the monomers—nothing to tangle. You can pour the tiny pieces like a liquid. But try doing that with a bowl of long spaghetti noodles. They're so tangled up that they can't do much. In fact, if you let the water dry up to some extent, the stuff will become a sturdy, flexible solid, just like HDPE!

There is much more to say about polymers; for example, they're not all simple straight chains and some of them cross-link so that they can't untangle no matter what you do. But this should be a good start. Polymer molecules are everywhere, including in paper and hair. Paper is primarily cellulose, giant molecules built out of sugar molecules. Hair is primarily protein polymer, giant molecules built out of amino acid monomer units. They're both sturdy, stretchy, flexible solids and they're both softened by water—which acts as a molecular lubricant for the polymer molecules. Not all polymers are sturdy, or stretchy, or flexible, but a good many are.
The door of a microwave oven is carefully designed to reflect microwaves so that they can't escape from the oven. The mesh that you see in the door isn't plastic; it's metal. Metal surfaces reflect microwaves and, even though the mesh has holes in it to allow you to observe the food, it acts as a perfect mirror for the microwaves. Basically, the holes are so much smaller than the 12.2-cm wavelength of the 2.45-GHz microwaves that the microwaves cannot propagate through the holes. Electric currents flow through the metal mesh as the microwaves hit it and those currents re-radiate the microwaves in the reflected direction. Since the holes aren't big enough to disrupt that current flow, the mesh reflects the microwaves as effectively as a solid metal surface would.

As for how your cell phone and the cell tower can communicate for miles despite all the intervening stuff, it's actually a challenge. The microwaves from your phone and the tower are partly absorbed and partly reflected each time they encounter something in your environment, so they end up bouncing their way through an urban landscape. That's why cell towers have multiple antennas and extraordinarily sophisticated transmitting and receiving equipment. They are working like crazy to direct their microwaves at your phone as effectively as possible and to receive the microwaves from your phone even though those waves are very weak and arrive in bits and pieces due to all the scattering events they experience during their passage. Indoor cell phone reception is typically pretty poor unless the building has its own internal repeaters or microcells.

There are times when you don't get any reception because the microwaves from the cell phone and tower are almost completely absorbed or reflected. For example, if you were to stand in a metalized box, the microwaves from your cell phone would be trapped in the box and would not reach the cell tower. Similarly, the microwaves from the cell tower would not reach you. Moreover, the box doesn't have to be fully metalized; a metal mesh or a transparent conductor is enough to reflect the microwaves. Transparent conductors are materials that conduct relatively low-frequency currents but don't conduct currents at the higher frequencies associated with visible light. They're used in electronic displays (e.g., computer monitors and digital watches) and in energy-conserving low-E windows. I haven't experimented with cell phone reception near low-E windows, but I'm eager to give it a try. I suspect that a room entirely walled by low-E windows will have lousy cell phone reception.

Incandescent lightbulbs will be phased out beginning with 100-watt bulbs in 2012 and ending with 40-watt bulbs in 2014. The reason for this phase-out is simple: incandescent lightbulbs are horribly energy inefficient. Light is a form of energy, so you can compare the visible light energy emitted by any lamp to the energy that lamp consumes. According to that comparison, an incandescent lightbulb is roughly 5% efficient—a 100-watt incandescent bulb emits about 5 watts of visible light. In contrast, a fluorescent lamp is typically about 20% energy efficient—a 25-watt fluorescent lamp emits about 5 watts of visible light.

Another way to compare incandescent and fluorescent lamps is via their lumens per watt. The lumen is a standard unit of usable illumination and it incorporates factors such as how sensitive our eyes are to various colors of light.
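A quick check of the 12.2-cm figure quoted in the microwave-door answer above: the wavelength is just the speed of light divided by the frequency. This short Python sketch also compares the result with an assumed mesh-hole size of a couple of millimeters to show how large the mismatch is; the hole size is an illustrative guess, not a measured value.

```python
c = 2.998e8                  # speed of light, m/s
frequency = 2.45e9           # microwave oven frequency, Hz
wavelength = c / frequency   # meters

hole_size = 0.002            # assumed mesh hole size, about 2 mm

print(f"Wavelength at 2.45 GHz: {wavelength * 100:.1f} cm")
print(f"Wavelength / hole size: roughly {wavelength / hole_size:.0f} to 1")
```

With holes dozens of times smaller than the wavelength, the microwaves cannot squeeze through, which is why the mesh behaves like a solid metal mirror.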
If you divide a light source's light output in lumens by its power input in watts, you'll obtain its lumens per watt. For the incandescent lightbulb appearing at the left of the photograph, that calculation yields 16.9 lumens/watt. For the "long life" bulb at the center of the photograph, it gives only 15.3 lumens/watt. And for the color-improved bulb on the right of the photograph, the value is only 12.6 lumens/watt. Our grandchildren will look at this photograph of long-forgotten incandescent bulbs and be amazed that we could squander so much energy on lighting.

The fluorescent lamp in the other photograph is far more efficient. It produces more useful illumination than any of the three incandescent bulbs, yet it consumes just over a quarter as much power. Dividing its light output in lumens by its power consumption in watts yields 64.6 lumens/watt. It is 4 times as energy efficient as the best of the incandescent lightbulbs. Some fluorescent lamps are even more efficient than that. Another feature to compare is life expectancy. Even the so-called "long life" incandescent bulb has a predicted life of only 1500 hours, which is just 15% of the predicted life for the fluorescent lamp (10,000 hours). Although the fluorescent costs more, it quickly pays for itself through reduced energy use and less frequent replacement. You should recycle a fluorescent lamp because it does contain a tiny amount of mercury, but overall it's a much more environmentally friendly light source.

Adding salt to water won't make everything float, but it will work for an object that just barely sinks in pure water. A hard-boiled egg is the most famous example: the egg will sink in pure water, but float in concentrated salt water. To explain why that happens, I need to tell you about the two forces that act on the egg when it's in the water. First, the egg has its weight—it's being pulled downward by gravity. That weight force tends to make the egg sink. Second, the egg is being pushed upward by the water around it with a force known as "the buoyant force." The buoyant force tends to make the egg float. It's a battle between those two forces and the strongest one wins.

The buoyant force exists because the water that is now surrounding the egg used to be surrounding an egg-shaped blob of water and it was pushing up on that blob of water just hard enough to support the blob's weight. Now that the egg has replaced the egg-shaped blob of water, the surrounding water is still pushing up just as hard as before and that upward force on the egg is the buoyant force. Since the buoyant force on the egg is equal in amount to the weight of the water that used to be there, it can support the egg only if the egg weighs no more than the egg-shaped blob of water. If the egg is heavier than that blob of water, the buoyant force will be too weak to support it and the egg will sink.

It so happens that a hard-boiled egg weighs slightly more than an egg-shaped blob of pure water, so it sinks in pure water. But that egg weighs slightly less than an egg-shaped blob of very salty water. Adding salt to the water increases the water's weight significantly while having only a small effect on the water's volume. Salt water is heavier, cup for cup, than fresh water and it produces stronger buoyant forces. In general, any object that weighs more than the fluid it displaces sinks in that fluid. And any object that weighs less than the fluid it displaces floats.
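The egg story above comes down to comparing densities. The Python sketch below does that comparison with rough, assumed densities for a hard-boiled egg, fresh water, and strongly salted water; the numbers are illustrative, not measurements.

```python
densities = {                      # approximate fluid densities in kg/m^3 (assumed values)
    "fresh water": 1000.0,
    "saturated salt water": 1200.0,
}
egg_density = 1080.0               # a typical hard-boiled egg, roughly

for fluid, fluid_density in densities.items():
    verdict = "floats" if egg_density < fluid_density else "sinks"
    print(f"In {fluid} ({fluid_density:.0f} kg/m^3), "
          f"an egg of about {egg_density:.0f} kg/m^3 {verdict}.")
```

Because the egg's density sits between that of fresh water and that of heavily salted water, dissolving enough salt tips the balance from sinking to floating.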
You are another good example of this: you probably sink in fresh water, particularly after letting out all the air in your lungs. But you float nicely in extremely salty water. The woman in this photograph is floating like a cork in the ultra-salty water of the Dead Sea.

When you use a microwave oven to heat water in a glass or glazed container, the water will have difficulty boiling properly. That's because boiling is an accelerated version of evaporation in which water evaporates not only from the water's upper surface, but also through the surface of any water vapor bubbles the water happens to contain. I use the phrase "happens to contain" because that is where all the trouble lies. Below water's boiling temperature, bubbles of water vapor are unstable—they are quickly crushed by atmospheric pressure and vanish into the liquid. At or above water's boiling temperature, those water vapor bubbles are finally dense enough to withstand atmospheric pressure and they grow via evaporation, rise to the surface, and pop. At that point, I'd probably call the water vapor by its other name: steam.

But where do those steam or water vapor bubbles come from in the first place? Forming water vapor bubbles in the midst of liquid water, a process called nucleation, is surprisingly difficult and it typically happens at hot spots or non-wetted defects (places where the water doesn't completely coat the surface and there is trapped air). When you boil water in a metal pot on the stove, there are hot spots and defects galore and nucleating the bubbles is not a problem. When you boil water in a glass or glazed container using a microwave oven, however, there are no significant hot spots and few non-wetted defects. The water boils fitfully or not at all.

The "not at all" possibility can lead to disaster. Water that's being heated in a metal pot on the stove boils so vigorously that the stove is unable to heat it more than a tiny bit above its boiling temperature. All the heat that's flowing into the water is consumed by the process of transforming liquid water into gaseous water, so the water temperature doesn't rise. Water that's being heated in a glass container in a microwave oven boils so fitfully that you can heat it above its boiling temperature. It's simply not able to use up all the thermal energy it receives via the microwaves and its temperature keeps rising. The water becomes superheated.

Most of the time, there are enough defects around to keep the water boiling a bit and it superheats only a small amount. When you remove the container of water from the microwave oven and toss in some coffee powder or a teabag, thus dragging air bubbles below the surface, the superheated water boils into those air bubbles. A stream of bubbles suddenly appears on the surface of the water. Most people would assume that those bubbles had something to do with the powder or teabag, not with the water itself. Make no mistake, however, the water was responsible and those bubbles are mostly steam, not air.

Occasionally, though, the water fails to boil at all or stops boiling after it manages to wet the last of the defects on the glass or glazed surface. I've made this happen deliberately many times and it's simply not that hard to do. It can easily happen by accident. With no bubbles to assist evaporation, the water's only way to get rid of heat is via evaporation from its top surface.
If the microwave oven continues to add thermal energy to the water while it is having such difficulty getting rid of that energy, the water's temperature will skyrocket and it will superheat severely. Highly superheated water is explosive. If something causes nucleation in that water, a significant fraction of the water will flash to steam in the blink of an eye and blast the remaining liquid water everywhere. That boiling-hot water and steam are a major burn hazard and the blast can break the container or blow it across the room. I've heard from a good number of people who have been seriously hurt by exploding superheated water produced accidentally in microwave ovens. It's a hazard people should take seriously.

After that long introduction, it's time to answer your question. Yes, I believe that the microwave makers are responsible for advising people of this hazard. Moreover, they know that they are responsible for doing it. If you look at any modern microwave oven user manual, you will find a discussion of superheating or overheating. Look at your manual; I'll bet it's in there. But that discussion will almost certainly be buried in the middle of a long list of warnings. For example, in one manual, the discussion of overheated water appears as item 17 of 22, after such entries as "4. Install or locate this appliance only in accordance with the provided installation instructions" and "12. Do not immerse cord or plug in water". To be fair to the manufacturer, warning 17 is the longest of the bunch and it suggests mostly reasonable precautions (although I'm not so happy with recommendation 17a: "Do not overheat the liquid."). No duh.

I think the issue is this: most product warnings are provided not out of any sincere concern for the consumer, but out of fear of litigation. A manufacturer's goal when providing those warnings is therefore to be absolutely comprehensive so that they can point to a line in a user manual in court and claim to have fulfilled their responsibility. The number and order of the warnings makes no difference; they just have to be in there somewhere. So all those warnings you ignore in product literature aren't really about consumer safety, they're about product liability. You ignore them because everything now comes with a thousand of them, ranging from the reasonable to the ridiculous.

For my research, I ordered 99.999% pure sodium chloride (i.e., ultrapure table salt). It came with a 6-page Material Safety Data Sheet that identifies it as an "Xi Irritant", noting that it is "Irritating to eyes, respiratory system and skin" and recommending first aid measures that include: "After inhalation: supply fresh air. If required, provide artificial respiration. Keep patient warm. Seek immediate medical advice." So much for swimming in the ocean...

By design and by accident, our society has lost the ability to distinguish real risk from imaginary risk. We treat all risks as equal and spend way too much time worrying about the wrong ones. If you want to be safer around your cell phone, for example, you should worry more about driving with it in your hand than about the microwave radiation it emits. The current evidence is that your risk of injury or death due to a cell-phone related accident far outweighs your risk from cell-phone microwave exposure. Even if further research proves that cell phone microwave exposure is injurious, we should be acting according to our best current assessments of risk, not according to fears and beliefs.
That said, I'd like to see product literature rank their warnings according to risk and put the real risks in a separate place where they can't be overlooked or ignored. Put the real consumer safety stuff where the consumers will see it and put the product liability stuff somewhere else where the lawyers can find it. For a microwave oven, there are probably about half a dozen real risks that people should know about. Several of them are relatively obvious (e.g., don't heat sealed containers) and some are not obvious (e.g., liquids heated in the microwave can become superheated and explode). Maybe we'll get a handle on risk someday. In the meantime, inform your friends and children that they should be careful about heating liquids in the microwave, particularly in glass or glazed containers. Just knowing that superheating is possible would probably halve the number of burns and other injuries that result from superheating accidents.

When you run a microwave oven without any food inside, there is nothing to absorb the microwaves and they build up inside the cooking chamber. Eventually, something has to absorb them and that something is the oven's microwave source—its magnetron. The magnetron isn't good at handling excessive power that returns to it from the cooking chamber and it can be damaged as a result. In all my years of experimenting with microwave ovens, I've only killed a magnetron once. But then again, I haven't run a microwave oven for more than a minute or two without anything inside it. If the oven works again after cooling down, then you're probably OK. The oven may have thermal interlocks in its microwave source to prevent that source from overheating and becoming a fire hazard. If the oven fails to work after an hour of cooling off, then you're probably out of luck. The magnetron and/or its power supply are likely to be fried and in need of replacement.

High-heeled shoes can produce enormous pressures on a wooden floor and dent it permanently. To understand why that happens, let's start with a pair of flat-heeled shoes and consider the forces and pressures in that situation. When a woman stands on the floor, the floor must support her weight. Specifically, she isn't accelerating, so the net force on her must equal zero. That implies that the floor must exert an upward force on her that exactly cancels her downward weight. She is motionless and stays motionless because there is no overall force on her. Because the floor is pushing upward on her shoes, her shoes must be pushing downward on the floor. It's an example of the famous "action and reaction" principle known as Newton's third law: if you push on something, it pushes back equally hard in the opposite direction. Anyway, her shoes are pushing down hard on the floor.

Now for the pressure part of the story. Because she is wearing flats, her shoes are pushing against a large area of the floor and the pressure—the force per area—she produces on the floor is relatively small. For example, if she weighs 130 pounds (580 newtons) and her shoes have a contact area of 10 square inches (65 square centimeters), then the pressure she exerts on the floor is about 13 pounds-per-square-inch (9 newtons-per-square-centimeter or 90,000 pascals). That's a gentle pressure that won't permanently dent most woods. It might dent cork or balsa, but that's about it. But when she wears high heels, most of her weight is supported by a very small area of flooring.
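Here is the flat-shoe arithmetic from the paragraph above as a minimal Python sketch, using the same round numbers (130 pounds spread over 10 square inches); the unit conversions are standard.

```python
POUND = 4.448              # newtons per pound-force
SQUARE_INCH = 6.452e-4     # square meters per square inch

weight_lb = 130.0          # the woman's weight
contact_area_in2 = 10.0    # total contact area of a pair of flats

pressure_psi = weight_lb / contact_area_in2
pressure_pa = (weight_lb * POUND) / (contact_area_in2 * SQUARE_INCH)

print(f"Pressure under flats: {pressure_psi:.0f} pounds-per-square-inch "
      f"(about {pressure_pa:,.0f} pascals)")
```

The same division shows what happens next: shrinking the contact area by a factor of 100 raises the pressure by that same factor.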
If the heels are narrow spikes with a contact area of 0.1 square inches (0.65 square centimeters) and she puts all of her weight briefly on one of the heels, she may exert a pressure of 1300 pounds-per-square-inch (900 newtons-per-square-centimeter or 9 million pascals) on the floor. That's an enormous pressure that will permanently dent most wooden floors.

You can experiment with these ideas simply by supporting the weight of your right hand with the open palm of your left hand. If you lay your right fist on your left palm, you won't feel any discomfort in your left hand. The pressure on your left palm is very small. But if you instead point your right index finger into your left palm and use that finger to support the entire weight of your right hand, it won't feel so comfortable. If you shift all of the weight to your fingernail, it'll start to hurt your left palm. What you're doing is reducing the area of your left palm that is supporting your right hand and as that area gets smaller, the pressure on your left palm increases. Beyond a certain pressure, it feels uncomfortable. Long before your palm dents permanently, you'll decide to stop experimenting.

The cable would indeed lengthen when you pulled it. In fact, you would produce a wave of stretching motion that travels along the cable at the speed of sound in that cable. That's because you can't directly influence the cable beyond what you can touch. You can only pull on your end of the cable, causing it to accelerate and move, and let it then pull on the portion of cable adjacent to it. Each portion of cable responds to being pulled by accelerating, moving, and consequently pulling on the portion of cable adjacent to it. There will be a long series of actions—pulling, accelerating, moving, and pulling again—that propagates your influence along the cable. A wave will travel along the cable, a wave consisting of a local reduction in the cable's density. It's a stretching wave. In that respect, the wave is a type of sound wave—a density fluctuation that propagates through a medium.

How quickly the density wave travels along the cable depends on how stiff the cable is and on its average mass density. The stiffer the cable, the more strongly each portion can influence its neighboring portions and the faster the density wave will travel. The greater the cable's mass density, the more inertia it has and the slower it responds to pulls, so the density wave will travel slower. A cable made from a stiff, low-density material carries sound faster than one made from a soft, high-density material. A steel cable should carry your wave at about 6100 meters/second (3.8 miles/second). But a diamond cable would reach 12000 meters/second (7.5 miles/second) because of its extreme stiffness and a beryllium cable would approach 13000 meters/second (8.0 miles/second) because of its extremely low mass density.

Regardless of which material you choose, you're clearly not going to be able to send any signals faster than the speed of light. It would take a density wave more than 100,000 years to travel the 5-light-year length of your cable. And sadly, friction-like dissipation effects in the cable would turn the density wave's energy into thermal energy in a matter of seconds, so it would barely get started on its journey before vanishing into randomness.

While I'm sorry to hear that your dog isn't well, I doubt that electromagnetic fields are responsible for her infirmities.
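To check the "more than 100,000 years" remark in the cable answer above, here is a minimal Python sketch that compares how long a stretching wave traveling at the speed of sound in steel would take to cross a 5-light-year cable with how long light takes over the same distance.

```python
LIGHT_YEAR = 9.461e15        # meters
YEAR = 3.156e7               # seconds

cable_length = 5 * LIGHT_YEAR
sound_speed_steel = 6100.0   # m/s, the figure quoted above for a steel cable
light_speed = 2.998e8        # m/s

sound_time = cable_length / sound_speed_steel / YEAR
light_time = cable_length / light_speed / YEAR

print(f"Stretching wave at 6100 m/s: about {sound_time:,.0f} years")
print(f"Light over the same distance: about {light_time:.1f} years")
```

The wave would need a couple of hundred thousand years, comfortably "more than 100,000," while light crosses the same span in just 5 years.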
The fields from the telephone adapter are too weak to have any significant effect and 60-Hz electromagnetic fields don't appear to be dangerous even at considerably stronger levels. To begin with, plug-in power adapters are designed to keep their electromagnetic fields relatively well contained. They're engineered that way not because of safety concerns but because their overall energy efficiencies would diminish if they accidentally conveyed power to their surroundings. Keeping their fields inside keeps their energy inside, where it belongs. Moreover, any electric and magnetic fields emerging from an adapter probably don't propagate as waves and instead fall off exponentially with distance. As a result, it should be fairly difficult to detect electric or magnetic fields more than a few inches from the adapter.

Even if the adapter did project significant electric and magnetic fields all the way to where your dog sleeps, it's still unlikely that they would cause any harm. For years, researchers have been looking for a correlation between high-voltage electric power lines and a variety of human illnesses, notably childhood cancers such as leukemia. As far as I know, no such correlation has ever been demonstrated. In all likelihood, if there are any risks to being near 60-Hz electric or magnetic fields, those risks aren't large enough to be easily recognized.

In contrast to power adapters, cell phones deliberately emit electric and magnetic fields in order to communicate with distant receivers on cell phone towers. Those fields are woven together to form electromagnetic waves that propagate long distances and definitely don't vanish inches from a cell phone. Any electromagnetic hazard due to a power adapter pales in comparison to the same for cell phones. Furthermore, cell phones operate at much higher frequencies than the alternating current power line. A typical cell phone frequency is approximately 1 GHz (1,000,000,000 Hz), while ordinary alternating current electric power operates at 60 Hz (50 Hz in Europe). Higher frequencies carry more energy per quantum, or "photon," and are presumably more dangerous. But even though cell phones are held right against heads and radiate microwaves directly into brain tissue, they still don't appear to be significantly dangerous. As unfond as I am of cell phones, I can't condemn them because of any proven radiation hazard. Their biggest danger appears to be driving with them; I don't understand why they haven't been banned from the hands of drivers.

Lastly, there are no obvious physical mechanisms whereby weak to moderate electric and magnetic fields at 60 Hz would cause damage to human or canine tissue. We're essentially non-magnetic, so magnetic fields have almost no effect on us. And electric fields just push charges around in us, but that alone doesn't cause any obvious trouble. Research continues into the safety of electromagnetic fields at all frequencies, but this relatively low-frequency stuff (power lines and cell phones) doesn't seem to be unsafe.

Although yours isn't a physics question, it's one that's interesting to me and easy to answer. The person who sets up the website gets to choose the domain. That's all there is to it. As long as the complete domain name hasn't already been registered, you can pay a fee and register it. For example, I chose to register this website as www.howeverythingworks.org because I feel more like an organization (of one person) than a commercial enterprise.
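To put the "more energy per photon" point in the electromagnetic-field answer above into numbers, here is a minimal Python sketch using the standard relation E = h × f for 60-Hz power, a roughly 1-GHz cell-phone signal, and green visible light. Note that even the visible-light photon carries only a couple of electron volts, far below the energies of ionizing radiation.

```python
PLANCK = 6.626e-34            # Planck's constant, joule-seconds
ELECTRON_VOLT = 1.602e-19     # joules per electron volt

frequencies_hz = {
    "60 Hz power line": 60.0,
    "1 GHz cell phone": 1e9,
    "green visible light": 5.6e14,
}

for name, f in frequencies_hz.items():
    energy_ev = PLANCK * f / ELECTRON_VOLT
    print(f"{name:>20}: {energy_ev:.3g} eV per photon")
```

The cell-phone photon carries millions of times more energy than the 60-Hz one, yet it is still hundreds of thousands of times too feeble to break chemical bonds the way ultraviolet light or X-rays can.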
I could have registered it as www.howeverythingworks.in, but that would imply I'm in India and I'm not. The only exception that I know of is .edu, which is restricted to educational institutions. I would not be allowed to register this website as www.howeverythingworks.edu. Actually, I could have registered this website as www.howeverythingworks.com, but I would have had to purchase that domain name from someone else. It is registered to a cybersquatter—someone who registers a domain name in hopes of selling it at a profit to someone else. Cybersquatting was hugely popular during the internet bubble, when companies were paying vast amounts of money for particular domain names. But these days, who wants to pay thousands of dollars for a name? I'm totally happy to be www.howeverythingworks.org and I'll let someone else pay the big bucks to purchase www.howeverythingworks.com. In the meantime, that domain is just a link to advertising and an offer to sell the domain name. The glass window itself isn't important to the microwave oven's operation, but the metal grid associated with that window certainly is. The grid forms the sixth side of the metal box that traps the microwaves so they cook food effectively. In principle, you can remove all the glass and still cook food, but I think that would be a bad idea. The grid isn't very sturdy on its own and if it develops cuts or holes, it will allow microwaves to leak out of the oven. You want those microwaves to stay inside the box to cook the food and not to escape to cook you. Even if the oven door has multiple layers of glass, those layers are there for your protection. If you touch the outside of the metal grid while the oven is on or get close enough to it through the last layer of glass, you'll be able to absorb some microwave power and it'll probably hurt. That's because while the holes in the grid are too small to allow the microwaves to propagate through them and truly escape from the oven, they do allow an "evanescent wave" to exist just outside each hole in the grid. That evanescent wave dies off exponentially with distance beyond the hole, so it won't travel around the room. But you don't want to put your finger in it. For inexpensive microwave ovens, you're probably best off simply recycling the oven. I'm not happy about the modern everything-is-disposable state of appliances and equipment, but I can't say that it's cost effective to repair an oven that costs less than about $100. For more expensive microwave ovens, you can usually replace the window or the door. We have had a GE combination microwave and convection oven over our stove top for about 10 years and the door started to come apart about 18 months ago. I purchased a replacement microwave oven door over the web for $140 and installed it myself. It works beautifully. If you're not handy or are concerned about microwave leaks, you should probably have it replaced professionally. But you can look up the parts themselves online at a number of web sites and get an idea of what the cost will be. As long as you shut down the computer first, turning off the power strip is fine. Essentially all modern household computer devices are designed to shut themselves down gracefully when they lose electrical power and that's exactly what they do when you turn off the power strip. In fact, turning off the power strip is likely to save energy as well.
Many computer devices have two different "off" switches: one that stops them from doing their normal functions and one that actually cuts off all electrical power. Computers in particular don't really turn off until you reach around back and flip the real power switch on the computer's power supply. The same is true of television monitors and home theater equipment. In general, any device that has a remote control or that can wake itself up to respond to the press of a button or to some other piece of equipment is never truly off until you shut off its electrical power. Our homes are now filled with electronic gadgets that are always on, waiting for instructions. Keeping them powered up even at a low level consumes a small amount of electrical power and it adds up. Last I heard, this always-on behavior of our gadgets consumes something on the order of 1% of our electrical power. Whatever it is, it's too much. So by turning off your power strip and completely stopping the flow of power to your computer, your speakers, your monitor, etc., you are saving energy. You lose the convenience of being able to turn everything on from your couch with a remote, but who cares. Energy is too precious to waste for such nonessential conveniences. Unlike sound waves or ocean waves, radio waves do not travel in a material. Radio waves are a class of electromagnetic waves and consist of nothing more than electric and magnetic fields. Since they don't require any medium through which to travel, they can go right through empty space. That's why we're able to see the stars, after all. The idea of a wave that travels through space itself was a rather disorienting notion to scientists in the late 1800s. They were used to the idea that waves are disturbances in a tangible material or "medium": fluctuations in the density of air, ripples on the surface of water, vibrations of a taut string. Having observed that light and radio waves are electromagnetic waves, they set about looking for the medium that supported those waves. They were expecting to find this "luminiferous aether" but they failed. In fact, the absence of an aether led in part to Einstein's theory of special relativity. The structure of a radio wave, or any electromagnetic wave, is quite simple. It consists only of a fluctuating electric field and a fluctuating magnetic field. An electric field is a structure in space that affects electric charge; it pushes on charge and causes that charge to accelerate. Similarly, a magnetic field is a structure that affects magnetic poles. Remarkably, changing electric fields produce magnetic fields and changing magnetic fields produce electric fields. That interrelatedness allows the wave's fluctuating electric field to produce its fluctuating magnetic field and vice versa. The wave's electric and magnetic fields endlessly recreate one another. Although electric charge or magnetic pole is needed to emit or receive a radio wave, that wave can travel perfectly well for billions of light years without involving any charge or pole. It travels through space itself. Because the oven's microwave frequency is more than 20 times higher than anything a normal radio receives, I'd be surprised if the radio would notice even a pretty severe microwave leak. What you describe doesn't sound like it's caused by the microwaves. It sounds like it's caused by an electrical problem in the oven's high-voltage power supply.
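Returning for a moment to the standby-power point above, here is a quick back-of-envelope Python sketch. The 10-watt total standby load and the $0.12 per kilowatt-hour price are assumptions I've made purely for illustration.

# Rough annual cost of always-on gadgets. Both numbers below are assumptions.
standby_watts = 10.0   # assumed total standby load for one household
price_per_kwh = 0.12   # assumed electricity price in dollars

kwh_per_year = standby_watts * 24 * 365 / 1000.0
print(f"~{kwh_per_year:.0f} kWh per year, roughly ${kwh_per_year * price_per_kwh:.0f} per year")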
An older oven would have used a heavy transformer, a capacitor, and a diode to convert ordinary household AC power to high-voltage DC power for its magnetron microwave tube. But since your oven was made recently, it probably uses a switching power supply to produce that high voltage. That supply contains a much more sophisticated electronic switching system to convert household AC power to high-voltage DC power. The new approach is cheaper and lighter, so it's taking over in microwave ovens. Just because it's more sophisticated, however, doesn't mean it's more reliable. My guess is that the unit in your oven has a problem. If it has an intermittent contact in it or if there is a conducting path that is sparking somewhere in the power supply or in the unit as a whole, there'll be randomly fluctuating currents present in the oven and those current fluctuations will produce radio waves. A sparking wire or carbonized patch on the power supply will start and stop the flow of current erratically and that can easily cause interference on the AM band. Ordinary AM radio is very susceptible to radio-frequency interference at around 1 MHz and sparking stuff tends to produce such radio waves. A car with a bad ignition system, a lawn mower, and a thunderstorm all interfere beautifully with AM reception. And I suspect that you've got a similar electrical problem in your oven. I doubt that your oven is a microwave hazard, but you should probably have a repair person take a look at it. It shouldn't have anything sparking inside it. It probably won't work and it's definitely not safe. Instead of tricking your friends, you risk cooking them. Here is why I think you'd better leave your plan as a thought experiment only. Those YouTube videos were complete fakes; they didn't pop any popcorn while the camera was rolling. To make it appear that the cell phones were popping the corn, the people who produced the videos dropped already prepared popcorn into the frame and then photoshopped away the unpopped kernels. When you watch the video, it looks like the kernels are popping, but they're really just disappearing via video editing as precooked popcorn is sprinkled onto the set from above. The reason they had to use video trickery is pretty clear: to pop popcorn with microwaves, those microwaves have to be extremely intense. Each kernel contains only a tiny amount of water and it's the water that heats up when the kernel is exposed to microwaves. If the microwaves aren't intense enough, the heat they deposit in the kernel's water will flow out to the rest of the kernel and into the environment too quickly for the kernel's water to superheat and then flash to steam. Even when you put popcorn kernels in a closed microwave oven, it takes a minute or two for the kernels to accumulate enough thermal energy to pop. In that closed microwave oven, the microwaves bounce around inside the metal cooking chamber and their intensity increases dramatically. It's like sending the beam from a laser pointer into a totally mirrored room—the light energy in that room will build up until it is extremely bright in there. In the closed cooking chamber of the oven, the microwave energy also builds up until the microwave intensity is enough to pop the corn. How intense? Well, a typical microwave oven produces 700 watts of microwave power. Since the cooking chamber is nearly empty when you're popping popcorn, the cooking chamber accumulates a circulating power of very roughly 50,000 watts.
Although that power is spread out over the cross section of the oven, the microwaves are still seriously intense -- hundreds of watts per square inch. To put that in perspective, a cell phone transmits a maximum of 2 watts and that power is spread out over at least 5 square inches so the intensity is less than 1 watt per square inch. When I saw those videos in Summer 2008, I realized that there was no way cell phones were ever going to pop popcorn. They certainly wouldn't do it while they are ringing, because that's when they are primarily receiving microwaves, not transmitting them. It's when you're talking that your cell phone is regularly producing microwaves. It was all obviously just fun and games. So what about your disassembled microwave oven? Since there is no metal box to trap the microwaves and accumulate energy, they'll only have one shot at popping the corn kernels. The microwaves will emerge from the magnetron's waveguide at high intensity, but they'll spread out quickly once there is nothing to guide them. You could probably pop a kernel right at the mouth of the magnetron but not a few inches away. Unless you use microwave optics to focus those microwaves, they'll have spread too much by the time they get through the table and reach the kernels of popcorn and the kernels will probably never pop. If that were the whole story, the worst that would happen with your experiment would be that it wouldn't cook popcorn. But there is a real hazard here. Sending about 700 watts of microwaves into the room isn't exactly safe. It's something like having a red hot coal emitting 700 watts of infrared light, except that you won't see anything with your eyes and this microwave "light" is coherent (i.e., laser-like) so it can focus really tightly. You'd hate to have some metal structure in the room or even inside the walls of the room focus the microwaves onto you. You absorb microwaves much better than the corn kernels do and you'll "pop" long before they do. Actually, your eyes are particularly sensitive to microwave heating and you might not notice the damage until it's too late. Without instruments to observe the pattern of microwaves in the room when the magnetron is on, I wouldn't want to be in the room. A basketball depends on pressurized air for its bounciness. When the ball hits the court, it compresses that air and the air stores energy in its compression. The ball's rebound is powered by that compressed air returning to its original pressure. The ball's skin, on the other hand, isn't all that bouncy and doesn't store energy well. To bounce well, the basketball needs to store energy in its air during the bounce, not in its skin. That's why it's important to have an air pump so that you can keep your basketball properly inflated. When you cool a basketball, however, you reduce the pressure of its air. That's because the air molecules have less thermal energy at colder temperatures and thermal energy is responsible for air pressure. A basketball that was properly inflated at warm temperature becomes under-inflated when you cool it down. At the same time, the basketball's skin becomes less elastic and more leathery at cool temperatures. So the basketball suffers from under-inflation and from a leathery, not-very-bouncy skin. If you cool a basketball to a low enough temperature, its skin will freeze and become brittle. Just how low the temperature has to go depends on the material used to make the basketball.
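As a rough numerical check of the oven-versus-phone comparison a few paragraphs above, here is a short Python sketch. The circulating power and the phone figures come from that discussion; the cooking-chamber cross section is a size I've assumed for illustration.

# Rough intensity comparison from the popcorn discussion above.
oven_circulating_watts = 50_000.0   # rough figure from the text
oven_cross_section_in2 = 8 * 13     # assumed cooking-chamber cross section, square inches
phone_watts = 2.0                   # maximum cell phone transmit power
phone_area_in2 = 5.0                # area the phone's power is spread over

oven_intensity = oven_circulating_watts / oven_cross_section_in2
phone_intensity = phone_watts / phone_area_in2
print(f"oven:  ~{oven_intensity:.0f} watts per square inch")
print(f"phone: ~{phone_intensity:.1f} watts per square inch")
print(f"ratio: roughly {oven_intensity / phone_intensity:,.0f} to 1")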
I've never seen a basketball shatter on the court, even in pretty cold weather, so I doubt you can "freeze" one in a household freezer. But I'm sure that a dip in liquid nitrogen at about -320 °F (-196 °C) would do the trick. I often freeze rubber handballs in liquid nitrogen for my class and then shatter them on the floor. To help you visualize how a string can vibrate at several frequencies at once, I wrote a flash program that shows you what a vibrating string looks like. That program should appear below this note. It allows you to adjust eight parameters: the amplitudes of the string's four simplest vibrational modes (its fundamental vibration through its fourth harmonic vibration) and the phases of those modes. The program starts with a pure fundamental vibration of the string, which is easy to visualize. But you can turn on the second, third, and fourth harmonic vibrations to whatever extent you like. What you'll observe is that a string that's vibrating at several frequencies at once has a complicated shape, but doesn't look all that unfamiliar. It's simply a mixture of several standing waves that evolve at different rates. As a result, it exhibits a fancy rippling shape that you've probably seen on a jump rope or a clothesline. If you look carefully at the string while it's vibrating in a mixture of several harmonics, you'll see that it has only one shape at any moment in time. It's just a jiggling string, after all. The parts of that shape, however, are evolving at different rates in time and those parts are actually the different harmonics going through their individual motions at their own frequencies. The speed of light in vacuum, denoted by the letter c, is truly a constant of nature and one of its most influential constants at that. Even if light didn't exist, the speed of light in vacuum would. It is a key component of the relationship between space and time known as special relativity. But while the speed of light in vacuum is a constant, the speed of light in matter isn't. Light is an electromagnetic wave and consists of electric and magnetic fields. Electric fields push on electric charge and matter contains electric charges, so light and matter interact. That interaction normally slows light down; the light gets delayed by the process of shaking the electric charges. In air, this slowing effect is tiny, less than 1 part in a thousand. In glass, plastic, or water, light is slowed by about 30 or 40%. In diamond, the interaction is strong enough to slow light by 60%. In silicon solar cells, light is slowed by 70%. And so it goes. To really slow light down, however, you need to choose a specific frequency of light and let it interact with a material that is resonant with that light. Because a resonant material responds extremely strongly to the light's electric field, it delays the light by an enormous amount. And by choosing just the right wavelength of light to match a particular collection of resonant atoms, Lene Hau and her colleagues managed to bring light essentially to a halt. The light lingers nearly forever with the atoms in their apparatus and it barely makes any headway. Yes, you can tell how fully you have consolidated a powder by the extent to which it scatters light. The more perfect the packing, the more transparent the powder becomes. It's a matter of homogeneity: the more perfect the packing, the more homogeneous the material and the easier it is for light to travel straight through it.
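Before moving on, here is a minimal Python sketch of the slowing figures mentioned in the speed-of-light answer above. The slowing follows from v = c/n; the refractive indices are typical values I've assumed (silicon's is for near-infrared light), so the percentages are only representative.

# Speed of light in matter, v = c / n, with assumed typical refractive indices.
C = 299_792_458  # speed of light in vacuum, m/s

refractive_indices = {"air": 1.0003, "glass": 1.5, "diamond": 2.42, "silicon": 3.5}

for material, n in refractive_indices.items():
    slowing_percent = (1 - 1 / n) * 100
    print(f"{material:8s}: n = {n:6.4f}, v = {C / n:,.0f} m/s ({slowing_percent:.2g}% slower than in vacuum)")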
To understand why light scattering depends on homogeneity, consider what happens when light passes through clear particles. Even though they are clear, light still interacts with them, as evidenced by rainbows, clouds, and even the blue sky. How best to think about that interaction depends on the size of the particles. If the particles are large, like smooth beads of glass or plastic, then they exhibit the familiar refraction and reflection effects of window panes and lenses. If the particles are small, like air molecules and tiny water droplets, then they exhibit a more antenna-like interaction with light. In effect, those tiny particles occasionally absorb and reemit the light waves, particularly at the short-wavelength (i.e., blue) end of the light spectrum. Both types of interactions are quite familiar to us. Large particles scatter light about without any color bias and exhibit a white appearance. The more surface area a collection of particles has, the more light that collection scatters. For example, a large ice crystal is clear but crushed ice or snow is white. Similarly, a bowl of water is clear but a mist of water droplets is white. Lastly, a bowl of air is clear, but a froth of air bubbles in water is white. As you can see, the transparent particles don't have to be solids or liquids to scatter light, they can even be gases! On the other hand, truly tiny particles scatter light about according to wavelength and color. In most cases, shorter-wavelength (blue) light scatters more than longer-wavelength (red) light. That effect, known as Rayleigh scattering, is responsible for the blue sky and the red sunset. In a nutshell then, large transparent particles appear white and tiny transparent particles appear colored (typically bluish). And the more particles there are, the more light is scattered. Returning to your question, a loose powder of transparent particles scatters light like crazy and appears white or possibly colored, depending on particle size. As you pack the powder more and more tightly together, its surfaces join together and it starts to lose the ability to scatter light; it becomes less white and more translucent. When the consolidation is almost complete, the material acquires a slightly hazy look due to scattering by the occasional voids left inside the otherwise transparent material. Finally, when the material is fully consolidated and there is no internal surface left in the powder, it is homogeneous and clear. So sending light through a packed transparent powder and measuring the amount and color of the scattered light tells you a lot about how well consolidated that powder is. Yes, the temperature of the gas will rise as you shake it. It's a subtle effect, so insulating the container by putting it in vacuum is probably a good idea. As you shake the container, its moving walls bat the tiny gas molecules around, sometimes adding energy to them and sometimes taking it away. On average, however, those moving walls add energy to the gas molecules and thereby increase the gas's temperature. A simple way to see why that's the case is to picture the gas as composed of many little bouncing balls inside the container. Those balls are perfectly elastic so they rebound from a stationary wall without changing their speeds at all. But the walls of the container aren't stationary, they move back and forth as you shake the container. Because of the moving walls, the balls change their speeds as they rebound.
A ball that bounces off a wall that is moving toward it gains speed during its bounce, like a pitched ball rebounding from a swung bat. On the other hand, a ball that bounces off a wall that is moving away from it loses speed during its bounce, like a pitched ball rebounding from a bat during a bunt. If both types of bounces were equally common in every way then, on average, the balls (or actually the gas molecules) would neither gain nor lose speed as the result of bounces off the walls and the gas temperature would remain unchanged. But the bounces aren't equally common. It's more likely that a moving ball will hit a wall that is moving toward it than that it will hit a wall that is moving away from it. It's a geometry problem; you get wet faster when you run toward a sprinkler than when you run away from the sprinkler. So, on average, the balls (or gas molecules) gain speed as the result of bounces off the walls and the gas temperature increases. How large this effect is depends on the relative speeds of the gas molecules and the walls. The effect becomes enormous when the walls move as fast or faster than the gas molecules but is quite subtle when the gas molecules move faster than the walls. Since air molecules typically move at about 500 meters per second (more than 1000 mph) at room temperature, you'll have to shake the container pretty violently to see a substantial heating of the gas. You're right that DC (direct current) power transmission has some important advantages over AC (alternating current) power transmission. In alternating current power transmission, the current reverses directions many times per second and during each reversal there is very little power being transmitted. With its power surging up and down rhythmically, our AC power distribution system is wasting about half of its capacity. It's only using the full capacity of its transmission lines about half of each second. Direct current power, in contrast, doesn't reverse and can use the full capacity of the transmission lines all the time. DC power also avoids the phase issues that make the AC power grid so complicated and fragile. It's not enough to ensure that all of the generators on the AC grid are producing the correct amounts of electrical power; those generators also have to be synchronized properly or power will flow between the generators instead of to the customers. Keeping the AC power grid running smoothly is a tour-de-force effort that keeps lots of people up at night worrying about the details. With DC power, there is no synchronization problem and each generating plant can concentrate on making sure that its generators are producing the correct amounts of power at the correct voltages. Lastly, alternating currents tend to flow on the outsides of conductors due to a self-interaction between the alternating current and its own electromagnetic fields. For 60-cycle AC, the depth of this "skin effect" is about 1 cm for copper and aluminum wires. That means that as the radius of a transmission line increases beyond about 1 cm, its current capacity stops increasing in proportion to the cross section of the wire and begins increasing in proportion to the surface area of the wire. For very thick wires, the interior metal is wasted as far as power delivery is concerned. It's just added weight and cost. Since direct current has no skin effect, however, the entire conductor can carry current and there is no wasted metal. That's a big plus for DC power distribution.
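That roughly 1 cm figure for the 60-cycle skin effect can be checked with the standard skin-depth formula, delta = sqrt(2 * resistivity / (omega * mu)). Here is a minimal Python sketch using room-temperature resistivities that I've assumed for copper and aluminum.

# Skin depth at 60 Hz: delta = sqrt(2 * resistivity / (omega * mu_0)).
import math

MU_0 = 4e-7 * math.pi      # permeability of free space
omega = 2 * math.pi * 60   # 60-cycle angular frequency

resistivities = {"copper": 1.68e-8, "aluminum": 2.65e-8}  # ohm-meters, assumed values

for metal, resistivity in resistivities.items():
    skin_depth = math.sqrt(2 * resistivity / (omega * MU_0))
    print(f"{metal:8s}: skin depth ~ {skin_depth * 100:.2f} cm at 60 Hz")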
The great advantage of AC power transmission has always been that it can use transformers to convey power between electrical circuits. Transformers make it easy to move AC power from a medium-voltage generating circuit to an ultrahigh-voltage transmission line circuit to a medium-voltage city circuit to a low-voltage neighborhood circuit. DC power transmission can't use transformers directly because transformers need alternating currents to move power from circuit to circuit. But modern switching electronics has made it possible to convert electrical power from DC to AC and from AC to DC easily and efficiently. So it is now possible to move DC power between circuits by converting it temporarily into AC power, sending it through a transformer, and returning it to DC power. These converters can even use higher frequency AC currents and consequently smaller transformers to move that power between circuits. It's a big win on all ends. While I haven't followed the developments in this arena closely, I would not be surprised if DC power transmission started to take hold in the United States as we transition from fossil fuel power plants to renewable energy sources. Using those renewable sources effectively will require that we handle long distance transmission better than we do now and we'll have to develop lots of new transmission infrastructure. That new infrastructure might well be DC transmission. A traditional fluorescent lamp needs a ballast to limit the current flowing through its gas discharge. That's because gas discharges have strange electrical characteristics, most notably a regime of "negative" electrical resistance: the voltage drop across the discharge actually decreases as the current in the discharge increases. If you connect a gas discharge lamp to a voltage source without anything to limit the current and start the discharge, the current flowing through the lamp will rise essentially without limit and the lamp will quickly destroy itself. As a kid, I blew up several small neon lamps by connecting them directly to the power line without any current limiter. That was not a clever or safe idea, so don't try it! The standard current limiter for fluorescent lamps and other discharge lamps that are powered from 60-cycle (or 50-cycle) alternating current has been an electromagnetic coil known as a ballast. When that coil is in series with the discharge, the coil's self-inductance limits how quickly the current flowing through the lamp can rise and therefore how much power the lamp can consume before the alternating current reverses direction. The discharge winks on and off with each current reversal and never draws more current than it can tolerate. Unfortunately, the lamp's light also winks on and off and some people can see that flicker, especially with their peripheral vision. Actually, the ballast usually has another job to do in a traditional fluorescent lamp: it acts as a transformer to provide the current needed to heat the electrode filaments at the ends of the lamp. Heating those electrodes helps drive electrons out of the metal and into the lamp's gas so that the gas becomes electrically conducting. In total then, the ballast receives alternating current electric power from the power line and prepares it so that all the lamp filaments are heated properly and a limited current flows through the lamp from one electrode to the other. In modern fluorescent lamps with heated electrodes, however, the role of the ballast has been usurped by a more sophisticated electronic power conditioning device.
That device converts 60-cycle alternating current electric power into a series of electrical energy pulses, typically at about 40,000 pulses per second, and delivers them to the lamp. The lamp's flicker is almost undetectable because it is so fast and the limited energy in each pulse prevents the discharge from consuming too much power. It's a much better system. Compact fluorescent lamps use it exclusively. So where might high voltage fit into this story? Well, there are some fluorescent lamps that don't heat their electrodes with filaments. They rely on the discharge itself to drive electrons out of the electrodes and into the gas to sustain the discharge. But that raises the question: "how does such a lamp start its discharge?" It uses high voltage. Because of cosmic rays and natural radioactivity, gases always have some electric charges in them: ions and electrons. When the voltage difference between the two ends of the lamp becomes very large, the electric field in the lamp propels those naturally occurring ions and electrons into the constituents of the lamp violently enough to start the lamp's discharge. The voltages needed to start these "cold cathode" lamps are typically in the low thousands of volts. For example, the cold cathode fluorescent lamps used in laptop computer displays start at about 2000 volts and then operate at much lower voltages. When you carry the firewood up the hill, you transfer energy to it and increase its gravitational potential energy. When you then burn the wood, you seem to make this energy disappear. After all, there doesn't appear to be any difference between burning wood in the valley and burning wood on the top of the hill. The wood is gone either way. But appearances can be deceiving. Since energy is a conserved quantity, the energy that you invest in the firewood can't disappear. It simply becomes difficult to find because it is dispersed in the burned gases that were once the wood. To find that energy, imagine compressing the burned gases into a small container to make their weight more noticeable and to reduce buoyant effects due to the atmosphere. You could then carry those burned gases, which include all of the firewood's atoms, back down the hill. As you descended, the container of burned gases would transfer its gravitational potential energy to you. I've swept a number of details under the rug, such as the fact that many of the oxygen atoms in your container were originally part of the atmosphere rather than the log. But even when all those details are taken into account, the answer is the same: the firewood's gravitational energy doesn't disappear, it just gets more difficult to find. No, those gases don't absorb microwaves significantly regardless of frequency. Diatomic molecules are nearly oblivious to long wavelength electromagnetic waves. In fact, that's why they don't contribute to the "greenhouse effect." Oxygen does have an unusual absorption band in the near infrared, but that's about it. It's quite possible that the pattern of microwaves inside your oven is more intense at some places than in others — that's why most microwave ovens have carousels in them to move the food around. I don't think that the pattern will change much with age, but it's possible that your oven isn't producing as much microwave power as it once did and you notice the low-intensity regions more than before. It's not a true "fault", but it is a nuisance. If you get tired of putting up with it, you should probably replace the oven.
It used to be that you could purchase carousel inserts for the ovens, but I don't see them for sale anymore. Since light carries energy, a laser beam can't simply disappear after a couple of feet — something would have to absorb it and its energy. Since the atmosphere is extremely transparent to visible light, it won't do the trick. Since eye safety requires limiting the amount of laser power that can enter a person's eye, you can make a laser more eye-safe by enlarging its beam. Even a powerful laser can be eye-safe if only a small fraction of the laser light can enter a person's iris and focus on their retina. Although it's natural to think of a laser beam as a narrow pencil of light that stays narrow forever, that's not really the case. The diameter of a laser beam changes with distance from its source. The beams from typical lasers, including laser pointers, start relatively narrow and widen as gradually as the physics of light propagation will allow. But with the help of lenses, you can change that widening process dramatically. For example, if you send a typical laser beam through a converging lens that has a focal length of 1 foot, the laser beam will converge to a very narrow "beam waist" 1 foot beyond the lens and will then spread relatively quickly with distance. It will return to its original diameter 1 foot beyond its waist and to 10 times its original diameter 10 feet beyond its waist. With its light spread out by a factor of 10 in both height and width, it will have only 1/100th the intensity (power per unit area) of the original beam. Because of its large size, only a fraction of the beam and its light power will now enter a person's iris and focus on their retina. Using this scheme, you can have a beam that is extremely intense for the first 2 feet, including a super-intense waist at the 1-foot mark. But beyond that point, the beam spreads quickly and soon becomes so wide that it is no longer an eye hazard. The problem for planes isn't the temperature, it's the humidity. When the air reaches 100% relative humidity, moisture in that air begins to condense on objects such as plane wings. The moisture can also condense into rain, snow, or sleet and then fall onto those plane wings. If the temperature of overly moist air is 32 °F or below, planes preparing for takeoff can accumulate heavy burdens of ice. When water vapor condenses as ice directly onto the wings themselves, that condensation process is called deposition and is familiar to you as frost. Deposition is a relatively slow process, so most of the trouble for planes occurs when it is actually snowing or sleeting. Removing the ice then requires either heat or chemicals. When the plane is flying at high altitudes, however, the air is extremely dry. Even though the air temperature is far below the freezing temperature of water, the fraction of water molecules in the air is nearly zero and the relative humidity is much less than 100%. That means that an ice cube suspended in that dry air would actually evaporate away to nothing. Technically, that "evaporation" of ice directly into water vapor is called sublimation and you've seen it before. Think of all the foods that have experienced freezer burn in your frost-free (i.e., extremely dry air) refrigerator or the snow that has mysteriously disappeared from the ground during a dry spell even though the temperature has never risen above freezing. Both are cases of sublimation — where water molecules left the ice to become moisture in the air.
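Here is a minimal Python sketch of the beam-spreading arithmetic from the laser answer above. Beyond the waist, the beam's diameter grows roughly in proportion to distance and its intensity falls as one over the diameter squared; the 2 millimeter starting diameter is an assumption I've made for illustration.

# Geometric spreading of the laser beam described above: past the waist the
# diameter grows roughly linearly with distance and intensity falls as 1/d^2.
initial_diameter_mm = 2.0  # assumed original beam diameter

for feet_past_waist in (1, 2, 5, 10):
    diameter_mm = initial_diameter_mm * feet_past_waist
    relative_intensity = 1.0 / feet_past_waist ** 2
    print(f"{feet_past_waist:2d} ft past the waist: diameter ~{diameter_mm:.0f} mm, "
          f"intensity ~{relative_intensity:.0%} of the original beam")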
Since our eyes are only sensitive to light that's in the visible range, any "smart" optical system would have to present whatever it detects as visible light. That means it has to either shift the frequencies/wavelengths of non-visible electromagnetic radiation into the visible range or image that non-visible radiation and present a false-color reproduction to the viewer. Let's consider both of these schemes. The first approach, shifting the frequencies/wavelengths, is seriously difficult. There are optical techniques for adding and subtracting optical waves from one another and thereby shifting their frequencies/wavelengths, but those techniques work best with the intense waves available with lasers. For example, the green light produced by some laser pointers actually originated as invisible infrared light and was doubled in frequency via a non-linear optical process in a special crystal. The intensity and pure frequency of the original infrared laser beam make this doubling process relatively efficient. Trying to double infrared light coming naturally from the objects around you would be extraordinarily inefficient. In general, trying to shift the frequencies/wavelengths of the various electromagnetic waves in your environment so that you can see them is pretty unlikely to ever work as a way of seeing the invisible portions of the electromagnetic spectrum. The second approach, imaging invisible portions of the electromagnetic spectrum and then presenting a false-color reproduction to the viewer, is relatively straightforward. If it's possible to image the radiation and detect it, it's possible to present it as a false-color reproduction. I'm talking about a camera that images and detects invisible electromagnetic radiation and a computer that presents a false-color picture on a monitor. Imaging and detecting ultraviolet and x-ray radiation is quite possible, though materials issues sometimes make the imaging tricky. Imaging and detecting infrared light is easy in some parts of the infrared spectrum, but detection becomes problematic at long wavelengths, where the detectors typically need to be cooled to extremely low temperatures. Also, the resolution becomes poor at long wavelengths. Camera systems that image ultraviolet, x-ray, and infrared radiation exist and you can buy them from existing companies. They're typically expensive and bulky. There are exceptions such as near-infrared cameras — silicon imaging chips are quite sensitive to near infrared and ordinary digital cameras filter it out to avoid presenting odd-looking images. In other words, the camera would naturally see farther into the infrared than our eyes do and would thus present us with images that don't look normal. In summary, techniques for visualizing many of the invisible portions of the electromagnetic spectrum exist, but making them small enough to wear as glasses... that's a challenge. That said, it's probably possible to make eyeglasses that image and detect infrared or ultraviolet light and present false-color views to you on miniature computer monitors. Such glasses may already exist, although they'd be expensive. As for making them small enough to wear as contact lenses... that's probably beyond what's possible, at least for the foreseeable future. During wine making, the amount of dissolved carbon dioxide (and possibly oxygen gas) can easily exceed its equilibrium concentration.
That means that the liquid contains more dissolved gas than it would contain if it had been exposed to the atmosphere for a long period of time and had thereby reached its equilibrium concentration of the gas. Having too much dissolved gas does not, however, mean that this gas will leave quickly. For example, when you open a bottle of carbonated beverage, the carbon dioxide is out of equilibrium. Although the gas was in equilibrium at the high pressure of the sealed bottle, it instantly became out of equilibrium when the bottle was opened and the density of gaseous carbon dioxide suddenly decreased. Nonetheless, it can take days for the excess carbon dioxide to come out of solution and leave. You've probably noticed that carbonated beverages take hours or days to "go flat." Part of the reason why it takes so long for the dissolved gases to come out of solution is that the gas can only leave through the exposed surface of the liquid. In an open bottle of carbonated beverage that may be only a few square inches or a few dozen square centimeters. The dissolved gas has to find its way to that exposed surface and break free of the liquid. That's a slow process. The same thing is happening in your wine: the dissolved carbon dioxide and oxygen gases must normally find their way to the top of the tank and then break free to enter the gaseous region at the top of the tank — another slow process. To speed the escape of dissolved gases, you can enlarge the exposed surface of the liquid by bubbling an inert gas through the liquid. Here, an inert gas is any gas that doesn't dissolve significantly in the liquid and that doesn't affect the liquid if it does dissolve. Nitrogen is great for wine because it doesn't interact chemically with the wine. As you let bubbles of nitrogen float upward through the wine, you provide exposed surface within the body of the liquid wine and allow carbon dioxide and oxygen to break free of the liquid and enter those bubbles. The spherical interface between the gas bubble and the surrounding liquid is a busy, active place — gas molecules are moving between the gas and liquid in both directions. Because carbon dioxide is over-concentrated in the liquid, it is statistically more likely for a carbon dioxide molecule to leave the liquid and enter the bubble's gas than the other way around. It takes a little energy to break those carbon dioxide molecules free of the liquid and that need for energy affects the balance between dissolved carbon dioxide and gaseous carbon dioxide at equilibrium. The harder it is for the carbon dioxide molecules to obtain the energy they need to escape from the liquid, the greater the equilibrium concentration of dissolved carbon dioxide — the saturated concentration. But your wine is supersaturated, containing more than the equilibrium concentration of dissolved carbon dioxide, so carbon dioxide molecules go from liquid to gas more often than the other way around. When the degree of supersaturation (excess gas concentration) is high, the transfer of gas molecules from liquid to gas bubble can be fast enough to make the bubbles grow in size significantly as they float up through the wine. You can see this type of rapid bubble growth in a glass of freshly poured soda, beer, or champagne. In beer, champagne, and your wine, however, the liquid surface of the bubble contains various natural chemicals that alter the interface with the gas and affect bubble growth. The "tiny bubbles" of good champagne reflect that influence.
Another way to provide the extra exposed surface in the wine and thereby allow the supersaturated dissolved gases to come out of solution would be to agitate the wine so violently that empty cavities open up within the wine. Although that approach would provide lots of extra surface, it would probably not be good for the wine. Bubbling gas through the wine is a much gentler approach. The exact choice of gas barely matters as long as it is chemically inert in the wine. Argon or helium would be just as effective, but they're more expensive (and in the case of helium, precious). The temperature of the gas doesn't matter significantly, but the temperature of the wine does. The cooler the wine, the higher the concentration of dissolved carbon dioxide and oxygen it will contain at equilibrium, so you'll remove more of those gases if you do your bubbling while the wine is relatively warm. Most likely, the ant never left the floor or walls of the microwave oven, where it was as close as possible to those metal surfaces. The six sides of the cooking chamber in a microwave oven are made from metal (or painted metal) because metal reflects microwaves and keeps them bouncing around inside the chamber. Metals are good conductors of electricity and effectively "short out" any electric fields that are parallel to their surfaces. Microwaves reflect from the metal walls because those walls force the electric fields of the microwaves to cancel parallel to their surfaces and that necessitates a reflected wave to cancel the incident wave. Because of that cancellation at the conducting surfaces, the intensity of the microwaves at the walls is zero or very close to zero. The ant survived by staying within a tiny fraction of the microwave wavelength (about 12.2 cm) of the metal surfaces, where there is almost zero microwave intensity. Had the ant ventured out onto your cup, it would have walked into real trouble. Once exposed to the full intensity of the microwaves, it would not have fared so well. I think that you've rediscovered an experiment in which people cut a grape almost in half, open the two halves like a book and lay it flat on a plate. In the microwave, the thin bridge between the halves carbonizes and then emits flames. Basically, the fruit pieces or berries are acting as antennas for the microwaves, which drive electric currents through the narrow bridges between parts. The berries aren't great conductors, but they're not true insulators either. Those bridges overheat (like an overloaded extension cord) and burn up. The flames come from the burning bridges. If you let the flames go on long enough and enough carbon develops, you'll probably start getting plasma balls in the oven (lots of fun, but not great for the oven... you can scorch its top surface because those plasma balls rise and skitter around the ceiling of the oven). Anyway, you can probably find the carbonized areas if you look closely enough, but they're no worse than a little burnt toast. Yours is actually a complicated question. After you open the soda, the CO2 dissolved in the soda is no longer in equilibrium with the gas above the soda. When you cap the bottle, CO2 will gradually escape from the liquid until it forms a dense gas so that CO2 molecules from that gas return to the liquid solution as often as they leave the solution for the gas. In other words, the equilibrium between dissolved CO2 and gaseous CO2 has to be reestablished.
By shrinking the volume of gas over the soda, your boyfriend reduces the number of CO2 molecules that must enter the gas phase in order to reestablish that equilibrium. BUT, when dense gas develops in the squeezed bottle, the high pressure of that gas will reinflate the bottle to its original size. The benefits of shrinking the gas volume will thus be lost. To succeed in keeping more of the CO2 molecules in solution, you have to make sure that the squeezed bottle stays squeezed. That's hard to do. You're probably better off pouring the soda gently into a smaller bottle, one that just barely holds all of the liquid. That smaller bottle won't expand as a dense gas of CO2 forms above the liquid soda and the soda will reestablish its equilibrium without losing too many of its dissolved CO2 molecules. When you watch something move, what you really notice is the change in the angle at which you see it. Nearby objects don't have to be traveling fast to make you turn your head quickly as they go by, so you perceive them as moving rapidly. An object that is heading directly toward you or away from you doesn't appear to be moving nearly as quickly because its change in angle is much smaller. When you watch a distant object move, you don't see it change angles quickly so you perceive it as moving relatively slowly. Take the moon for example: it is moving thousands of miles an hour, yet you can't see it move at all. It's just so far away that you see no angular change. And when you look down from a high-flying jet, the distant ground is changing angles slowly and therefore looks like it's not moving fast. In principle, the brownie would heat up faster by radiation in a hot environment and cool off faster by radiation in a cold environment. A black object is better at both absorbing thermal radiation and emitting thermal radiation, so the brownie would soak up more thermal radiation in the hot environment and give off more thermal radiation in the cold environment. In practice, however, most of the radiation involved in baking these desserts and letting them cool on a kitchen counter is in the infrared and it's hard to tell just what color a brownie or cake is in the infrared. It's likely that both are pretty dark when viewed in infrared light. Basically, even things that look white to your eye are often gray or black in the infrared. Thus I suspect that both the brownie and cake absorb most of the thermal radiation they receive while being baked and emit thermal radiation efficiently while they're cooling on the counter. Light has no charge at all. It consists only of electric and magnetic fields, each endlessly recreating the other as the pair zip off through empty space at the speed of light. The fact that light waves can travel in vacuum, and don't need any material to carry them, was disturbing to the physicists who first studied light in detail. They expected to find a fluid-like aether, a substance that was the carrier of electromagnetic waves. Instead, they found that those waves travel through truly empty space. One thing led to another, and soon Einstein proposed that the speed of light was profoundly special and that space and time were interrelated by way of that speed of light. What you propose to do is far more difficult than you imagine. Determining the chemical contents of food is hard, even with a well-equipped laboratory and permission to destroy the food in order to study it. The idea of analyzing a casserole in detail simply by beaming microwaves at it is science fiction.
Think how much easier airport security would be if they could chemically analyze everything that came in the front door just by beaming microwaves at it. That said, however, let me make two comments. First, the question quickly turns to computer interface issues, as though the chemical analysis part is trivial in comparison to the computer presentation part. Physical science and computer science are truly different fields and not everything in the scientific domain can be reduced to a software package. Physics and chemistry haven't disappeared with the advent of computers and there will never be a firmware upgrade for your microwave oven that will turn it into a nutritional analysis laboratory. As a society, we've gone a bit too far in replacing science education with technology education, particularly computer software. Second, while remote chemical analysis isn't easy, it can be done in certain cases with the clever use of physics and chemistry. One of my friends here at Virginia, Gaby Laufer, has developed an instrument that studies the infrared light transmitted by the air and can determine whether that air contains any of a broad variety of toxic or dangerous gases in a matter of seconds. Air's relative transparency makes it easier to analyze than an opaque casserole, but even when you can see through something it's not trivial to see what it contains. Gaby's instrument does a phenomenal job of fingerprinting the gas's absorption features and identifying trouble. Note added: a reader informed me that there are now microwave ovens that can read bar codes and adjust their cooking to match the associated food. A scale in the base of the oven can determine the food's weight and cook it properly. Another reader suggested that a microwave oven might be able to measure the food's microwave absorption and weight in order to adjust cooking power and time. While that's also a good possibility, ovens that sense food temperature or the humidity inside the oven can achieve roughly the same result by turning themselves off at the appropriate time. That's exactly right! Coasting and zero net force go hand-in-hand: when an object is experiencing zero net force, it doesn't accelerate and thus it coasts. A coasting object is an inertial object, meaning that it moves at a steady pace along a straight-line path. And if the coasting object is at rest, it stays at rest. To clarify the term "net force," note that when an object is experiencing several separate forces, it doesn't accelerate in response to each one individually. Instead, it accelerates in response to the sum of all the forces acting on it: the net force. Remember that forces have directions associated with them (forces are vector quantities), so when you sum them you must consider their directions carefully. The proper force to consider in Newton's second law is actually the net force on the object. If you know both the net force on the object and the object's mass, you can predict the object's acceleration. And if the net force is zero, then the object doesn't accelerate at all — it coasts. I figure that some day, we'll turn to our landfills as resources for precious elements like copper and gold. That assumes, of course, that we survive global warming. In the meantime, we'll just keep throwing stuff out. Despite the scary title "microwave radiation," a microwave oven is basically just another household electronic device. It is an extremely close relative of a conventional cathode-ray-tube television set.
If you're OK with putting CRT televisions and computer monitors in the landfill, you should have no problems with putting microwave ovens there, too. Even when the microwave oven is on, all it has inside it is microwave radiation and that's just not a big deal. The instant you turn it off, it doesn't even have those microwaves in it. It's just boring inert electronic parts and they'll sit in the landfill for generations, rusting and decaying like every other abandoned electronic gadget. I'd rather see it go to a recycling center and have its precious materials returned to the resource bin, but as landfill junk goes, it's not all that bad. Given that toxic chemicals are the primary concern with landfills, microwave ovens are probably rather innocuous. They have no radioactive contents and although the high-voltage capacitor might have oil in it, that oil can no longer be the toxic PCBs that were common a few decades ago. Even when that oil leaks into the environment, it's probably not going to do much. So there you have it, microwave ovens go to their graves no more loudly or dangerously than old televisions or computers or cell phones. In fact, I might start calling cell phones "microwave phones" because that's exactly what they are. They communicate with the base unit by way of microwave radiation. Given the number of people who have cell phones semi-permanently installed in their ears, concerns about microwave radiation should probably be redirected from microwave ovens to "microwave phones." Think about it next time your six-year-old talks for an hour with her best friend on that "microwave phone." While it's easy to push on water, it's hard to pull on water. When you drink soda through a straw, you may feel like you're pulling on the water, but you're not. What you are actually doing is removing some air from the space inside the straw and above the water, so that the air pressure in that space drops below atmospheric pressure. The water column near the bottom of the straw then experiences a pressure imbalance: the usual atmospheric pressure below it and less-than-atmospheric pressure above it. That imbalance provides a modest upward force on the water column and pushes it up into your mouth. So far, so good. But if you make that straw longer, you'll need to suck harder. That's because as the column of water gets taller, it gets heavier. It needs a more severe pressure imbalance to push it upward and support it. By the time the straw and water column get to be about 34 feet (10 meters) tall, you'll need to suck every bit of air out from inside the straw because the pressure imbalance needed to support a 34-foot column of water is approximately one atmosphere of pressure. If the straw is taller than about 34 feet, you're simply out of luck. Even if you remove all the air from within the straw, the atmospheric pressure pushing on the water below the straw won't be able to push the water up the straw higher than about 34 feet. To get the water to rise higher in the straw, you'll need to install a pump at the bottom. The pump increases the water pressure there to more than 1 atmosphere, so that there is a bigger pressure imbalance available and therefore the possibility of supporting a taller column of water. OK, so returning to your question: once a well is more than about 34 feet deep, getting the water to the surface requires a pump at the bottom. That pump can boost the water pressure well above atmospheric and thereby push the water to the surface despite the great height and weight of the water column.
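The roughly 34-foot limit quoted above follows directly from balancing atmospheric pressure against the weight of the water column, h = P_atm / (rho * g). Here is a minimal Python check using standard sea-level values.

# Tallest water column that atmospheric pressure can support: h = P / (rho * g).
ATMOSPHERIC_PRESSURE = 101_325.0  # pascals
WATER_DENSITY = 1000.0            # kg/m^3
G = 9.8                           # m/s^2

height_m = ATMOSPHERIC_PRESSURE / (WATER_DENSITY * G)
print(f"~{height_m:.1f} meters, or about {height_m * 3.281:.0f} feet")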
Suction surface pumps are really only practical for water that's a few feet below the surface; after that, deep pressure pumps are a much better idea. Your daughter's question is a cute one. I like it because it highlights the distinction between the speed of light and all other speeds. The speed of light is unimaginably special in our universe. Strange though it may sound, even if light didn't exist there would still be the speed of light and it would still have the same value. The speed of light is part of the geometry of space-time and the fact that light travels at "the speed of light" is almost a cosmic afterthought. Gravity and the so-called "strong force" also travel at that speed. OK, so there is actually a multi-way tie for first place in the speed rankings. Your daughter's question is what comes next? The actual answer is that it's a many-way tie between everything else. With enough energy, you can get anything moving at just under the speed of light, at least in principle. For example, subatomic particles such as electrons, protons, and even atomic nuclei are routinely accelerated to just under the speed of light in sophisticated machines around the world. The universe itself has natural accelerators that whip subatomic particles up until they are traveling so close to the speed of light that it's hard to tell that they aren't quite at the speed of light. Nonetheless, I assure you that they're not. The speed of light is so special that nothing that has any mass at all can possibly travel at the speed of light. Only the ephemeral non-massive particles such as light particles (photons), gravity particles (gravitons), and strong force particles (gluons) can actually travel at the speed of light. In fact, once photons, gravitons, and gluons begin to interact with matter, they don't travel at the speed of light either. It's sort of a guilt-by-association: as soon as these massless particles leave the essential emptiness of the vacuum and begin to interact with matter, even they can't travel at the speed of light anymore. That said, I can still offer the likely second place finisher on the speed list. I'm going to skip over light, gravity, and the strong force traveling in extremely dilute matter because that's sort of cheating — if you take something that naturally travels at the speed of light and slow it down the very, very slightest bit, of course it will come ridiculously close to the speed of light. In real second place are almost certainly cosmic ray particles. These cosmic rays are actually subatomic particles that are accelerated to fantastic energies by natural processes in the cosmos. How such accelerators work is still largely a mystery but some of the cosmic ray particles that reach our atmosphere have truly astonishing energies — once in a while a single cosmic ray particle that is smaller than an atom will carry enough energy with it that it is capable of moving small ordinary objects around. Even if it carries the energy of a fly, that's a stupendous amount of energy for an atomic fragment. Those cosmic ray particles are traveling so close to the speed of light that it would be a photo-finish with light itself. As a general observation, the bottleneck in scientific research and technological innovation is almost always the ideas, not the equipment. Occasionally, a revolutionary piece of equipment comes on the scene and makes a whole raft of developments possible overnight. But a commercial superconducting magnet isn't revolutionary; you can buy one off the shelf. 
As a result, all the innovations that were waiting for magnets like that to become available were mopped up long ago and any new innovations will take new ideas. Coming up with good ideas is hard work and if I had them, I'd have gotten hold of such a magnet myself. Although science is often taught as formulas and factoids, it's really about thinking and observing, and good ideas are nearly always more important than good equipment. Good ideas don't linger unstudied for long when commercial equipment is all it takes to pursue them. Like a camera, your eye collects light from the scene you're viewing and tries to form a real image of that scene on your retina. The eye's front surface (its cornea) and its internal lens act together to bend all the light rays from some distant feature toward one another so that they illuminate one spot on your retina. Since each feature in the scene you're viewing forms its own spot, your eye's cornea and lens are forming a real image of the scene in front of you. If that image forms as intended, you see a sharp, clear rendition of the objects in front of you. But if your eye isn't quite up to the task, the image may form either before or after your retina so that you see a blurred version of the scene. The optical elements in your eye that are responsible for this image formation are the cornea and the lens. The cornea does most of the work of converging the light so that it focuses, while the lens provides the fine adjustment that allows that focus to occur on your retina. If you're farsighted, the two optical elements aren't strong enough to form an image of nearby objects on your retina so you have trouble getting a clear view while reading. Your eye needs help, so you wear converging eyeglasses. Those eyeglasses boost the converging power of your eye itself and allow your eye to form sharp images of nearby objects on your retina. If you're nearsighted, the two optical elements are too strong and need to be weakened in order to form sharp images of distant objects on your retina. That's why you wear diverging eyeglasses. People are surprised when I tell them that they're nearsighted or farsighted. They wonder how I know. My trick is simple: I look through their eyeglasses at distant objects. If those objects appear enlarged, the eyeglasses are converging (like magnifying glasses) and the wearer must be farsighted. If those objects appear shrunken, the eyeglasses are diverging (like the security peepholes in doors) and the wearer is nearsighted. Try it, you'll find that it's easy to figure out how other people see by looking through their glasses as they wear them. Those touch pads are sensing your presence electronically, not mechanically. More specifically, electric charge on the pad pushes or pulls on electric charge on your finger and the pad's electronics can tell that you are there by how charge on the pad reacts to charge on your finger. Because your finger and your body conduct electricity, the pad's electric charge is actually interacting with the electric charge on your entire body. In contrast, a straw is insulating, so the pad can only interact with charge at its tip, and while your car keys are conducting, they are too small to have the effect that your body has on that pad. There are at least two ways for a pad and its electronics to sense your body and its electric charges. 
The first way is for the electronics to apply a rapidly alternating electric charge to the pad and to watch for the pad's charge to interact with charge outside the pad (i.e., on your body). When the pad is by itself, the electronics can easily reverse the pad's electric charge because that charge doesn't interact with anything. But when your hand is near the pad or touching it, it's much harder for the electronics to reverse the pad's electric charge. If you're touching the pad, the electronics has to reverse your charge, too, so the electronics senses a new sluggishness in the pad's response to charge changes. Even when you're not quite touching the pad, the electronics has some added difficulty reversing the pad's charge. That's because the pad's charge causes your finger and body to become electrically polarized: charges opposite to those on the pad are attracted onto your finger from your body so that your finger becomes electrically charged opposite to the charge of the pad. When the electronics then tries to withdraw the charge from the pad in order to reverse the pad's charge, your finger's charge acts to make that withdrawal difficult. The electronics finds that it must struggle to reverse the pad's charge even though you're not in direct contact with the pad. Overall, your finger complicates the charge reversals whenever it's near or touching the pad. The second way for the pad's electronics to sense your presence is to let your body act as an antenna for electromagnetic influences in the environment. We are awash in electric and magnetic fields of all sorts and the electric charge on your body is in ceaseless motion as a result. You've probably noticed that touching certain input wires of a stereo amplifier produces lots of noise in the speakers; that's partly a result of the electromagnetic noise in our environment showing up as moving charge on your body. The little pad on the soda dispenser picks up a little of this electromagnetic noise all by itself. When you approach or touch the pad, however, you dramatically increase the amount of electromagnetic noise in the pad. The pad's electronics easily detects that new noise. In short, soda dispenser pads are really detecting large electrically conducting objects. Their ability to sense your finger even before it makes contact is important because they need to work when people are wearing gloves. I first encountered electrical touch sensors in elevators when I was a child and I loved to experiment with them. Conveniently, they'd light up when they detected something and there was no need to clean up spilled soda. We'd try triggering them with elbows and noses, and a whole variety of inanimate objects. They were already pretty good, but modern electronics has made touch pads even better. The touch switches used by some lamps and other appliances function in essentially the same way. What thrills me about your question is that while we've all noticed this effect, we're never taught why it happens. Let me ask your question in another way: we know that opening a window makes the clothes dry faster, but how do the clothes know that the window is open? Who tells them? The explanation is both simple and interesting: the rate at which water molecules leave the clothes doesn't depend on whether the window is open or closed, but the rate at which water molecules return to the clothes certainly does. That return rate depends on the air's moisture content and can range from zero in dry air to extremely fast in damp air.
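Before the humidity story continues, here's a toy model of the touch-pad "sluggishness" described a few paragraphs back. It treats the pad as a small capacitor charged through a drive resistor; the resistance and capacitance values, including the rough 100 pF figure for a human body, are illustrative guesses rather than specifications of any real dispenser.

```python
# Toy model of why a touched pad responds sluggishly to charge reversals.
# Adding your body's capacitance to the pad's own capacitance lengthens the
# RC charging time the electronics must wait through on each reversal.
SOURCE_RESISTANCE = 100_000.0    # ohms, assumed drive resistance
PAD_CAPACITANCE = 10e-12         # farads, pad alone (assumed)
BODY_CAPACITANCE = 100e-12       # farads, rough ballpark for a human body

tau_alone = SOURCE_RESISTANCE * PAD_CAPACITANCE
tau_touched = SOURCE_RESISTANCE * (PAD_CAPACITANCE + BODY_CAPACITANCE)
print(f"pad alone:   {tau_alone * 1e6:.1f} microseconds per charge reversal")
print(f"pad touched: {tau_touched * 1e6:.1f} microseconds per charge reversal")
```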
Air's moisture content is usually characterized by its relative humidity, with 100% relative humidity meaning that air's water molecules land on surfaces exactly as fast as water molecules in liquid water leave its surface. When you expose a glass of water to air at 100% relative humidity, the glass will neither lose nor gain water molecules because the rates at which water molecules leave the water and land on the water are equal. Below 100% relative humidity, the glass will gradually empty due to evaporation because leaving will outpace landing. Above 100% relative humidity, the glass will gradually fill due to condensation because landing will outpace leaving. The same story holds true for wet clothes. The higher the air's relative humidity, the harder it becomes for water to evaporate from the clothes. Landing is just too frequent in the humid air. At 100% relative humidity the clothes won't dry at all, and above 100% relative humidity they'll actually become damper with time. When you dry clothes in a room with the window open and the relative humidity of the outdoor air is less than 100%, water molecules will leave the clothes more often than they'll return, so the clothes will dry. But when the window is closed, the leaving water molecules will remain trapped in the room and will gradually increase the room air's relative humidity. The drying process will slow down as the water-molecule return rate increases. When the room air's relative humidity reaches 100%, drying will cease altogether. Water "plasticizes" the cotton. A plasticizer is a chemical that dissolves into a plastic and lubricates its molecules so that they can move across one another more easily. Cotton is almost pure cellulose, a polymer consisting of sugar molecules linked together in long chains. Since sugar dissolves easily in water, water dissolves easily in cellulose. Even though cellulose scorches before it melts, it can be softened by heat and water. When you iron cotton pants, the steam dissolves into the cellulose molecules and allows the fabric to smooth out beautifully. You're right. Whether the microwave oven is grounded or not makes no difference to its screen's ability to prevent microwave leakage. In fact, the whole idea of grounding something is nearly meaningless at such high frequencies. Since electrical influences can't travel faster than the speed of light and light only travels 12.4 cm during one cycle of the oven's microwaves, the oven can't tell if it's grounded at microwave frequencies; its power cord is just too long and there just isn't time for charge to flow all the way through that cord during a microwave cycle. When you ground an appliance, you are making it possible for electric charge to equilibrate between that appliance and the earth. The earth is approximately neutral, so a grounded appliance can't retain large amounts of either positive or negative charge. That's a nice safety feature because it means that you won't get a shock when you touch the appliance, even if one of its power wires comes loose and touches the case. Any charge that the power wire tries to deposit on the case will quickly flow to the earth as the appliance and earth equilibrate. But charge can't escape from the appliance through the grounding wire instantly. Light takes about 1 nanosecond to travel 1 foot and electricity takes a little longer than that. For charge to leave your appliance for the earth might well require 50 nanoseconds or more.
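To put those time scales side by side, here's a quick sketch computing the cycle times and wavelength involved. It assumes the common 2.45 GHz magnetron frequency, a typical household-oven figure rather than something quoted above, and uses only the standard relations period = 1/frequency and wavelength = (speed of light)/frequency.

```python
# Time scales and wavelength relevant to the grounding discussion.
SPEED_OF_LIGHT = 2.998e8          # m/s
AC_FREQUENCY = 60.0               # Hz, U.S. household power
MICROWAVE_FREQUENCY = 2.45e9      # Hz, typical microwave-oven magnetron (assumed)

print(f"60-Hz cycle:          {1 / AC_FREQUENCY * 1e3:.1f} ms")
print(f"microwave cycle:      {1 / MICROWAVE_FREQUENCY * 1e9:.2f} ns")
print(f"microwave wavelength: {SPEED_OF_LIGHT / MICROWAVE_FREQUENCY * 100:.1f} cm")
# roughly 17 ms, 0.4 ns, and 12 cm, in line with the figures in this answer
```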
That's not a problem for ordinary power distribution, so grounding is generally a great idea. Each cycle of the 60-Hz AC power in the U.S. takes about 17 milliseconds to complete, so the appliance and earth have plenty of time to equilibrate with one another. But a cycle of the microwave power in the oven takes only about 0.4 nanoseconds to complete and there's just no time for the appliance and earth to equilibrate. At microwave frequencies, the electric current flowing through a long wire is wavelike, meaning that at one instant in time the wire has both positive and negative patches, spaced half a wavelength apart along its length. It's carrying an electromagnetic ripple. The metal screen on the oven's door has to reflect the microwaves all by itself. It does this without a problem because the holes are so much smaller than 12.4 centimeters that currents easily flow around them during a cycle of the microwaves. Those currents are able to compensate for the holes in the screens and cause the microwaves to reflect perfectly. No. Birds do this all the time. What protects the bird is the fact that it doesn't complete a circuit. It touches only one wire and nothing else. Although there is a substantial charge on the power line and some of that charge flows onto the bird when it lands, the charge movement is self-limiting. Once the bird has enough charge on it to have the same voltage as the power line, charge stops flowing. And even though the power line's voltage rises and falls 60 times a second (or 50 times a second in some parts of the world), the overall charge movement at 10,000 volts just isn't enough to bother the bird much. At 100,000 volts or more, the charge movement is uncomfortable enough to keep birds away, so you don't see them landing on the extremely high-voltage transmission lines that travel across vast stretches of countryside. The story wouldn't be the same if the bird made the mistake of spanning the gap from one wire to another. In that case, current could flow through the bird from one wire to the other and the bird would run the serious risk of becoming a flashbulb. Squirrels occasionally do this trick when they accidentally bridge a pair of wires. Some of the unexpected power flickers that occur in places where the power lines run overhead are caused by squirrels and occasionally birds vaporizing when they let current flow between power lines. If both of you were electrically neutral before the kiss, nothing would happen. Evidently, one of you has developed a net charge and that charge is suddenly spreading itself out onto the other person during the kiss. That charge flow is an electric current and you feel currents flowing through your body as a shock. Most likely, one of you has been in contact with an insulating surface that has exchanged charge with you. For example, if you walked across wool carpeting in rubber-soled shoes, that carpeting has probably transferred some of its electrons to your shoes and your shoes have then spread those electrons out onto you. Rubber binds electrons more tightly than wool and so your shoes tend to steal a few electrons from the wool whenever they get the chance. If you walk around a bit or scuff your feet, you'll typically end up with quite a large number of stolen electrons on your body. When you then go and kiss Uncle Al, about half of those electrons spread suddenly onto him and that current flow is shocking! You have it exactly right.
Water itself is burned hydrogen, and the energy required to separate water into hydrogen and oxygen is equal to the energy released when the hydrogen subsequently burns back into water. Energy in and energy out. Just as in bicycling, if you want to roll downhill, you have to pedal uphill first. Anyone who claims to be able to extract useful energy through a process that starts with water and ends with water is a charlatan. Either they aren't producing any useful energy or it's coming from some other source. In these sorts of frauds, there is usually some electrical component that is supposedly needed to keep a minor part of the apparatus functioning. That component isn't insignificant at all; it's what actually keeps the entire apparatus functioning! Hydrogen has such a mythical aura to it, but in the context of energy, it's just another fuel. Actually, it's more of an energy storage medium than a basic fuel. That's because hydrogen doesn't occur naturally on earth and can only be produced by consuming another form of energy. There is so much talk about "the hydrogen economy" and the notion that hydrogen will rescue us from our dependence on petroleum. Sadly, politicians who promote hydrogen as the energy panacea neither understand science nor respect those who do. Since it takes just as much energy to produce hydrogen from water as is released when that hydrogen burns back into water, hydrogen alone won't save us. As we grow progressively more desperate for useable energy, the amount of fraud and misinformation will only increase. There are only a few true sources for useable energy: solar energy (which includes wind power, hydropower, and biomass), fossil fuels (which include petroleum and coal), geothermal energy, and nuclear fuels. Hydrogen is not among them; it can be produced only at the expense of one of the others. Even ethanol, which is touted as an environmentally sound replacement for petroleum, has its problems; producing a gallon of ethanol can all too easily consume a gallon of petroleum. Where energy is concerned, watch out for fraud, hype, PR, and politics. If we survive the coming energy and climate crises, it will be because we've learned to conserve energy and to obtain it primarily from solar and perhaps nuclear sources. It will also be because we've learned to set politics and self-interest aside long enough to make accurate analyses and sound decisions. The watt is a unit of power, equivalent to the joule-per-second. One joule is about the amount of energy it takes to raise a 12-ounce can of soda 1 foot. A 60-watt lightbulb uses 60 joules-per-second, so the power it consumes could raise a 24-can case of soda 2.5 feet each second. Most tables are about 2.5 feet above the floor. Next time you leave a 60-watt lightbulb burning while you're not in the room, imagine how tired you'd get lifting one case of soda onto a table every second for an hour or two. That's the mechanical effort required at the generating plant to provide the 60 watts of power you're wasting. If you don't need the light, turn off the lightbulb! What a great question! I love it. The answer is no, but there's much more to the story. I'll begin by looking at how dust settles in calm air near the ground. That dust experiences its weight due to gravity, so it tends to descend. Each particle would fall like a rock except that it's so tiny that it experiences overwhelming air resistance. Instead of falling, it descends at an incredibly slow terminal velocity, typically only millimeters per second.
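If you're curious how slow that settling really is, here's a rough estimate using Stokes' drag for a small sphere falling through air. The particle size and density are assumed, illustrative values; the point is only that the answer comes out in millimeters per second.

```python
# Terminal settling speed of a small dust grain in still air, via Stokes' drag:
# v = 2 * r^2 * density * g / (9 * air_viscosity). Particle values are assumed.
GRAVITY = 9.81             # m/s^2
AIR_VISCOSITY = 1.8e-5     # Pa*s, air at room temperature
PARTICLE_RADIUS = 2.5e-6   # m (a ~5-micron grain, assumed)
PARTICLE_DENSITY = 2000.0  # kg/m^3 (mineral-like dust, assumed)

terminal_speed = 2 * PARTICLE_RADIUS**2 * PARTICLE_DENSITY * GRAVITY / (9 * AIR_VISCOSITY)
print(f"terminal speed: {terminal_speed * 1000:.1f} mm/s")  # on the order of 1 mm/s
```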
It eventually lands on whatever is beneath it, so a room's floor gradually accumulates dust. But dust also accumulates on vertical walls and even on ceilings. That dust is held in place not by its weight but by electrostatic or chemical forces. When you go into an abandoned attic, most of the dust is on the floor, but there's a little on the walls and on the ceiling. OK, now to the space shuttle. The shuttle is orbiting the earth, which means that although it has weight and is falling freely, it never actually reaches the earth because it's heading sideways so fast. Without gravity, its inertia would carry it horizontally out into space along a straight line path. Gravity, however, bends that straight line path into an elliptical arc that loops around the earth as an orbit. So far no real surprises: dust near ground level settles in calm air and the shuttle orbits the earth. The surprise is that particles of space dust also orbit the earth! The shuttle orbits above the atmosphere, where there is virtually no air. Without air to produce air resistance, the dust particles also fall freely. Those with little horizontal speed simply drop into the atmosphere and are lost. But many dust particles have tremendous horizontal speeds and orbit the earth like tiny space shuttles or satellites. Whether they are dropping toward the atmosphere or orbiting the earth, these space dust particles are typically traveling at velocities that are quite different in speed or direction from the velocity of the space shuttle. The relative speed between a dust particle and the shuttle can easily exceed 10,000 mph. When such a fast-moving dust particle hits the space shuttle, it doesn't "settle." Rather, it collides violently with the shuttle's surface. These dust-shuttle collisions erode the surfaces of the shuttle and necessitate occasional repairs or replacements of damaged windows and sensors. Astronauts on spacewalks also experience these fast collisions with space dust and rely on their suits to handle all the impacts. Without any air to slow the relative speeds and cushion the impacts, it's rare that a particle of space dust lands gracefully on the shuttle's surface. In any case, gravity won't hold a dust particle in place on the shuttle because both the shuttle and dust are falling freely and gravity doesn't press one against the other. But electrostatic and chemical attractions can hold some dust particles in place once they do land. So the shuttle probably does accumulate a very small amount of space dust during its travels. The #2-pencil requirement is mostly historical. Because modern scantron systems can use all the sophistication of image sensors and computer image analysis, they can recognize marks made with a variety of materials and they can even pick out the strongest of several marks. If they choose to ignore marks made with materials other than pencil, it's because they're trying to be certain that they're recognizing only marks made intentionally by the user. Basically, these systems can "see" most of the details that you can see with your eyes and they judge the markings almost as well as a human would. The first scantron systems, however, were far less capable. They read the pencil marks by shining light through the paper and into Lucite light guides that conveyed the transmitted light to phototubes. Whenever something blocked the light, the scantron system recorded a mark.
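Backing up to the space-dust speeds for a moment, here's a small sketch of why relative speeds above 10,000 mph are easy to come by in orbit. It uses the standard circular-orbit relation and an assumed, typical low-earth-orbit altitude; the constants are textbook values.

```python
# Speed of anything in a circular low earth orbit: v = sqrt(G * M / r).
import math

G = 6.674e-11            # m^3 kg^-1 s^-2
EARTH_MASS = 5.972e24    # kg
EARTH_RADIUS = 6.371e6   # m
ALTITUDE = 300e3         # m, assumed shuttle-era orbit height

orbital_speed = math.sqrt(G * EARTH_MASS / (EARTH_RADIUS + ALTITUDE))
print(f"{orbital_speed / 1000:.1f} km/s  (~{orbital_speed * 2.237:.0f} mph)")
# about 7.7 km/s, or 17,000 mph, so two such orbiting objects moving in
# different directions can easily meet at well over 10,000 mph
```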
The marks therefore had to be opaque in the range of light wavelengths that the phototubes sensed, which is mostly blue. Pencil marks were the obvious choice because the graphite in pencil lead is highly opaque across the visible light spectrum. Graphite molecules are tiny carbon sheets that are electrically conducting along the sheets. When you write on paper with a pencil, you deposit these tiny conducting sheets in layers onto the paper and the paper develops a black sheen. It's shiny because the conducting graphite reflects some of the light waves from its surface and it's black because it absorbs whatever light waves do manage to enter it. A thick layer of graphite on paper is not only shiny black to reflected light, it's also opaque to transmitted light. That's just what the early scantron systems needed. Blue inks don't absorb blue light (that's why they appear blue!), so those early scantron systems couldn't sense the presence of marks made with blue ink. Even black inks weren't necessarily opaque enough in the visible for the scantron system to be confident that it "saw" a mark. In contrast, modern scantron systems used reflected light to "see" marks, a change that allows scantron forms to be double-sided. They generally do recognize marks made with black ink or black toner from copiers and laser printers. I've pre-printed scantron forms with a laser printer and it works beautifully. But modern scantron systems ignore marks made in the color of the scantron form itself so as not to confuse imperfections in the form with marks by the user. For example, a blue scantron form marked with blue ink probably won't be read properly by a scantron system. As for why only #2 pencils, that's a mechanical issue. Harder pencil leads generally don't produce opaque marks unless you press very hard. Since the early scantron machines needed opacity, they missed too many marks made with #3 or #4 pencils. And softer pencils tend to smudge. A scantron sheet filled out using a #1 pencil on a hot, humid day under stressful circumstances will be covered with spurious blotches and the early scantron machines confused those extra blotches with real marks. Modern scantron machines can easily recognize the faint marks made by #3 or #4 pencils and they can usually tell a deliberate mark from a #1 pencil smudge or even an imperfectly erased mark. They can also detect black ink and, when appropriate, blue ink. So the days of "be sure to use a #2 pencil" are pretty much over. The instruction lingers on nonetheless. One final note: I had long suspected that the first scanning systems were electrical rather than optical, but I couldn't locate references. To my delight, Martin Brown informed me that there were scanning systems that identified pencil marks by looking for their electrical conductivity. Electrical feelers at each end of the markable area made contact with that area and could detect pencil via its ability to conduct electric current. To ensure enough conductivity, those forms had to be filled out with special pencils having high conductivity leads. Mr. Brown has such an IBM Electrographic pencil in his collection. This electrographic and mark sense technology was apparently developed in the 1930s and was in wide use through the 1960s. Power outages come in a variety of types, one of which involves a substantial decrease in the voltage supplied to your home. The most obvious effect of this voltage decrease is the dimming of the incandescent lights, which is why it's called a "brownout." 
The filament of a lightbulb is a poor conductor of electricity, so keeping an electric charge moving through it steadily requires a forward force. That forward force is provided by the voltage difference between the two wires: the one that delivers charges to the filament and the one that collects them back from the filament. As the household voltage decreases, so does the force on each charge in the filament. The current passing through the filament decreases and the filament receives less electric power. It glows dimly. At the risk of telling you more than you ever want to know, I'll point out that the filament behaves approximately according to Ohm's law: the current that flows through it is proportional to the voltage difference between its two ends. The larger that voltage difference, the bigger the forces and the more current that flows. This ohmic behavior allows incandescent lightbulbs to survive decreases in voltage unscathed. They don't, however, do well with increases in voltage, since they'll then carry too much current and receive so much power that they'll overheat and break. Voltage surges, not voltage decreases, are what kill lightbulbs. The other appliances you mention are not ohmic devices and the currents that flow through them are not simply proportional to the voltage supplied to your home. Motors are a particularly interesting case; the average current a motor carries is related in a complicated way to how fast and how easily it's spinning. A motor that's turning effortlessly carries little average current and receives little electric power. But a motor that is struggling to turn, either because it has a heavy burden or because it can't obtain enough electric power to overcome starting effects, will carry a great deal of average current. An overburdened or non-starting motor can become very hot because its wiring deals inefficiently with the large average current, and it can burn out. While I've never heard of a refrigerator motor dying during a brownout, it wouldn't surprise me. I suspect that most appliance motors are protected by thermal sensors that turn them off temporarily whenever they overheat. Modern electronic devices are also interesting with respect to voltage supply issues. Electronic devices operate on specific internal voltage differences, all of which are DC — direct current. Your home is supplied with AC — alternating current. The power adapters that transfer electric power from the home's AC power to the device's DC circuitry have evolved over the years. During a brownout, the older types of power adapters simply provide less voltage to the electronic devices, which misbehave in various ways, most of which are benign. You just want to turn them off because they're not working properly. It's just as if their batteries are worn out. But the most modern and sophisticated adapters are nearly oblivious to the supply voltage. Many of them can tolerate brownouts without a hitch and they'll keep the electronics working anyway. The power units for laptops are a case in point: they can take a whole range of input AC voltages because they prepare their DC output voltages using switching circuitry that adjusts for input voltage. They make few assumptions about what they'll be plugged into and do their best to produce the DC power required by the laptop. In short, the motors in your home won't like the brownout, but they're probably protected against the potential overheating problem.
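Here's a minimal sketch of that ohmic behavior during a brownout. It treats the filament as a fixed resistance, which is a simplification since a real filament's resistance drops as it cools, and the 120-volt, 60-watt bulb is just an assumed example.

```python
# Why an ohmic filament merely dims when the supply voltage sags.
RATED_VOLTAGE = 120.0   # volts, assumed household supply
RATED_POWER = 60.0      # watts, assumed bulb rating
resistance = RATED_VOLTAGE**2 / RATED_POWER   # from P = V^2 / R

for voltage in (120.0, 100.0, 90.0):
    power = voltage**2 / resistance
    print(f"{voltage:.0f} V -> {power:.0f} W")
# 120 V gives the full 60 W; 100 V gives about 42 W; 90 V gives about 34 W
```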
The electronic appliances will either misbehave benignly or ride out the brownout unperturbed. Once in a while, something will fail during a brownout. But I think that most of the damage is done during the return to normal after the brownout. The voltages bounce around wildly for a second or so as power is restored and those fluctuations can be pretty hard on some devices. It's probably worth turning off sensitive electronics once the brownout is underway because you don't know what will happen on the way back to normal. That tear in the window screen presents three potential problems: microwave leakage, evanescent waves, and arcing. As long as the hole is small, less than a centimeter or so, it's not likely to allow much microwave leakage. The oven's microwaves have a wavelength of 12.4 centimeters and they'll reflect from conducting surfaces with holes much smaller than that wavelength. A foot from your oven, there probably won't be any significant microwave intensity, although the only way to be sure is with a microwave leakage meter. The evanescent wave problem is more likely. When any electromagnetic wave reflects from a conducting surface that has small holes in it, there is what is known as an evanescent wave extending into and somewhat beyond each hole. It's as though the wave is trying to figure out whether or not it can pass through the opening and so it tries. Even when it discovers that the hole is far too small for it to pass through (i.e., much smaller than its wavelength), it still offers electromagnetic intensity in the region just beyond the hole. The extent of the evanescent wave increases with the size of the hole. The microwave oven's screen has very small holes and it is located inside the glass window. The evanescent waves associated with those holes cut off so quickly that you can hold your hand against the glass and not expose your skin to significant microwaves. But once you've torn a larger hole in the screen, the evanescent waves can extend farther through that screen and perhaps out beyond the surface of the glass window. If you press your hand against the window just in front of the tear while the microwave oven is on, you may burn your hand. Finally, there is the issue of arcing. To reflect the microwaves, the conducting screen must carry electric currents. The microwaves' electric fields push electric charge back and forth in the conducting screen and it is that moving charge (i.e., electric current) that ultimately redirects the microwaves back into the cooking chamber as a reflection. Those electric currents in the screen are real and they're not going to take kindly to that tear. It's a weak spot in the conducting surface through which they flow. Weak electrical paths can heat up like lightbulb filaments when they carry currents. Moreover, charge that should flow across the torn region can accumulate on sharp edges and leap through the air as an arc. If either of these processes happens, it may scorch the window and the screen, and cause increasing trouble. You could be lucky: the leakage could be zero, the evanescent waves could remain far enough inside the window to never cause injury, and the tear could never heat up or arc. But the risk of operating this damaged microwave oven is not insignificant. Since it's an installed unit, I'd suggest replacing the screen or the door. There are a number of websites that sell replacement parts for microwave ovens and I have used them to replace the door on our microwave oven.
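If you want a feel for how quickly those evanescent fields die away, here's a rough, idealized estimate that treats each opening as a tiny circular waveguide operating below cutoff. The cutoff formula is standard waveguide theory and the two hole sizes are assumed for illustration; a real torn screen is messier than this, so treat the numbers as order-of-magnitude only.

```python
# Idealized estimate of how far microwave fields reach beyond a hole in the
# screen, modeling the hole as a circular waveguide below cutoff. For the
# lowest (TE11) mode, cutoff wavelength ~ 1.71 * hole diameter; below cutoff
# the fields decay roughly as exp(-kappa * distance).
import math

WAVELENGTH = 0.122   # m, free-space wavelength of ~2.45 GHz microwaves

def decay_length(hole_diameter_m):
    cutoff_wavelength = 1.71 * hole_diameter_m
    kappa = (2 * math.pi / cutoff_wavelength) * math.sqrt(1 - (cutoff_wavelength / WAVELENGTH) ** 2)
    return 1 / kappa   # distance over which the field falls by a factor of e

for d in (0.002, 0.010):   # a 2 mm screen hole vs. a 1 cm tear (assumed sizes)
    print(f"hole {d * 1000:.0f} mm: field e-folding length ~{decay_length(d) * 1000:.1f} mm")
# the fields beyond a normal screen hole fade within a fraction of a millimeter,
# while those beyond a centimeter-scale tear reach several millimeters
```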
There is nothing hypothetical about the earth orbiting the moon; it's as real as the moon orbiting the earth. The earth and the moon are simply two huge balls in otherwise empty space and though the mass of one is 81 times the mass of the other, they're both in motion. More specifically, they're in orbit around their combined center of mass — the effective location of the earth-moon system. Since the earth is so much more massive than the moon, their combined center of mass is 81 times closer to the middle of the earth than it is to the middle of the moon. In fact, it's inside the earth, though not at the middle of the earth. As a result, the earth's orbital motion takes the form of a wobble rather than a more obvious looping path. Nonetheless, the earth is orbiting. I hope that you can see that there is no reason why the earth should be fixed in space while the moon orbits about it. You've been sold a bill of goods. The mistaken notion that the moon orbits a fixed earth is a wonderful example of the "factoid science" that often passes for real science in our society. Because thinking and understanding involve hard work, people are more comfortable when the thought and understanding have been distilled out of scientific issues and they've been turned into memorizable sound bites. Those sound bites are easy to teach and easy to test, but they're mostly mental junk food. A good teacher, like a good scientist, will urge you to question such factoids until you understand the science behind them and why they might or might not be true. When my children were young, I often visited their schools to help teach science. In third grade, the required curriculum had them categorizing things into solutions or mixtures. Naturally, I showed them a variety of things that are neither solutions nor mixtures. It was a blast. Science is so much more interesting than a collection of 15-second sound bites. I'll assume that by "bigger lens" you mean one that is larger in diameter and that therefore collects all the light passing through a larger surface area. While a larger-diameter lens can project a brighter image onto the image sensor or film than a smaller-diameter lens, that's not the whole story. Producing a better photo or video involves more than just brightness. Lenses are often characterized by their f-numbers, where f-number is the ratio of effective focal length to effective lens diameter. Focal length is the distance between the lens and the real image it forms of a distant object. For example, if a particular converging lens projects a real image of the moon onto a piece of paper placed 200 millimeters (200 mm) from the lens, then that lens has a focal length of 200 mm. And if the lens is 50 mm in diameter, it has an f-number of 4 because 200 mm divided by 50 mm is 4. Based on purely geometrical arguments, it's easy to show that lenses with equal f-numbers project images of equal brightness onto their image sensors and the smaller the f-number, the brighter the image. Whether a lens is a wide-angle or telephoto, if it has an f-number of 4, then its effective focal length is four times the effective diameter of its light gathering lens. Since telephoto lenses have long focal lengths, they need large effective diameters to obtain small f-numbers. But notice that I referred always to "effective diameter" and "effective focal length" when defining f-number. 
That's because there are many modern lenses that are so complicated internally that simply dividing the lens diameter by the distance between the lens and image sensor won't tell you much. Many of these lenses have zoom features that allow them to vary their effective focal lengths over wide ranges and these lenses often discard light in order to improve image quality and avoid dramatic changes in image brightness while zooming. You might wonder why a lens would ever choose to discard light. There are at least two reasons for doing so. First, there is the issue of image quality. The smaller the f-number of a lens, the more precise its optics must be in order to form a sharp image. Low f-number lenses are bringing together light rays from a wide range of angles and getting all of those rays to overlap perfectly on the image sensor is no small feat. Making a high-performance lens with an f-number less than 2 is a challenge and making one with an f-number of less than 1.2 is extremely difficult. There are specialized lenses with f-numbers below 1 and Canon sold a remarkable f0.95 lens in the early 1960's. The lowest f-number camera lens I have ever owned is an f1.4. Secondly, there is the issue of depth-of-focus. The smaller the f-number, the smaller the depth of focus. Again, this is a geometry issue: a low-f-number lens is bringing together light rays from a wide range of angles and those rays only meet at one point before separating again. Since objects at different distances in front of the lens form images at different distances behind the lens, it's impossible to capture sharp images of both objects at once on a single image sensor. With a high-f-number lens, this fact isn't a problem because the light rays from a particular object are rather close together even when the object's image forms before or after the image sensor. But with a low-f-number lens, the light rays from a particular object come together acceptably only at one particular distance from the lens. If the image sensor isn't at that distance, then the object will appear all blurry. If a zoom lens didn't work to keep its f-number relatively constant while zooming from telephoto to wide angle, its f-number would decrease during that zoom and its depth-of-focus would shrink. To avoid that phenomenon, the lens strategically discards light so as to keep its f-number essentially constant during zooming. In summary, larger diameter lenses tend to be better at producing photographic and video images, but that assumes that they are high-quality and that they can shrink their effective diameters in ways that allow them to imitate high-quality lenses of smaller diameters when necessary. But flexible characteristics always come at some cost of image quality and the very best lenses are specialized to their tasks. Zoom lenses can't be quite as good as fixed focal length lenses and a large-diameter lens imitating a small-diameter lens by throwing away some light can't be quite as good as a true small-diameter lens. As for my sources, one of the most satisfying aspects of physics is that you don't always need sources. Most of the imaging issues I've just discussed are associated with simple geometric optics, a subject that is part of the basic toolbox of an optical physicist (which I am). You can, however, look this stuff up in any book on geometrical optics. Yes, but it's not a good idea. Depending on the type of plate, you can either damage your microwave oven or damage the plate. 
If a plate is "microwave safe," it will barely absorb the microwaves and heat extremely slowly. In effect, the microwave oven will be operating empty and the electromagnetic fields inside it will build up to extremely high levels. Since the walls of the oven are mirrorlike and the plate is almost perfectly transparent to microwaves, the electromagnetic waves streaming out of the oven's magnetron tube bounce around endlessly inside the oven's cooking chamber. The resulting intense fields can produce various types of electric breakdown along the walls of the cooking chamber and thereby damage the surface with burns or arcs. Furthermore, the intense microwaves in the cooking chamber will reflect back into the magnetron and can upset its internal oscillations so that it doesn't function properly. Although magnetrons are astonishingly robust and long-lived, they don't appreciate having to reabsorb their own emitted microwaves. In short, your plates will heat up slowly and you'll be aging your microwave oven in the process. You could wet the plates before putting them in the microwave oven to speed the heating and decrease the wear-and-tear on the magnetron, but then you'd have to dry the plates before use. If a plate isn't "microwave safe," then it will absorb microwaves and heat relatively quickly. If it absorbs the microwaves uniformly and well, then you can probably warm it to the desired temperature without any problems as long as you know exactly how many seconds it takes and adjust for the total number of plates you're warming. If you heat a plate too long, bad things will happen. It may only amount to burning your fingers, but some plates can't take high temperatures without melting, cracking, or popping. Unglazed ceramics that have soaked up lots of water will heat rapidly because water absorbs microwaves strongly. Water trapped in pores in such ceramics can transform into high-pressure steam, a result that doesn't seem safe to me. And if a plate absorbs microwaves nonuniformly, then you'll get hotspots or burned spots on the plate. Metalized decorations on a plate will simply burn up and blacken the plate. Cracks that contain water will overheat and the resulting thermal stresses will extend the cracks further. So this type of heating can be stressful to the plates. You can only go a few feet under water before you'll no longer be able to draw air into your lungs through that hose. It's a pressure problem. The water pressure outside your chest increases rapidly as you go deeper, but the air pressure inside the hose and your mouth barely changes at all. Pretty soon, you'll have so much more pressure outside your lungs than inside them that you won't be able to draw in any more air. Your muscles just won't be strong enough. The water pressure increases quickly with depth because each layer of water must support the weight of all the water layers above it. Since water is dense, heavy stuff, the weight piles on quickly and it takes only 10 meters (34 feet) of descent to increase the water pressure from atmospheric to twice atmospheric. In contrast, the air in the hose is light, fluffy stuff, so its pressure increases rather slowly with depth. Even though each layer of air has to support the weight of all the layers of air above it, the rise in pressure is extremely gradual. It takes miles of atmosphere above the earth for the air pressure to build up to atmospheric pressure near the ground. The air pressure in your hose is therefore approximately unchanged by your descent into the water. 
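Here's a short sketch of that pressure imbalance at a few modest depths. The water density and gravity values are standard; the "muscle limit" is a rough, assumed figure for how hard chest muscles can inhale against an imbalance, not a number from this answer.

```python
# Pressure imbalance on a snorkeler's chest: the hose keeps the lungs near
# atmospheric pressure while the surrounding water pressure grows as
# density * gravity * depth.
WATER_DENSITY = 1000.0   # kg/m^3
GRAVITY = 9.81           # m/s^2
MUSCLE_LIMIT = 10_000.0  # Pa, very rough assumed limit for inhaling against an imbalance

for depth_m in (0.5, 1.0, 2.0):
    imbalance = WATER_DENSITY * GRAVITY * depth_m
    note = "still breathable" if imbalance < MUSCLE_LIMIT else "too much to inhale against"
    print(f"{depth_m:.1f} m deep: {imbalance / 1000:.1f} kPa imbalance -- {note}")
```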
With the water pressure outside rising quickly as you go deeper and the air pressure in your mouth rising incredibly slowly as you go deeper, you quickly find it hard to breathe. Your muscles can push your chest outward against a modest pressure imbalance between outside and inside. But by the time you're a few feet below the surface, you just can't draw air into your lungs through that hose anymore. You need pressurized air, such as that provided by a scuba outfit or a deep-sea diver's compressor system. Despite the freezer's low temperature and the motionlessness of all the frozen foods inside it, there is still plenty of microscopic motion going on. Every surface inside the freezer is active, with individual molecules landing and leaving all the time. Whenever a molecule on the surface of a piece of food manages to gather enough thermal energy from its neighbors, it will break free of the surface and zip off into the air as a vapor molecule. And whenever a vapor molecule in the air collides with the surface of another piece of food, it may stick to that surface and remain there indefinitely. Since the freezer has a nearly airtight seal, the air it contains remains inside it for a long time. That means that the odor molecules that occasionally break free of a pungent casserole at one end of the freezer have every opportunity to land on and stick to an ice cube at the other end. With time, the ice cube acquires the scent of the casserole and becomes unappealing. To stop this migration of molecules, you should seal each item in the freezer in its own container. That way, any molecules that leave the food's surface will eventually return to it. Since ice cubes are normally exposed to the air in the freezer, keeping the odor molecules trapped in their own sealed containers keeps the freezer air fresh and the ice cubes odor-free. No, there is no square-peg in round-hole effect going on in microwave ovens. Microwaves reflect from conducting surfaces, just as light waves reflect from shiny metals, and they can't pass through holes in conducting surfaces if those holes are substantially smaller than their wavelengths. The holes in the conducting mesh covering the microwave oven's window are simply too small for the microwaves and the microwaves are reflected by that mesh. Microwaves themselves have no well-defined shape but they do have firm rules governing their overall structures. Books usually draw microwaves (and all other electromagnetic waves) as wavy lines, as though something were truly going up and down in space. From that misleading representation, it's easy for people to suppose that electromagnetic waves can't get through certain openings. In reality, electromagnetic waves consist of electric and magnetic fields (influences that push on electric charges and magnetic poles, respectively) that point up and down in a rippling fashion, but nothing actually travels up and down per se. The spatial structures of these fields are governed by Maxwell's equations, a set of four famous relationships that bind electricity and magnetism into a single, unified classical theory. Maxwell's equations dictate the structures of electromagnetic waves and predict that electromagnetic waves on one side of a conducting surface can't propagate through to the other side of that surface. Even if there are small holes in the conducting surface, holes that are much smaller than the wavelength of the waves, those waves can't propagate through the surface.
More specifically, the fields die off exponentially as they try to penetrate through the holes and the waves don't propagate on the far side. The choice of round holes in the oven mesh is simply a practical one. You can pack round holes pretty tightly in a surface while leaving their conducting boundaries relatively robust. And round holes treat all electromagnetic waves equally because they have no wide or narrow directions. Paper consists mostly of cellulose, a natural polymer (i.e., a plastic) built by stringing together thousands of individual sugar molecules into vast chains. Like the sugars from which it's constructed, cellulose's molecular pieces cling tightly to one another at room temperature and make it rather stiff and brittle. Moreover, cellulose's chains are so entangled with one another that they couldn't pull apart even if their molecular pieces didn't cling so tightly. These effects are why it's so hard to reshape cellulose and why wood and paper don't melt; they burn or decompose instead. In contrast, chicle — the polymer in chewing gum — can be reshaped easily at room temperature. Even though pure cellulose can't be reshaped by melting, it can be softened with water and/or heat. Like ordinary sugar, cellulose is attracted to water and water molecules easily enter its chains. This water lubricates the chains so that the cellulose becomes somewhat pliable and heat increases that pliability. When you iron a dampened cotton or linen shirt, both of which consist of cellulose fibers, you're taking advantage of that enhanced pliability to reshape the fabric. But even when dry, fibrous materials such as paper, cotton, or linen have some pliability because thin fibers of even brittle materials can bend significantly without breaking. If you bend paper gently, its fibers will bend elastically and when you let the paper relax, it will return to its original shape. However, if you bend the paper and keep it bent for a long time, the cellulose chains within the fibers will begin to move relative to one another and the fibers themselves will begin to move relative to other fibers. Although both of these motions can be facilitated by moisture and heat, time alone can get the job done at room temperature. Over months or years in a tightly rolled shape, a sheet of paper will rearrange its cellulose fibers until it adopts the rolled shape as its own. When you then remove the paper from its constraints, it won't spontaneously flatten out. You'll have to reshape it again with time, moisture, and/or heat. If you press it in a heavy book for another long period, it'll adopt a flat shape again. The rear window of a car is made of tempered glass — the glass is heated approximately to its softening temperature and then cooled abruptly to put its surface under compression, leaving its inside material under tension. That tempering process makes the glass extremely strong because its compressed surface is hard to tear. But once a tear does manage to propagate through the compressed surface layer into the tense heart of the glass, the entire window shreds itself in a process called dicing fracture — it tears itself into countless little cubes. The stresses frozen into the tempered glass affect its polarizability and give it strange characteristics when exposed to the electromagnetic fields in light. This stressed glass tends to rotate polarizations of the light passing through it. As a result, you see odd reflections of the sky (skylight is polarized to some extent).
Those polarization effects become immediately apparent when you wear polarizing sunglasses. Mercury does expand with temperature; moreover, it expands more rapidly with temperature than glass does. That's why the column of mercury rises inside its glass container. While both materials expand as they get hotter, the mercury experiences a larger increase in volume and must flow up the narrow channel or "capillary" inside the glass to find room for itself. Mercury is essentially incompressible so that, as it expands, it pushes as hard as necessary on whatever contains it in order to obtain the space it needs. That's why a typical thermometer has an extra chamber at the top of its capillary. That chamber will receive the expanding mercury if it rises completely up the capillary so that the mercury won't pop the thermometer if it is overheated. In short, the force pushing mercury up the column can be enormous. The force pushing mercury back down the column as it cools is tiny in comparison. Mercury certainly does contract when cooled, so the manufacturer is telling you nonsense. But just because the mercury contracts as it cools doesn't mean that it will all flow back down the column. The mercury needs a push to propel it through its narrow channel. Mercury is attracted only weakly to glass, so it doesn't really adhere to the walls of its channel. However, like all liquids, mercury has a viscosity, a syrupiness, and this viscosity slows its motion through any pipe. The narrower the pipe, the harder one has to push on a liquid to keep it flowing through that pipe. In fact, flow through a pipe typically scales as the 4th power of that pipe's radius, which is why even modest narrowing of arteries can dramatically impair blood flow in people. The capillaries used in fever thermometers are so narrow that mercury has tremendous trouble flowing through them. It takes big forces to push the mercury quickly through such a capillary. During expansion, there is easily enough force to push the mercury up through the capillary. However, during contraction, the forces pushing the mercury back down through the capillary are too weak to keep the column together. That's because the only thing above the column of liquid mercury is a thin vapor of mercury gas and that vapor pushes on the liquid much too feebly to have a significant effect. And while gravity may also push down on the liquid if the thermometer is oriented properly, it doesn't push hard enough to help much. The contracting column of mercury takes hours to drift downward, if it drifts downward at all. It often breaks up into sections, each of which drifts downward at its own rate. And, as two readers (Michael Hugh Knowles and Miodrag Darko Matovic) have both pointed out to me in recent days, there is a narrow constriction in the capillary near its base and the mercury column always breaks at that constriction during contraction. Since the top portion of the mercury column is left almost undisturbed when the column breaks at the constriction, it's easy to read the highest temperature reached by the thermometer. Shaking the thermometer hard is what gets the mercury down and ultimately drives it through the constriction so that it rejoins into a single column. In effect, you are making the glass accelerate so fast that it leaves the mercury behind. The mercury isn't being pushed down to the bottom of the thermometer; instead, the glass is leaping upward and the mercury is lagging behind.
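Since that 4th-power law does so much of the work in this answer, here's a tiny numerical illustration of it. It's pure ratios from the Poiseuille relation mentioned above, so no material properties are needed.

```python
# Poiseuille scaling: for a fixed pressure difference, length, and viscosity,
# the volume flow rate through a pipe is proportional to radius**4.
def relative_flow(radius_ratio):
    return radius_ratio ** 4

for ratio in (1.0, 0.5, 0.1):
    print(f"radius x{ratio:.1f} -> flow x{relative_flow(ratio):.4f}")
# halving the radius cuts the flow to 1/16; a channel a tenth as wide carries
# only 1/10,000 of the flow, which is why mercury creeps so slowly in a capillary
```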
The mercury drifts to the bottom of the thermometer because of its own inertia. You're right that the glass tube acts as a magnifier for that thin column of mercury. Like a tall glass of water, it acts as a cylindrical lens that magnifies the narrow sliver of metal into a wide image. As long as the oven's metal bottom is sound underneath the rust, there isn't a problem. The cooking chamber walls are so thick and highly conducting that they reflect the microwaves extremely well even when they have a little rust on them. However, if the metal is so rusted that it loses most of its conductivity in the rust sites, you'll get local heating across the rusty patches and eventually leakage of microwaves. If you're really concerned that there may be trouble, run the microwave oven empty for about 20 seconds and then (carefully!) touch the rusty spots. If they aren't hot, then the metal underneath is doing its job just fine. The salesperson you spoke to was simply wrong. If you'll allow me to stand on my soapbox for a minute, I'll tell you that this is a perfect example of how important it is for everyone to truly learn basic science while they're in school and not to simply suffer through the classes as a way to obtain a degree. The salesperson is apparently oblivious to the differences between types of "radiation," to the short- and long-term effects of those radiations, and to the importance of intensity in radiation. Let's start with the differences in types of radiation. Basically, anything that travels outward from a source is radiation, from visible light, to ultraviolet, to X-rays, to microwaves, to alpha particles, to neutrons, and even to flying pigeons. These different radiations do different things when they hit you, particularly the pigeons. While "ionizing radiations" such as X-rays, ultraviolet, alpha particles, and neutrons usually have enough localized energy to do chemical damage to the molecules they hit, "non-ionizing radiation" such as microwaves and pigeons does not damage molecules. When you and your organic friend worry about toxic changes in food or precancerous changes in your tissue, what really worries you are molecular changes. Microwaves and pigeons don't cause those sorts of changes. Microwaves effectively heat food or tissue thermally, while pigeons bruise food or tissue on impact. Wearing a lead apron while working around ionizing radiation makes sense, although a simple layer of fabric or sunscreen is enough to protect you from most ultraviolet. To protect yourself against pigeons, wear a helmet. And to protect yourself against microwaves, use metal. The cooking chamber of the microwave oven is a metal box (including the screened front window). So little microwave "radiation" escapes from this metal box that it's usually hard to detect, let alone enough to cause a safety problem. There just isn't much microwave intensity coming from the oven and intensity matters. A little microwave radiation does nothing at all to you; in fact, you emit microwaves yourself! If you want to detect some serious microwaves, put that microwave detector near your cellphone! The cellphone's job is to emit microwaves, right next to your ear! Before you give up on microwave ovens, you should probably give up on cellphones. That said, I think the worst danger about cellphones is driving into a pedestrian or a tree while you're under the influence of the conversation. Basically, non-ionizing radiation such as microwaves is only dangerous if it cooks you.
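Here's a hedged, back-of-the-envelope way to see the ionizing versus non-ionizing distinction: compare the energy carried by a single photon, E = h times frequency, with the few electron volts needed to break a typical chemical bond. The frequencies and the 4 eV bond energy below are representative round numbers, not values from the answer above.

```python
# Energy per photon (E = h * f) for several kinds of electromagnetic radiation,
# compared with a rough chemical-bond energy. Values are illustrative.
PLANCK = 6.626e-34        # joule-seconds
EV = 1.602e-19            # joules per electron-volt
BOND_ENERGY_EV = 4.0      # rough energy of a typical chemical bond (assumed)

examples = {
    "microwave (2.45 GHz)": 2.45e9,
    "visible light (green)": 5.5e14,
    "ultraviolet (UVB)": 1.0e15,
    "X-ray": 3.0e18,
}
for name, frequency in examples.items():
    energy_ev = PLANCK * frequency / EV
    verdict = "can break bonds" if energy_ev >= BOND_ENERGY_EV else "cannot break bonds"
    print(f"{name}: {energy_ev:.2e} eV per photon -- {verdict}")
# a microwave photon carries about a hundred-thousandth of an electron volt,
# far too little to do chemical damage no matter how you slice it
```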
At the intensities emitted by a cellphone next to your ear, it's possible that some minor cooking is taking place. However, the cancer risk is almost certainly nil. Despite all this physics reality, salespeople and con artists are still more than happy to sell you protection against the dangers of modern life. I chuckle at the shields people sell to install on your cellphones to reduce their emissions of harmful radiation. The whole point of the cellphone is to emit microwave signals to the receiving tower, so if you shield it you spoil its operation! It would be like wrapping an X-ray machine in a lead box to protect the patient. Sure, the patient would be safe but the X-ray machine would barely work any more. Returning to the microwave cooking issue, once the food comes out of the microwave oven, there are no lingering effects of its having been cooked with microwaves. There is no convincing evidence of any chemical changes in the food and certainly no residual microwaves lingering in the food. If you're worried about toxic changes to your food, avoid broiling or grilling. Those high-surface-temperature cooking techniques definitely do chemical damage to the food, making it both tasty and potentially a tiny bit toxic. One of the reasons why food cooked in the microwave oven is so bland is that those chemical changes don't happen. As a result, microwave ovens are better for reheating than for cooking. No, you cannot store charged gases in any simple container. If you try to store a mixture of positively and negatively charged gas particles in a single container, those opposite charges will attract and neutralize one another. And if you try to store only one type of charge in a container, those like charges will repel and push one another to the walls of the container. If the container itself conducts electricity, the charges will escape to the outside of the container and from there into the outside world. And if the container is insulating, the charges will stick to its inside surface and you'll have trouble getting them to leave. Moreover, you'll have trouble putting large numbers of those like-charged gas particles into the container in the first place because the ones that enter first will repel any like charges that follow. I like to view problems like this one in terms of momentum: when it reaches the pavement, a falling egg has a large amount of downward momentum and it must get rid of that downward momentum gracefully enough that it doesn't break. The whole issue in protecting the egg is in extracting that momentum gracefully. Momentum is a conserved physical quantity, meaning that it cannot be created or destroyed. It can only be passed from one object to another. When you let go of the packaged egg and it begins to fall, the downward momentum that gravity transfers into the egg begins to accumulate in the egg. Before you let go, your hand was removing the egg's downward momentum as fast as gravity was adding it, but now the egg is on its own! Because momentum is equal to an object's mass times its velocity, the accumulating downward momentum in the egg is reflected in its increasing downward speed. With each passing second, the egg receives another dose of downward momentum from the earth. By the time the egg reaches the pavement, it's moving downward fast and has a substantial amount of downward momentum to get rid of. Incidentally, the earth, which has given up this downward momentum, experiences an opposite response—it has acquired an equal amount of upward momentum.
However, the earth has such a huge mass that there is no noticeable increase in its upward speed. To stop, the egg must transfer all of its downward momentum into something else, such as the earth. It can transfer its momentum into the earth by exerting a force on the ground for a certain amount of time. A transfer of momentum, known as an impulse, is the product of a force times a time. To get rid of its momentum, the egg can exert a large force on the ground for a short time or a small force for a long time, or anything in between. If you let it hit the pavement unprotected, the egg will employ a large force for a short time and that will be bad for the egg. After all, the pavement will push back on the egg with an equally strong but oppositely directed force and punch a hole in the egg. To make the transfer of momentum graceful enough to leave the egg intact, the protective package must prolong the momentum transfer. The longer it takes for the egg to get rid of its downward momentum, the smaller the forces between the egg and the slowing materials. That's why landing on a soft surface is a good start: it prolongs the momentum transfer and thereby reduces the peak force on the egg. But there is also the issue of distributing the slowing forces uniformly on the egg. Even a small force can break the egg if it's exerted only on one tiny spot of the egg. So spreading out the force is important. Probably the best way of distributing the slowing force would be to float the egg in the middle of a fluid that has the same average density as the egg. But various foamy or springy materials will distribute the forces nearly as well. In summary, (1) you want to bring the egg to a stop over as long a period of time as possible so as to prolong the transfer of momentum and reduce the slowing forces and (2) you want to involve the whole bottom surface of the egg in this transfer of momentum so that the slowing forces are exerted uniformly over that surface. As for the actual impact force on the egg, you can determine this by dividing the egg's momentum just before impact (its downward speed times its mass) by the time over which the egg gets rid of its momentum; a quick numeric sketch of this estimate appears below.

I'm beginning to think that movies and television do a huge disservice to modern society by blurring the distinction between science and fiction. So much of what appears on the big and little screen is just fantasy. The walls of your home are simply hard to look through. They block visible, infrared, and ultraviolet light nearly perfectly and that doesn't leave snoopers many good options. A person sitting outside your home with a thermal camera (a device that "sees" the infrared light associated with body-temperature objects) or a digital camera is going to have a nice view of your wall, not you inside. There are materials that, while opaque to visible light, are relatively transparent to infrared light, such as some plastics and fabrics. However, typical wall materials are too thick and too opaque for infrared light to penetrate. Sure, someone can put a camera inside your home and access it via an optical fiber or radio waves, but at that point, they might as well just peer through your window. The only electromagnetic waves that penetrate walls well are radio waves, microwaves, and X-rays. If someone builds an X-ray machine around your home, they'll be able to see you, or at least your bones. Don't forget to wave.
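Returning to the egg-drop question for a moment, here is the quick numeric sketch promised above. The egg mass, drop height, and stopping times are illustrative assumptions, not values from the original question.

    # Rough estimate of the average force needed to stop a dropped egg.
    # Assumed values: a 0.06 kg egg dropped from 2.0 m, stopped either
    # abruptly (bare pavement) or gently (padded landing).
    g = 9.8                                     # m/s^2, acceleration due to gravity
    mass = 0.06                                 # kg (assumption)
    height = 2.0                                # m (assumption)

    speed_at_impact = (2 * g * height) ** 0.5   # m/s, from free fall
    momentum = mass * speed_at_impact           # kg*m/s of downward momentum to remove

    for stop_time in (0.001, 0.05):             # s: abrupt stop vs. padded stop (assumptions)
        average_force = momentum / stop_time    # impulse = force * time
        print(f"stop in {stop_time*1000:.0f} ms -> average force of roughly {average_force:.0f} N")

Stretching the stopping time from a millisecond to a twentieth of a second cuts the average force by a factor of fifty, which is exactly what the padding is for.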
And, in principle, they could use the radar technique to look for you with microwaves, but you'd be a fuzzy blob at best and lost in the jumble of reflections from everything else in your home. As for using a laser to monitor your conversations from afar, that's a real possibility. Surfaces vibrate in the presence of sound and it is possible to observe those vibrations via reflected light. But the technical work involved is substantial and it's probably easier to just put a bug inside the house or on its surface. Since I first posted this answer, several people have pointed out to me that terahertz radiation also penetrates through some solid surfaces and could be used to see through the walls of homes. In fact, the whole low-frequency end of the electromagnetic spectrum (radio, microwaves, terahertz waves) can penetrate through electrically insulating materials in order to "observe" conducting materials inside a home, and the whole high-frequency end of that spectrum (X-rays and gamma rays) can penetrate through simple atoms (low atomic number) in order to "observe" complex atoms inside a home. Still, these approaches to seeing through walls require the viewers to send electromagnetic waves through the house and those waves can be detected by the people inside. They're also not trivial to implement. I suppose that people could use ambient electromagnetic waves to see what's happening in a house, but that's not easy, either. Where there's a will, there's a way: stealth aircraft have been detected by way of the dark spot they produce in the ambient radio spectrum and the insides of the pyramids have been studied by looking at cosmic rays passing through them. Nonetheless, I don't think that many of us need worry about being studied through the walls of our homes.

While it may seem as though there is some grand conspiracy among physicists to deny validation to those inventors, nothing could be farther from the truth. Physicists generally maintain a healthy skepticism about whatever they hear and are much less susceptible to dogmatic conservatism than one might think. However, physicists think long and deep about the laws that govern the universe, especially about their simplicity and self-consistency. In particular, they learn how even the slightest disagreement between a particular law and the observed behavior of the universe indicates either a problem with that law (typically an oversimplification, but occasionally a complete misunderstanding) or a failure in the observation. The law of energy conservation is a case in point: if it actually failed to work perfectly even one time, it would cease to be a meaningful law. The implications for our understanding of the universe would be enormous. Physicists have looked for over a century for a failure of energy conservation and have never found one; not a single one. (Note: relativistic energy conservation involves mass as well as energy, but that doesn't change the present story.) The laws of both energy conservation and thermodynamics are essentially mathematical laws; they depend relatively little on the specific details of our universe. Just about the only specific detail that's important is time-translation symmetry: as far as we can tell, physics doesn't change with time. Physics today is the same as it was yesterday and as it will be tomorrow. That observation leads, amazingly enough, to energy conservation: energy cannot be created or destroyed; it can only change forms or be transferred between objects.
Together with statistical principles, we can derive thermodynamics without any further reference to the universe itself. And having developed energy conservation and the laws of thermodynamics, the game is over for free-energy motors and generators. They just can't work. It's not a matter of looking for one special arrangement that works among millions that don't. There are exactly zero arrangements that work. It's not a matter of my bias, unless you consider my belief that 2 plus 2 equals 4 to be some sort of bias. You can look all you like for a 2 that when added to another 2 gives you a 5, but I don't expect you to succeed. About once every month or two, someone contacts me with a new motor that turns for free or a generator that creates power out of nowhere. The pattern always repeats: I send them the sad news that their invention will not work and they respond angrily that I am not listening, that I am biased, and that I am part of the conspiracy. Oh well. There isn't much else I can do. I suppose I could examine each proposal individually at length to find the flaw, but I just don't have the time. I'm a volunteer here and this is time away from my family. Instead, I suggest that any inventor who believes he or she has a free-energy device build that device and demonstrate it openly for the physics community. Take it to an American Physical Society conference and present it there. Let everyone in the audience examine it closely. Since anyone can join the APS and any APS member can talk at any major APS conference, there is plenty of opportunity. If someone succeeds in convincing the physics community that they have a true free-energy machine, more power to them (no pun intended). But given the absence of any observed failure of time-translation symmetry, and therefore the steadfast endurance of energy conservation laws, I don't expect any successful devices.

You're both right about temperature being associated with kinetic energy in molecules: the more kinetic energy each molecule has, the hotter the substance (e.g., a person) is. But not all kinetic energy "counts" in establishing temperature. Only the disordered kinetic energy, the tiny chunks of kinetic energy that belong to individual particles in a material, contributes to that material's temperature. Ordered kinetic energy, such as the energy in a whole person who's running, is not involved in temperature. Whether an ice cube is sitting still on a table or flying through the air makes no difference to its temperature. It's still quite cold. Friction's role with respect to temperature is in raising that temperature. Friction is a great disorderer. If a person running down the track falls and skids along the ground, friction will turn that person's ordered kinetic energy into disordered kinetic energy and the person will get slightly hotter. No energy was created or destroyed in the fall and skid, but lots of formerly orderly kinetic energy became disordered kinetic energy, what I often call "thermal kinetic energy." The overall story is naturally a bit more complicated, but the basic idea here is correct. Once energy is in the form of thermal kinetic energy, it's stuck... like a glass vase that has been dropped and shattered into countless pieces, thermal kinetic energy can't be entirely reconstituted into orderly kinetic energy. Once energy has been distributed to all the individual molecules and atoms, getting them all to return their chunks of thermal kinetic energy is hopeless.
Friction, even at the molecular level, isn't important at this point because the energy has already been fragmented and the most that any type of friction can do is pass that fragmented energy about between particles. So friction creates thermal kinetic energy (out of ordered energies of various types)... in effect, it makes things hot. It doesn't keep them hot; they do that all by themselves.

As the snow settles and becomes denser, it may feel "heavier," but its total weight doesn't change much. The same water molecules are simply packing themselves into a smaller space. So while each shovel-full of the dense stuff really does weigh more than a shovel-full of the light stuff, the total number of water molecules present on your deck and their associated weight is still the same. In actuality, some of the water molecules have almost certainly left via a form of solid-to-gas evaporation known technically as "sublimation." You have seen this conversion of ice into gas when you have noticed that old ice cubes in your freezer are smaller than they used to be or when you see that the snow outside during a cold spell seems to vanish gradually without ever melting. Sublimation is also the cause of "freezer burn" for frozen foods left without proper wrapping.

Not surprisingly, no "free electricity" machines are ever released to real scientists for testing. That's because the results of such testing are certain: those machines simply can't work for very fundamental and incontrovertible reasons. Like so many "scientific" conmen, the purveyors of this particular scam claim to be victims of a hostile scientific establishment, which refuses to accept their brilliant discoveries. They typically attack the deepest and most central tenets of science and claim that a conspiracy is perpetuating belief in those tenets. Their refusal to submit their work to scientific peer review is supposedly based on a fear that such review will be biased and subjective, controlled by the conspiracy. The sad reality is that the "scientific establishment" is more than willing to examine the claims, but those claims won't survive the process of inspection. In some cases, the authors of the claims are truly self-deluded and are guilty only of pride and ignorance. But in other cases, the authors are real conmen who are out to make a buck at public expense. They should be run out of town on a rail.

What a remarkable story! As much as I like to think I can predict what should happen in many cases, there is just nothing like a good experiment to bring some reality to the situation. Your microwave evidently sent a significant fraction of its 900 watts of microwave radiation through that crack between the cooking chamber and the door and roasted your finger instantly. This is a good cautionary tale for those who are careless or curious with potentially dangerous household gadgets. While I continue to think that serious injuries are unlikely even in a leaky microwave oven, you have shown that there are cases of real danger. Fortunately, you had time to snap your finger away. It's like Class 3 lasers, which are now common in the form of laser pointers and supermarket checkout systems: they can damage your vision if you stare into them, but your blink reflex is fast enough to keep you from suffering injury. Thanks for the anecdote and I'm glad your finger recovered.

I suspect that the air inside the car is vibrating the way it does inside an organ pipe or in a soda bottle when you blow carefully across the bottle's lip.
This resonant effect is common in cars when one rear passenger window is opened slightly. In that case, air blowing across the opening in the window is easily deflected into or out of the opening and drives the air in the passenger compartment into vigorous vibration. In short, the car is acting like a giant whistle and because of its enormous size, its pitch is too low for you to hear. Instead, you feel the vibration as a sickening pulsation in the air pressure. For the one-open-window problem, the solution is simple: open another window. That shifts the resonant frequency of the car's air and also helps to dampen the vibrations. Alternatively, you can close the opened window. In your case, the resonance appears to involve a less visible opening into the car, perhaps near the rear bumper. If you can close that leak, you may be able to stop the airflow from driving the air in the car into resonance. If you are unable to find the leak, your best bet is to do exactly what you've done: open another window.

Assuming that the wearer doesn't let the helmet move and that the object that hits the helmet is rigid, my answer is approximately yes. If a 20-pound rigid object hits the hat from a height of 2 feet, that object will transfer just over 40 foot-pounds of energy to the helmet in the process of coming to a complete stop. The "just over" has to do with the object's continued downward motion as it dents the hat and the resulting release of additional gravitational potential energy. Also, the need for a rigid dropped object lies in a softer object's ability to absorb part of the impact energy itself; a dropped 20-pound sack of flour will cause less damage than a dropped 20-pound anvil. However, the true meaning of the "40 foot-pound" specification is that the safety helmet is capable of absorbing 40 foot-pounds of energy during an impact on its crown. This energy is transferred to the helmet by doing work on it: by pushing its crown downward as the crown dents downward. The product of the downward force on the crown times the distance the crown moves downward gives the total work done on the helmet and this product must not exceed 40 foot-pounds or the helmet may fail to protect the wearer. Since the denting force typically changes as the helmet dents, this varying force must be accounted for in calculating the total work done on the helmet. While I'm not particularly familiar with safety helmets, I know that bicycle helmets don't promise to be usable after absorbing their rated energies. Bicycle helmets contain energy-absorbing foam that crushes permanently during severe impacts so that they can't be used again. Some safety helmets may behave similarly. Finally, an object dropped from a certain height acquires an energy of motion (kinetic energy) equal to its weight times the height from which it was dropped. As long as that dropped object isn't too heavy and the helmet it hits dents without moving overall, the object's entire kinetic energy will be transferred to the helmet. That means that a 20-pound object dropped from 2 feet on the helmet will deposit 40 foot-pounds of energy in the helmet. But if the wearer lets the helmet move downward overall, some of the falling object's energy will go into the wearer rather than the helmet and the helmet will tolerate the impact easily. On the other hand, if the dropped object is too heavy, the extra gravitational potential energy released as it dents the helmet downward will increase the energy transferred to the helmet.
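To put numbers on the 40 foot-pound figure, here is a minimal sketch of the energy bookkeeping just described; the one-inch dent depth is an illustrative assumption.

    # Energy delivered to a safety helmet by a rigid dropped object:
    # kinetic energy at impact (weight * drop height) plus the extra
    # gravitational potential energy released as the object dents the crown.
    def energy_to_helmet(weight_lb, drop_ft, dent_ft):
        kinetic = weight_lb * drop_ft   # ft-lb gained during the fall
        extra = weight_lb * dent_ft     # ft-lb released while denting the crown
        return kinetic + extra

    # 20-pound object dropped 2 feet, denting the crown about an inch (assumption):
    print(energy_to_helmet(20, 2.0, 1/12))     # about 41.7 ft-lb, "just over" 40
    # 4000-pound object dropped only 1/100 foot, with the same one-inch dent (assumption):
    print(energy_to_helmet(4000, 0.01, 1/12))  # about 373 ft-lb, far more than 40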
Thus a 4000-pound object dropped just 1/100th of a foot will transfer much more than 40 foot-pounds of energy to the helmet.

Stirring the coffee involves a transfer of energy from you to the coffee. That's because you are doing physical work on the coffee by pushing it around as it moves in the direction of your push. What began as chemical energy in your body becomes thermal energy in the coffee. That said, the amount of thermal energy you can transfer to the coffee with any reasonable amount of stirring is pretty small and you'd lose patience with the process long before you achieved any noticeable rise in coffee temperature. I think that the effect you notice is more one of mixing than of heating. Until you mix the milk into the coffee, you may have hot and cold spots in your cup and you may notice the cold spots most strongly.

Stealth aircraft are designed to absorb most of the microwave radiation that hits them and to reflect whatever they don't absorb away from the microwave source. That way, any radar system that tries to see the aircraft by way of its microwave reflection is unlikely to detect anything returning from the aircraft. In effect, the stealth aircraft is "black" to microwaves and to the extent that it has any glossiness to its surfaces, those surfaces are tipped at angles that don't let radar units see that glossiness. Since most radar units emit bright bursts of microwaves and look for reflections, stealth aircraft are hard to detect with conventional radar. Just as you can't see a black bat against the night sky by shining a flashlight at it, you can't see a stealth aircraft against the night sky by shining microwaves at it. Like any black object, the stealth aircraft will heat up when exposed to intense electromagnetic waves. But trying to cook a stealth aircraft with microwaves isn't worth the trouble. If someone can figure out where it is well enough to focus intense microwaves on it, they can surely find something better with which to damage it. As for detecting the stealth aircraft with the help of cell phones, that brings up the issue of what invisibility really is. Like a black bat against the night sky, it's hard to see a stealth aircraft simply by shining microwaves at it. Those microwaves don't come back to you so you see no difference between the dark sky and the dark plane. But if you put the stealth aircraft against the equivalent of a white background, it will become painfully easy to see. Cell phones provide the microwave equivalent of a white background. If you look for microwave emission near the ground from high in the sky, you'll see microwaves coming at you from every cell phone and telephone tower. If you now fly a microwave absorbing aircraft across that microwave-rich background, you'll see the dark image as it blocks out all these microwave sources. Whether or not this effect was used in the Balkans, I can't say. But it does point out that invisibility is never perfect and that excellent camouflage in one situation may be terrible in another.

As I discussed previously, the sky is blue because tiny particles in the atmosphere (dust, clumps of air molecules, microscopic water droplets) are better at deflecting shorter wavelength blue light than they are at deflecting longer wavelength red light. As sunlight passes through the atmosphere, enough blue light is deflected (or more technically Rayleigh scattered) by these particles to give the atmosphere an overall blue glow.
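To get a sense of how lopsided that deflection is, here is a minimal sketch. The 1/wavelength-to-the-fourth Rayleigh-scattering law isn't spelled out in the answer above, but it is the standard result for particles much smaller than the wavelength of light; the two wavelengths are typical values chosen for illustration.

    # Relative strength of Rayleigh scattering for blue versus red light.
    # Scattering strength scales as 1 / wavelength**4 for particles much
    # smaller than the wavelength.
    blue_nm = 450.0   # nm, typical blue wavelength (assumption)
    red_nm = 650.0    # nm, typical red wavelength (assumption)

    ratio = (red_nm / blue_nm) ** 4
    print(f"Blue light is scattered roughly {ratio:.1f} times more strongly than red light")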
The sun itself is slightly reddened by this process because a fraction of its blue light is deflected away before it reaches our eyes. But at sunrise and sunset, sunlight enters our atmosphere at a shallow angle and travels a long distance before reaching our eyes. During this long passage, most of the blue light is deflected away and virtually all that we see coming to us from the sun is its red and orange wavelengths. The missing blue light illuminates the skies far to our east during sunrise and to our west during sunset. When the loss of blue light is extreme enough, as it is after a volcanic eruption, so little blue light may reach your location at times that even the sky itself appears deep red. The particles in air aren't good at deflecting red wavelengths, but if that's all the light there is, they will give the sky a dim, red glow.

A bicycle is my favorite example of a dynamically stable object. Although the bicycle is unstable at rest (statically unstable), it is wonderfully stable when moving forward (dynamically stable). To understand this distinction, let's start with the bicycle motionless and then start moving forward. At rest, the bicycle is unstable because it has no base of support. A base of support is the polygon formed by an object's contact points with the ground. For example, a table has a square or rectangular base of support defined by its four legs as they touch the floor. As long as an object's center of gravity (the effective location of its weight) is above this base of support, the object is statically stable. That stability has to do with the object's increasing potential (stored) energy as it tips: tipping a statically stable object raises its center of gravity and gravitational potential energy, so that it naturally accelerates back toward its upright position. Since a bicycle has only two contact points with the ground, the base of support is a line segment and the bicycle can't have static stability. But when the bicycle is heading forward, it automatically steers its wheels underneath its center of gravity. Just as you can balance a broom on your hand if you keep moving your hand under the broom's center of gravity, a bicycle can balance if it keeps moving its wheels under its center of gravity. This automatic steering has to do with two effects: gyroscopic precession and bending of the bicycle about its steering axis. In the gyroscopic precession steering, the spinning wheel behaves as a gyroscope. It has angular momentum, a conserved quantity of motion associated with spinning, and this angular momentum points toward the left (a convention that you can understand by pointing the curved fingers of your right hand around in the direction of the tire's motion; your thumb will then point to the left). When the bicycle begins to lean to one side, for example to the left, the ground begins to twist the front wheel. Since the ground pushes upward on the bottom of that wheel, it tends to twist the wheel counter-clockwise as seen by the rider. This twist or torque points toward the rear of the bicycle (again, when the fingers of your right hand arc around counterclockwise, your thumb will point toward the rear). When a rearward torque is exerted on an object with a leftward angular momentum, that angular momentum drifts toward the left-rear. In this case, the bicycle wheel steers toward the left.
I know that this argument is difficult to follow, since angular effects like precession challenge even first-year physics graduate students, but the basic result is simple: the forward-moving bicycle steers in the direction that it leans and naturally drives under its own center of gravity. You can see this effect by rolling a coin forward on a hard surface: it will automatically balance itself by driving under its center of gravity. In the bending effect, the leaning bicycle flexes about its steering axis. If you tip a stationary bicycle to the left, you see this effect: the bicycle will steer toward the left. That steering is the result of the bicycle's natural tendency to lower its gravitational potential energy by any means possible. Bending is one such means. Again, the bicycle steers so as to drive under its own center of gravity. These two automatic steering effects work together to make a forward-moving bicycle surprisingly stable. Children's bicycles are designed to be especially stable in motion (for obvious reasons) and one consequence is that children quickly discover that they can ride without hands. Adult bicycles are made less stable because excessive stability makes it hard to steer the bicycle.

The "ink dots on a balloon" idea provides the answer to your question. In that simple analogy, the ink dots represent stars and galaxies and the balloon's surface represents the universe. Inflating the balloon is then equivalent to having the universe expand. As the balloon inflates, the stars and galaxies drift apart so that an ant walking on the surface of the balloon would have to travel farther to go from one "star" to another. A similar situation exists in our real universe: everything is drifting farther apart. The ant lives on the surface of the balloon, a two-dimensional world. The ant is unaware of the third dimension that you and I can see when we look at the balloon. The only directions that the ant can move in are along the balloon's surface. The ant can't point toward the center of the balloon because that's not along the surface that the ant perceives. To the ant, the balloon has no center. It lives in a continuous, homogeneous world, which has the weird property that if you walk far enough in any direction, you return to where you started. Similarly, we see our universe as a three-dimensional world. If there are spatial dimensions beyond three, we are unaware of them. The only directions that we can move in are along the three dimensions of the universe that we perceive. The overall structure of the universe is still not fully understood, but let's suppose that the universe is a simple closed structure like the surface of a higher-dimensional balloon. In that case, we wouldn't be able to point to a center either because that center would exist in a dimension that we don't perceive. To us, the universe would be a continuous, homogeneous structure with that same weird property: if you traveled far enough in one direction, you'd return to where you started.

While "centrifugal force" is something we all seem to experience, it truly is a fictitious force. By a fictitious force, I mean that it is a side effect of acceleration and not a cause of acceleration. There is no true outward force acting on an object that's revolving around a center. Instead, that object's own inertia is trying to make it travel in a straight-line path that would cause it to drift farther and farther away from the center.
The one true force acting on the revolving object is an inward one, a centripetal force. The object is trying to go straight and the centripetal force is pulling it inward and bending the object's path into a circle. To get a feel for the experiences associated with this sort of motion, let's first imagine that you are the revolving object and that you're swinging around in a circle at the end of a rope. In that case, your inertia is trying to send you in a straight-line path and the rope is pulling you inward and deflecting your motion so that you go in a circle. If you are holding the rope with your hands, you'll feel the tension in the rope as the rope pulls on you. (Note that, in accordance with Newton's third law of motion, you pull back on the rope just as hard as it pulls on you.) The rope's force makes you accelerate inward and you feel all the mass in your body resisting this inward acceleration. As the rope's force is conveyed throughout your body via your muscles and bones, you feel your body resisting this inward acceleration. There's no actual outward force on you; it's just your inertia fighting the inward acceleration. You'd feel the same experience if you were being yanked forward by a rope; there would be no real backward force acting on you, yet you'd feel your inertia fighting the forward acceleration. Now let's imagine that you are exerting the inward force on an object and that that object is a heavy bucket of water that's swinging around in a circle. The water's inertia is trying to make it travel in a straight line and you're pulling inward on it to bend its path into a circle. The force you exert on the bucket is quite real and it causes the bucket to accelerate inward, rather than traveling straight ahead. Since you're exerting an inward force on the bucket, the bucket must exert an outward force on you (Newton's third law again). It pulls outward on your arm. But there isn't anything pulling outward on the bucket, no mysterious "centrifugal force." Instead, the bucket accelerates in response to an unbalanced force on it: you pull it inward and nothing pulls it outward, so it accelerates inward. In the process, the bucket exerts only one force on its surroundings: an outward force on your arm. As for the operation of a centrifuge, it works by swinging its contents around in a circle and using their inertias to make them separate. The various items in the centrifuge have different densities and other characteristics that affect their paths as they revolve around the center of the centrifuge. Inertia tends to make each item go straight while the centrifuge makes them bend inward. The forces causing this inward bending have to be conveyed from the centrifuge through its contents and there's a tendency for the denser items in the centrifuge to travel straighter than the less dense items. As a result, the denser items are found near the outside of the circular path while the less dense ones are found near the center of that path.

During the defrost cycle, the microwave oven periodically turns off its magnetron so that heat can diffuse through the food naturally, from hot spots to cold spots. These quiet periods allow frozen parts of the food to melt the same way an ice cube would melt if you threw it into hot water. While the magnetron is off, it isn't emitting any microwaves and the food is just sitting there spreading its thermal energy around.

A transformer's current regulation involves a beautiful natural feedback process.
To begin with, a transformer consists of two coils of wire that share a common magnetic core. When an alternating current flows through the primary coil (the one bringing power to the transformer), that current produces an alternating magnetic field around both coils and this alternating magnetic field is accompanied by an alternating electric field (recall that changing magnetic fields produce electric fields). This electric field pushes forward on any current passing through the secondary coil (the one taking power out of the transformer) and pushes backward on the current passing through the primary coil. The net result is that power is drawn out of the primary coil current and put into the secondary coil current. But you are wondering what controls the currents flowing in the two coils. The circuit connected to the secondary coil determines the current in that coil. If that circuit is open, then no current will flow. If it is connected to a light bulb, then the light bulb will determine the current. What is remarkable about a transformer is that once the load on the secondary coil establishes the secondary current, the primary current is also determined. Remember that the current flowing in the secondary coil is itself magnetic and because it is an alternating current, it is accompanied by its own electric field. The more current that is allowed to flow through the secondary coil, the stronger its electric field becomes. The secondary coil's electric field opposes the primary coil's electric field, in accordance with a famous rule of electromagnetism known as Lenz's law. The primary coil's electric field was pushing backward on current passing through the primary coil, so the secondary coil's electric field must be pushing forward on that current. Since the backward push is being partially negated, more current flows through the primary coil. The current in the primary coil increases until the two electric fields, one from the primary current and one from the secondary current, work together so that they extract all of the primary current's electrostatic energy during its trip through the coil. This natural feedback process ensures that when more current is allowed to flow through the transformer's secondary coil, more current will flow through the primary coil to match; the brief numeric sketch below makes this power balance concrete.

As far as anyone has been able to determine, the wattage is so small that this microwave radiation doesn't affect us. Not all radiations are the same, and radio or microwave radiation is particularly nondestructive at low intensities. It can't do direct chemical damage and at low wattage can't cause significant RF (radio frequency) heating. At present, there is thus no plausible physical mechanism by which these phones can cause injury. I don't think that one will ever be found, so you're probably just fine.

Paper towels are made out of finely divided fibers of cellulose, the principal structural chemical in cotton, wood, and most other plants. Cellulose is actually a polymer, which, like any other plastic, is a giant molecule consisting of many small molecules linked together in an enormous chain or treelike structure. The small molecules or "monomers" that make up cellulose are sugar molecules. We can't get any nutritional value out of cellulose because we don't have the enzymes necessary to split the sugars apart. Cows, on the other hand, have microorganisms in their stomachs that produce the necessary enzymes and allow the cows to digest cellulose.
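Returning to the transformer question, here is the brief numeric sketch promised above, for an ideal (lossless) transformer; the voltages and bulb wattage are assumptions chosen only for illustration.

    # Ideal-transformer sketch: the feedback described above ends with the
    # primary current rising until the power drawn from the primary circuit
    # matches the power delivered to the secondary load (losses ignored).
    primary_voltage = 120.0    # V, household supply (assumption)
    secondary_voltage = 12.0   # V, low-voltage secondary (assumption)
    bulb_power = 24.0          # W, the light-bulb load on the secondary (assumption)

    secondary_current = bulb_power / secondary_voltage  # the load sets this: 2.0 A
    primary_current = bulb_power / primary_voltage      # the feedback sets this: 0.2 A

    print(f"secondary current: {secondary_current:.1f} A")
    print(f"primary current:   {primary_current:.1f} A")

If the bulb is replaced by a heavier load, both currents rise together; if the secondary circuit is opened, both fall essentially to zero.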
Despite the fact that cellulose isn't as tasty as sugar, it does have one important thing in common with sugar: both chemicals cling tightly to water molecules. The presence of many hydroxyl groups (-OH) on the sugar and cellulose molecules allows them to form relatively strong bonds with water molecules (HOH). This clinginess makes normal sugar very soluble in water and makes water very soluble in cellulose fibers. When you dip your paper towel in water, the water molecules rush into the towel to bind to the cellulose fibers and the towel absorbs water. Incidentally, this wonderful solubility of water in cellulose is also what causes shrinkage and wrinkling in cotton clothing when you launder it. The cotton draws in water so effectively that the cotton fibers swell considerably when wet and this swelling reshapes the garment. Hot drying chases the water out of the fibers quickly and the forces between water and cellulose molecules tend to compress the fibers as they dry. The clothes shrink and wrinkle in the process.

Sunlight consists not only of light across the entire visible spectrum, but of invisible infrared and ultraviolet light as well. The latter is probably what is causing the color-changing effects you mention. Ultraviolet light is high-energy light, meaning that whenever it is emitted or absorbed, the amount of energy involved in the process is relatively large. Although light travels through space as waves, it is emitted and absorbed as particles known as photons. The energy in a photon of ultraviolet light is larger than in a photon of visible light and that leads to interesting effects. First, some molecules can't tolerate the energy in an ultraviolet photon. When these molecules absorb such an energetic photon, their electrons rearrange so dramatically that the entire molecule changes its structure forever. Among the organic molecules that are most vulnerable to these ultraviolet-light-induced chemical rearrangements are the molecules that are responsible for colors. The same electronic structural characteristics that make these organic molecules colorful also make them fragile and susceptible to ultraviolet damage. As a result, they tend to bleach white in the sun. Second, some molecules can tolerate high-energy photons by reemitting part of the photon's energy as new light. Such molecules absorb ultraviolet or other high-energy photons and use that energy to emit blue, green, or even red photons. The leftover energy is converted into thermal energy. These fluorescent molecules are the basis for the "neon" colors that are so popular on swimwear, in colored markers, and on poster boards. When you expose something dyed with fluorescent molecules to sunlight, the dye molecules absorb the invisible ultraviolet light and then emit brilliant visible light.

Whenever you accelerate, you experience a gravity-like sensation in the direction opposite that acceleration. Thus when you accelerate to the left, you feel as though gravity were pulling you not only downward, but also to the right. The rightward "pull" isn't a true force; it's just the result of your own inertia trying to prevent you from accelerating. The amount of that rightward "pull" depends on how quickly you accelerate to the left.
If you accelerate to the left at 9.8 meters/second², an acceleration equal in amount to what you would experience if you were falling freely in the earth's gravity, the rightward gravity-like sensation you feel is just as strong as the downward gravity sensation you would feel when you are standing still. You are experiencing a rightward "fictitious force" of 1 g. The g-force you experience whenever you accelerate is equal in amount to your acceleration divided by the acceleration due to gravity (9.8 meters/second²) and points in the direction opposite your acceleration. Often the true downward force of gravity is added to this figure, so that you start with 1 g in the downward direction when you're not accelerating and continue from there. If you are on a roller coaster that is accelerating you upward at 19.6 meters/second², then your total experience is 3 g's in the downward direction (1 g from gravity itself and 2 g's from the upward acceleration). And if you are accelerating downward at 9.8 meters/second², then your total experience is 0 g's (1 g downward for gravity and 1 g upward from the downward acceleration). In this last case, you feel weightless: the weightlessness of a freely falling object such as an astronaut, skydiver, or high jumper.

Note added: A reader pointed out that I never actually answered the question. He's right! So here is the answer: they use accelerometers. An accelerometer is essentially a test mass on a force sensor. When there is no acceleration, the test mass only needs to be supported against the pull of gravity (i.e., the test mass's weight), so the force sensor reports that it is pushing up on the test mass with a force equal to the test mass's weight. But once the accelerometer begins to accelerate, the test mass needs an additional force in order to accelerate with the accelerometer. The force sensor detects this additional force and reports it. If you carry an accelerometer with you on a roller coaster, it will report the force it exerts on the test mass at each moment during the trip. A recording device can thus follow the "g-forces" throughout the ride. As far as how accelerometers work, modern ones are generally based on tiny mechanical systems known as MEMS (Micro-Electro-Mechanical Systems). Their test masses are associated with microscopic spring systems and the complete accelerometer sensor resides on a single chip.

Both processes allow dissolved gases to escape from the water so that they can't serve as seed bubbles for boiling. When you heat water and then let it cool, the gases that came out of solution as small bubbles on the walls of the container escape into the air and are not available when you reheat the water. When you let the water sit out overnight, those same dissolved gases have time to escape into the air and this also reduces the number and size of the gas bubbles that form when you finally heat the water. Without those dissolved gases and the bubbles they form during heating, it's much harder for the steam bubbles to form when the water reaches boiling. The water can then superheat more easily.

A helium balloon experiences an upward force that is equal to the weight of the air it displaces (the buoyant force on the balloon) minus its own weight. At sea level, air weighs about 0.078 pounds per cubic foot, so the upward buoyant force on a cubic foot of helium is about 0.078 pounds. A cubic foot of helium weighs only about 0.011 pounds.
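The lift figure in the next paragraph is easy to check; here is a minimal sketch using the densities just quoted (the 100-pound person matches the example that follows).

    # Net lift of helium at sea level, using the figures quoted above.
    air_weight = 0.078      # pounds per cubic foot of air
    helium_weight = 0.011   # pounds per cubic foot of helium

    lift_per_cubic_foot = air_weight - helium_weight  # about 0.067 lb per cubic foot
    payload = 100.0                                   # pounds to lift
    volume_needed = payload / lift_per_cubic_foot     # about 1500 cubic feet

    print(f"lift per cubic foot: {lift_per_cubic_foot:.3f} lb")
    print(f"helium needed for a {payload:.0f} lb person: about {volume_needed:.0f} cubic feet")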
The difference between the upward buoyant force on the cubic foot of helium and the weight of the helium is the amount of extra weight that the helium can lift, which is about 0.067 pounds per cubic foot. To lift a 100-pound person, you'll need about 1500 cubic feet of helium in your balloon.

Don't operate the oven with its door open. You're just asking for trouble. The oven will emit between 500 and 1100 watts of microwaves, depending on its rating, and you don't need to be exposed to such intense microwaves. The chamber effect is important; without the sealed chamber, the microwaves pass through the food only about once before heading off into the kitchen and you. The food won't cook well and you'll be bathed in the glow from a kilowatt source of invisible "light." Imagine standing in front of a 10-kilowatt light bulb (which would emit about 1 kilowatt of visible light, with the rest becoming other forms of heat) and then imagine that you can't see light at all and can only feel it when it is causing potential damage. Would you feel safe? Your video camera won't enjoy the microwave exposure, either. If you want to videotape your experiments without having to view them through the metal mesh on the door, you can consider drilling a small hole in the side of the cooking chamber. If you keep the hole's diameter to a few millimeters, the microwaves will not leak out (the brief wavelength comparison below shows why). Then put one of the tiny, inexpensive video cameras that are widely available a centimeter or so away from that hole. You should get a nice unobstructed view of the cooking process without risking life and limb.

The cooking chamber of a microwave oven has mesh-covered holes to permit air to enter and exit. The holes in the metal mesh are small enough that the microwaves themselves cannot pass through and are instead reflected back into the cooking chamber. However, those holes are large enough that air (or light in the case of the viewing window) can pass through easily. Sending air through the cooking chamber keeps the cooking chamber from turning into a conventional hot oven and it carries food smells out into the kitchen.

When you put fans in front of the vents, you are probably causing the air conditioner to pump roughly the same amount of heat out of the room air as it would at 75 °F without the fans. As a result, the fans probably aren't making the air conditioner work less and aren't saving much electricity. In fact, the fans themselves consume electricity and produce heat that the air conditioner must then remove, so in principle the fans are a waste of energy. However, if the fans are directing the cold air in a way that makes you more comfortable without having to cool all the room air or if the fans are creating fast moving air that cools you via evaporation more effectively, then you may be experiencing a real savings of electricity. To figure out which is the case, you'd have to log the time the air conditioner cycles on during a certain period while the fans were off and the thermostat set to 75 °F and then repeat that measurement during a similar period with the fans on and the thermostat set to 78 °F. If the fans significantly reduce the unit's runtime while leaving you just as comfortable, then you're saving power.

The bulb will operate perfectly well, regardless of which way you connected the lamp's two wires. Current will still flow in through one wire, pass through the bulb's filament, and return to the power company through the other wire.
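Returning to the microwave oven's mesh-covered holes and the small drilled hole suggested above, here is the brief wavelength comparison promised there. The 2.45 GHz operating frequency is the usual value for household microwave ovens; it is an added assumption, not a figure from the answers above.

    # Compare a typical microwave-oven wavelength with the size of the
    # holes in the door mesh (or of a small drilled hole).
    speed_of_light = 3.0e8   # m/s
    frequency = 2.45e9       # Hz, common household oven frequency (assumption)
    hole_diameter = 0.003    # m, "a few millimeters" (from the answer above)

    wavelength = speed_of_light / frequency  # about 0.12 m, or 12 cm
    print(f"wavelength: {wavelength*100:.1f} cm")
    print(f"a {hole_diameter*1000:.0f} mm hole is about 1/{wavelength/hole_diameter:.0f} of a wavelength across")

Holes that are tiny compared with the wavelength behave essentially like solid metal to the microwaves, while still letting air and visible light, whose wavelengths are far smaller, pass through freely.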
The only shortcoming of reversing the connections is that you will end up with the "hot" wire connected to the outside of the socket and bulb, rather than to the central pin of the socket and bulb. That's a slight safety issue: if you touch the hot wire with one hand and a copper pipe with the other, you'll get a shock. That's because a large voltage difference generally exists between the hot wire and the earth itself. In contrast, there should be very little voltage difference between the other wire (known as "neutral") and the earth. In a properly wired lamp, the large spade on the electric plug (the neutral wire) should connect to the outside of the bulb socket. That way, when you accidentally touch the bulb's base as you screw it in or out, you'll only be connecting your hand to the neutral wire and won't receive a shock. If you miswire the lamp and have the hot wire connected to the outside of the socket, you can get a shock if you accidentally touch the bulb base at any time.

Superheated water doesn't always wait until triggered before undergoing sudden boiling. All that's needed to start an explosion is for something to introduce an initial "seed" bubble into the liquid. Sometimes the container already has everything necessary to form a seed bubble and it's just a matter of getting the water hot enough to start that process. Many seed bubbles begin as trapped air in tiny crevices. As the water gets hotter, the size of any trapped air pocket grows and eventually it may be able to break free as a real seed bubble. When water is sufficiently superheated, just a single seed bubble is enough to start an explosion and empty the container completely. In your case, the coffee flash boiled spontaneously after something inside it nucleated the first bubble. This sort of accident happens fairly often and we rarely think much about it as we sponge up the spilled liquid inside the microwave oven. But had your friend been unlucky enough to stop heating the coffee a second or two before that POP, she might have been injured while taking the coffee out of the oven. The moral of this story is to avoid overcooking any liquid in the microwave oven. If you must drink your coffee boiling hot, pay attention to it as it heats up so that it doesn't cook too long and then let it sit for a minute after the oven turns off. If you don't like your coffee boiling hot, then don't heat it to boiling at all.

I'm afraid that there's no easy answer to this question. You can use a microwave oven to superheat water in any container that doesn't assist bubble formation. How a particular container behaves is hard for me to say without experimenting. I'd heat a small amount of water (1/2 cup or less) in the container and look at it through the oven's window to see if the water boils nicely, with lots of steam bubbles streaming upward from many different points on the inner surface of the container. The more easily water boils in the container, the less likely it is to superheat when you cook it too long. (If you try this experiment, leave the potentially superheated water in the closed microwave oven to cool!) Glass containers are clearly the most likely to superheat water because their surfaces are essentially perfect. Glasses have the characteristics of frozen liquids and a glass surface is as smooth as... well, glass. When you overheat water in a clean glass measuring cup, your chances of superheating it at least mildly are surprisingly high.
The spontaneous bubbling that occurs when you add sugar, coffee powder, or a teabag to microwave-heated water is the result of such mild superheating. Fortunately, severe superheating is much less common because defects, dirt, or other impurities usually help the water boil before it becomes truly dangerous. That's why most of us avoid serious injuries. However, even non-transparent microwaveable containers often have glass surfaces. Ceramics are "glazed," which means that they are coated with glass for both sealing and decoration. Many heavy mixing bowls are glass or glass-ceramics. As you can see, it's hard to get away from trouble. I simply don't know how plastic microwaveable containers behave when heating water; they may be safe or they may be dangerous.

If you're looking for a way out of this hazard, here are my suggestions. First, learn how long a given amount of liquid must be heated in your microwave in order to reach boiling and don't cook it that long. If you really need to boil water, be very careful with it after microwaving or boil it on a stovetop instead. My microwave oven has a "beverage" setting that senses how hot the water is getting. If the water isn't hot enough when that setting finishes, I add another 30 seconds and then test again. I never cook the water longer than I need to. Cooking water too long on a stovetop means that some of it boils away, but doing the same in a microwave oven may mean that it becomes dangerously superheated. Your children can still "cook" soup in the microwave if they use the right amount of time. Children don't like boiling hot soup anyway, so if you figure out how long it takes to heat their soup to eating temperature and have them cook their soup only that long, they'll never encounter superheating. As for dad's coffee water, same advice. If dad wants his coffee boiling hot, then he should probably make it himself. Boiling water is a hazard for children even without superheating.

Second, handle liquids that have been heated in a microwave oven with respect. Don't remove a liquid the instant the oven stops and then hover over it with your face exposed. If the water was bubbling spasmodically or not at all despite heavy heating, it may be superheated and deserves particular respect. But even if you see no indications of superheating, it takes no real effort to be careful. If you cooked the water long enough for it to reach boiling temperature, let it rest for a minute per cup before removing it from the microwave. Never put your face or body over the container and keep the container at a safe distance when you add things to it for the first time: powdered coffee, sugar, a teabag, or a spoon.

Finally, it would be great if some entrepreneurs came up with ways to avoid superheating altogether. The makers of glass containers don't seem to recognize the dangers of superheating in microwave ovens, despite the mounting evidence for the problem. Absent any efforts on their part to make the containers intrinsically safer, it would be nice to have some items to help the water boil: reusable or disposable inserts that you could leave in the water as it cooked or an edible powder that you could add to the water before cooking. Chemists have used boiling chips to prevent superheating for decades and making sanitary, nontoxic boiling sticks for microwaves shouldn't be difficult. Similarly, it should be easy to find edible particles that would help the water boil. Activated carbon is one possibility.
Last night's report wasn't meant to scare you away from using your microwave oven or keep you from heating water in it. It was intended to show you that there is a potential hazard that you can avoid if you're informed about it. Microwave ovens are wonderful devices and they prepare food safely and efficiently as long as you use them properly. "Using them properly" means not heating liquids too long in smooth-walled containers.

Water doesn't always boil when it is heated above its normal boiling temperature (100 °C or 212 °F). The only thing that is certain is that above that temperature, a steam bubble that forms inside the body of the liquid will be able to withstand the crushing effects of atmospheric pressure. If no bubbles form, then boiling will simply remain a possibility, not a reality. Something has to trigger the formation of steam bubbles, a process known as "nucleation." If there is no nucleation of steam bubbles, there will be no boiling and therefore no effective limit to how hot the water can become. Nucleation usually occurs at hot spots during stovetop cooking or at defects in the surfaces of cooking vessels. Glass containers have few or no such defects. When you cook water in a smooth glass container, using a microwave oven, it is quite possible that there will be no nucleation on the walls of the container and the water will superheat. This situation becomes even worse if the top surface of the water is "sealed" by a thin layer of oil or fat so that evaporation can't occur, either. Superheated water is extremely dangerous and people have been severely injured by such water. All it takes is some trigger to create the first bubble, such as a fork or spoon opening up the inner surface of the water or striking the bottom of the container, and an explosion follows. I recently filmed such explosions in my own microwave (low-quality movie (749KB), medium-quality movie (5.5MB), or high-quality movie (16.2MB)). As you'll hear in my flustered remarks after "Experiment 13," I was a bit shaken up by the ferocity of the explosion I had triggered, despite every expectation that it would occur. After that surprise, you'll notice that I became much more concerned about yanking my hand out of the oven before the fork reached the water. I recommend against trying this dangerous experiment, but if you must, be extremely careful and don't superheat more than a few ounces of water. You can easily get burned or worse. For a reader's story about a burn he received from superheated water in a microwave, touch here. Here is a sequence of images from the movie of my experiment, taken 1/30th of a second apart.

The spoon will have essentially no effect at all on the food. Metal left in the microwave oven during cooking will only cause trouble if (a) it is very thin or (b) it has sharp edges or points. The microwaves push electric charges back and forth in metal, so if the metal is too thin, it will heat up like the filament of a light bulb and may cause a fire. And if the metal has sharp edges or points, charges may accumulate on those sharp spots and then leap into space as a spark. But because your spoon was thick and had rounded edges, the charges that flowed through it during cooking didn't have any bad effects on the spoon: no heating and no sparks. As far as the food is concerned, the presence of the spoon redirected the microwaves somewhat, but probably without causing any noticeable changes in how the food cooked.
There is certainly no residual radiation of any sort and the food is no more likely to cause cancer after being cooked with metal around than had there been no spoon with it. In general, leaving a spoon in a cup of coffee or bowl of oatmeal isn't going to cause any trouble at all. I do it all the time. In fact, having a metal spoon in the liquid may reduce the likelihood of superheating the liquid, a dangerous phenomenon that occurs frequently in microwave cooking. Superheated liquids boil violently when you disturb them and can cause serious injuries as a result.

No, you are right. In the long run, the number of CO2 molecules left in the bottle when you close it is all that matters. Those molecules will drift in and out of the liquid and gas phases until they reach equilibrium. At the equilibrium point, there will be enough molecules in the gas phase to pressurize the bottle and enough in the liquid phase to give the beverage a reasonable amount of bite. By giving the sealed bottle a shake, your mother-in-law is simply speeding up the approach to equilibrium. She is helping the CO2 molecules leave the beverage and enter the gas phase. The bottle then pressurizes faster, but at the expense of dissolved molecules in the beverage itself. If there is any chance that you'll drink more before equilibrium has been reached, you do best not to shake the bottle. That way, the equilibration process will be delayed as much as possible and you may still be able to drink a few more of those CO2 molecules rather than breathing them. Incidentally, shaking a new bottle of soda just before you open it also speeds up the equilibration process. For an open bottle, equilibrium is reached when essentially all the CO2 molecules have left and are in the gas phase (since the gas phase extends over the whole atmosphere). That's not what you want at all. Instead, you try not to shake the beverage so that it stays away from equilibrium (and flatness) as long as possible. For most opened beverages, equilibrium is not a tasty situation.

The simple answer to your question is yes, you can do it. But you'll encounter two significant problems with trying to turn your ordinary TV into a projection system. First, the lens you'll need to do the projection will be extremely large and expensive. Second, the image you'll see will be flipped horizontally and vertically. You'll have to hang upside-down from your porch railing, which will make drinking a beer rather difficult. About the lens: in principle, all you need is one convex lens. A giant magnifying glass will do. But it has a couple of constraints. Because your television screen is pretty large, the lens diameter must also be pretty large. If it is significantly smaller than the TV screen, it won't project enough light onto your wall. And to control the size of the image it projects on the wall, you'll need to pick just the right focal length (curvature) of the lens. You'll be projecting a real image on the wall, a pattern of light that exactly matches the pattern of light appearing on the TV screen. The size and location of that real image depend on the lens's focal length and on its distance from the TV screen. You'll have to get these right or you'll see only a blur. Unfortunately, single lenses tend to have color problems and edge distortions. Projection lenses need to be multi-element, carefully designed systems. Getting a good quality, large lens with the right focal length is going to cost you. The other big problem is more humorous.
Real images are flipped horizontally and vertically relative to the light source from which they originate. Unless you turn your TV set upside-down, your wall image will be inverted. And, without a mirror, you can't solve the left-right reversal problem. All the writing will appear backward. Projection television systems flip their screen image to start with so that the projected image has the right orientation. Unless you want to rewire your TV set, that's not going to happen for you. Good luck. A submerged object's buoyancy (the upward force exerted on it by a fluid) is exactly equal to the weight of the fluid it displaces. In this case, the upward buoyant force on the bathysphere is equal in amount to the weight of the water it displaces. Since the bathysphere is essentially incompressible, it always displaces the same volume of water. And since water is essentially incompressible, that fixed volume of water always weighs the same amount. That's why the bathysphere experiences a constant upward force on it due to the surrounding water. To sink the bathysphere, they weight it down with heavy metal particles. And to allow the bathysphere to float back up, they release those particles and reduce the bathysphere's total weight. The microwaves would flow out of the oven's cooking chamber like light streaming out of a brightly illuminated mirrored box. If you were nearby, some of those microwaves would pass through you and your body would absorb some of them during their passage. This absorption would heat your tissue so that you would feel the warmth. In parts of your body that have rapid blood circulation, that heat would be distributed quickly to the rest of your body and you probably wouldn't suffer any rapid injuries. But in parts of your body that don't have good blood flow, such as the corneas of your eyes, tissue could heat quickly enough to be permanently damaged. In any case, you'd probably feel the warmth and realize that something was wrong before you suffered any substantial permanent injuries. When you lift the sack, you are pushing it upward (to support its weight) and it is moving upward. Since the force you exert on the sack and the distance it is traveling are in the same direction, you are doing work on the sack. As a result, the sack's energy is increasing, as evidenced by the fact that it is becoming more and more dangerous to a dog sitting beneath it. But when you carry the sack horizontally at a steady pace, the upward force you exert on the sack and the horizontal distance it travels are at right angles to one another. You don't do any work on the sack in that case. The evidence here is that the sack doesn't become any more dangerous; its speed doesn't increase and neither does its altitude. It just shifts from one place to an equivalent one to its side. I'm afraid that you're facing a difficult problem. Magnetic levitation involving permanent magnets is inherently and unavoidably unstable for fundamental reasons. One permanent magnet suspended above another permanent magnet will always crash. That's why all practical maglev trains use either electromagnets with feedback circuitry (magnets that can be changed electronically to correct for their tendencies to crash) or magnetoelectrodynamic levitation (induced magnetism in a conducting track, created by a very fast moving (>100 mph) magnetized train). There are no simple fixes if what you have built so far is based on permanent magnets alone. Unfortunately, you have chosen a very challenging science fair project. 
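If you'd like to see the lifting-versus-carrying distinction from the sack answer above as arithmetic, here is a small Python sketch of the rule that work equals force times distance times the cosine of the angle between them. The sack's mass and the travel distance are invented purely for illustration:

    # Work done on a sack: the force component along the motion, times the distance.
    # The mass and distance below are illustrative assumptions, not values from the question.
    import math

    g = 9.8                      # gravitational acceleration, m/s^2
    mass = 5.0                   # assumed sack mass, kilograms
    support_force = mass * g     # upward force needed to support the sack, newtons
    distance = 1.0               # assumed distance the sack moves, meters

    def work(force, dist, angle_degrees):
        # angle_degrees is the angle between the force and the direction of motion
        return force * dist * math.cos(math.radians(angle_degrees))

    print(work(support_force, distance, 0))    # lifting straight up: about 49 joules
    print(work(support_force, distance, 90))   # carrying sideways: essentially zero

Lifting straight up transfers energy to the sack; carrying it sideways at a steady pace does not (the second result differs from zero only by floating-point rounding).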
The more pressure a basketball has inside it, the less its surface dents during a bounce and the more of its original energy it stores in the compressed air. Air stores and returns energy relatively efficiently during a rapid bounce, so the pressurized ball bounces high. But an underinflated ball dents deeply and its skin flexes inefficiently. Much of the ball's original energy is wasted in heating the bending skin and it doesn't bounce very high. In general, the higher the internal pressure in the ball, the better it will bounce. However, the ball doesn't bounce all by itself when you drop it on a flexible surface. In that case, the surface also dents and is responsible for part of the ball's rebound. If that surface handles energy inefficiently, it may weaken the ball's bounce. For example, if you drop the ball on carpeting, the carpeting will do much of the denting, will receive much of the ball's original energy, and will waste its share as heat. The ball won't rebound well. My guess is that you dropped the ball on a reasonably hard surface, but one that began to dent significantly when the ball's pressure reached 12 psi. At that point, the ball was extremely bouncy, but it was also so hard that it dented the surface and let the surface participate strongly in the bouncing. The surface probably wasn't as bouncy as the ball, so it threw the ball relatively weakly into the air. I'd suggest repeating your experiment on the hardest, most massive surface you can find. A smooth cement or thick metal surface would be best. The ball will then do virtually all of the denting and will be responsible for virtually all of the rebounding. In that case, I'll bet that the 12 psi ball will bounce highest.

Fluorescent paints and many laundry detergents contain fluorescent chemicals: chemicals that absorb ultraviolet light and use its energy to produce visible light. Fluorescent paints are designed to do exactly that, so they certainly contain enough "phosphor" for that purpose. Detergents have fluorescent dyes or "brighteners" added because they help to make fabrics appear whiter. Aging fabric appears yellowish because it absorbs some blue light. To replace the missing blue light, the brighteners absorb invisible ultraviolet and use its energy to emit blue light.

If you can't alter the air's humidity, warm air will definitely heat up your window faster and defrost it faster than cold air. The only problem with using hot air is that rapid heating can cause stresses on the window and its frame because the temperature will rise somewhat unevenly and lead to uneven thermal expansion. Such thermal stress can actually break the window, as a reader informed me recently: "On one of the coldest days of this Boston winter, I turned up the heat full blast to defrost the windshield. The outside of the window was still covered with ice, which I figured would melt from the heat. After about 10 minutes of heating, the windshield "popped" and a fracture about 8 inches long developed. The windshield replacement company said I would have to wait a day for service, since this happened to so many people over the cold evening that they were completely booked." If you're nervous about breaking the windshield, use cooler air. About the humidity caveat: if you can blow dry air across your windshield, that will defrost it faster than just about anything else, even if that air is cold.
The water molecules on your windshield are constantly shifting back and forth between the solid phase (ice) and the gaseous phase (steam or water vapor). Heating the ice will help more water molecules leave the ice for the water vapor, but dropping the density of the water vapor will reduce the number of water molecules leaving the water vapor for the ice. Either way, the ice decreases and the water vapor increases. Since your car's air conditioner begins drying the air much sooner after you start the car than its heater begins warming the air, many modern cars concentrate first on drying the air rather than on heating it.

Batteries are "pumps" for electric charge. A battery takes an electric current (moving charge) entering its negative terminal and pumps that current to its positive terminal. In the process, the battery adds energy to the current and raises its voltage (voltage is the measure of energy per unit of electric charge). A typical battery adds 1.5 volts to the current passing through it. As it pumps current, the battery consumes its store of chemical potential energy so that it eventually runs out and "dies." If you send a current backward through a battery, the battery extracts energy from the current and lowers its voltage. As it takes energy from the current, the battery adds to its store of chemical potential energy so that it recharges. Battery chargers do exactly that: they push current backward through the batteries to recharge them. This recharging only works well on batteries that are designed to be recharged since many common batteries undergo structural damage as their energy is consumed and this damage can't be undone during recharging. When you use a chain of batteries to power an electric device, you must arrange them so that each one pumps charge in the same direction. Otherwise, one will pump and add energy to the current while the other extracts energy from the current. If all the batteries are aligned positive terminal to negative terminal, then they all pump in the same direction and the current experiences a 1.5 volt (typically) voltage rise in passing through each battery. After passing through 2 batteries, its voltage is up by 3 volts; after passing through 3 batteries, its voltage is up by 4.5 volts; and so on.

A parabolic dish microphone is essentially a mirror telescope for sound. A parabolic surface has the interesting property that all sound waves that propagate parallel to its central axis travel the same distance to get to its focus. That means that when you aim the dish at a distant sound source, all of the sound from that object bounces off the dish and converges toward the focus in phase—with its pressure peaks and troughs synchronized so that they work together to make the loudest possible sound vibrations. The sound is thus enhanced at the focus, but only if it originated from the source you're aiming at. Sound from other sources misses the focus. If you put a sensitive microphone in the parabolic dish's focus, you'll hear the sound from the distant object loud and clear.

Not significantly. Air doesn't absorb microwaves well, which is why the air in a microwave oven doesn't get hot and why satellite and cellular communication systems work so well. The molecules in air are poor antennas for this long-wavelength electromagnetic radiation. They mostly just ignore it.

Devices that sense your presence are either bouncing some wave off you or they are passively detecting waves that you emit or reflect.
The wave-bouncing detectors emit high frequency (ultrasonic) sound waves or radio waves and then look for reflections. If they detect changes in the intensity or frequency pattern of the reflected waves, they know that something has moved nearby and open the door. The passive detectors look for changes in the infrared or visible light patterns reaching a detector and open the door when they detect such changes. What a neat observation! Digital cameras based on CCD imaging chips are sensitive to infrared light. Even though you can't see the infrared light streaming out of the remote control when you push its buttons, the camera's chip can. This behavior is typical of semiconductor light sensors such as photodiodes and phototransistors: they often detect near infrared light even better than visible light. In fact, a semiconductor infrared sensor is exactly what your television set uses to collect instructions from the remote control. The color filters that the camera employs to obtain color information misbehave when they're dealing with infrared light and so the camera is fooled into thinking that it's viewing white light. That's why your camera shows a white spot where the remote's infrared source is located. I just tried taking some pictures through infrared filters, glass plates that block visible light completely, and my digital camera worked just fine. The images were as sharp and clear as usual, although the colors were odd. I had to use incandescent illumination because fluorescent light doesn't contain enough infrared. It would be easy to take pictures in complete darkness if you just illuminated a scene with bright infrared sources. No doubt there are "spy" cameras that do exactly that. No, there is no sound in space. That's because sound has to travel as a vibration in some material such as air or water or even stone. Since space is essentially empty, it cannot carry sound, at least not the sorts of sound that we are used to. Ice will melt fastest in whatever delivers heat to it fastest. In general that will be water because water conducts heat and carries heat better than air. But extremely hot air, such as that from a torch, will beat out very cold water, such as ice water, in melting the ice. The laser you're using is a neodymium-YAG laser. It uses a crystal of YAG (yttrium aluminum garnet), a synthetic gem that was once sold as an imitation diamond, that has been treated with neodymium atoms to give it a purple color. When placed in a laser cavity and exposed to intense visible light, this crystal gives off the infrared light you describe. You can't see this light but, at up to 600 watts, it is actually incredibly bright. You don't want to look at it or even at its reflection from a surface that you're machining. That's because the lens of your eye focuses it onto your retina and even though your retina won't see any light, it will experience the heat. It's possible to injure your eyes by looking at this light, particularly if you catch a direct reflection of the laser beam in your eye. In all likelihood, the manufacturer of this unit has shielded all the light so that none of it reaches your eyes. If that's not the case, you should wear laser safety glasses that block 1064 nm light. But it's also possible that the irritation you're experiencing is coming from the burned material that you are machining. Better ventilation should help. High voltage power supplies, which may be present in the laser, could also produce ozone. 
Ozone has a spicy fresh smell, like the smell after a lightning storm, and it is quite irritating to eyes and nose. The answer is gravity. Gravity smashes the planets into spheres. To understand this, imagine trying to build a huge mountain on the earth's surface. As you begin to heap up the material for your mountain, the weight of the material at the top begins to crush the material at the bottom. Eventually the weight and pressure become so great that the material at the bottom squeezes out and you can't build any taller. Every time you put new stuff on top, the stuff below simply sinks downward and spreads out. You can't build bumps bigger than a few dozen miles high on earth because there aren't any materials that can tolerate the pressure. In fact, the earth's liquid core won't support mountains much higher than the Himalayas—taller mountains would just sink into the liquid. So even if a planet starts out non-spherical, the weight of its bumps will smash them downward until the planet is essentially spherical. The flattened poles are the result of rotation—as the planet spins, the need for centripetal (centrally directed) acceleration at its equator causes its equatorial surface to shift outward slightly, away from the planet's axis of rotation. The planet is therefore wider at its equator than it is at its poles. Yes, this sort of accident can and does happen. The water superheated and then boiled violently when disturbed. Here's how it works: Water can always evaporate into dry air, but it normally only does so at its surface. When water molecules leave the surface faster than they return, the quantity of liquid water gradually diminishes. That's ordinary evaporation. However, when water is heated to its boiling temperature, it can begin to evaporate not only from its surface, but also from within. If a steam bubble forms inside the hot water, water molecules can evaporate into that steam bubble and make it grow larger and larger. The high temperature is necessary because the pressure inside the bubble depends on the temperature. At low temperature, the bubble pressure is too low and the surrounding atmospheric pressure smashes it. That's why boiling only occurs at or above water's boiling temperature. Since pressure is involved, boiling temperature depends on air pressure. At high altitude, boiling occurs at lower temperature than at sea level. But pay attention to the phrase "If a steam bubble forms" in the previous paragraph. That's easier said than done. Forming the initial steam bubble into which water molecules can evaporate is a process known as "nucleation." It requires a good number of water molecules to spontaneously and simultaneously break apart from one another to form a gas. That's an extraordinarily rare event. Even in a cup of water many degrees above the boiling temperature, it might never happen. In reality, nucleation usually occurs at a defect in the cup or an impurity in the water—anything that can help those first few water molecules form the seed bubble. When you heat water on the stove, the hot spots at the bottom of the pot or defects in the pot bottom usually assist nucleation so that boiling occurs soon after the boiling temperature is reached. But when you heat pure water in a smooth cup using a microwave oven, there may be nothing present to help nucleation occur. The water can heat right past its boiling temperature without boiling. The water then superheats—its temperature rising above its boiling temperature. 
When you shake the cup or sprinkle something like sugar or salt into it, you initiate nucleation and the water then boils violently. Fortunately, serious microwave superheating accidents are fairly unusual. However, they occur regularly and some of the worst victims require hospital treatment. I have heard of extreme cases in which people received serious eye injuries and third degree burns that required skin grafts and plastic surgery. You can minimize the chance of this sort of problem by not overcooking water or any other liquid in the microwave oven, by waiting about 1 minute per cup for that liquid to cool before removing it from the microwave if there is any possibility that you have superheated it, and by being cautious when you first introduce utensils, powders, teabags, or otherwise disturb very hot liquid that has been cooked in a microwave oven. Keep the water away from your face and body until you're sure it's safe and don't ever hover over the top of the container. Finally, it's better to have the liquid boil violently while it's inside the microwave oven than when it's outside on your counter and can splatter all over you. Once you're pretty certain that the water is no longer superheated, you can ensure that it's safe by deliberately nucleating boiling before removing the cup from the microwave. Inserting a metal spoon or almost any food into the water should trigger boiling in superheated water. A pinch of sugar will do the trick, something I've often noticed when I heat tea in the microwave. However, don't mess around with large quantities of superheated water. If you have more than 1 cup of potentially superheated water, don't try to nucleate boiling until you've waited quite a while for it to cool down. I've been scalded by the stuff several times even when I was prepared for an explosion. It's really dangerous. The relative stabilities of liquid and gaseous water depend on both temperature and pressure. To understand this, consider what is going on at the surface of a glass of water. Water molecules in the liquid water are leaving the water's surface to become gas above it and water molecules in the gas are landing and joining the liquid water below. It's like a busy airport, with lots of take-offs and landings. If the glass of water is sitting in an enclosed space, the arrangement will eventually reach equilibrium—the point at which there is no net transfer of molecules between the liquid in the glass and the gas above it. In that case, there will be enough water molecules in the gas to ensure that they land as often as they leave. The leaving rate (the rate at which molecules break free from the liquid water) depends on the temperature. The hotter the water is, the more frequently water molecules will be able to break away from their buddies and float off into the gas. The landing rate (the rate at which molecules land on the water's surface and stick) depends on the density of molecules in the gas. The more dense the water vapor, the more frequently water molecules will bump into the liquid's surface and land. As you raise the temperature of the water in your glass, the leaving rate increases and the equilibrium shifts toward higher vapor density and less liquid water. By the time you reach 100° Celsius, the equilibrium vapor pressure is atmospheric pressure, which is why water tends to boil at this temperature (it can form and sustain steam bubbles). Above this temperature the equilibrium vapor pressure exceeds atmospheric pressure. 
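To put rough numbers on that statement, here is a small Python estimate of water's equilibrium vapor pressure near its boiling point. It uses the Clausius-Clapeyron relation with a constant heat of vaporization of roughly 40,700 joules per mole, which is only an approximation, so treat the output as illustrative rather than exact:

    # Rough equilibrium vapor pressure of water near its boiling point,
    # from the Clausius-Clapeyron relation with a constant heat of vaporization.
    import math

    R = 8.314          # gas constant, J/(mol*K)
    L = 40700.0        # approximate heat of vaporization of water, J/mol
    T0 = 373.15        # 100 degrees Celsius, in kelvins
    P0 = 101325.0      # atmospheric pressure, in pascals

    def vapor_pressure(celsius):
        T = celsius + 273.15
        return P0 * math.exp(-(L / R) * (1.0 / T - 1.0 / T0))

    for t in (90, 100, 105, 110):
        print(t, "C:", round(vapor_pressure(t) / 1000, 1), "kPa")

Below 100° C the estimate comes out below atmospheric pressure (about 101 kPa), and above 100° C it comes out higher, which is just the equilibrium shift described above.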
The liquid water and the gas above it can reach equilibrium, but only if you allow the pressure in your enclosed system to exceed atmospheric pressure. However, if you open up your enclosed system, the water vapor will spread out into the atmosphere as a whole and there will be a never-ending stream of gaseous water molecules leaving the glass. Above 100° C, liquid water can't exist in equilibrium with atmospheric pressure gas, even if that gas is pure water vapor. So how can you superheat water? Don't wait for equilibrium! The road to equilibrium may be slow; it may take minutes or hours for the liquid water to evaporate away to nothing. In the meantime, the system will be out of equilibrium, but that's OK. It happens all the time: a snowman can't exist in equilibrium on a hot summer day, but that doesn't mean that you can't have a snowman at the beach... for a while. Superheated water isn't in equilibrium and, if you're patient, something will change. But in the short run, you can have strange arrangements like this without any problem.

While helium itself doesn't actually defy gravity, it is lighter than air and floats upward as descending air pushes it out of the way. Like a bubble in water, the helium goes up to make room for the air going down. The buoyant force that acts on the helium is equal to the weight of air that the helium displaces. A cubic foot of air weighs about 0.078 pounds so the upward buoyant force on a cubic foot of helium is about 0.078 pounds. A cubic foot of helium weighs only about 0.011 pounds. The difference between the upward buoyant force on the cubic foot of helium and the weight of the helium is the amount of extra weight that the helium can lift: about 0.067 pounds. Since you weigh 85 pounds, it would take about 1300 cubic feet of helium to lift you and a thin balloon up into the air. That's a balloon about 13.5 feet in diameter.

Illumination matters because your skin only reflects light to which it's exposed. When you step into a room illuminated only by red light, your skin appears red, not because it's truly red but because there is only red light to reflect. Ordinary incandescent bulbs produce a thermal spectrum of light with a "color temperature" of about 2800 kelvins. A thermal light spectrum is a broad, featureless mixture of colors that peaks at a particular wavelength that's determined only by the temperature of the object emitting it. Since the bulb's color temperature is much cooler than the sun's (5800 kelvins), the bulb appears much redder than the sun and emits relatively little blue light. A fluorescent lamp, however, synthesizes its light spectrum from the emissions of various fluorescent phosphors. Its light spectrum is broad but structured and depends on the lamp's phosphor mixture. The four most important phosphor mixtures are cool white, deluxe cool white, warm white, and deluxe warm white. These mixtures all produce more blue than an incandescent bulb, but the warm white and particularly the deluxe warm white tone down the blue emission to give a richer, warmer glow at the expense of a little energy efficiency. Cool white fluorescents are closer to natural sunlight than either warm white fluorescents or incandescent bulbs. To answer your question about shaves: without blue light in the illumination, it's not that easy to distinguish beard from skin. Since incandescent illumination is lacking in blue light, a shave looks good even when it isn't.
But in bright fluorescent lighting, beard and skin appear sharply different and it's easy to see spots shaving has missed. As for makeup illumination, it's important to apply makeup in the light in which it will be worn. Blue-poor incandescent lighting downplays blue colors so it's easy to overapply them. When the lighting then shifts to blue-rich fluorescents, the blue makeup will look heavy handed. Some makeup mirrors provide both kinds of illumination so that these kinds of mistakes can be avoided. After falling for a long time, an object will descend at a steady speed known as its "terminal velocity." This terminal velocity exists because an object moving through air experiences drag forces (air resistance). These drag forces become stronger with speed so that as a falling object picks up speed, the upward air resistance it experiences gradually becomes stronger. Eventually the object reaches a speed at which the upward drag forces exactly balance its downward weight and the object stops accelerating. It is then at "terminal velocity" and descends at a steady pace. The terminal velocity of an object depends on the object's size, shape, and density. A fluffy object (a feather, a parachute, or a sheet of paper) has a small terminal velocity while a compact, large, heavy object (a cannonball, a rock, or a bowling ball) has a large terminal velocity. An aerodynamic object such as an arrow also has a very large terminal velocity. A person has a terminal velocity of about 200 mph when balled up and about 125 mph with arms and feet fully extended to catch the wind. Popular in movies as a source of long glowing sparks, a Tesla coil is basically a high-frequency, very high-voltage transformer. Like most transformers, the Tesla coil has two circuits: a primary circuit and a secondary circuit. The primary circuit consists of a capacitor and an inductor, fashioned together to form a system known as a "tank circuit". A capacitor stores energy in its electric field while an inductor stores energy in its magnetic field. When the two are wired together in parallel, their combined energy sloshes back and forth from capacitor to inductor to capacitor at a rate that's determined by various characteristics of the two devices. Powering the primary of the Tesla coil is a charge delivery system that keeps energy sloshing back and forth in the tank circuit. This delivery system has both a source of moderately high voltage electric current and a pulsed transfer system to periodically move charge and energy to the tank. The delivery system may consist of a high voltage transformer and a spark gap, or it may use vacuum tubes or transistors. The secondary circuit consists of little more than a huge coil of wire and some electrodes. This coil of wire is located around the same region of space occupied by the inductor of the primary circuit. As the magnetic field inside that inductor fluctuates up and down in strength, it induces current in the secondary coil. That's because a changing magnetic field produces an electric field and the electric field surrounding the inductor pushes charges around and around the secondary coil. By the time the charges in the secondary coil emerge from the coil, they have enormous amounts of energy; making them very high voltage charges. They accumulate in vast numbers on the electrodes of the secondary circuit and push one another off into the air as sparks. While most circuits must form complete loops, the Tesla coil's secondary circuit doesn't. 
Its end electrodes just spit charges off into space and let those charges fend for themselves. Many of them eventually work their way from one electrode to the other by flowing through the air or through objects. But even when they don't, there is little net buildup of charge anywhere. That's because the direction of current flow through the secondary coil reverses frequently and the sign of the charge on each electrode reverses, too. The Tesla coil is a high-frequency device and its top electrode goes from positively charged to negatively charged to positively charged millions of times a second. This rapid reversal of charge, together with the reversing electric and magnetic fields, means that a Tesla coil radiates strong electromagnetic waves. It therefore interferes with nearby radio reception. Finally, it has been pointed out to me by readers that a properly built Tesla coil is resonant—that the high-voltage coil has a natural resonance at the same frequency at which it is being excited by the lower voltage circuit. The high-voltage coil's resonance is determined by its wire length, shape, and natural capacitance.

Yes. The paint is simply decoration on the metal walls. The cooking chamber of the microwave has metal walls so that the microwaves will reflect around inside the chamber. Thick metal surfaces are mirrors for microwaves and they work perfectly well with or without thin, non-conducting coatings of paint.

Just before burning their fuels, both engines compress air inside a sealed cylinder. This compression process adds energy to the air and causes its temperature to skyrocket. In a spark ignition engine, the air that's being compressed already contains fuel so this rising temperature is a potential problem. If the fuel and air ignite spontaneously, the engine will "knock" and won't operate at maximum efficiency. The fuel and air mixture is expected to wait until it's ignited at the proper instant by the spark plug. That's why gasoline is formulated to resist ignition below a certain temperature. The higher the "octane" of the gasoline, the higher its certified ignition temperature. Virtually all modern cars operate properly with regular gasoline. Nonetheless, people frequently put high-octane (high-test or premium) gasoline in their cars under the mistaken impression that their cars will be better for it. If your car doesn't knock significantly with regular gasoline, use regular gasoline. A diesel engine doesn't have spark ignition. Instead, it uses the high temperature caused by extreme compression to ignite its fuel. It compresses pure air to high temperature and pressure, and then injects fuel into this air. Timed to arrive at the proper instant, the fuel bursts into flames and burns quickly in the superheated compressed air. In contrast to gasoline, diesel fuel is formulated to ignite easily as soon as it enters hot air.

An audio speaker generates sound by moving a surface back and forth through the air. Each time the surface moves toward you, it compresses the air in front of it and each time the surface moves away from you, it rarefies that air. By doing this repetitively, the speaker forms patterns of compressions and rarefactions in the air that propagate forward as sound. The magnet is part of the system that makes the surface move. Attached to the surface itself is a cylindrical coil of wire and this coil fits into a cylindrical channel cut into the speaker's permanent magnet.
That magnet is carefully designed so that its magnetic field lines radiate outward from the inside of the channel to the outside of the channel and thus pass through the cylindrical coil the way bicycle spokes pass through the rim of the wheel. When an electric current is present in the wire, the moving electric charges circulate around this cylinder and cut across the magnetic field lines. But whenever a charge moves across a magnetic field line, it experiences a force known as the Lorentz force. In this case, the charges are pushed either into or out of the channel slot, depending on which way they are circulating around the coil. The charges drag the coil and surface with them, so that as current flows back and forth through the coil, the coil and surface pop in and out of the magnet channel. This motion produces sound.

It's a common misconception that the microwaves in a microwave oven excite a natural resonance in water. The frequency of a microwave oven is well below any natural resonance in an isolated water molecule, and in liquid water those resonances are so smeared out that they're barely noticeable anyway. It's kind of like playing a violin under water—the strings won't emit well-defined tones in water because the water impedes their vibrations. Similarly, water molecules don't emit (or absorb) well-defined tones in liquid water because their clinging neighbors impede their vibrations. Instead of trying to interact through a natural resonance in water, a microwave oven just exposes the water molecules to the intense electromagnetic fields in strong, non-resonant microwaves. The frequency used in microwave ovens (2,450,000,000 cycles per second or 2.45 GHz) is a sensible but not unique choice. Waves of that frequency penetrate well into foods of reasonable size so that the heating is relatively uniform throughout the foods. Since leakage from these ovens makes the radio spectrum near 2.45 GHz unusable for communications, the frequency was chosen in part so that the ovens would not interfere with existing communication systems. As for there being a laser in a microwave oven, there isn't. Lasers are not the answer to all problems and so the source for microwaves in a microwave oven is a magnetron. This high-powered vacuum tube emits a beam of coherent microwaves while a laser emits a beam of coherent light waves. While microwaves and light waves are both electromagnetic waves, they have quite different frequencies. A laser produces much higher frequency waves than the magnetron. And the techniques these devices use to create their electromagnetic waves are entirely different. Both are wonderful inventions, but they work in very different ways. The fact that this misleading information appears in a science book, presumably used in schools, is a bit discouraging. It just goes to show you that you shouldn't believe everything you read in books or on the web (even this web site, because I make mistakes, too).

Your son has magnetized the shadow mask that's located just inside the screen of your color television. It's a common problem and one that can easily be fixed by "degaussing" the mask (It'll take years or longer to fade on its own, so you're going to have to actively demagnetize the mask). You can have it done professionally or you can buy a degaussing coil yourself and give it a try (Try a local electronics store or contact MCM Electronics, (800) 543-4330, 6" coil is item #72-785 for $19.95 and 12" coil is item #72-790 for $32.95).
Color sets create the impression of full color by mixing the three primary colors of light—blue, green, and red—right there on the inside surface of the picture tube. A set does the mixing by turning on and off three separate electron beams to control the relative brightnesses of the three primary colors at each location on the screen. The shadow mask is a metal grillwork that allows the three electron beams to hit only specific phosphor dots on the inside of the tube's front surface. That way, electrons in the "blue" electron beam can only hit blue-glowing phosphors, while those in the "green" beam hit green-glowing phosphors and those in the "red" beam hit red-glowing phosphors. The three beams originate at slightly different locations in the back of the picture tube and reach the screen at slightly different angles. After passing through the holes in the shadow mask, these three beams can only hit the phosphors of their color. Since the shadow mask's grillwork and the phosphor dots must stay perfectly aligned relative to one another, the shadow mask must be made of a metal that has the same thermal expansion characteristics as glass. The only reasonable choice for the shadow mask is Invar metal, an alloy that unfortunately is easily magnetized. Your son has magnetized the mask inside your set and because moving charged particles are deflected by magnetic fields, the electron beams in your television are being steered by the magnetized shadow mask so that they hit the wrong phosphors. That's why the colors are all washed out and rearranged. To demagnetize the shadow mask, you should expose it to a rapidly fluctuating magnetic field that gradually decreases in strength until it vanishes altogether. The degaussing coils I mentioned above plug directly into the AC power line and act as large, alternating-field electromagnets. As you wave one of these coils around in front of the screen, you flip the magnetization of the Invar shadow mask back and forth rapidly. By slowly moving this coil farther and farther away from the screen, you gradually scramble the magnetizations of the mask's microscopic magnetic domains. The mask still has magnetic structures at the microscopic level (this is unavoidable and a basic characteristic of all ferromagnetic metals such as steel and Invar). But those domains will all point randomly and ultimately cancel each other out once you have demagnetized the mask. By the time you have the coil a couple of feet away from the television, the mask will have no significant magnetization left at the macroscopic scale and the colors of the set will be back to normal. Incidentally, I did exactly this trick to my family's brand new color television set in 1965. I had enjoyed watching baseball games and deflecting the pitches wildly on our old black-and-white set. With only one electron beam, a black-and-white set needs no shadow mask and has nothing inside the screen to magnetize. My giant super alnico magnet left no lingering effect on it. But when the new set arrived, I promptly magnetized its shadow mask and when my parents watched the "African Queen" that night, the colors were not what you'd call "natural." The service person came out to degauss the picture tube the next day and I remember denying any knowledge of what might have caused such an intense magnetization. He and I agreed that someone must have started a vacuum cleaner very close to the set and thus magnetized its surface. I was only 8, so what did I know anyway.
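To get a feel for how little stray magnetism it takes to upset the colors, here is a rough Python estimate of how far a weak magnetic field pushes a television's electron beam sideways. Every number in it (the accelerating voltage, the field strength, and the distance the beam spends in the field) is an assumption chosen only for illustration:

    # Rough sideways deflection of a TV electron beam by a weak magnetic field.
    # The voltage, field, and drift distance are illustrative guesses, not
    # measurements from any particular set.
    import math

    e = 1.602e-19        # electron charge, coulombs
    m = 9.109e-31        # electron mass, kilograms

    voltage = 25000.0    # assumed accelerating voltage, volts
    B = 1e-4             # assumed stray field, teslas (about twice Earth's field)
    drift = 0.05         # assumed distance traveled through the field, meters

    speed = math.sqrt(2 * e * voltage / m)        # non-relativistic estimate
    radius = m * speed / (e * B)                  # radius of the curved path
    deflection = radius - math.sqrt(radius**2 - drift**2)

    print("beam speed:", round(speed / 3.0e8, 2), "of light speed")
    print("sideways shift:", round(deflection * 1000, 2), "millimeters")

Even that modest field shifts the beam by a couple of tenths of a millimeter, which is roughly comparable to the spacing of the phosphor dots and plenty to send electrons toward the wrong colors.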
Finally, as many readers have pointed out, many modern televisions and computer monitors have built-in degaussing coils. Each time you turn on one of these units, the degaussing circuitry exposes the shadow mask to a fluctuating magnetic field in order to demagnetize it. If your television set or monitor has such a system, then turning it on and off a couple of times should clear up most or all of the magnetization problems. However, you may have to wait about 15 minutes between power on/off cycles because the built-in degaussing units have thermal protection that makes sure they cool down properly between uses.

Flies travel at modest speeds relative to the air that surrounds them. Since the outside air is nearly motionless relative to the ground (usually), a fly outside the van is also nearly motionless. When the fast-moving van collides with the nearly motionless fly, the fly's inertia holds it in place while the van squashes it. But when the fly is inside the van, the fly travels about in air that is moving with the van. If the van is moving at 70 mph, then so is the air inside it and so is the fly. In fact, everything inside the van moves more or less together and from the perspective of the van and its contents, the whole world outside is what is doing the moving—the van itself can be considered stationary and the van's contents are then also stationary. As long as the fly and the air it is in are protected inside the van, the movement of the outside world doesn't matter. The fly buzzes around in its little protected world. But if the van's window is open and the fly ventures outside just as a signpost passes the car, the fly may get creamed by a collision with the "moving" sign. Everything is relative and if you consider the van as stationary, then it is undesirable for the van's contents to get hit by the moving items in the world outside (passing trees, bridge abutments, or oncoming vehicles).

In the classical view of the world, the view before the advent of quantum theory, nature seemed entirely deterministic and mechanical. If you knew exactly where every molecule and atom was and how fast it was moving, you could perfectly predict where it would be later on. In principle, this classical world would allow you to throw a 6 every time. Of course, you'd have to know everything about the air's motion, the thermal energy in the die, and even the pattern of light in the room. But the need for enormous amounts of information just means that controlling the dice will be incredibly hard, not that it will be impossible. For simple throws, you could probably get by without knowing all that much about the initial conditions. As the throws became more complicated and more sensitive to initial conditions, you'd have to know more and more. However, quantum mechanics makes controlling the die truly impossible. The problem stems from the fact that position and velocity information are not fully defined at the same time in our quantum mechanical universe. In short, you can't know exactly where a die is and how fast it is moving at the same time. And that doesn't mean that you can't perform these measurements well. It means that the precise values don't exist together; they are limited by Heisenberg uncertainty. So quantum physics imposes a fundamental limit on how well you can know the initial conditions before your throw and it thus limits your ability to control the outcome of that throw. How much quantum physics affects your ability to throw a 6 depends on the complexity of the throw.
If you just drop a die a few inches onto a table, you can probably get a 6 most of the time, despite quantum mechanics and without even knowing much classical information. But as you begin throwing the die farther, you'll begin to lose control of it because of quantum mechanics and uncertainty. In reality, you'll find classical physics so limiting that you'll probably never observe the quantum physics problem. Knowing everything about a system is already unrealistic, even in a classical universe. The problems arising from quantum mechanics are really just icing on the cake for this situation. What a great find! This site is filled with pseudo-science at its best. I don't know the history or training of these people, but it's pure garbage. They use the words of science but without any meaningful content. Just as putting on a crown doesn't make you a king, using phrases like "action and reaction" and "Newton's third law" doesn't mean that you are discussing real science. I watched the video on the "Counter Rotation Device" and found the discussion of "Newton's Fourth Law of Motion" quite amusing. The speaker claims that this fourth law was discovered about 30 years ago by a person now at their research lab. It is based on Newton's third law, which the speaker simplifies to "for every action there is an equal and opposite reaction." In a nutshell, his fourth law claims that you can take the reaction caused by a particular action and apply it to the action in the same direction—action causes reaction which causes more action which causes more reaction and so on. Pretty soon you have so much action and reaction that anything becomes possible. The video goes on to show devices that yield more power than they consume and that can easily become net sources of energy—by using part of the output energy from one of these energy multiplying devices to power that device, you can create endless energy from nothing at all. Sadly enough, it's all just nonsense. Newton's third law is not as flexible as the speaker supposes and this endless feedback process in which reaction is used as action to produce more reaction is ridiculous. A more accurate version of Newton's third law is: "Whenever one object pushes on a second object, the second object pushes back on the first object equally hard but in the opposite direction". Thus when you push on the handle of a water pump, that handle pushes back on you with an equal but oppositely directed force. The speaker's claim is that there is a way to use the handle's push on you as part of your push on the handle so that, with your help, the handle essentially pushes itself through action and reaction. You can then pump water almost without effort. Sorry, this is just nonsense. It's mostly just playing with the words action and reaction in their common language form: if you scare me, I react by jumping. That action and reaction has nothing to do with physics. The speaker uses at least three clever techniques to make his claims more compelling and palatable. First, he refers frequently to a power-company conspiracy that is out to destroy his company and its products. Conspiracy theories are so popular these days that having a conspiracy against you makes you more believable. Second, he describes the fellow who discovered the fourth law of motion as a basement inventor who has taken on the rigid scientific establishment. Ordinary people love to see pompous, highly educated academics brought low by other ordinary people; it's kind of a team spirit issue. 
And third, he makes casual use of technical-looking equipment and jargon, as though he is completely at ease in the world of advanced technology. Movies have made it easier to trust characters like Doc Brown from "Back to the Future" than to trust real scientists. In fact, there is no power-company conspiracy because there is no free electricity. The proof is in the pudding: if these guys really could make energy from nothing, they'd be doing it every day and making a fortune. They would be the power companies. If they were interested in public welfare rather than money, they'd have given their techniques away already. If they were interested in proving the scientific establishment wrong, they'd have accepted challenges by scientific organizations and demonstrated their devices in controlled situations (where they can't cheat). The fact is, they're just frauds and of no more interest to the power companies than snake oil salespeople are to doctors. No decent people want to see others defrauded of money, property, or health, but the free electricity people present no real threat to the power companies. The popular notion that an ordinary person is likely to upset established science is an unfortunate product of the anti-intellectual climate of our present world. Becoming a competent scientist is generally hard work and requires dedication, time, and an enormous amount of serious thinking. Physics is hard, even for most physicists. The laws governing the universe are slowly being exposed but it has taken very smart, very hardworking people almost half a millennium to get to the current state of understanding. Each new step requires enormous effort and a detailed understanding of a good part of the physics that is already known. Still, there is a common myth that some clever and lucky individual with essentially no training or knowledge of what has been discovered before will make some monumental breakthrough. The movies are filled with such events. Unfortunately, it won't happen. In new or immature fields or subfields, it is possible for an essentially untrained or self-trained genius to jump in and discover something important. Galileo and Newton probably fit this category in physics and Galois and Ramanujan probably fit it in mathematics. But most of physics is now so mature that broad new discoveries are rare, and accessible only to those with extremely good understandings of what is already known. A basement tinkerer hasn't got a prayer. Finally, real scientists don't always walk around in white lab coats looking serious, ridiculing the less educated, and trying to figure out how to trick the government into funding yet another silly, fraudulent, or unethical research project. In fact, most scientists wear practical clothes, have considerable humor, enjoy speaking with ordinary folk about their science, and conduct that science because they love and believe in it rather than as a means to some diabolic end. These scientists use the words of science in their conversations because it is the appropriate language for their work and there is meaning in each word and each sentence. The gibberish spoken by "scientists" in movies is often offensive to scientists in the same way that immigrant groups find it offensive when people mock their native languages. I don't know about any patent history for the free electricity organization but everyone should be aware that not all patented items actually do what they're supposed to. In principle, the U.S.
Patent Office only awards a patent when it determines that a concept has not been patented previously, is not already known, is not obvious, and is useful. The utility requirement should eliminate items that don't actually work. One of my readers, a patent attorney, reports that he regularly invokes the utility regulation while escorting the "inventors" of impossible devices such as "free electricity" to the door. They consider him part of the conspiracy against them, but he is doing us all a service by keeping foolishness out of the patent system. However, proving that something doesn't work often takes time and money, so sometimes nonfunctional items get patented. Thus a patent isn't always a guarantee of efficacy. Patented nonsense is exactly that: nonsense. Finally, how do I know that Free Electricity is really not possible? Couldn't I have missed something somewhere in the details? No. The impossibility of this scheme is rooted in the very groundwork of physics; at the deepest level where there is no possibility of mistake. For the counter rotation device to generate 15 kilowatts of electricity out of nothing, it would have to be a net source of energy—the device would be creating energy from nothing. That process would violate the conservation of energy, whereby energy cannot be created or destroyed but can only be transferred from one object to another or converted from one form to another. Recognizing that our universe is relativistic (it obeys the laws of special relativity), the actual conserved quantity is mass/energy, but the concept is the same: you can't make mass/energy from nothing. The origin of this conservation law lies in a mathematical theorem noted first by C. G. J. Jacobi and fully developed by Emmy Noether, that each symmetry in the laws of physics gives rise to a conserved quantity. The fact that a translation in space—shifting yourself from one place to another—does not change the laws of physics gives rise to a conserved quantity: momentum. The fact that a rotation—changing the direction in which you are facing—does not change the laws of physics gives rise to another conserved quantity: angular momentum. And the fact that waiting a few minutes—changing the time at which you are—does not change the laws of physics gives rise to a third conserved quantity: energy. The conservation of energy is thus intimately connected with the fact that the laws of physics are the same today as they were yesterday and as they will be tomorrow. Scientists have been looking for over a century for any changes in the laws of physics with translations and rotations in space and with movement through time, and have never found any evidence for such changes. Thus momentum, angular momentum, and energy are strictly conserved in our universe. For the counter rotation device to create energy from nothing, all of physics would have to be thrown in the trashcan. The upset would be almost as severe as discovering that 1+1 = 3. Furthermore, a universe in which physics was time-dependent and energy was not conserved would be a dangerous place. Free electricity devices would become the weapons of the future—bombs and missiles that released energy from nothing. Moreover, as the free electricity devices produced energy from nothing, the mass/energy of the earth would increase and thus its gravitational field would also increase. Eventually, the gravity would become strong enough to cause gravitational collapse and the earth would become a black hole. 
Fortunately, this is all just science fiction because free electricity isn't real. Generators and motors are very closely related and many motors that contain permanent magnets can also act as generators. If you move a permanent magnet past a coil of wire that is part of an electric circuit, you will cause current to flow through that coil and circuit. That's because a changing magnetic field, such as that near a moving magnet, is always accompanied in nature by an electric field. While magnetic fields push on magnetic poles, electric fields push on electric charges. With a coil of wire near the moving magnet, the moving magnet's electric field pushes charges through the coil and eventually through the entire circuit. A convenient arrangement for generating electricity endlessly is to mount a permanent magnet on a spindle and to place a coil of wire nearby. Then as the magnet spins, it will turn past the coil of wire and propel currents through that coil. With a little more engineering, you'll have a system that looks remarkably like the guts of a typical permanent magnet based motor. In fact, if you take a common DC motor out of a toy and connect its two electrical terminals to a 1.5 V light bulb or a light emitting diode (try both directions with an LED because it can only carry current in one direction), you'll probably be able to light that bulb or LED by spinning the motor's shaft rapidly. A DC motor has a special switching system that converts the AC produced in the motor's coils into DC for delivery to the motor's terminals, but it's still a generator. So the easiest answer to your question is: "find a nice DC motor and turn its shaft". Iron and most steels are intrinsically magnetic. By that, I mean that they contain intensely magnetic microscopic domains that are randomly oriented in the unmagnetized metal but that can be aligned by exposure to an external magnetic field. In pure iron, this alignment vanishes quickly after the external field is removed, but in the medium carbon steel of a typical screwdriver, the alignment persists days, weeks, years, or even centuries after the external field is gone. To magnetize a screwdriver permanently, you should expose it briefly to a very strong magnetic field. Touching the screwdriver's tip to one pole of a strong magnet will cause some permanent magnetization. Rubbing or tapping the screwdriver also helps to free up its domains so that they can align with this external field. But the better approach is to put the screwdriver in a coil of wire that carries a very large DC electric current. The current only needs to flow for a fraction of a second—just long enough for the domains to align. A car battery is a possibility, but it has safety problems: it can deliver an incredible current (400 amperes or more) for a long time (minutes) and can overheat or even explode your coil of wire. Moreover, it may leak hydrogen gas, which can be ignited by the sparks that will inevitably occur while you are magnetizing your screwdriver. A safer choice for the current source is a charged electrolytic capacitor—a device that stores large quantities of separated electric charge. A charged capacitor can deliver an even larger current than a battery can, but only for a fraction of a second—only until the capacitor's store of separated charge is exhausted. Looking at one of my hobbyist electronics catalogs, Marlin P. 
Jones, 800-652-6733, I'd pick a filter capacitor with a capacitance of 10,000 microfarads and a maximum voltage of 35 volts (Item 12104-CR, cost: $1.50). Charging this device with three little 9V batteries clipped together in series (27 volts overall) will leave it with about 0.25 coulombs of separated charge and just over 3.5 joules (3.5 watt-seconds or 3.5 newton-meters) of energy. Make sure that you get the polarity right—electrolytic filter capacitors store separated electric charge nicely but you have to put the positive charges and negative charges on the proper sides. [To be safe, work with rubber gloves and, as a general rule, never touch anything electrical with more than one hand at a time. Remember that a shock across your heart is much more dangerous than a shock across your hand. And while 27 volts is not a lot and is unlikely to give you a shock under any reasonable circumstances, I can't accept responsibility for any injuries. If you're not willing to accept responsibility yourself, don't try any of this.] If you wrap about 100 turns of reasonably thick insulated wire (at least 18 gauge, but 12 gauge solid-copper home wiring would be better) around the screwdriver and then connect one end of the coil to the positively charged side of the capacitor and the other end of the coil to the negatively charged side, you'll get a small spark (wear gloves and safety glasses) and a huge current will flow through the coil. The screwdriver should become magnetized. If the magnetization isn't enough, repeat the charging-discharging procedure a couple of times, always with the same connections so that the magnetization is in the same direction.

It turns out that the electrons in copper travel quite slowly even though "electricity" travels at almost the speed of light. That's because there are so many mobile electrons in copper (and other conductors) that even if those electrons move only an inch per second, they comprise a large electric current. Picture the electrons as water flowing through a pipe or river and now consider the Mississippi River. Even if the Mississippi is flowing only inches per second, it sure carries lots of water past St. Louis each second. The fact that electricity itself travels at almost the speed of light just means that when you start the electrons moving at one end of a long wire, the electrons at the other end of the wire also begin moving almost immediately. But that doesn't mean that an electron from your end of the wire actually reaches the far end any time soon. Instead, the electrons behave somewhat like water in a long hose. When you start the water moving at one end, it pushes on water in front of it, which pushes on water in front of it, and so on so that water at the far end of the hose begins to leave the hose almost immediately. In the case of water, the motion proceeds forward at the speed of sound. In a wire, the motion proceeds forward at the speed of light in the wire (actually the speed at which electromagnetic waves propagate along the wire), which is only slightly less than the speed of light in vacuum. Note for the experts: as one of my readers (KT) points out, the water-in-a-hose analogy for current-in-a-wire is far from perfect. Current in a wire flows throughout the wire, including at its surface, and the wire's resistance to steady current flow scales inversely with the cross-sectional area of the wire.
In contrast, water in a hose only flows through the open channel inside the hose and the hose's resistance to flow scales inversely with approximately the fourth power of that channel's diameter. Actually, faster moving fluids don't necessarily have lower pressure. For example, a bottle of compressed air in the back of a pickup truck is still high-pressure air, even though it's moving fast. The real issue here is that when fluid speeds up in passing through stationary obstacles, its pressure drops. For example, when air rushes into the open but stationary mouth of a vacuum cleaner, that air experiences not only a rise in speed but also a drop in pressure. Similarly, when water rushes out of the nozzle of a hose, its speed increases and its pressure drops. This is simply conservation of energy: as the fluid gains kinetic energy, it must lose pressure energy. However, if there are sources of energy around—fans, pumps, or moving surfaces—then these exchanges of pressure for speed may no longer be present. That's why I put in the qualifier of there being only stationary obstacles. Just as most good camera lenses have more than one optical element inside them, so your eye has more than one optical element inside it. The outside surface of your eye is curved and actually acts as a lens itself. Without this surface lens, your eye can't bring the light passing through it to a focus on your retina. The component in your eye that is called "the lens" is actually the fine adjustment rather than the whole optical system. When you put your eye in water, the eye's curved outer surface stops acting as a lens. That's because light travels at roughly the same speed in water as it does in your eye, so the light no longer bends as it enters your eye. Everything looks blurry because the light doesn't focus on your retina anymore. But by inserting an air space between your eye and a flat plate of glass or plastic, you recover the bending at your eye's surface and everything appears sharp again. The only common light source that presents any real danger to a child with a magnifying glass is the sun. If you let sunlight pass through an ordinary magnifying glass, the convex lens of the magnifier will cause the rays of sunlight to converge and they will form a real image of the sun a short distance beyond the magnifying glass. This focused image will appear as a small, circular light spot of enormous brilliance when you let it fall onto a sheet of white paper. It's truly an image—it's round because the sun is round and it has all the spatial features that the sun does. If the image weren't so bright and the sun had visible marks on its surface, you'd see those marks nicely in the real image. The problem with this real image of the sun is simply that it's dazzlingly bright and that it delivers lots of thermal power in a small area. The real image is there in space, whether or not you put any object into that space. If you put paper or some other flammable substance in this focused region, it may catch on fire. Putting your skin in the focus would also be a bad idea. And if you put your eye there, you're in serious trouble. So my suggestion with first graders is to stay in the shade when you're working with magnifying glasses. As soon as you go out in direct sunlight, that brilliant real image will begin hovering in space just beyond the magnifying glass, waiting for someone to put something into it. And many first graders just can't resist the opportunity to do just that.
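If you'd like to put rough numbers on how concentrated that focused sunlight is, here is a small back-of-the-envelope sketch in Python. The 10 cm focal length and 5 cm lens diameter are invented illustrative values rather than numbers from the question; the only real physical input is the sun's angular diameter of roughly half a degree.

    import math

    # Rough size and brightness of the sun's real image behind a simple magnifier.
    # The focal length and lens diameter are made-up illustrative values.
    sun_angle = math.radians(0.53)   # angular diameter of the sun, about 0.53 degrees
    focal_length = 0.10              # assumed focal length of the magnifier, meters
    lens_diameter = 0.05             # assumed diameter of the magnifier, meters

    image_diameter = focal_length * sun_angle               # a very distant object images at the focal plane
    concentration = (lens_diameter / image_diameter) ** 2   # area ratio: how many times brighter than open sunlight

    print(f"image diameter: about {image_diameter * 1000:.1f} mm")      # ~0.9 mm
    print(f"concentration: about {concentration:.0f} times sunlight")   # ~3000x, ignoring losses in the glass

Even allowing for imperfect glass, a concentration of a few thousand suns in a millimeter-sized spot is plenty to char paper, which is why the stay-in-the-shade rule is a good one.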
Converting units is always a matter of multiplying by 1. But you must use very fancy versions of 1, such as 60 seconds/1 minute and 1 gallon/3.7854 liters. Since 60 seconds and 1 minute are the same amount of time, 60 seconds/1 minute is 1. Similarly, since 1 gallon (U.S. liquid) and 3.7854 liters are the same amount of volume, 1 gallon/3.7854 liters is 1. So suppose that you have measured the flow of water through a pipe as 283 liters/second. You can convert to gallons/minute by multiplying 283 liters/second by 1 twice: (283 liters/second)(60 seconds/1 minute)(1 gallon/3.7854 liters). When you complete this multiplication, the liter units cancel, the second units cancel, and you're left with 4,486 gallons/minute. As a number of readers have informed me, the watches you're referring to generate electricity that then powers a conventional electronic watch. These electromechanical watches use mechanical work done by wrist motions on small weights inside the watches to generate electricity. Seiko's watch spins a tiny generator—a coil of wire moves relative to a magnetic field and electric charges are pushed through the coil as a result. I have been told that other watches exist that use piezoelectricity—the electricity that flows when certain mechanical objects are deformed or strained—to generate their electricity. In any case, your wrist motion is providing the energy that becomes electric power. These electromechanical watches are the modern descendants of the automatic mechanical watches. An automatic watch had a mainspring that was wound by the motion of the wearer's hand. A small mass inside the watch swung back and forth on the end of a lever. Because of its inertia, this mass resisted changes in velocity and it moved relative to the watch body whenever the watch accelerated. If you like, you can picture the mass as a ball that rolls about inside a wagon as you roll the wagon around an obstacle course. When the lever turned back and forth relative to the watch body, the watch was able to extract energy from it. Gears attached to the lever allowed the watch to use the mass's energy to wind its mainspring. The energy extracted from the mass with each swing was very small, but it was enough to keep the mainspring fully wound. Ultimately, this energy came from your hand—you did work on the watch in shaking it about and some of this energy eventually wound up in the mainspring. These same sorts of motions are what power the electromechanical watches of today. Instead of winding a spring, your wrist motions swing weights about inside the watches and these moving weights spin generators to produce electric power. Actually, the system of cloud and ground that produces lightning is itself a giant capacitor and the lightning is a failure of that capacitor. Like all capacitors, the system consists of two charged surfaces separated by an insulating material. In this case, the charged surfaces are the cloud bottom and the ground, and the insulating material is the air. During charging, vast amounts of separated electric charge accumulate on the two surfaces—the cloud bottom usually becomes negatively charged and the ground below it becomes positively charged. These opposite charges produce an intense electric field in the region between the cloud and the ground, and eventually the rising field causes charge to begin flowing through the air: a stroke of lightning. In principle, you could tap into a cloud and the ground beneath and extract the capacitor's charge directly with wires.
But this would be a heroic engineering project and unlikely to be worth the trouble. And catching a lightning strike in order to charge a second capacitor is not likely to be very efficient: most of the energy released during the strike would have to dissipate in the air and relatively little of it could be allowed to enter the capacitor. That's because no realistic capacitor can handle the voltage in lightning. Here's the detailed analysis. The power released during the strike is equal to the strike's voltage times its current: the voltage between clouds and ground and the current flowing between the two during the strike. Voltage is the measure of how much energy each unit of electric charge has and current is the measure of how many units of electric charge are flowing each second. Their product is energy per second, which is power. Added up over time, this power gives you the total energy in the strike. If you want to capture all this energy in your equipment, it must handle all the current and all the voltage. If it can only handle 1% of the voltage, it can only capture 1% of the strike's total energy. While the current flowing in a lightning strike is pretty large, the voltage involved is astonishing: millions and millions of volts. Devices that can handle the currents associated with lightning are common in the electric power industry but there's nothing reasonable that can handle lightning's voltage. Your equipment would have to let the air handle most of that voltage. The air would extract power from the flowing current in the lightning bolt and turn it into light, heat, and sound. Your equipment would then extract only a token fraction of the stroke's total energy. Finally, your equipment would have to prepare the energy properly for delivery on the AC power grid—its voltage would have to be lowered dramatically and a switching system would have to convert the static charge on the capacitors to an alternating flow of current in the power lines. When he established his temperature scale, Daniel Gabriel Fahrenheit defined 32 degrees "Fahrenheit" (32 F) as the melting temperature of ice—the temperature at which ice and water can coexist. When you assemble a mixture of ice and water and allow them to reach equilibrium (by waiting, say, 3 minutes) in a reasonably insulated container (something that does not allow much heat to flow either into or out of the ice bath), the mixture will reach and maintain a temperature of 32 F. At that temperature and at atmospheric pressure, ice and water are both stable and can coexist indefinitely. To see why this arrangement is stable, consider what would happen if something tried to upset it. For example, what would happen if this mixture were to begin losing heat to its surroundings? Its temperature would begin to drop but then the water would begin to freeze and release thermal energy: when water molecules stick together, they release chemical potential energy as thermal energy. This thermal energy release would raise the temperature back to 32 F. The bath thus resists attempts at lowering its temperature. Similarly, what would happen if the mixture were to begin gaining heat from its surroundings? Its temperature would begin to rise but then the ice would begin to melt and absorb thermal energy: separating water molecules increases their chemical potential energy and requires an input of thermal energy. This lost thermal energy would lower the temperature back to 32 F. The bath thus resists attempts at raising its temperature.
So an ice/water bath self-regulates its temperature at 32 F. The only other quantities affecting this temperature are the air pressure (the bath temperature could shift upward by about 0.003 degrees F during the low pressure of a hurricane) and dissolved chemicals (half an ounce of table salt per liter of bath water will shift the bath temperature downward by about 1 degree F). It's true that the force of gravity decreases with depth, so that if you were to find yourself in a cave at the center of the earth, you would be completely weightless. However, pressure depends on more than local gravity: it depends on the weight of everything being supported overhead. So while you might be weightless, you would still be under enormous pressure. Your body would be pushing outward on everything around you, trying to prevent those things from squeezing inward and filling the space you occupy. In fact, your body would not succeed in keeping those things away and you would be crushed by their inward pressure. More manageable pressures surround us everyday. Our bodies do their part in supporting the weight of the atmosphere overhead when we're on land or the weight of the atmosphere and a small part of the ocean when we're swimming at sea. The deeper you go in the ocean, the more weight there is overhead and the harder your body must push upward. Thus the pressure you exert on the water above you and the pressure that that water exerts back on you increases with depth. Even though gravity is decreasing as you go deeper and deeper, the pressure continues to increase. However, it increases a little less rapidly as a result of the decrease in local gravity. The foam consists of tiny air bubbles surrounded by very thin films of soap and water. When light enters the foam, it experiences partial reflections from every film surface it enters or exits. That is because light undergoes a partial reflection whenever it changes speed (hence the reflections from windows) and the speed of light in soapy water is about 30% less than the speed of light in air. Although only about 4% of the light reflects at each entry or exit surface, the foam contains so many films that very little light makes it through unscathed. Instead, virtually all of the light reflects from film surfaces and often does so repeatedly. Since the surfaces are curved, there is no one special direction for the reflections and the reflected light is scattered everywhere. And while an individual soap film may exhibit colors because of interference between reflections from its two surfaces, these interference effects average away to nothing in the dense foam. Overall, the foam appears white—it scatters light evenly, without any preference for a particular color or direction. White reflections appear whenever light encounters a dense collection of unoriented transparent particles (e.g. sugar, salt, clouds, sand, and the white pigment particles in paint). As for the fact that even colored soaps create only white foam, that's related to the amount of dye in the soaps. It doesn't take much dye to give bulk soap its color. Since light often travels deep into a solid or liquid soap before reflecting back to our eyes, even a modest amount of dye will selectively absorb enough light to color the reflection. But the foam reflects light so effectively with so little soap that the light doesn't encounter much dye before leaving the lather. The reflection remains white. 
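To see how quickly those modest per-surface reflections add up inside a lather, here is a tiny sketch that uses the roughly 4% per-surface figure quoted above; the surface counts are arbitrary illustrative choices, not measurements of a real foam.

    # Fraction of light that passes straight through a foam without being reflected even once,
    # assuming each film surface reflects about 4% (the figure quoted above).
    reflect_per_surface = 0.04
    for surfaces in (10, 50, 100):
        straight_through = (1 - reflect_per_surface) ** surfaces
        print(f"{surfaces:3d} surfaces: {straight_through:.1%} of the light gets through unscattered")
    # About 66%, 13%, and 1.7% respectively: nearly everything ends up scattered, so the foam looks white.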
To produce a colored foam, you would have to add so much dye to the soap that you'd probably end up with colored hands as well. Fortunately, you don't have to wait that long. From astronomical observations, we are fairly certain that the laws of physics as we know them apply throughout the visible universe. It wouldn't take large changes in the physical laws to radically change the structures of atoms, molecules, stars, and galaxies. So the fact that the light and other particles we see coming from distant places are so similar to what we see coming from nearby sources is pretty strong evidence that the laws of physics don't change with distance. Also, the fact that the light we see from distant sources has been traveling for a long time means that the laws of physics don't seem to have changed much (if at all) with time, either. While there are theories that predict subtle but orderly changes in the laws of physics with time and location, effectively making those laws more complicated, no one seriously thinks that the laws of physics change radically and randomly from place to place in the Universe. Nearly all metals are crystalline, meaning that their atoms are arranged in neat and orderly stacks, like the piles of oranges or soup cans at the grocery store or the cannonballs at the courthouse square. When you bend a metal, its crystals can deform either by changing the spacings between atoms or by letting those atoms slide past one another as great moving sheets of atoms. When the atoms keep their relative orientations but change their relative spacings, the deformation is called elastic. When the atom sheets slide about and move, the deformation is called plastic. Metals that bend permanently are experiencing plastic deformation. Their atoms change their relative orientations during the bend and they lose track of where they were. Once plastic deformation has occurred, the metal can't remember how to get back to its original shape and stays bent. Metals that bend only temporarily and return to their original shape when freed from stress are experiencing elastic deformation. Their sheets of atoms aren't sliding about and they can easily spring back to normal when the stresses go away. Naturally, springs are made from materials that experience only elastic deformation in normal circumstances. Hardened metals such as spring steel are designed and heat-treated so that the atomic sliding processes, known technically as "slip," are inhibited. When you bend them and let go, they bounce back to their original shapes. But if you bend them too far, they either experience plastic deformation or they break. Non-crystalline materials such as glass also make good springs. But since these amorphous materials have no orderly rows of atoms, they can't experience plastic deformation at all. They behave as wonderful springs right up until you bend them too far. Then, instead of experiencing plastic deformation and bending permanently, they simply crack in two. One last detail: there are a few exotic materials that undergo complicated deformations that are neither temporary nor permanent. With changes in temperature, these shape memory materials can recover from plastic deformation and spring back to their original shapes. A superconductor is a material that carries electric current without any loss of energy. Currents lose energy as they flow through normal wires.
This energy loss appears as a voltage drop across the material—the voltage of the current as it enters the material is higher than the voltage of the current when it leaves the material. But in a superconductor, the current doesn't lose any voltage at all. As a result, currents can even flow around loops without stopping. Currents are magnetic and superconducting magnets are based on the fact that once you get a current flowing around a loop of superconductor, it keeps going forever and so does its magnetism. At low speeds, mass and energy appear to be separate quantities. Mass is the measure of inertia and can be determined by shaking an object. Energy is the measure of how much work an object can do and can be determined by letting it do that work. Conveniently enough, the object's weight—the force gravity exerts on it—is exactly proportional to its mass, which is why people carelessly interchange the words "mass" and "weight," even though they mean different things. But when something is moving at speeds approaching the speed of light, mass and kinetic energy no longer separate so easily. In fact, the relativistic equations of motion are more complicated than those describing slow objects and the way in which gravity affects fast objects is more complicated than simply giving them "weight." Overall, you can view the bending of light by gravity in one of two ways. First, you can view it approximately as gravity affecting not only mass but also energy, so that light falls because its energy gives it something equivalent to a "weight." Second, you can view it more accurately as the bending of light caused by a change in the shape of space and time around a gravitating object. Space is curved, so that light doesn't travel straight as it moves past gravitating objects—it follows the curves of space itself. The second or Einsteinian view, which correctly predicts twice as much bending of light as the first or Newtonian view, is a little disconcerting. That's why it took some time for the theory of general relativity to be widely accepted. (Thanks to DP for pointing out the factor of two.) The helium balloon is the least dense thing in the car and is responding to forces exerted on it by the air in the car. To understand this, consider what happens to you, the air, and finally the helium balloon as the car first starts to accelerate forward. When the car starts forward, inertia tries to keep all of the objects in the car from moving forward. An object at rest tends to remain at rest. So the car must push you forward in order to accelerate you forward and keep you moving with the car. As the car seat pushes forward on you, you push back on the car seat (Newton's third law) and dent its surface. Your perception is that you are moving backward, but you're not really. You're actually moving forward; just not quite as quickly as the car itself. The air in the car undergoes the same forward acceleration process. Its inertia tends to keep it in place, so the car must push forward on it to make it accelerate forward. Air near the front of the car has nothing to push it forward except the air near the back of the car, so the air in the front of the car tends to "dent" the air in the back of the car. In effect, the air shifts slightly toward the rear of the car. Again, you might think that this air is going backward, but it's not. It's actually moving forward; just not quite as quickly as the car itself. Now we're ready for the helium balloon.
Since helium is so light, the helium balloon is almost a hollow, weightless shell that displaces the surrounding air. As the car accelerates forward, the air in the car tends to pile up near the rear of the car because of its inertia. If the air can push something out of its way to get more room near the rear of the car, it will. The helium balloon is that something. As inertia causes the air to drift toward the rear of the accelerating car, the nearly massless and inertialess helium balloon is squirted toward the front of the car to make more room for the air. There is actually a horizontal pressure gradient in the car's air during forward acceleration, with a higher pressure at the rear of the car than at the front of the car. This pressure gradient is ultimately what accelerates the air forward with the car and it's also what propels the helium balloon to the front of the car. Finally, when the car is up to speed and stops accelerating forward, the pressure gradient vanishes and the air returns to its normal distribution. The helium balloon is no longer squeezed toward the front of the car and it floats once again directly above the gear shift. One last note: OGT from Lystrup, Denmark points out that when you accelerate a glass of beer, the rising bubbles behave in the same manner. They move toward the front of the glass as you accelerate it forward and toward the back of the glass as you bring it to rest. Most objects make no light of their own and are visible only because they reflect some of the light that strikes them. Without sunlight (or any other light source), these passive objects would appear black. Black is what we "see" when there is no light reaching our eyes from a particular direction. The only objects we would see would be those that made their own light and sent it toward our eyes. The fact that we see mostly reflected light makes for some interesting experiments. A red object selectively reflects only red light; a blue object reflects only blue light; a green object reflects only green light. But what happens if you illuminate a red object with only blue light? The answer is that the object appears black! Since it is only able to reflect red light, the blue light that illuminates it is absorbed and nothing comes out for us to see. That's why lighting is so important to art. As you change the illumination in an art gallery, you change the variety of lighting colors that are available for reflection. Even the change from incandescent lighting to fluorescent lighting can dramatically change the look of a painting or a person's face. That's why some makeup mirrors have dual illumination: incandescent and fluorescent. The one exception to this rule that objects only reflect the light that strikes them is fluorescent objects. These objects absorb the light that strikes them and then emit new light at new colors. For example, most fluorescent cards or pens will absorb blue light and then emit green, orange, or red light. Try exposing a mixture of artwork and fluorescent objects to blue light. The artwork will appear blue and black: blue wherever the art is blue and black wherever the art is either red, green, or black. But the fluorescent objects will display a richer variety of colors because those objects can synthesize their own light colors. If we neglect the mass of the rope, the two teams always exert equal forces on one another. 
That's simply an example of Newton's third law—for every force team A exerts on team B, there is an equal but oppositely directed force exerted by team B on team A. While it might seem that these two forces on the two teams should always balance in some way so that the teams never move, that isn't the case. Each team remains still or accelerates in response to the total forces on that team alone, and not on the teams as a pair. When you consider the acceleration of team A, you must ignore all the forces on team B, even though one of those forces on team B is caused by team A. There are two important forces on team A: (1) the pull from team B and (2) a force of friction from the ground. That force of friction approximately cancels the pull from team B because the two forces are in opposite horizontal directions. As long as the two forces truly cancel, team A won't accelerate. But if team A doesn't obtain enough friction from the ground, it will begin to accelerate toward team B. The winning team is the one that obtains more friction from the ground than it needs and accelerates away from the other team. The losing team is the one that obtains too little friction from the ground and accelerates toward the other team. An ordinary wire will carry electric current in either direction, while a diode will only carry current in one direction. That's because the electric charges in a wire are free to drift in either direction in response to electric forces but the charges in a diode pass through a one-way structure known as a p-n junction. Charges can only approach the junction from one side and leave from the other. If they try to approach from the wrong side, they discover that there are no easily accessible quantum mechanical pathways or "states" in which they can travel. Sending the charges toward the p-n junction from the wrong side can only occur if something provides the extra energy needed to reach a class of less accessible quantum mechanical states. Light can provide that extra energy, which is why many diodes are light sensitive—they will conduct current in the wrong direction when exposed to light. That is the basis for many light sensitive electronic devices and for most photoelectric or "solar" cells. The amount of hot water that's cooling doesn't necessarily determine which bowl of water will cool fastest. That depends on how quickly each gram of the hot water loses heat, a rate that depends both on how much hotter the water is than its surroundings and on how that water is exposed to those surroundings. In general, hot water loses heat through its surface so the more surface that's exposed, the faster it will lose heat. But surface that's exposed to air also loses heat via evaporation, which is particularly important in cooling the water. In answer to your question, my guess is that the larger bowl of water also exposes much more of that water to the air. Although the larger bowl had more water in it, it allowed that water to exchange heat faster with its environment. If the larger bowl contained twice as much water but let that water lose heat twice as fast, the two bowls would maintain equal temperatures. If you want to see the effect of thermal mass in slowing the loss of temperature, you'll need to control heat loss. Try letting equal amounts of hot water cool in two identical containers—one wrapped in insulation and covered with clear plastic wrap (to prevent evaporation) and one open to the air. You'll see a dramatic change in cooling rate.
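If you'd like a toy model of why the exposed surface matters so much (and of the full-cup-versus-half-cup comparison suggested next), here is a minimal Newton's-law-of-cooling sketch in Python; the heat-loss coefficient, masses, and areas are invented illustrative values, not measurements.

    import math

    # Lumped cooling model: dT/dt = -(h*A)/(m*c) * (T - T_room).
    # All of the numbers below are invented for illustration.
    c_water = 4186.0               # specific heat of water, J/(kg*K)
    h = 25.0                       # assumed effective heat-loss coefficient, W/(m^2*K)
    T_room, T_start = 20.0, 90.0   # temperatures in degrees C

    def temperature_after(seconds, mass_kg, exposed_area_m2):
        # The time constant grows with water mass and shrinks with exposed area.
        tau = mass_kg * c_water / (h * exposed_area_m2)
        return T_room + (T_start - T_room) * math.exp(-seconds / tau)

    half_hour = 1800.0
    print(temperature_after(half_hour, mass_kg=0.25, exposed_area_m2=0.004))   # full cup: ~79 C
    print(temperature_after(half_hour, mass_kg=0.125, exposed_area_m2=0.004))  # half-full cup, same open top: ~70 C

In this simple model, doubling the water for the same exposed surface doubles the cooling time constant, which is exactly the full-versus-half-full effect described below.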
And if you want to compare unequal amounts of water, use two identical containers that are only exposed to the cooler environment through a controlled amount of surface area. For example, try two identical insulated cups, one full of water and one only half full. If both lose heat only through their open tops, the full cup should cool more slowly than the half full cup. I'd suggest finding a hollow rubber ball with a relatively thin, flexible skin and putting different things inside it. You can just cut a small hole and tape it over after you put in "the stuff." Compare the ball's bounciness when it contains air, water, shaving cream, beans, rice, and so on. Just drop it from a consistent height and see how high it rebounds. The ratio of its rebound height to its drop height is a good measure of how well the ball stores energy when it hits the ground and how well it uses that energy to rebound. A ball that bounces to full height is perfect at storing energy while a ball that doesn't bounce at all is completely terrible at storing energy. You'll get something in between for most of your attempts—indicating that "the stuff" is OK but not perfect at storing energy during the bounce. The missing energy isn't destroyed, it's just turned into thermal energy. The ball gets a tiny bit hotter with every bounce. You won't get any important quantitative results from this sort of experiment, but it'll be fun anyway. I wonder what fillings will make the ball bounce best or worst? It is science. The needle is able to enter latex without tearing it because the latex molecules are stretching out of the way of the needle without breaking. Like all polymers (plastics), latex consists of very large molecules. In latex, these molecules are basically long chains of atoms that are permanently linked to one another at various points along their lengths. You can picture a huge pile of spaghetti with each pasta strand representing one latex molecule. Now picture little links connecting pairs of these strands at random, so that when you try to pick up one strand, all the other strands come with it. That's the way latex looks microscopically. You can't pull the strands of latex apart because they are all linked together. But you can push a spoon between the strands. That is what happens when you carefully weave a needle into a latex balloon—the needle separates the polymer strands locally, but doesn't actually pull them apart or break them. Since breaking the latex molecules will probably cause the balloon to tear and burst, you have to be very patient and use a very sharp needle. I usually oil the needle before I do this and I don't try to insert the needle in the most highly stressed parts of the balloon. The regions near the tip of the balloon and near where it is filled are the least stressed and thus the easiest to pierce successfully with a needle. A reader has informed me that coating the needle with Vaseline is particularly helpful. One final note: a reader pointed out that it is also possible to put a needle through a balloon with the help of a small piece of adhesive tape. If you put the tape on a patch of the inflated balloon, it will prevent the balloon from ripping when you pierce the balloon right through the tape. This "cheater's" approach is more reliable than trying to thread the needle between the latex molecules, but it's less satisfying as well.
But it does point out the fact that a balloon bursts because of tearing and that if you prevent the balloon from tearing, you can pierce it as much as you like. A dehumidifier makes use of the fact that water tends to be individual gas molecules in the air at higher temperatures but condensed liquid molecules on surfaces at lower temperatures. At its heart, a dehumidifier is basically a heat pump, one that transfers heat from one surface to another. Its components are almost identical to those in an air conditioner or refrigerator: a compressor, a condenser, and an evaporator. The evaporator acts as the cold surface, the source of heat, and the condenser acts as the hot surface, the destination for that heat. When the unit is operating and pumping heat, the evaporator becomes cold and the condenser becomes hot. A fan blows warm, moist air from the room through the evaporator coils and that air's temperature drops. This temperature drop changes the behavior of water molecules in the air. When the air and its surroundings were warm, any water molecule that accidentally bumped into a surface could easily return to the air. Thus while water molecules were always landing on surfaces or taking off, the balance was in favor of being in the air. But once the air and its surroundings become cold, any water molecules that bump into a surface tend to stay there. Water molecules are still landing on surfaces and taking off, but the balance is in favor of staying on the surface as either liquid water or solid ice. That's why dew or frost forms when warm moist air encounters cold ground. In the dehumidifier, much of the air's water ends up dripping down the coils of the evaporator into a collection basin. All that remains is for the dehumidifier to rewarm the air. It does this by passing the air through the condenser coils. The thermal energy that was removed from the air by the evaporator is returned to it by the condenser. In fact, the air emerges slightly hotter than before, in part because it now contains all of the energy used to operate the dehumidifier and in part because condensing moisture into water releases energy. So the dehumidifier is using temperature changes to separate water and air. To make good ice cream, you want to freeze the cream in such a way that the water in the cream forms only very tiny ice crystals. That way the ice cream will taste smooth and creamy. The simplest way to achieve this goal is to stir the cream hard while lowering its temperature far enough to freeze the water in it and to make the fat solidify as well. That's where the ice and salt figure in. By itself, melting ice has a temperature of 0° C (32° F). When heat flows into ice at that temperature, the ice doesn't get hotter, it just transforms into water at that same temperature. Separating the water molecules in ice to form liquid water takes energy and so heat must flow into the ice to make it melt. But if you add salt to the ice, you encourage the melting process so much that the ice begins to use its own internal thermal energy to transform into water. The temperature of the ice drops well below 0° C (32° F) and yet it keeps melting. Eventually, the drop in temperature stops and the ice and salt water reach an equilibrium, but the mixture is then quite cold—perhaps -10° C (14° F) or so. To melt more ice, heat must flow into the mixture. When you place liquid cream nearby, heat begins to flow out of the cream and into the ice and salt water. More ice melts and the liquid cream gets colder.
Eventually, ice cream starts to form. Stirring keeps the ice crystals small and also ensures that the whole creamy liquid freezes uniformly. During a bounce from a rigid surface, the ball's surface dents. Denting a surface takes energy and virtually all of the ball's energy of motion (kinetic energy) goes into denting its own surface. For a moment the ball is motionless and then it begins to rebound. As the ball undents, it releases energy and this energy becomes the ball's new energy of motion. The issue is how well the ball's surface stores and then releases this energy. The ideal ball experiences only elastic deformation—the molecules within the ball do not reorganize at all, but only change their relative spacings during the dent. If the molecules reorganize—sliding across one another or pulling apart in places—then some of the denting energy will be lost due to internal friction-like effects. Even if the molecules slide back to their original positions, they won't recover all the energy and the ball won't bounce to its original height. In general, harder rubber bounces more efficiently than softer rubber. That's because the molecules in hard rubber are too constrained to be able to slide much. A superball is very hard and bounces well. But there are also sophisticated thermal effects that occur in some seemingly hard rubbers that cause them to lose their stored energy. Ozone is an unstable molecule that consists of three oxygen atoms rather than the usual two. Because of its added complexity, an ozone molecule can interact with a broader range of light wavelengths and has the wonderful ability to absorb harmful ultraviolet light. The presence of ozone molecules in our upper atmosphere makes life on earth possible. However, because ozone molecules are chemically unstable, they can be depleted by contaminants in the air. Ozone molecules react with many other molecules or molecular fragments, making ozone useful as a bleach and a disinfectant. Molecules containing chlorine atoms are particularly destructive of ozone because a single chlorine atom can facilitate the destruction of many ozone molecules through a chlorine recycling process. In contrast, nitrogen molecules are extremely stable. They are so stable that there are only a few biological systems that are capable of separating the two nitrogen atoms in a nitrogen molecule in order to create organic nitrogen compounds. Without these nitrogen-fixing organisms, life wouldn't exist here. Because nitrogen molecules are nearly unbreakable, they survive virtually any amount or type of chemical contamination. Yes, fluorescents are more energy efficient overall. To begin with, fluorescent lights have a much longer life than incandescent lights—the fluorescent tube lasts many thousands of hours and its fixture lasts tens of thousands of hours. So the small amount of energy spent building an incandescent bulb is deceptive—you have to build a lot of those bulbs to equal the value of one fluorescent system. Second, although there is considerable energy consumed in manufacturing the complicated components of a fluorescent lamp, it's unlikely to amount to more than a few kilowatt-hours—the equivalent of the extra energy a 100 watt incandescent light uses up in a week or so of typical operation. So it may take a week or two to recover the energy cost of building the fluorescent light, but after that the energy savings continue to accrue for years and years.
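As a rough sanity check of that payback estimate, here is a short sketch. The 3 kilowatt-hour manufacturing figure simply echoes the "few kilowatt-hours" above, while the 25-watt fluorescent equivalent and four hours of use per day are my own illustrative assumptions rather than numbers from the original answer.

    # Rough payback time for the energy used to manufacture a fluorescent lamp.
    # The manufacturing energy echoes the "few kilowatt-hours" figure above;
    # the lamp powers and daily usage are illustrative assumptions.
    manufacturing_energy_kwh = 3.0
    incandescent_watts = 100.0
    fluorescent_watts = 25.0        # assumed wattage for comparable light output
    hours_per_day = 4.0             # assumed typical operation

    saved_kwh_per_week = (incandescent_watts - fluorescent_watts) / 1000.0 * hours_per_day * 7
    print(f"energy saved per week: {saved_kwh_per_week:.1f} kWh")                      # ~2.1 kWh
    print(f"payback time: {manufacturing_energy_kwh / saved_kwh_per_week:.1f} weeks")  # ~1.4 weeks

Under these assumptions the manufacturing energy is recovered in roughly a week and a half, consistent with the "week or two" estimate above.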
First, your bus can't be going at the speed of light because massive objects are strictly forbidden from traveling at that speed. Even traveling near the speed of light would require a fantastic expenditure of energy. But suppose that the bus were traveling at 99.999999% of the speed of light and you were to run toward its front at 0.000002% of the speed of light (about 13 mph or just under a 5 minute mile). Now what would happen? First, the bus speed I quoted is in reference to some outside observer because the seated passengers on the bus can't determine its speed. After all, if the shades are pulled down on the bus and it's moving at a steady velocity, no one can tell that it's moving at all. So let's assume that the bus speed I gave is according to a stationary friend who is watching the bus zoom by from outside. While you are running toward the front of the bus at 0.000002% of the speed of light, your speed is in reference to the other passengers in the bus, who see you moving forward. The big question is what does your stationary friend see? Actually, your friend sees you running toward the front of the bus, but determines that your personal speed is only barely over 99.999999% of the speed of light. The two speeds haven't added the way you'd expect. Even though you and the bus passengers determine that you are moving quickly toward the front of the bus, your stationary friend determines that you are moving just the tiniest bit faster than the bus. How can that be? The answer lies in the details of special relativity, but here is a simple, albeit bizarre picture. Your stationary friend sees a deformed bus pass by. Ignoring some peculiar optical effects due to the fact that it takes time for light to travel from the bus to your friend's eyes so that your friend can see the bus, your friend sees a foreshortened bus—a bus that is smashed almost into a pancake as it travels by. While you are in that pancake, running toward the front of the bus, the front is so close to the rear that your speed within the bus is minuscule. Why the bus becomes so short is another issue of special relativity. Heat pipes use evaporation and condensation to move heat quickly from one place to another. A typical heat pipe is a sealed tube containing a liquid and a wick. The wick extends from one end of the tube to the other and is made of a material that attracts the liquid—the liquid "wets" the wick. The liquid is called the "working fluid" and is chosen so that it tends to be a liquid at the temperature of the colder end of the pipe and tends to be a gas at the temperature of the hotter end of the pipe. Air is removed from the pipe so the only gas it contains is the gaseous form of the working fluid. The pipe functions by evaporating the liquid working fluid into gas at its hotter end and allowing that gaseous working fluid to condense back into a liquid at its colder end. Since it takes thermal energy to convert a liquid to a gas, heat is absorbed at the hotter end. And because a gas gives up thermal energy when it converts from a gas to a liquid, heat is released at the colder end. After a brief start-up period, the heat pipe functions smoothly as a rapid conveyor of heat. The working fluid cycles around the pipe, evaporating from the wick at the hot end of the pipe, traveling as a gas to the cold end of the pipe, condensing on the wick, and then traveling as a liquid to the hot end of the pipe.
Near room temperature, heat pipes use working fluids such as HFCs (hydrofluorocarbons, the replacements for Freons), ammonia, or even water. At elevated temperatures, heat pipes often use liquid metals such as sodium. Sound consists of small fluctuations in air pressure. We hear sound because these changes in air pressure produce fluctuating forces on various structures in our ears. Similarly, microphones respond to the changing forces on their components and produce electric currents that are effectively proportional to those forces. Two of the most common types of microphones are capacitance microphones and electromagnetic microphones. In a capacitance microphone, opposite electric charges are placed on two closely spaced surfaces. One of those surfaces is extremely thin and moves easily in response to changes in air pressure. The other surface is rigid and fixed. As a sound enters the microphone, the thin surface vibrates with the pressure fluctuations. The electric charges on the two surfaces pull on one another with forces that depend on the spacing of the surfaces. Thus as the thin surface vibrates, the charges experience fluctuating forces that cause them to move. Since both surfaces are connected by wires to audio equipment, charges move back and forth between the surfaces and the audio equipment. The sound has caused electric currents to flow and the audio equipment uses these currents to record or process the sound information. In an electromagnetic microphone, the fluctuating air pressure causes a coil of wire to move back and forth near a magnet. Since changing or moving magnetic fields produce electric fields, electric charges in the coil of wire begin to move as a current. This coil is connected to audio equipment, which again uses these currents to represent sound. When air flows past an airplane wing, it breaks into two airstreams. The one that goes under the wing encounters the wing's surface, which acts as a ramp and pushes the air downward and forward. The air slows somewhat and its pressure increases. Forces between this lower airstream and the wing's undersurface provide some of the lift that supports the wing. But the airstream that goes over the wing has a complicated trip. First it encounters the leading edge of the wing and is pushed upward and forward. This air slows somewhat and its pressure increases. So far, this upper airstream isn't helpful to the plane because it pushes the plane backward. But the airstream then follows the curving upper surface of the wing because of a phenomenon known as the Coanda effect. The Coanda effect is a common behavior in fluids—viscosity and friction keep them flowing along surfaces as long as they don't have to turn too quickly. (The next time your coffee dribbles down the side of the pitcher because you poured too slowly, blame it on the Coanda effect.) Because of the Coanda effect, the upper airstream now has to bend inward to follow the wing's upper surface. This inward bending involves an inward acceleration that requires an inward force. That force appears as the result of a pressure imbalance between the ambient pressure far above the wing and a reduced pressure at the top surface of the wing. The Coanda effect is the result (i.e. air follows the wing's top surface) but air pressure is the means to achieve that result (i.e. a low pressure region must form above the wing in order for the airstream to arc inward and follow the plane's top surface).
The low pressure region above the wing helps to support the plane because it allows air pressure below the wing to be more effective at lifting the wing. But this low pressure also causes the upper airstream to accelerate. With more pressure behind it than in front of it, the airstream accelerates—it's pushed forward by the pressure imbalance. Of course, the low pressure region doesn't last forever and the upper airstream has to decelerate as it approaches the wing's trailing edge—a complicated process that produces a small amount of turbulence on even the most carefully designed wing. In short, the curvature of the upper airstream gives rise to a drop in air pressure above the wing and the drop in air pressure above the wing causes a temporary increase in the speed of the upper airstream as it passes over much of the wing. Dissolving solids in water always lowers the water's freezing temperature by an amount that's proportional to the density of dissolved particles. If you double the density of particles in water, you double the amount by which the freezing temperature is lowered. While salt and sugar both dissolve in water and thus both lower its freezing temperature, salt is much more effective than sugar. That's because salt produces far more dissolved particles per pound or per cup than sugar. First, table salt (sodium chloride) is almost 40% more dense than cane sugar (sucrose), so that a cup of salt weighs much more than a cup of cane sugar. Second, a salt molecule (NaCl) weighs only about 17% as much as a sucrose molecule (C12H22O11), so there are far more salt molecules in a pound of salt than sugar molecules in a pound of sugar. Finally, when salt dissolves in water, it decomposes into ions: Na+ and Cl-. That decomposition doubles the density of dissolved particles produced when salt dissolves. Sugar molecules remain intact when they dissolve, so there is no doubling effect. Thus salt produces a much higher density of dissolved particles than sugar, whether you compare them cup for cup or pound for pound, and thus lowers water's freezing temperature more effectively. That's why the salt water is so slow to freeze. They measure the volume of liquid they deliver and shut off when they have dispensed enough soda to fill the cup. Accurate volumetric flowmeters, such as those used in the dispensers, typically have a sophisticated paddlewheel assembly inside that turns as the liquid goes through a channel. When the paddlewheel has gone around the right number of times, an electronic valve closes to stop the flow of liquid. Yes, there would be a simple relationship between the periods of the three pendulums. That's because the period of a pendulum depends only on its length and on the strength of gravity. Since a pendulum's period is proportional to the square root of its length, you would have to make your model four times as long to double the time it takes to complete a swing. A typical grandfather's clock has a 0.996-meter pendulum that takes 2 seconds to swing, while a common wall clock has a 0.248-meter pendulum that takes 1 second to swing. Note that the effective length of the pendulum is from its pivot to its center of mass or center of gravity. A precision pendulum has special temperature compensating components that make sure that this effective length doesn't change when the room's temperature changes. There certainly is such a mechanism.
The air at a jetliner's cruising altitude is much too thin to support life so it must be compressed before introducing it into the airplane's passenger cabin. The compressed air is actually extracted from an intermediate segment of the airplane's jet engines. In the course of their normal operations, these engines collect air entering their intake ducts, compress that air with rotary fans, inject fuel into the compressed air, burn the mixture, and allow the hot, burned gases to stream out the exhaust duct through a series of rotary turbines. The turbines provide the power to operate the compressor fans. Producing the stream of exhaust gas is what pushes the airplane forward. But before fuel is injected into the engine's compressed air, there is a side duct that allows some of that compressed air to flow toward the passenger cabin. So the engine is providing the air you breathe during a flight. There is one last interesting point about this compressed air: It is initially too hot to breathe. Even though air at 30,000 feet is extremely cold, the act of compressing it causes its temperature to rise substantially. This happens because compressing air takes energy and that energy must go somewhere in the end. It goes into the thermal energy of the air and raises the air's temperature. Thus the compressed air from the engines must be cooled by air conditioners before it goes into the passenger cabin. You are right that adding salt to water raises the water's boiling temperature. Contrary to one's intuition, adding salt to water doesn't make it easier for the water to boil, it makes it harder. As a result, the water must reach a higher temperature before it begins to boil. Any foods you place in this boiling salt water (e.g. eggs or pasta) find themselves in contact with somewhat hotter water and should cook faster as a result. That's because most cooking is limited by the boiling temperature of water in or around food and anything that lowers this boiling temperature, such as high altitude, slows most cooking while anything that raises the boiling temperature of water, such as salt or the use of a pressure cooker, speeds most cooking. However, it takes so much salt to raise the boiling temperature of water enough to affect cooking times that this can't be the main motivation for cooking in salted water. By the time you've salted the water enough to raise its boiling temperature more than a few degrees, you've made the water too salty for cooking. It's pretty clear that salting your cooking water is basically a matter of taste, not temperature. If you were directly between the two planets, their gravitational forces on you would oppose one another and at least partially cancel. Which planet would exert the stronger force on you would depend on their relative masses and on your distances from each of them. If one planet pulled on you more strongly than the other, you would find yourself falling toward that planet even though the other planet's gravity would oppose your descent and prolong the fall. However, there would also be a special location between the planets at which their gravitational forces would exactly cancel. If you were to begin motionless at that point in space, you wouldn't begin to fall at all. While the planets themselves would move and take the special location with them, there would be a brief moment when you would be able to hover in one place. But there is something I've neglected: you aren't really at one location in space. 
Because your body has a finite size, the forces of gravity on different parts of your body would vary subtly according to their exact locations in space. Such variations in the strength of gravity are normally insignificant but would become important if you were extremely big (e.g. the size of the moon) or if the two planets you had in mind were extremely small but extraordinarily massive (e.g. black holes or neutron stars). In those cases, spatial variations in gravity would tend to pull unevenly on your body parts and might cause trouble. Such uneven forces are known as tidal forces and are indeed responsible for the earth's tides. While the tidal forces on a spaceship traveling between the earth and the moon would be difficult to detect, they would be easy to find if the spaceship were traveling between two small and nearby black holes. In that case, the tidal forces could become so severe that they could rip apart not only the spaceship and its occupants, but also their constituent molecules, atoms, and even subatomic particles. These purported gravitational anomalies are just illusions. Because gravity is a relatively weak force, enormous concentrations of mass are required to create significant gravitational fields. Since it takes the entire earth to give you your normal weight, the mass concentration needed to cancel or oppose the earth's gravitational field in only one location would have to be extraordinary. While objects capable of causing such bizarre effects do exist elsewhere in our universe (e.g. black holes and neutron stars), there fortunately aren't any around here. As a result, the strength of the gravitational field at the earth's surface varies by less than 1% from place to place and always points almost exactly toward the center of the earth. Any tourist attraction that claims to have gravity pointing in some other direction with some other strength is claiming the impossible. This is an interesting question because it brings up the tricky issue of what the temperature in a microwave oven really is. In fact, there is no specific temperature in the oven because the microwaves that do the cooking are not thermal. Rather than emerging from a hot object with a well-defined temperature, these microwaves are produced in a coherent fashion by a vacuum tube. Like the light emerging from a laser, these microwaves can heat objects they encounter as hot as you like, or at least until heat begins to escape from those objects as fast as it's being added. So instead of measuring the "temperature of the microwave oven," people normally put thermometers in the food to measure the food's temperature. This works well as long as the thermometers don't interact with the microwaves in ways that make them either hotter or inaccurate. Electronic thermometers are common in high-end microwaves. There is nothing special about these electronic thermometers except that they are carefully shielded so that the microwaves don't heat them or affect their readings. By "shielded," I mean that each of these thermometers has a continuous metallic sheath that reflects the microwaves. This sheath extends from the wall of the oven's cooking chamber all the way to the thermometer probe's tip so that the microwaves themselves can't enter the measurement electronics. Since the sheath reflects microwaves, the thermometer isn't heated by the microwaves and only measures the temperature of the food it contacts. On the other hand, putting a mercury thermometer in a microwave oven isn't a good idea.
While mercury is a metal and will reflect most of the microwaves that strike it, the microwaves will push a great many electric charges up and down the narrow column of mercury. This current flow will cause heating of the mercury because the column is too thin to tolerate the substantial current without becoming warm. The mercury can easily overheat, turn to gas, and explode the thermometer. (A reader of this web site reported having blown up a mercury thermometer just this way as a child.) Moreover, as charges slosh up and down the mercury column, they will periodically accumulate at the upper end. Since there is only a thin vapor of mercury gas above this upper surface, the accumulated charges will probably ionize this vapor and create a luminous mercury discharge. The thermometer would then turn into a mercury lamp, emitting ultraviolet light. I used microwave-powered mercury lamps similar to this in my thesis research fifteen years ago and they work very nicely. When you view something in a flat mirror, you are looking at a virtual image of the object and this virtual image isn't located on the surface of the mirror. Instead, it's located on the far side of the mirror at a distance exactly equal to the distance from the mirror to the actual object. In effect, you are looking through a window into a "looking glass world" and seeing a distant object on the other side of that window. The reflected light reaching your eyes has all the optical characteristics of having come the full distance from that virtual image, through the mirror, to your eyes. The total distance between what you are seeing and your eyes is the sum of the distance from your eyes to the mirror plus the distance from the mirror to the object. That's why you must use your distance glasses to see most reflected objects clearly. Even when you observe your own face, you are seeing it as though it were located twice as far from you as the distance from your face to the mirror. While your book's claim is well intended, it's actually incorrect. The author is trying to point out that atoms aren't created or destroyed during the reaction and that all the reactant atoms are still present in the products. But equating the conservation of atoms with the conservation of mass overlooks any mass loss associated with changes in the chemical bonds between atoms. While bond masses are extremely small compared to the masses of atoms, they do change as the results of chemical reactions. However even the most energy-releasing or "exothermic" reactions only produce overall mass losses of about one part in a billion and no one has yet succeeded in weighing matter precisely enough to detect such tiny changes. Heater-based refrigerators make use of an absorption cycle in which a refrigerant is driven out of solution as a gas in a boiler, condenses into a liquid in a condenser, evaporates back into a gas in an evaporator, and finally goes back into solution in an absorption unit. The cooling effect comes during the evaporation in the evaporator because converting a liquid to a gas requires energy and thus extracts heat from everything around the evaporating liquid. The most effective modern absorption cycle refrigerators use a solution of lithium bromide (LiBr) in water. What enters the boiler is a relatively dilute solution of LiBr (57.5%) and what leaves is dense, pure water vapor and a relatively concentrated solution of LiBr (64%). The pure water vapor enters a condenser, where it gives up heat to its surroundings and turns into liquid water. 
To convert this liquid water back into gas, all that has to happen is for its pressure to drop. That pressure drop occurs when the water enters a low-pressure evaporator through a narrow orifice. As the water evaporates, it draws heat from its surroundings and refrigerates them. Finally, something must collect this low-pressure water vapor and carry it back to the boiler. That "something" is the concentrated LiBr solution. When the low-pressure water vapor encounters the concentrated LiBr solution in the absorption unit, it quickly goes back into solution. The solution becomes less concentrated as it draws water vapor out of the gas above it. This diluted solution then returns to the boiler to begin the process all over again. Overall, the pure water follows one path and the LiBr solution follows another. The pure water first appears as a high-pressure gas in the boiler (out of the boiling LiBr solution), converts to a liquid in the condenser, evaporates back into a low-pressure gas in the evaporator, and finally disappears in the absorption unit (into the cool LiBr solution). Meanwhile, the LiBr solution shuttles back and forth between the boiler (where it gives up water vapor) and the absorption unit (where it picks up water vapor). The remarkable thing about this whole cycle is that its only moving parts are in the pump that moves LiBr solution from the absorption unit to the boiler. Its only significant power source is the heater that operates the boiler. That heater can use propane, kerosene, electricity, waste heat from a conventional power plant, and so on. Since the discovery of relativity, people have recognized that there is energy associated with rest mass and that the amount of that energy is given by Einstein's famous equation: E=mc². However, the energy associated with rest mass is hard to release and only tiny fractions of it can be obtained through conventional means. Chemical reactions free only parts per billion of a material's rest mass as energy and even nuclear fission and fusion can release only about 1% of it. But when equal quantities of matter and antimatter collide, it's possible for 100% of their combined rest mass to become energy. Since two metric tons is 2000 kilograms and the speed of light is 300,000,000 meters/second, the energy in Einstein's formula is 1.8×10²⁰ kilogram-meters²/second², or 1.8×10²⁰ joules. To give you an idea of how much energy that is, it could keep a 100-watt light bulb lit for 57 billion years. While it's true that microwaves twist water molecules back and forth, this twisting alone doesn't make the water molecules hot. To understand why, consider the water molecules in gaseous steam: microwaves twist those water molecules back and forth but they don't get hot. That's because the water molecules begin twisting back and forth as the microwaves arrive and then stop twisting back and forth as the microwaves leave. In effect, the microwaves are only absorbed temporarily and are reemitted without doing anything permanent to the water molecules. Only by having the water molecules rub against something while they're twisting, as occurs in liquid water, can they be prevented from reemitting the microwaves. That way the microwaves are absorbed and never reemitted—the microwave energy becomes thermal energy and remains behind in the water. Visualize a boat riding on a passing wave—the boat begins bobbing up and down as the wave arrives but it stops bobbing as the wave departs. Overall, the boat doesn't absorb any energy from the wave.
However, if the boat rubs against a dock as it bobs up and down, it will convert some of the wave's energy into thermal energy and the wave will have permanently transferred some of its energy to the boat and dock. Yes, VCRs work on the same principle as an audio tape player: as a magnetized tape moves past the playback head, that tape's changing magnetic field produces a fluctuating electric field. This electric field pushes current back and forth through a coil of wire and this current is used to generate audio signals (in a tape player) or video and audio signals (in a VCR). However, there is one big difference between an audio player and a VCR. In an audio player, the tape moves past a stationary playback head. In a VCR, the tape moves past a spinning playback head. When you pause an audio tape player, the tape stops moving and there is no audio signal. But when you pause a VCR, the playback head continues to spin. As the playback head (actually 2 or even 4 heads that trade off from one another) sweeps across a few inches of the tape, it experiences the changing magnetic fields and fluctuating electric fields needed to produce the video and audio signals. That's why you can still see the image from a paused VCR. To prevent the spinning playback heads from wearing away the tape, most VCRs limit the pause time to about 5 minutes. A transformer transfers power between two or more electrical circuits when each of those circuits is carrying an alternating electric current. Transfers of this sort are important because many electric power systems have incompatible circuits—one circuit may use large currents of low voltage electricity while another circuit may use small currents of high voltage electricity. A transformer can move power from one circuit of the electric power system to another without any direct connections between those circuits. Now for the technical details: a transformer is able to make such transfers of power because (1) electric currents are magnetic, (2) the magnetic field from an alternating electric current changes with time, (3) a time-varying magnetic field creates an electric field, and (4) an electric field pushes on electric charges and electric currents. Overall, one of the alternating currents flowing through a transformer creates a time-varying magnetic field and thus an electric field in the transformer. This electric field does work on (transfers power to) another alternating current flowing through the transformer. At the same time, this electric field does negative work on (saps power from) the original alternating current. When all is said and done, the first current has lost some of its power and the second current has gained that missing power. Yes. For very fundamental reasons, light can't change its speed in vacuum; it always travels at the so-called "speed of light." So light that is traveling straight downward toward a celestial object doesn't speed up; only its frequency and energy increase. But light that is traveling horizontally past a celestial object will bend in flight, just as a satellite will bend in flight as it passes the celestial object. This trajectory bending is a consequence of free fall. While the falling of light as it passes through a gravitational field is a little more complicated than for a normal satellite—the light's trajectory must be studied with fully relativistic equations of motion—both objects fall nonetheless.
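To get a feel for how small this bending is, here is a minimal Python sketch of the standard general-relativity estimate for light grazing the Sun, where the deflection angle is roughly 4GM/(c²b) for light passing a mass M at closest distance b. The constants are ordinary textbook values and the result is meant only as an order-of-magnitude illustration.

import math

# Deflection of starlight that just grazes the Sun, using the general-relativity
# estimate: angle ≈ 4·G·M / (c²·b), with b equal to the Sun's radius.
G = 6.674e-11        # gravitational constant, m³/(kg·s²)
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # mass of the Sun, kg
R_sun = 6.96e8       # radius of the Sun, m

angle_rad = 4 * G * M_sun / (c**2 * R_sun)
angle_arcsec = math.degrees(angle_rad) * 3600    # convert radians to arcseconds

print(f"Deflection of grazing starlight: about {angle_arcsec:.2f} arcseconds")

That comes out to roughly 1.75 arcseconds: tiny, but large enough to be measured during the 1919 solar eclipse expeditions.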
Diodes are one-way devices for electric current and are thus capable of separating positive charges from negative charges and keeping them apart. Those charges can separate by moving away from one another in the diode's allowed direction and then can't get back together because doing so would require them to move through the diode in the forbidden direction. Given a diode's ability to keep separated charges apart, all that's needed to start collecting separated charges is a source of energy. This energy is required to drive the positive and negative charges apart in the first place. One such energy source is a particle of light—a photon. When a photon with the right amount of energy is absorbed near the one-way junction of the diode, it can produce an electron-hole pair (a hole is a positively charged quasiparticle that is actually nothing more than a missing electron). The junction will allow only one of these charged particles to cross it and, having crossed, that particle cannot return. Thus when the diode is exposed to light, separated charge begins to accumulate on its two ends and a voltage difference appears between those ends. While most of the "science" in that movie is actually nonsense, the use of lightning as a source of power has some basis in reality. The current in a lightning bolt is enormous, peaking at many thousands of amperes, and the voltages available are fantastically high. With so much current and voltage available, the flow of current during a lightning strike can be very complicated. Even though Doc Brown provided one path through which the lightning current could flow into the ground, he only conducted a fraction of the overall current. The remaining current flowed through the wire and into the "flux capacitor." This branching of the current is common during a lightning strike and makes lightning particularly dangerous. You don't have to be struck directly by lightning or to be in contact with the main conducting pathway between the strike and the earth for you to be injured. Current from the strike can branch out in complicated ways and follow a variety of unexpected paths to ground. You don't want to be on any one of them. Doc Brown wasn't seriously hurt because it was only a movie. In real life, people don't recover so quickly. Your lights are dimming because something is reducing the voltage of the electricity in your house. The lights expect the electric current passing through them to experience a specific voltage drop—that is, they expect each electric charge to leave behind a certain amount of energy as the result of its passage through the lights. If the voltage of electricity in your house is less than the expected amount, the lights won't receive enough energy and will glow dimly. The most probable cause for this problem is some power-hungry device in or near your house that cycles on every 5 or 10 minutes. In all likelihood, this device contains a large motor—motors have a tendency to draw enormous currents while they are first starting to turn, particularly if they are old and in need of maintenance. The wiring and power transformer systems that deliver electricity to your neighborhood and house have limited capacities and cannot transfer infinite amounts of power without wasting some of it. In general, wires waste power in proportion to the square of the current they are carrying. While the amount of power wasted in your home's wiring is insignificant in normal situations, it can become sizeable when the circuits are overloaded. 
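As a concrete illustration of that square law, here is a short Python sketch; the wiring resistance and the three currents are assumed, round numbers chosen only to show the trend, not measurements of any real house.

supply_voltage = 120.0    # volts at the source
wiring_resistance = 0.2   # ohms of wiring and transformer resistance (assumed)

# Compare a light load, a normal load, and a motor-startup surge:
for current in (5.0, 15.0, 50.0):     # amperes (assumed values)
    wasted_power = current**2 * wiring_resistance   # watts lost in the wiring (I²R)
    voltage_drop = current * wiring_resistance      # volts lost along the way (I·R)
    print(f"{current:4.0f} A: {wasted_power:5.0f} W wasted in the wiring, "
          f"lights see about {supply_voltage - voltage_drop:.0f} V")

Going from 15 amperes to 50 amperes raises the current by a little over a factor of three but raises the wasted power by more than a factor of ten, and the accompanying lost voltage is what your lamps notice.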
This wasted power in the wiring appears as a loss of voltage—a loss of energy per charge—at your lights and appliances. When the heavy equipment turns on and begins to consume huge amounts of power, the wiring and other electric supply systems begin to waste much more power than normal and the voltage reaching your lights is significantly reduced. Your lights dim until the machinery stops using so much power. To find the device that's making your lights dim, listen carefully the next time your lights fade. You'll probably hear an air conditioner, a fan, or even an elevator starting up somewhere, either in your house or in your neighborhood. There may be nothing you can do to fix the problem, but it's possible that replacing a motor or its bearings will reduce the problem. Another possible culprit is an electric heating system—a hot water heater, a radiant heater, an oven, a toaster, or even a hair-dryer. These devices also consume large amounts of power and, in an older house with limited electric service, may dim the lights. To keep soda carbonated, you should minimize the rate at which carbon dioxide molecules leave the soda and maximize the rate at which those molecules return to it. That way, the net flow of molecules out of the soda will be small. To reduce the leaving rate, you should cool the soda—as long as ice crystals don't begin to form, cooling the soda will make it more difficult for carbon dioxide molecules to obtain the energy they need to leave the soda and will slow the rate at which they're lost. To increase the return rate, you should increase the density of gaseous carbon dioxide molecules above the soda—sealing the soda container or pressurizing it with extra carbon dioxide will speed the return of carbon dioxide molecules to the soda. Also, minimizing the volume of empty bottle above the soda will make it easier for the soda to pressurize that volume itself. The soda will lose some of its carbon dioxide while filling that volume, but the loss will quickly cease. One final issue to consider is surface area: the more surface area there is between the liquid soda and the gas above it, the faster molecules are exchanged between the two phases. Even if you don't keep carbon dioxide gas trapped above soda, you can slow the loss of carbonation by keeping the soda in a narrow-necked bottle with little surface between liquid and gas. But you must also be careful not to introduce liquid-gas surface area inside the liquid. That's what happens when you shake soda or pour it into a glass—you create tiny bubbles inside the soda and these bubbles grow rapidly as carbon dioxide molecules move from the liquid into the bubbles. Cool temperatures, minimal surface area, and plenty of carbon dioxide in the gas phase will keep soda from going flat. As for pouring the soda over ice causing it to bubble particularly hard, that is partly the result of air stirred into the soda as it tumbles over the ice cubes and partly the result of adding impurities to the soda as the soda washes over the rough and impure surfaces of the ice. The air and impurities both nucleate carbon dioxide bubbles—providing the initial impetus for those bubbles to form and grow. Washing the ice to smooth its surfaces and remove impurities apparently reduces the bubbling when you then pour soda over it. Terminal velocity is the result of a delicate balance between two forces—an object's downward weight and the upward drag force that object experiences as it moves downward through the air.
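To see this balance with numbers, here is a minimal Python sketch using the common quadratic air-drag model; every figure in it (the jumper's mass, frontal area, drag coefficient, and the air density) is an assumed round value rather than a measurement.

import math

mass = 75.0             # kg, falling person (assumed)
g = 9.8                 # m/s², gravitational acceleration
air_density = 1.2       # kg/m³, near sea level
drag_coefficient = 1.0  # dimensionless, belly-down posture (assumed)
area = 0.7              # m², frontal area presented to the air (assumed)

weight = mass * g       # downward force, newtons

def drag(speed):
    # Quadratic drag model: the force grows as the square of the downward speed.
    return 0.5 * air_density * drag_coefficient * area * speed**2

# Speed at which the upward drag grows to match the downward weight:
balance_speed = math.sqrt(2 * mass * g / (air_density * drag_coefficient * area))

print(f"Weight: {weight:.0f} N")
print(f"Drag matches weight near {balance_speed:.0f} m/s "
      f"(about {balance_speed * 2.237:.0f} mph)")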
Terminal velocity is reached when those two forces exactly balance one another and the object experiences a net force of zero, stops accelerating, and simply coasts downward at a constant velocity. Since the upward drag force increases with downward speed, there is generally a velocity at which this balance occurs—the terminal velocity. But while a parachutist can't change her weight, she can change the relationship between her downward speed and the upward drag force she experiences. If she rolls herself into a compact ball, she weakens the drag force and ultimately increases her terminal velocity. On the other hand, if she spreads her arms and legs wide so as to catch more air, she strengthens the drag force and decreases her terminal velocity. Popping open her parachute strengthens the drag force so much that her terminal velocity diminishes almost to zero and she coasts slowly downward to a comfortable landing. So to answer your question—two twin parachutists will descend at very different terminal velocities if they adopt different profiles or if only one opens a parachute. Your comparison between the limitless counting numbers and the limited speeds in the universe is an interesting one because it points out a fundamental difference between the older Galilean/Newtonian understanding of the universe and the newer Einsteinian understanding. The older understanding claims that velocities can be added in the same way that counting numbers can be added and that there is thus no limit to the speeds that can exist in our universe. For example, if you are jogging eastward at 5 mph and a second runner passes you traveling eastward 5 mph faster, then a person watching the two of you from a stationary vantage point sees the second runner traveling eastward at 10 mph. The velocities add, so that 5 mph + 5 mph = 10 mph. If the second runner is now passed by a third runner, who is traveling eastward 5 mph faster than the second runner, then the stationary observer sees that third runner traveling eastward at 15 mph. And so it goes. As long as velocities add in this manner, objects can reach any speed they like. At this point, you might assert that velocities do add and that objects should be able to reach any speed. But that's not the case. The modern, relativistic understanding of the universe says that even at these small speeds, velocities don't quite add. To the stationary observer, the second runner travels at only 9.9999999999999994 mph and the third runner at only 14.9999999999999988 mph. As you can see, when two or more velocities are combined, the final velocity isn't quite as large as the simple sum. What that means is that the velocity you observe in another object is inextricably related to your own motion. This interrelatedness is part of the theory of relativity—that observers who are moving relative to one another will see space and time somewhat differently. For objects traveling close to the speed of light, the failure of velocity addition becomes quite severe. For example, if one spaceship travels past the earth at half the speed of light and the people in that spaceship watch a second spaceship pass them at half the speed of light in the same direction, then a person on earth will see the second spaceship traveling only four-fifths of the speed of light. As you can see, relativity is making it difficult to reach the speed of light. In fact, it's impossible to reach the speed of light! 
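If you'd like to check these numbers yourself, here is a minimal Python sketch of the relativistic velocity-addition rule, w = (u + v)/(1 + uv/c²). The mph value of the speed of light is rounded, and the runners' shortfall is computed from the first-order approximation because the exact difference is far smaller than ordinary floating-point subtraction can resolve.

C_MPH = 670_616_629.0   # speed of light in miles per hour (rounded)

def add_velocities(u, v, c):
    # Relativistic combination of two velocities measured in the same units as c.
    return (u + v) / (1 + u * v / c**2)

# Two spaceships, each moving at half the speed of light relative to the one
# behind it, working in units where c = 1:
print(f"0.5 c combined with 0.5 c gives {add_velocities(0.5, 0.5, 1.0):.2f} c")

# For the two 5 mph runners, the shortfall below the simple 10 mph sum is
# approximately (u + v)·u·v/c² -- a few times 10^-16 of a mph:
shortfall = (5.0 + 5.0) * 5.0 * 5.0 / C_MPH**2
print(f"Second runner falls short of 10 mph by about {shortfall:.1e} mph")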
No matter how you combine velocities, no observer will ever see a massive object reach or exceed the speed of light. The only objects that can reach the speed of light are objects without mass and they can only travel at the speed of light. So while the counting numbers obey simple addition and go on forever, velocities do not obey simple addition and have a firm limit—the speed of light. The integers under addition are an example of a mathematical group that extends infinitely in both directions, but there are many examples of groups that do not extend to infinity. The group that describes relativistic, real-world velocities is one such group. You can visualize another simple limited group—the one associated with walking around the surface of the earth. No matter how much you try, you can't walk more than a certain distance northward. While it seems as though steps northward add, so that 5 steps north plus 5 steps north equals 10 steps north, things aren't quite that simple. Eventually you reach the north pole and start walking south! While I'm not an expert on geysers and would need to visit the library to verify my ideas, I believe that they operate the same way a coffee percolator does. Both objects involve a narrow water-filled channel that's heated from below. As the temperature at the bottom of the water column increases, the water's stability as a liquid decreases and its tendency to become gaseous steam increases. What prevents this heated water from converting into gas is the weight of the water and air above it, or more accurately the pressure caused by that weight. But when the water's temperature reaches a certain elevated level, it begins to turn into steam despite the pressure. Since steam is less dense than liquid water, the hot water expands as it turns into steam and it lifts the column of water above it. Water begins to spray out of the top of the channel, decreasing the weight of water in the channel and the pressure at the bottom of the channel. With less pressure keeping the water liquid, the steam-forming process accelerates and the column of water rushes up the channel and into the air. Once the steam itself reaches the top of the channel, it escapes freely into the air and the pressure in the channel plummets. Water begins to reenter the channel and the whole process repeats. What a great idea! Mylar is DuPont's brand of PET film, where "PET" is poly(ethylene terephthalate)—the same plastic used in most plastic beverage containers (look for "PET" or "PETE" in the recycling triangle on the bottom). PET isn't a particularly inert plastic and you shouldn't have any trouble gluing to it. To form a rigid structure, you need either a glassy plastic backing (one that is stiff and brittle at room temperature) or a stiff composite backing. I'd go with fiberglass—mount the Mylar in a large quilting or needlepoint frame, coat the back of the Mylar with the glass and epoxy mixture, invert it, weight it with water, and let it harden. Mylar doesn't stretch easily, so you'll get a very shallow curve and a very long focal length mirror. While the mirror will probably have some imperfections and a non-parabolic shape, it should still do a decent job of concentrating sunlight. I'm afraid that you confuse the hypothetical with the actual. While people have hypothesized about superluminal particles called tachyons, they have never been observed and probably don't exist.
This speculation is based on an interesting but apparently non-physical class of solutions to the relativistic equations of motion. Although tachyons make for fun science fiction stories, they don't seem to have a place in the real world. If a whistle's tube is relatively narrow, its pitch is determined primarily by its length and by how many of its ends are open to the air. That's because as you blow the whistle, a "standing" sound wave forms inside it—the same sound wave that you hear as it "leaks" out of the whistle. If the whistle is open at both ends, almost half a wavelength of this standing sound wave will fit inside the tube. Since a sound's wavelength times its frequency must equal the speed of sound (331 meters per second or 1086 feet per second), a double-open whistle's pitch is approximately the speed of sound divided by twice its length. For example, a whistle that's 0.85 centimeters long can hold half a wavelength of a sound with a frequency near 19,500 cycles per second—at the upper threshold of hearing for a young person. If the whistle is closed at one end, the air inside it vibrates somewhat differently; only a quarter of a wavelength of the standing sound wave will fit inside the tube. In that case, its pitch is approximately the speed of sound divided by four times its length. However, if you blow a whistle hard enough, you can cause more wavelengths of a standing sound wave to fit inside it. A strongly blown double-open whistle can house any half-integer number of wavelengths (1/2, 1, 3/2, or more), emitting higher pitched tones as it does so. A strongly blown single-open whistle can house any odd quarter-integer number of wavelengths (1/4, 3/4, 5/4, or more). To understand the two bulges, imagine three objects: the earth, a ball of water on the side of the earth nearest the moon, and a ball of water on the side of the earth farthest from the moon. Now picture those three objects orbiting the moon. In orbit, those three objects are falling freely toward the moon but are perpetually missing it because of their enormous sideways speeds. But the ball of water nearest the moon experiences a somewhat stronger moon-gravity than the other objects and it falls faster toward the moon. As a result, this ball of water pulls away from the earth—it bulges outward. Similarly, the ball of water farthest from the moon experiences a somewhat weaker moon-gravity than the other objects and it falls more slowly toward the moon. As a result, the earth and the other ball of water pull away from this outer ball so that this ball bulges outward, away from the earth. It's interesting to note that the earth itself bulges slightly in response to these tidal forces. However, because the earth is more rigid than the water, its bulges are rather small compared to those of the water. Your solution should work nicely—the pulley and weight system should protect your cable from breaking because the weights should maintain a constant tension in the line. As the tree swings back and forth, the weights should rise and fall while the tension in the cord remains almost steady. Obviously, if the rising weights reach the pulley, the cord will pull taut and break, so you must leave enough hanging slack. However, if the tree's motion is too violent, even this weight and pulley system may not save the cable. As long as everything moves slowly, the tension in the cord should be equal to the weight of the weights.
But if the tree moves away from the house very suddenly, then the tension in the cord will increase suddenly because the cord must not only support the weights, it must accelerate them upward as well. Part of the cord's tension acts to overcome the weights' inertia. Just as a sudden yank on a paper towel will rip it free from the roll, so a sudden yank on your cable will rip it free from the weights. If sudden yanks of this type cause trouble for you, you can fix the problem by coupling the cord to the weights via a strong spring. On long timescales, the spring will have no effect on the tension in the cord—it will still be equal to the weight of the weights. But the spring will stretch or contract during sudden yanks on the cord and will prevent the tension in the cord from changing abruptly either up or down. The spring shouldn't be too stiff—the less stiff it is and the more it stretches while supporting the weights, the more effectively it will smooth out changes in tension. As for the weight of the weights, that depends on how much curvature you want in the cable supporting the feeders. The more weight you use, the less the cable will sag but the more stress it will experience. You can determine how much weight you need by pulling on the far end of the cable with your hands and judging how hard you must pull to get a satisfactory amount of sag. You can produce colored flames by adding various metal salts to the burning materials. That's what's done in fireworks. These metal salts decompose when heated so that individual metal atoms are present in the hot flame. Thermal energy in the flame then excites those atoms so that their electrons shift among the allowed orbits or "orbitals" and this shifting can lead to the emission of particles of light or "photons". Since the orbitals themselves vary according to which chemical element is involved, the emitted photons have specific wavelengths and colors that are characteristic of that element. To obtain a wide variety of colors, you'll need a wide variety of metal salts. Sodium salts, including common table salt, will give you yellow light—the same light that's produced by sodium vapor lamps. Potassium salts yield purple, copper and barium salts yield green, strontium salts yield red, and so on. The classic way to produce a colored flame is to dip a platinum wire into a metal salt solution and to hold the wire in the flame. Since platinum is expensive, you can do the same trick with a piece of steel wire. The only problem is that the steel wire will eventually burn up. Because the electrons in an atom move about as waves, they can follow only certain allowed orbits that we call orbitals. This limitation is analogous to the behavior of a violin string—it can only vibrate at certain frequencies. If you try to make a violin string vibrate at the wrong frequency, it won't do it. That's because the string vibrates in a wave-like manner and only certain waves fit properly along the string. Similarly, the electron in an atom "vibrates" in a wave-like manner and only certain waves fit properly around the nucleus. Most of the collisions between an electron and a neon atom are completely elastic—the electron bounces perfectly from the neon atom and retains essentially all of its kinetic energy. But occasionally the electron induces a structural change in the neon atom and transfers some of its energy to the neon atom. In such a case, the electron rebounds weakly and retains only a fraction of its original kinetic energy.
The missing energy is left in the neon atom, which usually releases that energy as light. Some experiments are so sensitive to electromagnetic waves that they must be performed inside "Faraday cages". A Faraday cage is a metal or metal screen box. Its walls conduct electricity and act as mirrors for electromagnetic waves. As long as a wave has a wavelength significantly longer than the largest hole in the walls, that wave will be reflected and will not enter the box. This reflection occurs because the wave's electric field pushes charges inside the metal walls and causes those charges to accelerate. These accelerating charges redirect (absorb and reemit) the wave in a new direction—a mirror reflection. Just as a box made of metal mirrors will keep light out, a box made with metal walls will keep electromagnetic waves out. A properly built and maintained microwave oven leaks so little microwave power that you needn't worry about it. There are also inexpensive leakage testers available that you can use at home for a basic check, or for a more reliable and accurate check—as recommended by both the International Microwave Power Institute (IMPI) and the FDA—you can take your microwave oven to a service shop and have it checked with an FDA certified meter. It's only if you have dropped the oven or injured its door in some way that you might have cause to worry about standing near it. If it were to leak microwaves, their main effect would be to heat your tissue, so you would feel the leakage. CB or citizens band radio refers to some parts of the electromagnetic spectrum that have been set aside for public use. You can operate a CB radio without training and without serious legal constraints, although the power of your transmitted wave is strictly limited. The principal band for CB radio is around 27 MHz and I think that the transmissions use the AM audio encoding scheme. As you talk, the power of your transmission increases and decreases to represent the pressure fluctuations in your voice. The receiving CB radio detects the power fluctuations in the radio wave and moves its speaker accordingly. When you first turn on a typical computer, it must run an initial program that sets up the operating system. This initial program has to run even before the computer is able to interact with its hard drive, so the program must be available at the very instant the computer's power becomes available. Read-only memory is used for this initial bootup operation. Unlike normal random access memory, which is usually "volatile" and loses its stored information when power is removed, read-only memory retains its information without power. When you turn on the computer, this read-only memory provides the instructions the computer uses to begin loading the operating system from the hard drive. Actually, you are asking about a current of electrons, which carry a negative charge. It's true that electrons can't be sent across the p-n junction from the p-type side to the n-type side. There are several things that prevent this reverse flow of electrons. First, there is an accumulation of negative charge on the p-type side of the p-n junction and this negative charge repels any electrons that approach the junction from the p-type end. Second, any electron you add to the p-type material will enter an empty valence level. As it approaches the p-n junction, it will find itself with no empty valence levels in which to travel the last distance to the junction. 
It will end up widening the depletion region—the region of effectively pure semiconductor around the p-n junction; a region that doesn't conduct electricity. A microwave oven that's built properly and not damaged emits so little electromagnetic radiation that the speaker should never notice. The speaker might have some magnetic field leakage outside its cabinet, and that might have some effect on a microwave oven. However, most microwaves have steel cases and the steel will shield the inner workings of the microwave oven from any magnetic fields leaking from the speaker. The two devices should be independent. A phonograph record represents the air pressure fluctuations associated with sound as surface fluctuations in long, spiral groove. This groove is V-shaped, with two walls cut at right angles to one another—hence the "V". Silence, the absence of pressure fluctuations in the air, is represented by a smooth portion of the V groove, while moments of sound are represented by a V-groove with ripples on its two walls. The depths and spacings of the ripples determine the volume and pitch of the sounds and the two walls represent the two stereo channels on which sound is recorded and reproduced. To sense the ripples in the V-groove, a phonograph places a hard stylus in the groove and spins the record. As the stylus rides along the walls of the moving groove, it vibrates back and forth with each ripple in a wall. Two transducers attached to this stylus sense its motions and produce electric currents that are related to those motions. The two most common transduction techniques are electromagnetic (a coil of wire and a magnet move relative to one another as the stylus moves and this causes current to flow through the coil) and piezoelectric (an asymmetric crystal is squeezed or unsqueezed as the stylus moves and this causes charge to be transferred between its surfaces). The transducer current is amplified and used to reproduce the recorded sound. Exactly. When you switch your tape recorder to the record mode, it has a special erase head that becomes active. This erase head deliberately scrambles the magnetic orientations of the tape's magnetic particles. The erase head does this by flipping the magnetizations back and forth very rapidly as the particles pass by the head, so that they are left in unpredictable orientations. There are, however, some inexpensive recorders that use permanent magnets to erase the tapes. This process magnetizes all the magnetic particles in one direction, effectively erasing a tape. Because it leaves the tape highly magnetized, this second technique isn't as good as the first one. It tends to leave some noise on the recorded tape. The basis for Einstein's theory of relativity is the idea that everyone sees light moving at the same speed. In fact, the speed of light is so special that it doesn't really depend on light at all. Even if light didn't exist, the speed of light would still be a universal standard—the fastest possible speed for anything in our universe. Once we recognize that the speed of light is special and that everyone sees light traveling at that speed, our views of space and time have to change. One of the classic "thought experiments" necessitating that change is the flashbulb in the boxcar experiment. Suppose that you are in a railroad boxcar with a flashbulb in its exact center. The flashbulb goes off and its light spreads outward rapidly in all directions. 
Since the bulb is in the center of the boxcar, its light naturally hits the front and back walls of the boxcar at the same instant and everything seems simple. But your boxcar is actually hurtling forward on a track at an enormous speed and your friend is sitting in a station as the train rushes by. She looks into the boxcar through its window and sees the flashbulb go off. She watches light from the flashbulb spread out in all directions but it doesn't hit the front and back walls of the boxcar simultaneously. Because the boxcar is moving forward, the front wall of the boxcar is moving away from the approaching light while the back wall of the boxcar is moving toward that light. Remarkably, light from the flashbulb strikes the back wall of the boxcar first, as seen by your stationary friend. Something is odd here: you see the light strike both walls simultaneously while your stationary friend sees light strike the back wall first. Who is right? The answer, strangely enough, is that you're both right. However, because you are moving at different velocities, the two of you perceive time and space somewhat differently. Because of these differences, you and your friend will not always agree about the distances between points in space or the intervals between moments in time. Most importantly, the two of you will not always agree about the distance or time separating two specific events and, in certain cases, may not even agree about which event happened first! The remainder of the special theory of relativity builds on this groundwork, always treating the speed of light as a fundamental constant of nature. Einstein's famous formula, E=mc2, is an unavoidable consequence of this line of reasoning. Some metals are composed of microscopic permanent magnets, all lumped together. Such metals include iron, nickel, and cobalt. This magnetism is often masked by the fact that the tiny magnets in these metals are randomly oriented and cancel one another on a large scale. But the magnetism is revealed whenever you put one of these magnetic metals in an external magnetic field. The tiny magnets inside these metals then line up with the external field and the metal develops large scale magnetism. However, most metals don't have any internal magnetic order at all and there is nothing to line up with an external field. Metals such as copper and aluminum have no magnetic order in them—they don't have any tiny magnets present. The only way to make aluminum or copper magnetic is to run a current through it. An electric current is itself magnetic—it creates a structure in the space around it that exerts forces on any magnetic poles in that space. The magnetic field around a single straight wire forms loops around the wire—the current's magnetic field would push a magnetic pole near it around in a circle about the wire. But if you wrap the wire up into a coil, the magnetic field takes on a more familiar shape. The current-carrying coil effectively develops a north pole at one end of the coil and a south pole at the other. Which end is north depends on the direction of current flow around the loop. If current flows around the loop in the direction of the fingers of your right hand, then your thumb points to the north pole that develops at one end of the coil. While the full answer to this question is complicated, the most important issues are the strengths and locations of the magnetic poles in each magnet. 
Since each magnet has north poles and south poles of equal strengths, there are always attractive and repulsive forces at work between a pair of magnets—their opposite poles always attract and their like poles always repel. You can make two magnets attract one another by turning them so that their opposite poles are closer together than their like poles (e.g. by turning a north pole toward a south pole). To maximize the attraction between the magnets, opposite magnetic poles should be as near together as possible while like magnetic poles are as far apart as possible. With long bar magnets, you align the magnets head to toe so that you have the north pole of one magnet opposite the south pole of the other magnet and vice versa. But long magnets also tend to have weaker poles than short stubby magnets because it takes energy to separate a magnet's north pole from its south pole. With short stubby magnets, the best you can do is to bring the north pole of one magnet close to the south pole of the other magnet while leaving their other poles pointing away from one another. Horseshoe magnets combine some of the best of both magnets—they can have the strong poles of short stubby magnets with more distance separating those poles. Returning to the paper question, size is less important than pole strength and separation. The stronger the magnets and the farther apart their poles, the more paper you can hold between them. The sound you hear may be related to the vortices that swirl behind a plane's wingtips as it moves through the air. These vortices form as a consequence of the wing's lift-generating processes. Because the air pressure above a wing is lower than the air pressure below the wing, air is sucked around the wingtip and creates a swirling vortex. The two vortices, one at each wingtip, trail behind the plane for miles and gradually descend. You may be hearing them reach the ground after the airplane has passed low over your home. If someone reading this has another explanation, please let me know. Those dispensers measure the volume of liquid they dispense and shut off when they've delivered enough liquid to fill the cup. They don't monitor where that liquid is going, so if you put the wrong sized cup below them or press the button twice, you're in trouble. The number of "basic forces" has changed over the years, increasing as new forces are discovered and decreasing as seemingly separate forces are joined together under a more sophisticated umbrella. A good example of this evolution of understanding is electromagnetism—electric and magnetic forces were once thought separate but gradually became unified, particularly as our understanding of time and space improved. More recently, weak interactions have joined electromagnetic interactions to become electroweak interactions. In all likelihood, strong and gravitational interactions will eventually join electroweak to give us one grand system of interactions between objects in our universe. But regardless of counting scheme, I can still answer your question about how the four basic forces differ. Gravitational forces are attractive interactions between concentrations of mass/energy. Everything with mass/energy attracts everything else with mass/energy. Because this gravitational attraction is exceedingly weak, we only notice it when there are huge objects around to enhance its effects. Electromagnetic forces are strong interactions between objects carrying electric charge or magnetic pole. 
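To put a number on just how lopsided that comparison is, here is a minimal Python sketch comparing the electric and gravitational forces between two protons; the constants are standard textbook values and the separation cancels out of the ratio.

G = 6.674e-11              # gravitational constant, m³/(kg·s²)
k = 8.988e9                # Coulomb constant, N·m²/C²
proton_mass = 1.673e-27    # kg
proton_charge = 1.602e-19  # coulombs

# The r² in each force law cancels, so the ratio is the same at any separation.
ratio = (k * proton_charge**2) / (G * proton_mass**2)
print(f"Electric force / gravitational force between two protons ≈ {ratio:.1e}")

The ratio comes out near 10³⁶, which is why gravity only matters when planet-sized amounts of matter are involved.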
While most of these interactions can be characterized as attractive or repulsive, that's something of an oversimplification whenever motion is involved. Weak interactions are too complicated to call "forces" because they almost always do more than simply pull two objects together or push them apart. Weak interactions often change the very natures of the particles that experience them. But the weak interactions are rare because they involve the exchange of exotic particles that are difficult to form and live for exceedingly short times. Weak interactions are responsible for much of natural radioactivity. Strong forces are also very complicated, primarily because the particles that convey the strong force themselves experience the strong force. Strong forces are what hold quarks together to form familiar particles like protons and neutrons. The effects you are referring to are extremely subtle, so no one will ever notice them in an astronaut. But with ultraprecise clocks, it's not hard to see strange effects altering the passage of time in space. There are actually two competing effects that alter the passage of time on a spaceship—one that slows the passage of time as a consequence of special relativity and the other that speeds the passage of time as a consequence of general relativity. The time slowing effect is acceleration—a person or clock that takes a fast trip around the earth and then returns to the starting point will experience slightly less time than a person or clock that remained at the starting point. This effect is a consequence of acceleration and the changing relationships between space and time that come with different velocities. The time speeding effect is gravitational redshift—a person or clock that is farther from the earth's center experiences slightly more time than a person or clock that remains at the earth's surface. This effect is a consequence of the decreased potential energy that comes with being deeper in the earth's gravitational potential well. When an astronaut is orbiting the earth, he isn't really weightless. The earth's gravity is still pulling him toward the center of the earth and his weight is almost as large as it would be on the earth's surface. What makes him feel weightless is the fact that he is in free fall all the time! He is falling just as he would be if he had jumped off a diving board or a cliff. If it weren't for the astronaut's enormous sideways velocity, he would plunge toward the earth faster and faster and soon crash into the earth's surface. But his sideways velocity carries him past the horizon so fast that he keeps missing the earth as he falls. Instead of crashing into the earth, he orbits it. During his orbit, the astronaut feels weightless because all of his "pieces" are falling together. Those pieces don't need to push on one another to keep their relative positions as they fall, so he feels none of the internal forces that he interprets as weight when he stands on the ground. A falling astronaut can't feel his weight. To prepare for this weightless feeling, the astronaut needs to fall. Jumping off a diving board or riding a roller coaster will help, but the classic training technique is a ride on the "Vomit Comet"—an airplane that follows a parabolic arc through the air that allows everything inside it to fall freely. The airplane's arc is just that of a freely falling object and everything inside it floats around in free fall, too—including the astronaut trainee. The plane starts the arc heading upward. 
It slows its rise until it reaches a peak height and then continues arcing downward faster and faster. The whole trip lasts at most 20 seconds, during which everyone inside the plane feels weightless. Europe uses alternating current, just as we do; however, some of the characteristics of that current are slightly different. First, Europe uses 50 cycle-per-second current, meaning that current there reverses directions 100 times per second. That's somewhat slower than in the U.S., where current reverses 120 times per second (60 full cycles of reversal each second or 60 Hz). Second, their standard voltage is 230 volts, rather than the 120 volts used in the U.S. While some of our appliances won't work in Europe because of the change in cycles-per-second, the biggest problem is with the increase in voltage. The charges entering a U.S. appliance in Europe carry about twice the energy per charge (i.e. twice the voltage) and this increased "pressure" causes about twice the number of charges per second (i.e. twice the current) to flow through the appliance. With twice the current flowing through the appliance and twice as much voltage being lost by this current as it flows through the appliance, the appliance is receiving about four times its intended power. It will probably burn up. They contain highly purified and refined chemicals and are actually marvels of engineering. It's more surprising to me that they are so cheap, given how complicated they are to make. It's useful to describe moving electric charges as a current and for that current to flow in the direction that the charges are moving. Suppose that we define current as flowing in the direction that electrons take and look at the result of letting this current of electrons flow into a charge storage device. We would find that as this current flowed into the storage device, the amount of charge (i.e., positive charge) in that device would decrease! How awkward! You're "pouring" something into a container and the contents of that container are decreasing! So we define current as pointing in the direction of positive charge movement or in the direction opposite negative charge movement. That way, as current flows into a storage device, the charge in that device increases! The bulb in a flashlight doesn't care which way current flows through it. The metal has no asymmetry that would treat left-moving charges differently from right-moving charges. That's not true of the transistors in a walkman or gameboy. They contain specialized pieces of semiconductor that will only allow positive charges to move in one direction, not the other. When you put the batteries in backward and try to propel current backward through its parts, the current won't flow and nothing happens. Your body is similar to salt water and is thus a reasonably good conductor of electricity. Once current penetrates your skin (which is insulating), it flows easily through you. At high currents, this electricity can deposit enough energy in you to cause heating and thermal damage. But at lower currents, it can interfere with normal electrochemical and neural processes so that your muscles and nerves don't work right. It takes about 0.030 amperes of current to cause serious problems for your heart, so currents of that size can be fatal. The battery stops separating charges once enough have accumulated on its terminals. If the flashlight is off, so that charges build up, then the battery soon stops separating charge and the light bulb doesn't light.
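Returning to the earlier question about plugging a 120-volt appliance into 230-volt European power, here is a rough Python sketch of why the delivered power roughly quadruples. It treats the appliance as a simple fixed resistance, which is an approximation (heating elements change resistance with temperature, and motors aren't resistors at all), and the 600-watt rating is just an assumed example.

rated_voltage = 120.0   # volts, U.S. design voltage
rated_power = 600.0     # watts, assumed rating for this example
resistance = rated_voltage**2 / rated_power    # ohms, from P = V²/R

for mains_voltage in (120.0, 230.0):
    current = mains_voltage / resistance       # amperes, from Ohm's law
    power = mains_voltage * current            # watts actually delivered
    print(f"On {mains_voltage:.0f} V mains: {current:.1f} A, {power:.0f} W")

At 230 volts the appliance draws nearly twice its intended current and dissipates nearly four times its intended power, which is why it tends to burn out.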
You can recharge any battery by pushing charge through it backward (pushing positive charge from its positive terminal to its negative terminal). However, some batteries don't accept this reverse charge well and may overheat. The ones that recharge most effectively are those that can rebuild their chemical structures most effectively as they operate backward. If you are asking why the earth itself doesn't get pulled up toward a large magnet or electromagnet that I'm holding in my hand, the answer is that the magnetic forces just aren't strong enough to pull the magnet and earth together. I'm holding the two apart with other forces and preventing them from pulling together. The forces between poles diminish with distance. Those forces are proportional to the inverse square of the distance between poles, so they fall off very quickly as the poles move apart. Moreover, each north pole is connected to a south pole on the same magnet, so the attraction between opposite poles on two separate magnets is mitigated by the repulsions of the other poles on those same magnets. As a result, the forces between two bar magnets fall off even faster than the simple inverse square law predicts. It would take an incredible magnet, something like a spinning neutron star, to exert magnetic forces strong enough to damage the earth. But then a neutron star would exert gravitational forces that would damage the earth, too, so you'd hardly notice the magnetic effects. The earth is a huge magnet and it is made out of metal. The earth's core is mostly iron and nickel, both of which can be magnetic metals. However, the earth's magnetism doesn't appear to come from the metal itself. Current theories attribute the earth's magnetism to movements in and around the core. There are either electric currents associated with this movement or some effects that orient the local magnetization of the metal. I don't think that there is any general consensus on the matter. If the ball was pitched straight and true, the same way every pitch, good batters could hit every one. There is enough time in the wind-up and pitch for the batter to determine where and when to swing and to hit the ball just right. But the pitches vary and the balls curve. That limits the batter's ability to predict where the ball is going. There aren't any physical laws that limit a batter's ability to hit every ball well, but there are physiological and mental limits that lower everyone's batting average. Actually, if you drive fast over a real speed bump, it's not good for your wheels and suspension. The springs in your car do protect the car from some of the effects of the bump, but not all of them. However, imagine driving over a speed bump on a traditional bicycle—one that has no spring suspension. The faster you drive over that bump, the more it will throw you into the air. First, magnets don't involve charges; they involve poles. So the question should probably be "are all metals magnetically poled?" The answer to this question is that they are never poled—they never have a net pole. They always have an even balance of north and south pole. However, there are some metals that have their north and south poles separated from one another. A magnetized piece of steel is that way. Only a few metals can support such separated poles and we will study those metals in a few weeks. No. Blue light causes the photoconductor to conduct. When you use white light in a xerographic copier, it's the blue and green portions of the light that usually do the copying. The red is wasted.
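The reason blue succeeds where red fails comes down to photon energy, which is Planck's constant times the light's frequency. Here is a minimal Python sketch comparing photon energies at typical red, green, and blue wavelengths; the "typical" wavelengths are assumed round values.

h = 6.626e-34    # Planck's constant, J·s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electron volt

# Photon energy E = h·c / wavelength, reported in electron volts:
for color, wavelength_nm in (("red", 650), ("green", 530), ("blue", 450)):
    energy = h * c / (wavelength_nm * 1e-9) / eV
    print(f"{color:>5} ({wavelength_nm} nm): {energy:.2f} eV per photon")

A photoconductor or photographic grain whose threshold sits a bit above the red photon's energy (around 2 electron volts) will respond to blue and green light while ignoring red, which is exactly the behavior described above.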
There don't appear to be any isolated poles in our universe, or at least none have been found. That's just the way it is. As a result of this situation, the only way to create magnetism is through its relationship with electricity. When you use electricity to create magnetic fields, you effectively create equal pairs of poles—as much north pole as south pole. Yes. The light-sensitive particles in black-and-white photographic paper don't respond to red light because a photon of red light doesn't have enough energy to cause the required chemical change. In effect, electrons are being asked to shift between levels when the light hits them and red light can't make that happen in the photographic paper. However, most modern black-and-white films are sensitive to red light because that makes roses and other red objects appear less dark and more realistic in the photographs. They assemble four colors—yellow, cyan, magenta, and black—to form the final image. The photoconductor creates charge images using blue, red, green, and white illumination successively and uses those images to form patterns of yellow, cyan, magenta, and black toner particles. These particles are then superimposed to form the final image, which appears full color. Naturally, the photoconductor used in such a complicated machine must be sensitive to the whole visible spectrum of light. As one of my readers (Tom O.) points out, most modern color copiers are essentially scanners plus color printers. They use infrared lasers to write the images optically onto four light-sensitive drums, one drum for each of the four colors (some systems reuse the same drum four times). Yes. Particles of light, photons, cause chemical changes in the film. You can work with some black-and-white films in red light because red light photons don't have enough energy to cause changes in those films. However, color film and most modern black-and-white films require complete darkness during processing. If you expose them to any visible light, you'll cause chemistry to occur. They are generally more conducive. Black light is actually ultraviolet light and its photons carry more energy than any visible photon. They can cause chemical changes in many materials, including skin. I don't know. That question has puzzled me for years. The mixture should find its molecules clinging together. They must contain something that keeps the oppositely charged systems separate from one another so that they don't aggregate. In a metal, electrons can easily shift from one level to another empty level because the levels are close together in energy. In a full insulator, it's very difficult for the electrons to shift from one level to an empty level because all of the empty levels are far above the filled levels in energy. In a photoconductor, the empty levels are modestly above the filled levels in energy, so a modest amount of energy is all that's needed to shift an electron. This energy can be supplied by a particle or "photon" of light. An illuminated photoconductor conducts electricity. The simplest way to make these fields is with electric charges (for an electric field) or with magnets (for a magnetic field). Charges are naturally surrounded by electric fields and magnets are naturally surrounded by magnetic fields. But fields themselves can create other fields by changing with time.
That's how the fields in a light wave work—the electric field in the light wave changes with time and creates the magnetic field and the magnetic field changes with time and creates the electric field. This team of fields can travel through space without any charge or magnets nearby. If you put a conditioner on your hair, it will attract enough moisture to allow static charge to dissipate. They leave a layer of conditioning soap on the clothes and this soap attracts moisture. The moisture conducts electricity just enough to allow static charge to dissipate. No, an MRI uses a very different technique for imaging your body. A copier uses light to examine the original document while an MRI machine uses the magnetic responses of hydrogen atoms to map your body. While charges can move freely through a metal, allowing the metal to carry electric current, it's much harder for charges to travel outside of a conductor. Charges can move through the air or through plastic or glass, but not very easily. It takes energy to pull the charges out of a metal and allow them to move through a non-metal. Most of the time, this energy requirement prevents charges from moving through insulators such as plastic, glass, air, and even empty space. It is possible to simply pull up your legs. When you do that, you reduce the downward force your feet exert on the ground and the ground responds by pushing upward on your feet less strongly. With less upward force to support you, you begin to fall. When you complete a circuit by plugging an appliance into an electrical outlet, current flows out one wire to the appliance and returns to the electric company through the other wire. With alternating current, the roles of the two wires reverse rapidly, so that at one moment current flows out the black wire to the appliance and moments later current flows out the white wire to the appliance. But the power company drives this current through the wires by treating the black wire specially—it alternately raises and lowers the electrostatic potential or voltage of the black wire while leaving the voltage of the white wire unchanged with respect to ground. When the voltage of the black wire is high, current is pushed through the black wire toward the appliance and returns through the white wire. When the voltage of the black wire is low, current is pulled through the black wire from the appliance and is replaced by current flowing out through the white wire. The white wire is rather passive in this process because its voltage is always essentially zero. It never has a net charge on it. But the black wire is alternately positively charged and then negatively charged. That's what makes its voltage rise and fall. Since the black wire is capable of pushing or pulling charge from the ground instead of from the white wire, you don't want to touch the black wire while you're grounded. You'll get a shock. Heat is thermal energy that is flowing from one object to another. While several centuries ago, people thought heat was a fluid, which they named "caloric," we now know that it is simply energy that is being transferred. Heat moves via several mechanisms, including conduction, convection, and radiation. Conduction is the easiest to visualize—the more rapidly jittering atoms and molecules in a hotter object will transfer some of their energy to the more slowly jittering atoms in molecules in a colder object when you touch the two objects together. Even though no atoms or molecules are exchanged, their energy is. 
In convection, moving fluid carries thermal energy along with it from one object to another. In this case, there is material exchanged, although usually only temporarily. In radiation, the atoms and molecules exchange energy by sending thermal radiation back and forth. Thermal radiation is electromagnetic waves and includes infrared light. A hotter object sends more infrared light toward a colder object than vice versa, so the hotter object gives up thermal energy to the colder object. Yes, but only if some of the poles are weaker than others so that when you sum up the total north pole strength and the total south pole strength, those two sums are equal. For example, you can make a magnet that has two north poles and one south pole if the north poles are each half as strong as the south pole. All magnets that we know of have exactly equal amounts of north and south pole. That's because we have never observed a pure north or a pure south pole in nature and you'd need such a pure north or south pole to unbalance the poles of a magnet. The absence of such "monopoles" is an interesting puzzle and scientists haven't given up hope of finding them. Some theories predict that they should exist but that they would be very difficult to form artificially. There may be magnetic monopoles left over from the big bang, but we haven't found any yet. Not exactly. Sliding friction refers to the situation in which two surfaces slide across one another while touching. In hydroplaning, the two surfaces are sliding across one another, but they aren't touching. Instead, they're separated by a thin layer of trapped water. While hydroplaning still converts mechanical energy into thermal energy, just as sliding friction does, the lubricating effect of the water dramatically reduces the energy conversion. That's why you can hydroplane for such a long distance on the highway; there is almost no slowing force at all. Dan Barker, one of my readers, informed me of a NASA study showing that there is a minimum speed at which a tire will begin to hydroplane and that that speed depends on the square root of the tire pressure. Higher tire pressure tends to expel the water layer and prevent hydroplaning, while lower tire pressure allows the water layer to remain in place when the vehicle is traveling fast enough. As Dan notes, a large truck tire is typically inflated to 100 PSI and resists hydroplaning at speeds of up to about 100 mph. But a passenger car tire has a much lower pressure of about 32 PSI and can hydroplane at speeds somewhat under 60 mph. That's why you have to be careful driving on waterlogged pavement at highway speeds and why highway builders carefully slope their surfaces to shed rain water quickly. Ideally, it doesn't matter how many stairs you take with each step—the work you do in lifting yourself up a staircase depends only on your starting height and your ending height (assuming that you don't accelerate or decelerate in the overall process and thus change your kinetic energy, too). But there are inefficiencies in your walking process that lead you to waste energy as heat in your own body. So the energy you convert from food energy to gravitational potential energy in climbing the stairs is fixed, but the energy you use in carrying out this procedure depends on how you do it. The extra energy you use mostly ends up as thermal energy, but some may end up as sound or chemical changes in the staircase, etc.
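To put a rough number on that fixed amount of energy, here is a minimal Python sketch of the gravitational potential energy gained in climbing a staircase. The 70 kilogram mass and 3 meter rise are invented example values, not figures from the question.

    # Gravitational potential energy gained in climbing a staircase.
    # The result depends only on mass, gravity, and total height gained,
    # not on how many stairs are taken with each step.
    mass = 70.0      # kilograms (example value)
    g = 9.8          # meters per second squared, acceleration due to gravity
    height = 3.0     # meters of total rise (example value)

    energy = mass * g * height   # work stored as gravitational potential energy
    print(f"Energy gained climbing the stairs: about {energy:.0f} joules")

Your body burns considerably more food energy than this because of the inefficiencies described above, with the excess released mostly as heat.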
Actually, some bearings are dry (no grease or oil) and still last a very long time. The problem is that the ideal touch-and-release behavior is hard to achieve in a bearing. The balls or rollers actually slip a tiny bit as they rotate and they may rub against the sides or retainers in the bearing. This rubbing produces wear as well as wasting energy. To reduce this wear and sliding friction, most bearings are lubricated. If you brake your car too rapidly, the force needed from static friction between the wheels and the ground will exceed its limit and the wheels will begin to skid across the ground. Once skidding occurs, the stopping force becomes sliding friction instead of static friction. The sliding friction force is generally weaker than the maximum static friction force, so the stopping rate drops. But more importantly, you lose steering when the wheels skid. An anti-lock braking system senses when the wheels suddenly stop turning during braking and briefly releases the brakes. The wheel can then turn again and static friction can reappear between the wheel and the ground. When a ball bounces, some of its molecules slide across one another rather than simply stretching or bending. This sliding leads to a form of internal sliding friction and sliding friction converts useful energy into thermal energy. The more sliding friction that occurs within the ball, the less energy the ball stores for the rebound and the worse the ball's bounce. The missing energy becomes thermal energy in the ball and the ball's temperature increases. Actually, both a mouse ball and a bowling ball will bounce somewhat if you drop them on a suitably hard surface. It does have to do with elasticity. During the impact, the ball's surface dents and the force that dents the ball does work on the ball—the force on the ball's surface is inward and the ball's surface moves inward. Energy is thus being invested in the ball's surface. What the ball does with this energy depends on the ball. If the ball is an egg, the denting shatters the egg and the energy is wasted in the process of scrambling the egg's innards. But in virtually any normal ball, some or most of the work done on the ball's surface is stored in the elastic forces within the ball—this elastic potential energy, like all potential energies, is stored in forces. This stored energy allows the surface to undent and do work on other things in the process. During the rebound, the ball's surface undents. Although it's a little tricky to follow the exact flow of energy during the rebound, the elastic potential energy in the dented ball becomes kinetic energy in the rebounding ball. But even the best balls waste some of the energy involved in denting their surfaces. That's why balls never bounce perfectly and never return to their original heights when dropped on a hard, stationary surface. Some balls are better than others at storing and returning this energy, so they bounce better than others. Yes, when a falling object hits a table, the table pushes up on the falling object. What happens from then on depends on the object's characteristics. The egg shatters as the table pushes on it and the ball bounces back upward. Each time the ball bounces, it rises to a height that is a certain fraction of its height before that bounce. The ratio of these two heights is the fraction of the ball's energy that is stored and returned during the bounce. A very elastic ball will return about 90% of its energy after a bounce, returning to 90% of its original height after a bounce.
A relatively non-elastic ball may only return about 20% of its energy and bounce to only 20% of its original height. It is this energy efficiency that determines how many times a ball bounces. The missing energy is usually converted into thermal energy within the ball's internal structure. While we ordinarily associate energy with an object's overall movement or position or shape, the individual atoms and molecules within the object can also have their own separate portions of energy. Thermal energy is the energy associated with the motions and positions of the individual atoms within the object. While an object may be sitting still, its atoms and molecules are always jittering about, so they have kinetic energies. When they push against one another during a bounce, they also have potential energies. These internal energies, while hard to see, are thermal energy. You are merging two equations out of context. The force you exert on an object can be non-zero without causing that object to accelerate. For example, if someone else is pushing back on the object, the object may not accelerate. If the object moves away from you as you push on it, then you'll be doing work on the object even though it's not accelerating. The only context in which you can merge those two equations (Force=mass x acceleration and Work=Force x distance) is when you are exerting the only force on the object. In that case, your force is the one that determines the object's acceleration and your force is the one involved in doing work. In that special case, if the object doesn't accelerate, then you do no work because you exert no force on the object! If someone else is pushing the object, then the force causing it to accelerate is the net force and not just your force on the object. As you can see, there are many forces around and you have to be careful tacking formulae together without thinking carefully about the context in which they exist. Different forces acting on a single object are not official pairs; not the pairs associated with Newton's third law of action-reaction. While it is possible for an object to experience two different forces that happen to be exactly equal in magnitude (amount) but opposite in direction, that doesn't have to be the case. When an egg falls and hits a table, the egg's downward weight and the table's upward support force on the egg are equal in magnitude only for a fleeting instant during the collision. That's because the table's support force starts at zero while the egg is falling and then increases rapidly as the egg begins to push against the table's surface. For just an instant the table pushes upward on the egg with a force equal in magnitude to the egg's weight. But the upward support force continues to increase in strength and eventually pushes a hole in the egg's bottom. The enormous upward force on the egg when it hits the table does cause the egg to accelerate upward briefly. The egg loses all of its downward velocity during this upward acceleration. But the egg breaks before it has a chance to acquire any upward velocity and, having broken, it wastes all of its energy ripping itself apart into a mess. If the egg had survived the impact and stored its energy, it probably would have bounced, at least a little. But the upward force from the table diminished abruptly when the egg broke and the egg never began to head upward for a real bounce. When an egg is sitting on a table, each object is exerting a support force on the other object. 
Those two support forces are equal in magnitude (amount) but opposite in direction. To be specific, the table is pushing upward on the egg with a support force and the egg is pushing downward on the table with a support force. Both forces have the same magnitude—both are equal in magnitude to the egg's weight. The fact that the egg is pushing downward on the table with a "support" force shows that not all support forces actually "support" the object they are exerted on. The egg isn't supporting the table at all. But a name is a name and on many occasions, support forces do support the objects they're exerted on. I'm afraid that spoon bending is simply a hoax. While there are electrochemical processes going on in the mind that exert detectable forces on special probes located outside the head, these forces are so small that they are incapable of doing anything as demanding as bending a spoon. Spoon bending and all other forms of telekinesis are simply tricks played on gullible audiences. The fact that more massive objects also weigh more is just an observation of how the universe works. However, any other behavior would lead to some weird consequences. Suppose, for example, that an object's weight didn't depend on its mass, that all objects had the same weight. Then two separate balls would each weigh this standard amount. But now suppose that you glued the two balls together. If you think of them as two separate balls that are now attached, they should weigh twice the standard amount. But if you think of them as one oddly shaped object, they should weigh just the standard amount. Something wouldn't be right. So the fact that weight is proportional to mass is a sensible situation and also the way the universe actually works. A ball bounces because its surface is elastic and it stores energy during the brief period of collision when the ball and floor are pushing very hard against one another. Much of this stored energy is released in a rebound that tosses the ball back upward for another bounce. But people don't store energy well during a collision and they don't rebound much. The energy that we would otherwise store is instead converted into thermal energy—we get hot rather than bouncing back upward. The force that gravity exerts on an object is that object's weight. An object that has more gravity pulling on it weighs more and vice versa. While you are throwing the ball upward, you are pushing it upward and there is an upward force on the ball. But as soon as the ball leaves your hand, that upward force vanishes and the ball travels upward due to its inertia alone. In the discussion of that upward flight, I always said "after the ball leaves your hand," to exclude the time when you are pushing upward on the ball. Starting and stopping demonstrations are often tricky and I meant you to pay attention only to the period when the ball was in free fall. The fact that both balls fall together is the result of a remarkable balancing effect. Although the larger ball is more massive than the smaller ball, making the larger ball harder to start or stop, the larger ball is also heavier than the smaller ball, meaning that gravity pulls downward more on the larger ball. The larger ball's greater weight exactly compensates for its greater mass, so that it is able to keep up with the smaller ball as the two objects fall to the ground. In the absence of air resistance, the two balls will move exactly together; the larger ball with its greater mass and greater weight will keep up with the smaller ball.
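Here is a minimal sketch of that balancing effect, using two made-up masses. The downward force (weight) grows in proportion to mass, but the acceleration, which is weight divided by mass, comes out the same for both balls.

    # Weight grows with mass, but acceleration = weight / mass is the same for all masses.
    g = 9.8  # meters per second squared
    for mass in (0.1, 10.0):            # a light ball and a heavy ball (example values)
        weight = mass * g               # newtons of downward force
        acceleration = weight / mass    # meters per second squared
        print(f"mass {mass:5.1f} kg: weight {weight:6.1f} N, acceleration {acceleration:.1f} m/s^2")

Both lines report the same 9.8 m/s^2 downward acceleration, which is why the two balls keep pace with one another.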
It's very important to distinguish velocity from acceleration. Acceleration is caused only by forces, so while a ball is falling freely it is accelerating according to gravity alone. In that case it accelerates downward at 9.8 m/s2 throughout its fall (neglecting air resistance). But while the ball's acceleration is constant, its velocity isn't. Instead, the ball's velocity gradually increases in the downward direction, which is to say that the ball accelerates in the downward direction. Velocity doesn't "act"—only forces "act." Instead, a ball's velocity shifts more and more toward the downward direction as it falls. About terminal velocity: when an object descends very rapidly through the air, it experiences a large upward force of air resistance. This new upward force becomes stronger as the downward speed of the object becomes greater. Eventually this upward air resistance force balances the object's downward weight and the object stops accelerating downward. It then descends at a constant velocity—obeying its inertia alone. This special downward speed is known as "terminal velocity." An object's terminal velocity depends on the strength of gravity, the shape and other characteristics of the object, and the density and other characteristics of the air. Inertia is everywhere. Left to itself, an object will obey inertia and travel at constant velocity. In deep space, far from any planet or star that exerts significant gravity, an object will exhibit this inertial motion. But on earth, the earth's gravity introduces complications that make it harder to observe inertial motion. A ball that's thrown up in the air still exhibits inertial effects, but its downward weight prevents the ball from following its inertia alone. Instead, the ball gradually loses its upward speed and eventually begins to descend instead. So inertia is the basic underlying principle of motion while gravity is a complicating factor. When you stand on the floor, the floor exerts two different kinds of forces on you—an upward support force that balances your downward weight and horizontal frictional forces that prevent you from sliding across the floor. Ultimately, both forces involve electromagnetic forces between the charged particles in the floor and the charged particles in your feet. The support force develops as the atoms in the floor act to prevent the atoms in your feet from overlapping with them. The frictional forces have a similar origin, although they involve microscopic structure in the surfaces. Since light carries energy with it, a cloth that absorbs light also absorbs energy. In most cases, this absorbed energy becomes thermal energy in the cloth. Because of this extra thermal energy, the cloth's temperature rises and it begins to transfer the thermal energy to its surroundings as heat. Its temperature stops rising when the thermal energy it receives from the light is exactly equal to the thermal energy it transfers to its surroundings as heat. This final temperature depends on how much light it absorbs—if it absorbs lots of light, then it will reach a high temperature before the balance of energy flow sets in. A cloth's color is determined by how it absorbs and emits light. Black cloth absorbs essentially all light that hits it, which is why its temperature rises so much. White cloth absorbs virtually no light, which is why it remains cool. Colored cloths fall somewhere in between black and white. Blue cloth absorbs light in the green and red portions of the spectrum while reflecting the blue portion. 
Red cloth absorbs light in the blue and green portions of the spectrum while reflecting the red portion. Since most light sources put more energy in the red portion of the spectrum than in the blue portion of the spectrum, the blue cloth absorbs more energy than the red cloth. So the sequence of temperatures you observed is the one you should expect to observe. One final note: most light sources also emit invisible infrared light, which also carries energy. Most of the light from an incandescent lamp is infrared. You can't tell by looking at a piece of cloth how much infrared light it absorbs and how much it reflects. Nonetheless, infrared light affects the cloth's temperature. A piece of white cloth that absorbs infrared light may become surprisingly hot and a piece of black cloth that reflects infrared light may not become as hot as you would expect. A roller coaster is a gravity-powered train. Since it has no engine or other means of propulsion, it relies on energy stored in the force of gravity to make it move. This energy, known as "gravitational potential energy," exists because separating the roller coaster from the earth requires work—they have to be pulled apart to separate them. Since energy is a conserved quantity, meaning that it can't be created or destroyed, energy invested in the roller coaster by pulling it away from the earth doesn't disappear. It becomes stored energy: gravitational potential energy. The higher the roller coaster is above the earth's surface, the more gravitational potential energy it has. Since the top of the first hill is the highest point on the track, it's also the point at which the roller coaster's gravitational potential energy is greatest. Moreover, as the roller coaster passes over the top of the first hill, its total energy is greatest. Most of that total energy is gravitational potential energy but a small amount is kinetic energy, the energy of motion. From that point on, the roller coaster does two things with its energy. First, it begins to transform that energy from one form to another—from gravitational potential energy to kinetic energy and from kinetic energy to gravitational potential energy, back and forth. Second, it begins to transfer some of its energy to its environment, mostly in the form of heat and sound. Each time the roller coaster goes downhill, its gravitational potential energy decreases and its kinetic energy increases. Each time the roller coaster goes uphill, its kinetic energy decreases and its gravitational potential energy increases. But each transfer of energy isn't complete because some of the energy is lost to heat and sound. Because of this lost energy, the roller coaster can't return to its original height after coasting downhill. That's why each successive hill must be lower than the previous hill. Eventually the roller coaster has lost so much of its original total energy that the ride must end. With so little total energy left, the roller coaster can't have much gravitational potential energy and must be much lower than the top of the first hill. It's then time for the riders to get off, new riders to board, and for a motor-driven chain to drag the roller coaster back to the top of the hill to start the process again. The chain does work on the roller coaster, investing energy into it so that it can carry its riders along the track at break-neck speed again. Overall, energy enters the roller coaster by way of the chain and leaves the roller coaster as heat and sound. In the interim, it goes back and forth between gravitational potential energy and kinetic energy as the roller coaster goes up and down the hills.
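Here is a minimal sketch of that back-and-forth exchange, assuming a frictionless stretch of track so that gravitational potential energy plus kinetic energy stays fixed. The 5000 kg train mass and the 60 m first hill are invented example numbers, not specifications of any real coaster.

    # Ideal (frictionless) exchange between gravitational potential energy and kinetic energy.
    import math

    mass = 5000.0       # kilograms, example roller coaster train
    g = 9.8             # meters per second squared
    top_height = 60.0   # meters, top of the first hill (example), starting essentially at rest

    for height in (60.0, 30.0, 0.0):            # three positions along the track
        pe = mass * g * height                  # gravitational potential energy
        ke = mass * g * (top_height - height)   # the potential energy given up becomes kinetic energy
        speed = math.sqrt(2 * g * (top_height - height))
        print(f"height {height:4.0f} m: PE {pe/1000:6.0f} kJ, KE {ke/1000:6.0f} kJ, speed {speed:4.1f} m/s")

A real coaster reaches the bottom a little slower than this sketch predicts because some energy leaks away as heat and sound along the way.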
Bouncing is related to elasticity. Any object that stores energy when deformed will rebound when it collides with a rigid surface. As long as the object is elastic, it doesn't matter whether it's hard or soft. It will still rebound from a rigid surface. Thus both a rubber ball and a steel marble will rebound strongly when you drop them on a steel anvil. But hardness does have an important effect on bouncing from a non-rigid surface. When a hard object collides with a non-rigid surface, the surface does some or all of the deforming so that the surface becomes involved in the energy storage and bounce. If the surface is elastic, storing energy well when it deforms, then it will make the object rebound strongly. That's what happens when a steel marble collides with a rubber block. However, if the surface isn't very elastic, then the object will not rebound much. That's what happens when a steel marble collides with a thick woolen carpet. A dead ball, a ball that doesn't bounce, is one with enormous internal friction. A bouncy ball stores energy when it collides with a surface and then returns this energy when it rebounds. But no ball is perfectly elastic, so some of the collision energy extracted from the ball and surface when they collide is ultimately converted into heat rather than being returned during the rebound. The deader the ball is, the less of the collision energy is returned as rebound energy. A truly dead ball converts all of the collision energy into heat so that it doesn't rebound at all. Most of the missing collision energy is lost because of sliding friction within the ball. Molecules move across one another as the ball's surface dents inward and these molecules rub. This rubbing produces heat and diminishes the elastic potential energy stored in the ball. When the ball subsequently undents, there just isn't as much stored energy available for a strong rebound. The classic dead "ball" is a beanbag. When you throw a beanbag at a wall, it doesn't rebound because all of its energy is lost through sliding friction between the beans as the beanbag dents. To track someone in a forest, he must be emitting or reflecting something toward you and doing it in a way that is different from his surroundings. For example, if he is talking in a quiet forest, you can track him by his sound emissions. Or if he is exposed to sunlight in green surroundings, you can track him by his reflections of light. But while both of these techniques work fine at short distances, they aren't so good at large distances in a dense forest. A better scheme is to look for his thermal radiation. All objects emit thermal radiation to some extent and the spectral character of this thermal radiation depends principally on the temperatures of the objects. If the person is hotter than his surroundings, as is almost always the case, he will emit a different spectrum of thermal radiation than his surroundings. Light sensors that operate in the deep infrared can detect a person's thermal radiation and distinguish it from that of his cooler surroundings. Still, viewing that thermal radiation requires a direct line-of-sight from the person to the infrared sensor, so if the forest is too dense, the person is untrackable.
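As a rough illustration of why those sensors work in the deep infrared, here is a sketch that uses Wien's displacement law to estimate where the thermal radiation from warm skin and from cooler forest surroundings peaks. The temperatures below are typical values I've assumed for the example, not measurements.

    # Wien's displacement law: the peak wavelength of thermal radiation is b / T.
    b = 2.898e-3   # meters times kelvin, Wien's displacement constant
    for label, temperature_kelvin in (("skin, about 33 C", 306.0), ("forest, about 15 C", 288.0)):
        peak_wavelength = b / temperature_kelvin               # meters
        print(f"{label}: thermal radiation peaks near {peak_wavelength*1e6:.1f} micrometers")

Both peaks sit near 10 micrometers, far beyond visible light, and the warmer body radiates noticeably more overall, which is the contrast a deep-infrared sensor picks out.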
The large, rounded head of a badminton birdie serves at least two purposes: it makes sure that the birdie bounces predictably off the racket's string mesh and it protects the strings and birdie from damage. If the birdie's head were smaller, it would strike at most a small area on one of the racket strings. If it hit that string squarely, the birdie might bounce predictably. But if it hit at a glancing angle, the birdie would bounce off at a sharp angle. By spreading out the contact between the birdie and the string mesh, the large head makes the birdie bounce as though it had hit a solid surface rather than one with holes. Spreading out the contact also prevents damage to the racket and birdie. If they collided over only a tiny area, the forces they exerted on one another would be concentrated over that area and produce enormous local pressures. These pressures could cut the birdie or break a string. But with the birdie's large head, the pressures involved are mild and nothing breaks. Any time you hit an object with a racket or bat, there's a question about how heavy the racket or bat should be for maximum distance. Actually, it isn't weight that's most important in a racket or bat, it's mass—the measure of the racket or bat's inertia. The more massive a racket or bat is, the more inertia it has and the less it slows down when it collides with something else. A more massive racket will slow less when it hits a birdie. From that observation, you might think that larger mass is always better. But a more massive racket or bat is also harder to swing because of its increased inertia. So there are trade offs in racket or bat mass. For badminton, the birdie has so little mass that it barely slows the racket when the two collide. Increasing the racket's mass would allow it to hit the birdie slightly farther, but only if you continued to swing the racket as fast as before. Since increasing the racket mass will make it harder to swing, it's probably not worthwhile. In all likelihood, people have experimented with racket masses and have determined that the standard mass is just about optimal for the game. An event horizon is the surface around a black hole from which not even light can escape. But to make it clearer what that statement means, consider first what happens to the light from a flashlight that's resting on the surface of a large planet. Light is affected by gravity—it falls just like everything else. The reason you never notice this fact is that light travels so fast that it doesn't have time to fall very far. But suppose that the gravity on the planet is extremely strong. If the flashlight is aimed horizontally, the light will fall and arc downward just enough that it will hit the surface of the planet before escaping into space. To get the light to leave the planet, the flashlight must be tipped a little above horizontal. If the planet's gravity is even stronger, the flashlight will have to be tipped even more above horizontal. In fact, if the gravity is sufficiently strong, light can only avoid hitting the planet if the flashlight is aimed almost straight up. And beyond a certain strength of gravity, even pointing the flashlight straight up won't keep the light from hitting the planet's surface. When that situation occurs, an event horizon forms around the planet and forever separates the planet from the universe around it. Actually, the planet ceases to exist as a complex object and is reduced to its most basic characteristics: mass, electric charge, and angular momentum. 
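Here is a sketch of why extra racket mass gives diminishing returns, treating the impact as an idealized elastic collision between a racket head of mass M moving at speed v and a stationary birdie of mass m. Real impacts are messier than this, and the masses and speed below are invented example values.

    # Idealized elastic collision with a stationary birdie:
    # birdie speed after impact = 2 * M * v / (M + m)
    birdie_mass = 0.005     # kilograms, roughly a 5 gram birdie (assumed)
    racket_speed = 20.0     # meters per second at impact (example value)
    for racket_mass in (0.08, 0.10, 0.15):     # kilograms, light to heavy racket heads (assumed)
        birdie_speed = 2 * racket_mass * racket_speed / (racket_mass + birdie_mass)
        print(f"racket head {racket_mass*1000:3.0f} g: birdie leaves at about {birdie_speed:.1f} m/s")

Nearly doubling the racket head's mass barely changes the birdie's launch speed, and only if you can still swing the heavier racket just as fast, which matches the trade-off described above.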
The planet becomes a black hole, and light emitted at or within this black hole's event horizon falls inward so strongly that it doesn't escape. Since nothing can move faster than light, nothing else can escape from the black hole's event horizon either. The nature of space and time at the event horizon is quite complicated and counter-intuitive. For example, an object dropped into a black hole will appear to spread out on the event horizon without ever entering it. That's because, to an outside observer, time slows down in the vicinity of the event horizon. By that, I mean that it takes an infinite amount of our time for an object to fall through that event horizon. But the object itself doesn't experience a change in the flow of time. For it, time passes normally and it zips right through the event horizon. Finally, event horizons and the black holes that have them aren't truly black—quantum mechanical fluctuations at the event horizon allow black holes to emit particles and radiation. This "Hawking radiation," discovered by Stephen Hawking about 25 years ago, means that black holes aren't truly black. Nonetheless, objects that fall into an event horizon never leave intact. While tracking a radio transmitter is easy—you only need to follow the radio waves back to their source—you might think that tracking a radio receiver is impossible. After all, a radio receiver appears to be a passive device that collects radio waves rather than emitting them. But that's not entirely true. Sophisticated radio receivers often use heterodyne techniques in which the signal from a local radio-frequency oscillator is mixed with the signal coming from the antenna. The mixing process subtracts one frequency from the other so that antenna signals from a particular radio station are shifted downward in frequency into the range the radio uses to create sound. This mixing process allows the radio receiver to be very selective about which station it receives. The receiver can easily distinguish the station that's nearest in frequency to its local oscillator from all the other stations, just as it's easy to tell which note on a piano is closest in pitch to a particular tuning fork. But heterodyne techniques have a side effect: they cause the radio receiver to emit radio waves. These waves originate with the local radio-frequency oscillator, and with other internal oscillators, such as the intermediate frequency oscillator present in many sophisticated receivers. Because these oscillators don't use very much power, the waves they emit aren't very strong. Nonetheless, they can be detected, particularly at short range. For example, it's possible for police to detect a radar detector that contains its own local microwave oscillator. Similarly, people who have tried to pirate microwave transmissions have been caught because of the microwaves emitted from their receivers. In WWII, the Japanese were apparently very successful at locating US forces by detecting the 455 kHz intermediate frequency oscillators in their radios—a problem that quickly led to a redesign of the radios to prevent that 455 kHz signal from leaking onto the antennas (thanks to Tom Skinner for pointing this out to me). As you can see, it is possible to track someone who is listening to the right type of radio receiver. However, the radio waves from that receiver are going to be very weak and you won't be able to follow them from a great distance.
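Here is a minimal sketch of the heterodyne arithmetic, assuming the common arrangement in which the local oscillator runs above the station by the intermediate frequency. The station frequencies are arbitrary examples, and the 455 kHz intermediate frequency is the one mentioned above for AM-style receivers.

    # Superheterodyne receiver: local oscillator frequency = station frequency + intermediate frequency.
    # The local oscillator is one of the internal signals that can leak out and give the receiver away.
    intermediate_frequency = 455e3                 # hertz, common AM intermediate frequency
    for station in (540e3, 1000e3, 1600e3):        # hertz, example AM stations
        local_oscillator = station + intermediate_frequency
        print(f"station {station/1e3:6.0f} kHz -> local oscillator near {local_oscillator/1e3:6.0f} kHz")

Detecting that weak local-oscillator leakage reveals both that a receiver is operating and roughly what it is tuned to.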
While the moon's gravity is the major cause of tides (the sun plays a secondary role), the moon's gravity isn't directly responsible for any true currents. Basically, water on the earth's surface swells up into two bulges: one on the side of the earth nearest the moon and one on the side farthest from the moon. As the earth turns, these bulges move across its surface and this movement is responsible for the tides. If there were more than one moon, the tidal bulges would become misshapen. That is essentially what happens because of the sun. As the moon and sun adopt different arrangements around the earth, the strengths of the tides vary. The strongest tides (spring tides) occur when the moon and sun are on the same or opposite sides of the earth. The weakest tides (neap tides) occur when the moon and sun are at 90° from one another. Extra moons would probably just complicate this situation so that the strengths of the tides would vary erratically as the moons shifted their positions around the earth. Since the timing of the tides is still basically determined by the earth's rotation, there would still be approximately 2 highs and 2 lows a day. This comment, which responds to a previous posting on this site, points out one of the most important differences between physical science and pseudo-science: the fact that pseudo-science isn't troubled by its lack of self-consistency. Physical science, particularly physics itself, is completely self-consistent. By that I mean that the same set of physical rules applies to every possible situation in the universe and that this set of rules never leads to paradoxical results. Despite its complicated behavior, the universe is orderly and predictable. It's precisely this order and predictability that is the basis for the whole field of physics. In contrast, pseudo-science is eclectic—it draws from physics and magic as it sees fit. It uses the laws of physics when it finds those laws useful and it ignores the laws of physics when they conflict with its interests. But the laws of physics only make sense if they apply universally—if there were even one situation in which a law of physics didn't apply, physics would lose its self-consistency and predictive power. That's just what happens with pseudo-science when it begins to ignore the laws of physics on occasion. Moreover, the new rules that pseudo-science introduces to replace the ones it ignores make the trouble even worse. Overall, pseudo-science is inconsistent and can't be counted on to predict anything. Pseudo-science might argue that the laws of physics are correct as far as they go, but that they're incomplete. No doubt the laws of physics are incomplete; physicists have frequently discovered improvements to the laws of physics that have allowed them to make even more accurate predictions of the universe's behavior. But in the years since the discoveries of relativity and quantum physics, the pace of such discoveries has slowed and what remains to be understood is at a very deep and subtle level. It's extraordinarily unlikely that the laws of physics as they're currently understood are wrong at a level that would allow a person to bend a spoon with their thoughts alone or predict the order of a deck of cards without assistance. Just because I haven't dropped a particular book doesn't prevent me from predicting that it will fall when I let go of it. I understand the laws that govern its motion and I know that having it fly upward would violate those laws.
Similarly, I don't have to watch someone try to bend a spoon with their thoughts to know that it can't be done legitimately. Again, I understand the laws that govern the spoon's condition and I know that having it bend without an identifiable force acting on it would violate those laws. I also don't have to watch someone try to predict cards to know that it, too, can't be done legitimately. Without a clear physical mechanism for transporting information from the cards to the person, a mechanism that must involve forces or exchanges of particles, there is no way for the person to predict the cards. The pole vault is all about energy and energy storage. Lifting a person upward takes energy because there is an energy associated with altitude—gravitational potential energy. Lifting a person 5 or 6 meters upward takes a considerable amount of energy and that energy has to come from somewhere. In the case of a pole-vaulter, most of the lifting energy comes from the pole. But the pole also had to get the energy from somewhere and that somewhere is the vaulter himself. Here is the story as it unfolds: When the pole-vaulter stands ready to begin his jump, he is motionless on the ground and he has no kinetic energy (energy of motion), minimal gravitational potential energy (energy of height), and no elastic energy in his pole. All he has is chemical potential energy in his body, energy that he got by eating food. Now he begins to run down the path toward the jump. As he does so, he converts chemical potential energy into kinetic energy. By the time he plants his pole at the jump, his kinetic energy is quite large. But once he plants the pole, the pole begins to bend. As it does, he slows down and his kinetic energy is partially transferred to the pole, where it becomes elastic potential energy. The pole then begins to lift the vaulter upward, returning its stored energy to him as gravitational potential energy. By the time the vaulter clears the bar, 5 or 6 meters above the ground, almost all of the energy in the situation is in the form of gravitational potential energy. The vaulter has only just enough kinetic energy to carry him past the bar before he falls. On his way down, his gravitational potential energy becomes kinetic energy and he hits the pit at high speed. The pit's padding extracts his kinetic energy from him gently and converts that energy into thermal energy. This thermal energy then floats off into the air as heat. One interesting point about jumping technique involves body shape. The vaulter bends his body as he passes over the bar so that his average height (his center of gravity) never actually gets above the bar. Since his gravitational potential energy depends on his average height, rather than the height of his highest part, this technique allows him to use less overall energy to clear the bar. A mirror doesn't really flip your image horizontally or vertically. After all, the image of your head is still on top and the image of your left hand is still on the left. What the mirror does flip is which way your image is facing. For example, if you were facing north, then your image is facing south. This front-back reversal makes your image fundamentally different from you in the same way a left shoe is fundamentally different from a right shoe. No matter how you arrange those two shoes, they'll always be reversed in one direction. Similarly, no matter how you arrange yourself and your image, they'll always be reversed in one direction. 
While you're looking at your image, the reversed direction is the forward-backward direction. But it's natural to imagine yourself in the place of your image. To do this you imagine turning around to face in the direction that your image is facing. When you turn in this manner, you mentally eliminate the forward-backward reversal but introduce a new reversal in its place: a left-right reversal. If you were to imagine standing on your head instead, you would still eliminate the forward-backward reversal but would now introduce an up-down reversal. Since it's hard to imagine standing on your head in order to face in the direction your image is facing, you tend to think only about turning around. It's this imagined turning around that leads you to say that your image is reversed horizontally. The atoms in a molecule are usually held together by the sharing or exchange of some of their electrons. When two atoms share a pair of electrons, they form a covalent bond that lowers the overall energy of the atoms and sticks the atoms together. About half of this energy reduction comes from an increase in the negatively charged electron density between the atoms' positively charged nuclei and about half comes from a quantum mechanical effect—giving the two electrons more room to move gives them longer wavelengths and lowers their kinetic energies. When two atoms exchange an electron, they form an ionic bond that again lowers the overall energy of the atoms and sticks them together. Although moving the electron from one atom to the other requires some energy, the two atomic ions that are formed by the transfer have opposite charges and attract one another strongly. The reduction in energy that accompanies their attraction can easily exceed the energy needed to transfer the electron so that the two atoms become permanently stuck to one another. It's true that the earth's surface is moving eastward rapidly relative to the earth's center of mass. However, that motion is very difficult to detect. When you are standing on the ground, you move with it and so does everything around you, including the air. While you are actually traveling around in a huge circle once a day, for all practical purposes we can imagine that you are traveling eastward in a straight line at a constant speed of 950 mph relative to the earth's center of mass. Ignoring the slight curvature of your motion, you are in what is known as an inertial frame of reference, meaning a viewpoint that is not accelerating but is simply coasting steadily through space. You'll notice that I keep saying "relative to the earth's center of mass" when I discuss motion. I do that because there is no special "absolute" frame of reference. Any inertial frame is as good as any other frame and your current inertial frame is just as good as anyone else's. In fact, you are quite justified in declaring that your frame of reference is stationary and that everyone else's frames of reference are moving. After all, you don't detect any motion around you so why not declare that your frame is officially stationary. Since the air is also stationary in that frame of reference, flying about in the air doesn't make things any more complicated. You are flying through stationary air in your old stationary frame of reference. The only way in which the 950 mph speed appears now is in comparing your frame of reference to the rest of the earth: in your frame of reference, the earth's center of mass is moving westward at 950 mph. 
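For the curious, here is a sketch of where a figure like 950 mph comes from. The surface speed due to the earth's rotation depends on your latitude, and the numbers below assume a simple spherical earth.

    # Surface speed due to the earth's rotation at various latitudes.
    import math

    radius_miles = 3959.0    # mean radius of the earth in miles
    day_hours = 23.934       # one full rotation (sidereal day) in hours
    for latitude_degrees in (0, 25, 40, 60):
        circumference = 2 * math.pi * radius_miles * math.cos(math.radians(latitude_degrees))
        speed_mph = circumference / day_hours
        print(f"latitude {latitude_degrees:2d} degrees: about {speed_mph:4.0f} mph")

The speed is a bit over 1000 mph at the equator, falls to zero at the poles, and a figure near 950 mph corresponds to a latitude in the mid-twenties; whatever the exact number at your location, the argument about inertial frames is unchanged.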
While it is sometimes noted that old cathedral glass is now thicker at the bottom than at the top, such cases appear to be the result of how the glass was made, not of flow. Medieval glass was made by blowing a giant glass bubble on the end of a blowpipe or "punty" and this bubble was cut open at the end and spun into a huge disk. When the disk cooled, it was cut off the punty and diced into windowpanes. These panes naturally varied in thickness because of the stretching that occurred while spinning the bubble into a disk. Evidently, the panes were usually put in thick end down. Modern studies of glass show that below the glass transition temperature, which is well above room temperature, molecular rearrangement effectively vanishes altogether. The glass stops behaving like a viscous liquid and becomes a solid. Its heat capacity and other characteristics are consistent with its being a solid as well. When a light wave passes through matter, the charged particles in that matter do respond—the light wave contains an electric field that pushes on electrically charged particles. But how a particular charged particle responds to the light wave depends on the frequency of the light wave and on the quantum states available to the charged particle. While the charged particle will begin to vibrate back and forth at the light wave's frequency and will begin to take energy from the light wave, the charged particle can only retain this energy permanently if doing so will promote it to another permanent quantum state. Since light energy comes in discrete quanta known as photons and the energy of a photon depends on the light's frequency, it's quite possible that the charged particle will be unable to absorb the light permanently. In that case, the charged particle will soon reemit the light. In effect, the charged particle "plays" with the photon of light, trying to see if it can absorb that photon. As it plays, the charged particle begins to shift into a new quantum state—a "virtual" state. This virtual state may or may not be permanently allowed. If it is, it's called a real state and the charged particle may remain in it indefinitely. In that case, the charged particle can truly absorb the photon and may never reemit it at all. But if the virtual state turns out not to be a permanently allowed quantum state, the charged particle can't remain in it long and must quickly return to its original state. In doing so, this charged particle reemits the photon it was playing with. The closer the photon is to one that it can absorb permanently, meaning the closer the virtual quantum state is to one of the real quantum states, the longer the charged particle can play with the photon before recognizing that it must give the photon up. A colored material is one in which the charged particles can permanently absorb certain photons of visible light. Because this material only absorbs certain photons of light, it separates the components of white light and gives that material a colored appearance. A transparent material is one in which the charged particles can't permanently absorb any photons of visible light. While these charged particles all try to absorb the visible light photons, they find that there are no permanent quantum states available to them when they do. Instead, they play with the photons briefly and then let them continue on their way. This playing process slows the light down. 
In general, blue light slows down more than red light in a transparent material because blue light photons contain more energy than red light photons. The charged particles in the transparent material do have real permanent states available to them, but to reach those states, the charged particles would have to absorb high-energy photons of ultraviolet light. While blue photons don't have as much energy as ultraviolet photons, they have more energy than red photons do. As a result, the charged particles in a transparent material can play with a blue photon longer than they can play with a red photon—the virtual state produced by a blue photon is closer to the real states than is the virtual state produced by a red photon. Because of this effect, the speed at which blue light passes through a transparent material is significantly less than the speed at which red light passes through that material. Finally, about quantum states: you can think of the real states of one of these charged particles the way you think about the possible pitches of a guitar string. While you can jiggle the guitar string back and forth at any frequency you like with your fingers, it will only vibrate naturally at certain specific frequencies. You can hear these frequencies by plucking the string. If you whistle at the string and choose one of these specific frequencies for your pitch, you can set the string vibrating. In effect, the string is absorbing the sound wave from your whistle. But if you whistle at some other frequency, the string will only play briefly with your sound wave and then send it on its way. The string playing with your sound waves is just like a charged particle in a transparent material playing with a light wave. The physics of these two situations is remarkably similar. There are two forces present when the cars collide: each car pushes on the other car so each car experiences a separate force. As for the strength of these two forces, all I can say is that they are exactly equal in amount but opposite in direction. That relationship between the forces is Newton's third law of motion, the law dealing with action and reaction. In accordance with this law of motion, no matter how big or small the cars are, they will always exert equal but oppositely directed forces on one another. The amount of each force is determined by how fast the cars approach one another before they hit and by how stiff their surfaces and frames are. If the cars are approaching rapidly and are extremely stiff and rigid, they will exert enormous forces on one another when they collide and will do so for a very short period of time. During that time, the cars will accelerate violently and their velocities will change radically. If you happened to be in one of the cars, you would also accelerate violently in response to severe forces and would find the experience highly unpleasant. If, on the other hand, the cars are soft and squishy, they will exert much weaker forces on one another and they will accelerate much more gently for a long period of time. That will be true even if they were approaching one another rapidly before impact. When the collision period is over, the cars will again have changed velocities significantly but the weaker forces will have made those changes much more gradual. If you have to be in a collision, choose the soft, squishy cars over the stiff ones—the accelerations and forces are much weaker and less injurious.
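Here is a minimal sketch of why stretching out the collision helps, using the impulse-momentum relation (average force equals momentum change divided by collision time). The car mass, speed change, and collision durations are invented example numbers.

    # Average force on a car = (mass * change in velocity) / collision duration.
    mass = 1500.0     # kilograms, example car
    delta_v = 15.0    # meters per second of velocity change during the crash (about 34 mph)
    for label, duration_seconds in (("stiff, rigid cars", 0.05), ("soft, squishy cars", 0.25)):
        average_force = mass * delta_v / duration_seconds    # newtons
        print(f"{label}: about {average_force/1000:.0f} kN average force, spread over {duration_seconds} s")

The same change in velocity happens either way; the squishy cars simply spread it over more time, so the forces and accelerations are far smaller.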
That's why cars have crumple zones and airbags: they are trying to act squishy so that you don't get hurt as much. You're right about the glass expanding along with the liquid inside it. But liquids normally expand more than solids as their temperatures increase. That's because the atoms and molecules in a liquid have more freedom to move around than those in a solid and they respond to increasing temperatures by forming less and less tightly packed arrangements. Since the liquid in a thermometer expands more than the glass container around it, the liquid level rises as the thermometer's temperature increases. A halogen bulb uses a chemical trick to prolong the life of its filament. In a regular bulb, the filament slowly thins as tungsten atoms evaporate from the white-hot surface. These lost atoms are carried upward by the inert gases inside the bulb and gradually darken the bulb's upper surface. In a halogen bulb, the gases surrounding the filament are chemically active and don't just deposit the lost atoms at the top of the bulb. Instead, they react with those tungsten atoms to form volatile compounds. These compounds float around inside the bulb until they collide with the filament again. The extreme heat of the filament then breaks the compounds apart and the tungsten atoms stick to the filament. This tungsten recycling process dramatically slows the filament's decay. Although the filament gradually develops thin spots that eventually cause it to fail, the filament can operate at a higher temperature and still last two or three times as long as the filament of a regular bulb. The hotter filament of a halogen bulb emits relatively more blue light and relatively less infrared light than a regular bulb, giving it a whiter appearance and making it more energy efficient. In real life, only explosive sounds will break normal glass. That's because normal glass vibrates poorly and has no strong natural frequencies. You can see this by tapping a glass window or cup—all you hear is a dull "thunk" sound. For an object to vibrate strongly in response to a tone, that object must exhibit a strong natural resonance and the tone's pitch must be perfectly matched to the frequency of that resonance. A crystal wineglass vibrates well and emits a clear tone when you tap it. If you listen to the pitch of that tone and then sing it loudly, you can make the wineglass vibrate. A crystal windowpane would also have natural resonances and would vibrate in response to the right tones. But it would take very loud sound at exactly the right pitch to break this windowpane. A few extraordinary voices have been able to break crystal wineglasses unassisted (i.e., without amplification) and it would take such a voice to break the crystal windowpane. The answer to that question is complicated—glass is neither a normal liquid nor a normal solid. While the atoms in glass are essentially fixed in place like those in a normal solid, they are arranged in the disorderly fashion of a liquid. For that reason, glass is often described as a frozen liquid—a liquid that has cooled and thickened to the point where it has become rigid. But calling glass a liquid, even a frozen one, implies that glass can flow. Liquids always respond to stresses by flowing. Since unheated glass can't flow in response to stress, it isn't a liquid at all. It's really an amorphous or "glassy" solid—a solid that lacks crystalline order. Yes, but not as quickly as without the glass. 
While glass absorbs short wavelength ultraviolet light, it does pass 350 to 400 nanometer ultraviolet. While this longer wavelength ultraviolet is less harmful than the shorter wavelength variety, you can still tan or burn if you get enough exposure. Glass is like sunscreen—it protects you pretty well but it isn't perfect. A center punch is a common tool used to dent a surface prior to drilling. The drill bit follows the pointed dent and the hole ends up passing right through it. But in the situation you describe, the center punch is being used to damage the surface of a car window. When you push the handle of the center punch inward, you are compressing a spring and storing energy. A mechanism inside the center punch eventually releases that spring and allows it to push a small metal cylinder toward the tip of the punch. This cylinder strikes the tip of the punch and pushes it violently into the glass. The glass chips. In normal glass, this chipping would be barely noticeable. But the side and rear windows of a car are made of tempered glass—glass that has been heat processed in such a way that its surfaces are under compression and its body is under tension. Tempering strengthens the glass by making it more resistant to tearing. But once an injury gets through the compressed surface of the tempered glass and enters the tense body, the glass rips itself apart. The spider web pattern of tearing you observe is a feature of the tempered glass, not the center punch. Any deep cut or chip in the tempered glass will cause this "dicing fracture" to occur. Whenever you wipe a CD to clean it, there is a chance that you will scratch its surface. If that scratch is wide enough, it may prevent the player's optical system from reading the data recorded beneath it and this loss of data may make the CD unplayable. It turns out that tangential scratches are much more serious than radial scratches. When the scratch is radial (extending outward from the center of the disc to its edge), the player should still be able to reproduce the sound without a problem. That's because sound information is recorded in a spiral around the disc and there is error-correcting information included in each arc shaped region of this spiral. Since a radial scratch only destroys a small part of each arc it intersects, the player can use the error correcting information to reproduce the sound perfectly. But when the scratch is tangential (extending around the disc and along the spiral), it may prevent the player from reading a large portion of an arc. If the player is unable to read enough of the arc to perform its error correcting work, it can't reproduce the sound. That's why a tangential scratch can ruin a CD much more easily than a radial scratch can. That's why you should never wipe a CD tangentially. Always clean them by wiping from the center out. A CD player reads ahead of the sound it is playing so that it always has sound information from at least one full turn of the disc in its memory. It has to read ahead as part of the error correcting process—the sound information associated with one moment in time is actually distributed around the spiral rather than squeezed into one tiny patch. This reading ahead is particularly important for a portable CD player, which usually saves several seconds of sound information in its memory so that it will have time to recover if its optical system is shaken out of alignment. 
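To see how much memory "several seconds of sound" amounts to, here is a sketch based on the standard audio CD format (44,100 samples per second, two channels, 16 bits per sample). The three-second buffer is an assumed figure for illustration, not the specification of any particular player.

    # Memory needed to buffer CD audio: samples per second * channels * bytes per sample * seconds.
    sample_rate = 44100      # samples per second
    channels = 2             # stereo
    bytes_per_sample = 2     # 16-bit samples
    seconds_buffered = 3.0   # assumed read-ahead buffer
    buffer_bytes = sample_rate * channels * bytes_per_sample * seconds_buffered
    print(f"about {buffer_bytes/1e3:.0f} kilobytes of memory")

That works out to roughly half a megabyte, a modest amount of memory for the read-ahead and anti-skip buffering described above.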
When you pause the CD player, it reads ahead until its memory is full and then lets its optical system hover while the disc continues to turn. When you unpause the player, it uses the sound information it has saved in its memory to continue where it left off and its optical system resumes the reading ahead process. Yes. Heavy water ice is about 1% more dense than liquid water at its melting temperature of 3.82° C. I wouldn't recommend drinking large amounts of heavy water, but you could make sinking ice cubes out of it. When you heat water on the stove, heat flows into the water from below and the water at the bottom of the pot becomes a little hotter than the water above it. As a result, the water at the bottom of the pot boils first and its steam bubbles begin to rise up through the cooler water above. As they rise, these steam bubbles cool and collapse—they are crushed back into liquid water by the ambient air pressure. These collapsing steam bubbles are noisy. When the water finally boils throughout, the steam bubbles no longer collapse as they rise and simply pop softly at the surface of the liquid. I can't think of any situation in which what you say would be true. Hot water should always defrost things faster than cold water. That's because the rate of heat flow between two objects always increases as the temperature difference between them increases. When you put frozen food in hot water, heat flows into that food faster than it would from cold water because the temperature difference is larger. Let's start with three simpler problems: the coexistences of ice and water, of water and steam, and of ice and steam. Each pair of phases can coexist whenever the water molecules leaving one phase are replaced at an equal rate by water molecules leaving the second phase. This isn't as hard as it sounds. In ice water, the water molecules leaving the ice cubes for the liquid are replaced at an equal rate by water molecules leaving the liquid for the ice cubes. In a sealed bottle of mineral water, the water molecules leaving the liquid for the water vapor above it are replaced at an equal rate by water molecules leaving the water vapor for the liquid. And in an old-fashioned non-frost-free freezer with a tray of ice cubes, the water molecules leaving the ice cubes for the water vapor around them are replaced at an equal rate by water molecules leaving the water vapor for the ice cubes. In each case, there is some flexibility in temperature—these coexistence conditions can be reached over at least a small range of temperature by varying the pressure on the system. In fact, at 0.01° C and a pressure of about 4.6 torr (6.11 millibars), pure water, pure ice, and pure steam can coexist as a threesome. At this triple point, water molecules will be moving back and forth between all three phases but without producing any net change in the amount of ice, water, or steam. The microwave oven is superheating the water to a temperature slightly above its boiling temperature. It can do this because it doesn't help water boil the way a normal coffee maker does. For water to boil, two things must occur. First, the water must reach or exceed its boiling temperature—the temperature at which a bubble of pure steam inside the water becomes sturdy enough to avoid being crushed by atmospheric pressure. Second, bubbles of pure steam must begin to nucleate inside the water. It's the latter requirement that's not being met in the water you're heating with the microwave.
Steam bubbles rarely form of their own accord unless the water is far above its boiling temperature. That's because a spontaneous nucleation event requires several water molecules to break free of their neighbors simultaneously to form a tiny steam bubble and that's very unlikely at water's boiling temperature. Instead, most steam bubbles form either at hot spots, or at impurities or imperfections—scratches in a metal pot, the edge of a sugar crystal, a piece of floating debris. When you heat clean water in a glass container using a microwave oven, there are no hot spots and almost no impurities or imperfections that would assist boiling. As a result, the water has trouble boiling. But as soon as you add a powder to the superheated water, you trigger the formation of steam bubbles and the liquid boils madly.

Not without using something other than pure, normal water for the ice. The density of ice is always less than that of water at the same pressure. While squeezing the ice will increase its density, it will also increase the density of the water so the ice will always float. Of course, you could add dense materials to the ice to weight it down to neutral buoyancy, but then it wouldn't be pure ice any more.

The simple answer is entropy—the ever-increasing disorder of the universe. Salt water is far more disordered than the salt and water from which it's formed, so separating those components doesn't happen easily. The second law of thermodynamics observes that the entropy of an isolated system cannot decrease—you can't reduce the disorder of the salty water without paying for it elsewhere. In effect, you have to export the salty water's disorder somewhere else as you separate it into pure water and pure salt. In most cases, this exported disorder winds up in the energy used to desalinate sea water. You start with nicely ordered energy—perhaps electricity or gasoline—and you end up with junk energy such as waste heat. While some desalination techniques such as reverse osmosis can operate near the efficiency limits imposed by thermodynamics, they can't avoid those limits. If you want to desalinate water, you must consume ordered resources and those resources usually cost money (an exception is sunlight). The desalinating equipment is also expensive. Until water becomes scarce enough or energy cheap enough, desalinated water will remain uncommon in the United States.

When you simply heat the cold air, you lower its relative humidity—the heated air is holding a smaller fraction of its maximum water molecule capacity and is effectively dry. Dry air always feels colder than humid air at the same temperature. That's because water molecules are always evaporating from your skin. If the air is dry, these evaporating molecules aren't replaced and they carry away significant amounts of heat. On a hot day, this evaporation provides pleasant cooling but on a cold day it's much less welcome. If the air near your skin is humid, water molecules will return to your skin almost as frequently as they leave and will bring back most of the heat that you would have lost to evaporation. Thus humid air spoils evaporative cooling, making humid weather unpleasant in the summer but quite nice in the winter.

A hot water heater is built so that hot water is drawn out of its top and cold water enters it at its bottom. Since hot water is less dense than cold water, the hot water floats on the cold water and they don't mix significantly.
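The stratification works because the density difference, while small, is enough to keep the hot layer floating on the cold one. Here is a quick comparison using approximate handbook densities; the particular temperatures are only illustrative:

# Approximate densities of water (kg per cubic meter) at a few temperatures.
density = {10: 999.7, 50: 988.0, 60: 983.2}   # rough handbook values

hot, cold = density[60], density[10]
print(round(cold - hot, 1))                 # about 16.5 kg/m^3 difference
print(round(100 * (cold - hot) / cold, 1))  # the hot water is roughly 1.7 percent lighter

A density difference of a percent or two is plenty to keep the hot water riding on top of the cold water, which is why the shower stays hot until the tank is nearly exhausted.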
As you take your shower, you slowly deplete the hot water at the top of the tank and the level of cold water rises upward. But the shower doesn't turn cold until almost all the hot water has left the tank and the cold water level has risen to its top.

You're right. The greater the temperature difference between two objects, the faster heat flows between them. This effect is useful whenever you forget to chill drinks for a party. Just don't leave a glass bottle in the freezer too long; if the water inside freezes, it may expand enough to break the bottle.

While the cannonball is in your boat, its great weight pushes the boat deeper into the water. To support the cannonball, the boat must displace the cannonball's weight in water—a result known as Archimedes' principle. Since the cannonball is very dense, the boat must displace perhaps 8 cannonball volumes of water in order to obtain the buoyancy needed to support the cannonball. This displaced water appears on the surface of the lake so that the lake's level rises. Now suppose that you throw the cannonball overboard. The cannonball quickly sinks to the bottom. The boat now floats higher than before because it no longer needs to displace the extra 8 cannonball volumes of water. Although the cannonball itself is displacing 1 cannonball volume of water, there are still 7 cannonball volumes less water being displaced than before. As a result, the water level of the lake drops slightly when you throw the cannonball overboard.

A nuclear reactor operates just below critical mass so that each radioactive decay in its fuel rods induces a large but finite number of subsequent fissions. Since each chain reaction gradually weakens away to nothing, there is no danger that the fuel will explode. But operating just below critical mass is a tricky business and it involves careful control of the environment around the nuclear fuel rods. The operators use neutron-absorbing control rods to dampen the chain reactions and keep the fuel just below critical mass. Fortunately, there are several effects that make controlled operation of a reactor relatively easy. Most importantly, some of the neutrons involved in the chain reactions are delayed because they come from radioactive decay processes. These delayed neutrons slow the reactor's response to changes—the chain reactions take time to grow stronger and they take time to grow weaker. As a result, it's possible for a reactor to exceed critical mass briefly without experiencing the exponentially growing chain reactions that we associate with nuclear explosions. In fact, the only nuclear reactor that ever experienced these exponentially growing chain reactions was Chernobyl. That flawed and mishandled reactor went so far into the super-critical regime that even the neutron delaying effects couldn't prevent exponential chain reactions from occurring. The reactor superheated and ripped itself apart.

Critical, sub-critical, and super-critical mass all refer to the chain reactions that occur in fissionable material—a material in which nuclei can shatter or "fission" when struck by a passing neutron. When this nuclear fuel is at critical mass, each nucleus that fissions directly induces an average of one subsequent fission. This situation leads to a steady chain reaction in the fuel: the first fission causes a second fission, which causes a third fission, and so on. Steady chain reactions of this sort are used in nuclear reactors.
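The difference between sub-critical, critical, and super-critical behavior comes down to the average number of subsequent fissions that each fission induces, often written k. A minimal sketch of how a single chain either dies away, holds steady, or explodes, depending on that number:

# Follow the expected size of one chain reaction, generation by generation,
# for different multiplication factors k (average fissions induced per fission).
def chain(k, generations=20):
    fissions = 1.0
    history = []
    for _ in range(generations):
        history.append(fissions)
        fissions *= k          # each fission induces k more, on average
    return history

print(round(chain(0.9)[-1], 3))   # sub-critical: the chain has nearly died out (about 0.135)
print(chain(1.0)[-1])             # critical: the chain just sustains itself (1.0)
print(chain(2.0)[-1])             # super-critical: 524,288 fissions after 19 doublings, and growing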
When the fuel is below critical mass, there aren't quite enough nuclei around to keep the chain reactions proceeding steadily and each chain gradually dies away. While such a sub-critical mass of fuel continues to experience chain reactions, they aren't self-sustaining and depend on natural radioactive decay to restart them. When the fuel is above critical mass, there are more than enough nuclei around to sustain the chain reactions. In fact, each chain reaction grows exponentially in size with the passage of time. Since each fission directly induces more than one subsequent fission, it takes only a few generations of fissions before there are astronomical numbers of nuclei fissioning in the fuel. Explosive chain reactions of this sort occur in nuclear weapons.

Almost the instant the nuclear fuel reaches critical mass, it begins to release heat and explode. If this fuel overheats and rips itself apart before most of its nuclei have undergone fission, only a small fraction of the fuel's nuclear energy will have been released in the explosion. There are at least two possible causes for such a "fizzle": slow assembly of the super-critical mass needed for explosive chain reactions and poor containment of the exploding fuel. A well-designed fission bomb assembles its super-critical mass astonishingly quickly and it shrouds that mass in an envelope that prevents it from exploding until most of the nuclei have had time to shatter.

Critical mass is something of a misnomer because in addition to mass, it also depends on shape, density, and even the objects surrounding the nuclear fuel. Anything that makes the nuclear fuel more efficient at using its neutrons to induce fissions helps that fuel approach critical mass. The characteristics of the materials also play a role. For example, fissioning plutonium 239 nuclei release more neutrons on average than fissioning uranium 235 nuclei. As a result, plutonium 239 is better at sustaining a chain reaction than uranium 235 and critical masses of plutonium 239 are typically smaller than for uranium 235.

Apart from obtaining fissionable material, this is the biggest technical problem with building a nuclear weapon. Although a fission bomb's nuclear fuel begins to heat up and explode almost from the instant it reaches critical mass, just reaching critical mass isn't good enough. To use its fuel efficiently—to shatter most of its nuclei before the fuel rips itself apart—the bomb must achieve a significantly super-critical mass. It needs the explosive chain reactions that occur when each fission induces an average of far more than one subsequent fission. There are two classic techniques for reaching super-critical mass. The technique used in the uranium bomb dropped over Hiroshima in WWII involved a collision between two objects. A small cannon fired a piece of uranium 235 into a nearly complete sphere of uranium 235. The uranium projectile entered the incomplete sphere at enormous speed and made the overall structure a super-critical mass. But despite the rapid mechanical assembly, the bomb still wasn't able to use its nuclei very efficiently. It wasn't sufficiently super-critical for an efficient explosion. The technique used in the two plutonium bombs, the Gadget tested in New Mexico and the Fat Man dropped over Nagasaki, involved implosions. In each bomb, high explosives crushed a solid sphere of plutonium 239 so that its density roughly doubled.
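The reason compression matters so much is that, for a bare sphere of fuel, the critical mass falls off roughly as the inverse square of the density. A back-of-the-envelope sketch of that scaling; the starting mass is an assumed illustrative figure, not a weapon-design number:

# Rough scaling: for a bare sphere of fuel, critical mass varies as 1 / density^2.
def critical_mass(m0, density_ratio):
    return m0 / density_ratio**2

m0 = 10.0                      # kg, an assumed critical mass at normal density
print(critical_mass(m0, 2.0))  # 2.5 kg: doubling the density cuts the critical mass by a factor of 4

So fuel that is comfortably sub-critical at normal density can be far beyond critical once the high explosives have squeezed it.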
With its nuclei packed more tightly together, this fuel surged through critical mass and went well into the super-critical regime. It consumed a much larger fraction of its nuclei than the uranium bomb and was thus a more efficient device. However, its design was so complicated and technically demanding that its builders weren't sure it would work. That's why they tested it once on the sands of New Mexico. The builders of the uranium bomb, in contrast, were confident enough in its design, and worried enough about wasting precious uranium, that they never tested it.

Once the bomb has assembled a super-critical mass of fissionable material, each chain reaction that occurs will grow exponentially with time and lead to a catastrophic release of energy. But you're right in wondering just what starts those chain reactions. The answer is natural radioactivity from a trigger material. While the nuclear fuel's own radioactivity could provide those first few neutrons, it's generally not reliable enough. To make sure that the chain reactions get started properly, most nuclear weapons introduce a highly radioactive neutron-emitting trigger material into the nuclear fuel assembly.

Your hot water heater is powered by 240 volt electric power through the two black wires. Each black wire is hot, meaning that its voltage fluctuates up and down significantly with respect to ground. In fact, each black wire is effectively 120 volts away from ground on average, so that if you connected a normal light bulb between either black wire and ground, it would light up normally. However, the two wires fluctuate in opposite directions around ground potential and are said to be "180° out of phase" with one another. Thus when one wire is at +100 volts, the other wire is at -100 volts. As a result of their out-of-phase relationship, they are always twice as far apart from one another as they are from ground. That's why the two wires are effectively 240 volts apart on average. Most homes in the United States receive 240 volt power in the form of two hot wires that are 180° out of phase, in addition to a neutral wire. 120-volt lights and appliances are powered by one of the hot wires and the neutral wire, with half the home depending on each of the two hot wires. 240-volt appliances use both hot wires.

An airplane supports itself in flight by deflecting the passing airstream downward. The plane's wings push this airstream downward and the airstream reacts by pushing the wings upward. This action/reaction effect is an example of Newton's third law of motion, which observes that forces always come in equal but oppositely directed pairs: if one object pushes on another, then the second object must push back on the first object with a force of equal strength pointing in the opposite direction. Even air obeys this law so that when the plane's wings push air downward, the air must push the wings upward in response. In level flight, the deflected air pushes upward so hard that it supports the entire weight of the plane. Just how the airplane's wings deflect the airstream downward to obtain this upward lift force is a marvel of fluid dynamics. We can view it from at least two perspectives: a Newtonian perspective which concentrates on the accelerations of the passing airstream and a Bernoullian perspective which concentrates on speeds and pressures in that airstream. The Newtonian perspective is the more intuitive of the two, so that's where we will start.
The airstream arriving at the forward or "leading" edge of the airplane wing splits into two separate flows that travel over and under the wing, respectively. The wing is shaped and tilted so that these two flows experience very different accelerations as they travel around the wing. The flow that goes under the wing encounters a downward sloping surface that pushes it downward and it accelerates downward. In response to this downward push, the air pushes upward on the bottom of the wing and provides part of the force that supports the plane. The air that flows over the wing follows a more complicated route. At first, this flow encounters an upward sloping surface that pushes it upward and it accelerates upward. In response to this upward force, the air pushes downward on the leading portion of the wing's top surface. But the wing's top surface is curved so that it soon begins to slope downward rather than upward. When this happens, the airflow must accelerate downward to stay in contact with it. A suction effect appears, in which the rear or "trailing" portion of the wing's top surface sucks downward on the air and the air sucks upward on it in response. This upward suction force more than balances the downward force at the leading edge of the wing so that the air flowing over the wing provides an overall upward force on the wing. Since both of these air flows produce upward forces on the wing, they act together to support the airplane's weight. The air passing both under and over the wings is deflected downward and the plane remains suspended. In the Bernoullian view, air flowing around a wing's sloping surfaces experiences changes in speed and pressure that lead to an overall upward force on the wing. The fact that each speed change is accompanied by a pressure change is the result of a conservation of energy in air passing a stationary surface—when the air's speed and motional energy increase, the air's pressure and pressure energy must decrease to compensate. In short, when air flowing around the wing speeds up, its pressure drops and when it slows down, its pressure rises. When air going under the wing encounters the downward sloping bottom surface, it slows down. As a result, the air's pressure rises and it exerts a strong upward force on the wing. But when air going over the wing encounters the up and down sloping top surface, it slows down and then speeds up. As a result, the air's pressure first rises and then drops dramatically, and it exerts a very weak overall downward force on the wing. Because the upward force on the bottom of the wing is much stronger than the downward force on the top of the wing, there is an upward overall pressure force on the wing. This upward force can be strong enough to support the weight of the airplane. But despite the apparent differences between these two descriptions of airplane flight, they are completely equivalent. The upward pressure force of the Bernoullian perspective is exactly the same as the upward reaction force of the Newtonian perspective. They are simply two ways of looking at the force produced by deflecting an airstream, a force known as lift. An object doesn't have to be on the ground to be a target for lightning. In fact, most lightning strikes don't reach the ground at all—they occur between different clouds. All that's needed for a lightning strike between two objects is for them to have very different voltages, because that difference in voltages means that energy will be released when electricity flows between the objects. 
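To see why a large voltage difference is so dangerous, remember that the energy released is simply the charge transferred multiplied by the voltage difference it moves through. A rough sketch; the charge and voltage here are typical order-of-magnitude values, not measurements of any particular strike:

# Energy released when charge flows across a large voltage difference: E = q * V.
charge = 15.0              # coulombs, a typical lightning stroke (order of magnitude)
voltage = 100e6            # volts, an assumed cloud-to-target voltage difference

energy = charge * voltage
print(energy)              # 1.5e9 joules released in a fraction of a second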
If an airplane's voltage begins to differ significantly from that of its surroundings, it's going to have trouble. Sooner or later, it will encounter something that will exchange electric charge with it and the results may be disastrous. To avoid a lightning strike, the airplane must keep its voltage near that of its surroundings. That's why it has static dissipaters on the tips of its wings. These sharp metal spikes use a phenomenon known as a corona discharge to spray unwanted electric charges into the air behind the plane. Any stray charges that the plane picks up by rubbing against the air or by passing through electrically charged clouds are quickly released to the air so that the plane's voltage never differs significantly from that of its surroundings and it never sticks out as a target for lightning. While an unlucky plane may still get caught in an exchange of lightning between two other objects, the use of static dissipaters significantly reduces its chances of being hit directly. In effect, you would be a skydiver without a parachute and would survive up until the moment of impact with the ground. Like any skydiver who has just left a forward-moving airplane, you would initially accelerate downward (due to gravity) and backward (due to air resistance). In those first few seconds, you would lose your forward velocity and would begin traveling downward rapidly. But soon you would be traveling downward so rapidly through the air that air resistance would keep you from picking up any more speed. You would then coast downward at a constant speed and would feel your normal weight. If you closed your eyes at this point, you would feel as though you were suspended on a strong upward stream of air. Unfortunately, this situation wouldn't last forever—you would eventually reach the ground. At that point, the ground would exert a tremendous upward force on you in order to stop you from penetrating into its surface. This upward force would cause you to decelerate very rapidly and it would also do you in. When two objects collide with one another, they usually bounce. What distinguishes an elastic collision from an inelastic collision is the extent to which that bounce retains the objects' total kinetic energy—the sum of their energies of motion. In an elastic collision, all of the kinetic energy that the two objects had before the collision is returned to them after the bounce, although it may be distributed differently between them. In an inelastic collision, at least some of their overall kinetic energy is transformed into another form during the bounce and the two objects have less total kinetic energy after the bounce than they had before it. Just where the missing energy goes during an inelastic collision depends on the objects. When large objects collide, most of this missing energy usually becomes heat and sound. In fact, the only objects that ever experience perfectly elastic collisions are atoms and molecules—the air molecules in front of you collide countless times each second and often do so in perfectly elastic collisions. When the collisions aren't elastic, the missing energy often becomes rotational energy or occasionally vibrational energy in the molecules. Actually, some of the collisions between air molecules are superelastic, meaning that the air molecules leave the collision with more total kinetic energy than they had before it. This extra energy came from stored energy in the molecules—typically from their rotational or vibrational energies. 
Such superelastic collisions can also occur in large objects, such as when a pin collides with a toy balloon. Returning to inelastic collisions, one of the best examples is a head-on automobile accident. In that case, the collision is often highly inelastic—most of the two cars' total kinetic energy is transformed into another form and they barely bounce at all. Much of this missing kinetic energy goes into deforming and heating the metal in the front of the car. That's why well-designed cars have so-called "crumple zones" that are meant to absorb energy during a collision. The last place you want this energy to go is into the occupants of the car. In fact, the occupants will do best if they transfer most of their kinetic energies into their airbags.

While I don't know the details of the jump, there are some basic physics issues that must be present. At a fundamental level, the skater approaches the jump in a non-spinning state, leaps into the air while acquiring a spin, spins three times in the air, lands on the ice while giving up the spin, and then leaves the jump in a non-spinning state. Most of the physics is in the spin, so that's what I'll discuss. To start herself spinning, something must exert a twist on the skater and that something is the ice. She uses her skates to twist the ice in one direction and, as a result, the ice twists her in the opposite direction. This effect is an example of the action/reaction principle known as Newton's third law of motion. Because of the ice's twist on her, she acquires angular momentum during her takeoff. Angular momentum is a form of momentum that's associated with rotation and, like normal momentum, angular momentum is important for one special reason: it's a conserved physical quantity, meaning that it cannot be created or destroyed; it can only be transferred between objects. The ice transfers angular momentum to the skater during her takeoff and she retains that angular momentum throughout her flight. She only gives up the angular momentum when she lands and the ice can twist her again. During her flight, her angular momentum causes her to spin but the rate at which she spins depends on her shape. The narrower she is, the faster she spins. This effect is familiar to anyone who has watched a skater spin on the tip of one skate. If she starts spinning with her arms spread widely and then pulls them in so that she becomes very narrow, her rate of rotation increases dramatically. That's because while she is on the tip of one skate, the ice can't twist her and she spins with a fixed amount of angular momentum. By changing her shape to become as narrow as possible, she allows this angular momentum to make her spin very quickly. And this same rapid rotation occurs in the triple axel jump. The jumper starts the jump with arms and legs widely spread and then pulls into a narrow shape so that she spins rapidly in the air. Finally, in landing the skater must stop herself from spinning and she does this by twisting the ice in reverse. The ice again reacts by twisting her in reverse, slowing her spin and removing her angular momentum. She skates away smoothly without much spin.

In the form used for water desalination, reverse osmosis involves a special membrane that allows water molecules to pass through it while blocking the movement of salt ions. When water molecules are free to move between two volumes of water, they move in whichever direction reduces their chemical potential energy.
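Before getting into chemical potentials, it helps to have a number in mind for how hard the salty side must be squeezed. The van 't Hoff relation gives a rough estimate of seawater's osmotic pressure; the 0.6 molar salt concentration and the ideal-solution behavior are simplifying assumptions:

# Rough van 't Hoff estimate of seawater's osmotic pressure: pi = i * M * R * T.
i = 2                 # each NaCl unit dissolves into two ions
molarity = 0.6        # mol/L, approximate salt content of seawater
R = 0.08314           # L*bar/(mol*K), gas constant in convenient units
T = 298               # K, about room temperature

pressure_bar = i * molarity * R * T
print(round(pressure_bar, 1))        # roughly 30 bar, i.e. about 30 atmospheres

That is why commercial reverse osmosis plants push their seawater to pressures of several tens of atmospheres: the applied pressure has to comfortably exceed this osmotic pressure before the water molecules will flow backwards.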
The concept of a chemical potential is part of statistical physics—the area of physics that deals with vast collections of particles—and it depends partly on energy and partly on probability. Factors that contribute to a water molecule's chemical potential are the purity of the water and the water's pressure. Increasing the salt content of the water lowers a water molecule's chemical potential while increasing the water's pressure raises its chemical potential. Because salty water has a lower chemical potential for water molecules than pure water, water molecules tend to move from purer water to saltier water. This type of flow is known as osmosis. To slow or stop osmosis, you must raise the chemical potential on the saltier side by applying pressure. The more you squeeze the saltier side, the higher the chemical potential there gets and the slower water molecules move from the purer side to the saltier side. If you squeeze hard enough, you can actually make the water molecules move backwards—toward the purer side! This flow of water molecules from the saltier water toward the purer water with the application of extreme pressure is known as reverse osmosis. In commercial desalination, high-pressure seawater is pushed into jellyroll structures containing the semi-permeable membranes. The pressure of the salty water is so high that the water molecules flow through the membrane from the salty water side to the pure water side. This pure water is collected for drinking. While it may seem that you are somehow attracting the water to your mouth when you suck, you are really just making it possible for air pressure to push the water up toward you. By removing much of the air from within the hose, you are lowering the air pressure in the hose. There is then a pressure imbalance at the bottom end of the hose: the pressure outside the hose is higher than the pressure inside it. It's this pressure imbalance that pushes water into the hose and upward toward your mouth. But air pressure can't push the water upward forever. As the column of water in the hose rises, its weight increases. Atmospheric pressure can only lift the column of water so high before the upward force on the water is balanced by the water's downward weight. Even if you remove all of the air inside the hose, atmospheric pressure can only support a column of water about 30 feet tall inside the hose. If you're higher than that on your balcony, the water won't reach you no matter how hard you try. The only way to send the water higher is to put a pump at the bottom end of the hose. This pump can push upward harder than atmospheric pressure can and it can support a taller column of water. That's why deep home wells have submersible pumps at their bottoms—they must pump the water upward because it's impossible to suck it upward more than 30 feet from above. Yes, the speed of light. The gravitational interaction between two objects can be viewed as the exchange of particles called "gravitons," just as the electromagnetic interaction between two objects can be viewed as the exchange of particles called "photons." Gravitons and photons are both massless particles and therefore travel at a special speed: the "speed of light." Since light is easier to work with than gravity, people discovered this special speed in the context of light first. If gravity had been easier to work with, they might have named it "the speed of gravity" instead. 
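As a check on the 30-foot figure mentioned above, the limit comes from balancing atmospheric pressure against the weight of the water column it is holding up, h = P/(ρg). A quick sketch:

# Maximum height of a water column that atmospheric pressure can support: h = P / (rho * g).
P = 101_325       # Pa, standard atmospheric pressure
rho = 1000        # kg/m^3, density of water
g = 9.81          # m/s^2, gravitational acceleration

h = P / (rho * g)
print(round(h, 1))            # about 10.3 meters
print(round(h * 3.28, 1))     # about 33.9 feet

In practice the usable limit is a little lower, since the water's own vapor pressure and small leaks eat into it, which is why "about 30 feet" is the figure usually quoted.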
Sometime in the not too distant future, gravity-wave detectors such as the LIGO project will begin to observe gravity waves traveling through space from nearby cosmic events, particularly star collapses. These gravity waves will reach us at essentially the same time as light waves from those events since the gravity and light travel at the same speed. Like any tape recorder, a cassette recorder uses the magnetization of the tape's surface to represent sound. The tape is actually a thin plastic film that's coated with microscopic cigar-shaped permanent magnets. These particles are aligned with the tape's length and can be magnetized in either of two directions—they can have their north magnetic poles pointing in the direction of tape motion or away from that direction. In a blank tape, the particles are magnetized randomly so that there are as many of them magnetized in one direction as the other. In this balanced arrangement, the tape is effectively non-magnetic. But in a recorded tape, the balance is upset and the tape has patches of strong magnetization. These magnetized patches represent sound. When you are recording sound on the tape, the microphone measures the air pressure changes associated with the sound and produces a fluctuating electric current that represents those changes. This current is amplified and used to operate an electromagnet in the recording head. The electromagnet magnetizes the tape—it flips the magnetization of some of those tiny magnetic particles so that the tape becomes effectively magnetized in one direction or the other. The larger the pressure change at the microphone, the more current flows through the electromagnet and the deeper the magnetization penetrates into the tape's surface. After recording, the tape is covered with tiny patches of magnetization, of various depths and directions. These magnetized patches retain the sound information indefinitely. During playback, the tape moves past the playback head. As the magnetic fields from magnetized regions of the tape sweep past the playback head, they cause a fluctuating electric current to flow in that head. The process involved is called electromagnetic induction; a moving or changing magnetic field produces an electric field, which in turn pushes an electric current through a wire. The current from the playback head is amplified and used to operate speakers, which reproduce the original sound. The rest of the cassette recorder is just transport mechanism—wheels and motors that move the tape smoothly and steadily past the recording or playback heads (which are often the same object). There is also an erase head that demagnetizes the tape prior to recording. It's an electromagnet that flips its magnetic field back and forth very rapidly so that it leaves the tiny magnetic particles that pass near it with randomly oriented magnetizations. Although electricity involves the movement of electrically charged particles through conducting materials, it can also be viewed in terms of electromagnetic waves. For example, programs that reach your home through a cable TV line are actually being carried by electromagnetic waves that travel in the cylindrical space between coaxial cable's central wire and the tubular metal shield around it. These waves would travel at the speed of light, except that whenever charged particles in the wires interact with the passing waves, they introduce delays. 
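To put a number on those delays: signals in common coaxial cable travel at a "velocity factor" of roughly two-thirds of the speed of light. The exact figure depends on the cable's insulation; 0.66 is an assumed typical value for solid-polyethylene coax. A quick sketch of what that means for a run of cable:

# Propagation delay in a length of coaxial cable, assuming a velocity factor of 0.66.
c = 3.0e8                  # m/s, speed of light in vacuum
velocity_factor = 0.66     # assumed typical value for solid-polyethylene coax
length = 100.0             # meters of cable (illustrative)

speed = c * velocity_factor
delay = length / speed
print(round(delay * 1e9))  # about 505 nanoseconds to cross 100 m of cable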
The charged particles in the wires don't respond as quickly as empty space does to changes in electric or magnetic fields, so they delay these changes and therefore slow down the waves. The materials that insulate the wires also influence the speed of the electricity by responding slowly to the changing fields. The fastest wires are ones with carefully chosen shapes and almost empty space for insulation. In general, the less the charges in the wire respond to the passing electromagnetic waves, the faster those waves can move. While the designers of low speed planes focus primarily on lift and drag, designers of high speed planes must also consider shock waves—pressure disturbances that fan out in cones from regions where the plane's surface encounters supersonic airflow. The faster a plane goes, the easier it is for the plane's wings to generate enough lift to support it, but the more likelihood there is that some portions of the airflow around the plane will exceed the speed of sound and produce shock waves. Since a transonic or supersonic plane needs only relatively small wings to support itself, the designers concentrate on shock wave control. Sweeping the wings back allows them to avoid some of their own shock waves, increasing their energy efficiencies and avoiding shock wave-induced surface damage to the wings. Slower planes can't use swept wings easily because they don't generate enough lift at low speeds. It seems that quarks are forever trapped inside the particles they comprise—no one has ever seen an isolated quark. But inside one of those particles, the quarks move at tremendous speeds. Their high speeds are a consequence of quantum mechanics and the uncertainty principle—whenever a particle (such as a quark) is confined to a small region of space (i.e. its location is relatively well defined), then its momentum must be extremely uncertain and its speed can be enormous. In fact, a substantial portion of the mass/energy of quark-based particles such as protons and neutrons comes from the kinetic energy of the fast-moving quarks inside them. But despite these high speeds, the quarks never exceed the speed of light. As a massive particle such as a quark approaches the speed of light, its momentum and kinetic energy grow without bounds. For that reason, even if you gave all the energy in the world to a single quark, its speed would still remain just a hair less than the speed of light. Thermal energy is actually bad for permanent magnets, reducing or even destroying their magnetizations. That's because thermal energy is related to randomness and permanent magnetization is related to order. Not surprisingly, cooling a permanent magnet improves its ordering and makes its magnetization stronger (or at least less likely to become weaker with time). At absolute zero, a permanent magnet's magnetic field will be in great shape—assuming that the magnet itself doesn't suffer any mechanical damage during the cooling process. All three of these objects contain solids, liquids, and gases, so I'll begin by describing how pressure affects those three states of matter. Solids and liquids are essentially incompressible, meaning that as the pressure on a solid or a liquid increases, its volume doesn't change very much. Without extraordinary tools, you simply can't squeeze a liter of water or liter-sized block of copper into a half-liter container. Gases, on the other hand, are relatively compressible. 
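Here is a quick sketch of both effects at once: how the pressure grows with depth, and how much a fixed quantity of trapped gas shrinks in response (Boyle's law). The 10 meter depth is just an illustrative example:

# Pressure at depth and the resulting compression of a trapped quantity of gas.
P0 = 101_325      # Pa, atmospheric pressure at the surface
rho = 1025        # kg/m^3, approximate density of seawater
g = 9.81          # m/s^2, gravitational acceleration
depth = 10.0      # meters (illustrative)

P = P0 + rho * g * depth          # total pressure at that depth
volume_ratio = P0 / P             # Boyle's law: volume shrinks in proportion to pressure
print(round(P))                   # about 202,000 Pa, roughly 2 atmospheres
print(round(volume_ratio, 2))     # about 0.5: the gas occupies half its surface volume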
With increasing pressure on it, a certain quantity of gas (as measured by weight) will occupy less and less volume. For example, you can squeeze a closet full of air into a scuba tank. Applying these observations to the three objects, it's clear that the solid and liquid portions of these objects aren't affected very much by the pressure, but the gaseous portions are. In a fish or diver, the gas-filled parts (the swim bladder in a fish and the lungs in a diver) become smaller as the fish or diver goes deeper in the water and is exposed to more pressure. In a submarine, the hull must support the pressure outside so that the pressure of the air inside the submarine doesn't increase. If the pressure did reach the air inside the submarine, that air would occupy less and less volume and the submarine would be crushed. That's why the hull of a submarine must be so strong—it must hide the tremendous water pressure outside the hull from the air inside the hull. Apart from these mechanical effects on the three objects, there is one other interesting effect to consider. Increasing pressure makes gases more soluble in liquids. Thus at greater depths and pressures, the fish and diver can have more gases dissolved in their blood and tissues. Decompression illness, commonly called "the bends", occurs when the pressure on a diver is suddenly reduced by a rapid ascent from great depth. Gases that were soluble in that diver's tissue at the initial high pressure suddenly become less soluble in that diver's tissue at the final low pressure. If the gas comes out of solution inside the diver's tissue, it causes damage and pain.

For very fundamental reasons, the speed of light in vacuum cannot be exceeded. Calling it the "speed of light" is something of a misnomer—it is the fundamental speed at which all massless particles travel. Since light was the first massless particle to be studied in detail, it was the first particle seen to travel at this special speed. While nothing can travel faster than this special speed, it's easy to go slower. In fact, light itself travels more slowly than this when it passes through a material. Whenever light encounters matter, its interactions with the charged particles in that matter delay its movement. For example, light travels only about 2/3 of its vacuum speed while traveling in glass. Because of this slowing of light, it is possible for massive objects to exceed the speed at which light travels through a material. For example, if you send very, very energetic charged particles (such as those from a research accelerator) into matter, those particles may move faster than light can move in that matter. When this happens, the charged particles emit electromagnetic shock waves known as Cherenkov radiation—there is light emitted from each particle as it moves. I suppose that the brochure could have been talking about this light/matter interaction. But since that effect has been observed for decades, there is nothing special about 1995. More likely, the brochure is talking about nonsense.

A bipolar transistor is a sandwich consisting of three layers of doped semiconductor. A pure semiconductor such as silicon or germanium has no mobile electric charges and is effectively an insulator (at least at low temperatures). Doped semiconductor has impurities in it that give the semiconductor some mobile electric charges, either positive or negative. Because it contains mobile charges, doped semiconductor conducts electricity.
Doped semiconductor containing mobile negative charges is called "n-type" and that with mobile positive charges is called "p-type." In a bipolar transistor, the two outer layers of the sandwich are of the same type and the middle layer is of the opposite type. Thus a typical bipolar transistor is an npn sandwich—the two end layers are n-type and the middle layer is p-type. When an npn sandwich is constructed, the two junctions between layers experience a natural charge migration—mobile negative charges spill out of the n-type material on either end and into the p-type material in the middle. This flow of charge creates special "depletion regions" around the physical p-n junctions. In these depletion regions, there are no mobile electric charges any more—the mobile negative and positive charges have cancelled one another out! Because of the two depletion regions, current cannot flow from one end of the sandwich to the other. But if you wire up the npn sandwich—actually an npn bipolar transistor—so that negative charges are injected into one end layer (the "emitter") and positive charges are injected into the middle layer (the "base"), the depletion region between those two layers shrinks and effectively goes away. Current begins to flow through that end of the sandwich, from the base to the emitter. But because the middle layer of the sandwich is very thin, the depletion region between the base and the second end of the sandwich (the "collector") also shrinks. If you wire the collector so that positive charges are injected into it, current will begin to flow through the entire sandwich, from the collector to the emitter. The amount of current flowing from the collector to the emitter is proportional to the amount of current flowing from the base to the emitter. Since a small amount of current flowing from the base to the emitter controls a much larger current flowing from the collector to the emitter, the transistor allows a small current to control a large current. This effect is the basis of electronic amplification—the synthesis of a larger copy of an electrical signal.

A transformer only works with ac current because it relies on changes in a magnetic field. It is the changing magnetic field around the transformer's primary coil of wire that produces the electric field that actually propels current through the transformer's secondary coil of wire. When dc current passes through the primary coil of wire, the coil does have a magnetic field around it, but it doesn't have an electric field around it. The electric field is what pushes electric charges through the secondary coil to transfer power from the primary coil to the secondary coil. In contrast, when ac current passes through that primary coil of wire, the magnetic field around the coil flips back and forth in direction and this changing magnetic field gives rise to an electric field around the coil. It is this electric field that pushes on electrically charged particles—typically electrons—in the secondary coil of wire. These electrons pick up speed and energy as they move around the secondary coil's turns. The more turns these charged particles go through, the more energy they pick up. That's why doubling the turns in a transformer's secondary coil doubles the voltage of the current leaving the secondary coil.

A slot machine is a classic demonstration of rotational inertia. When you pull on the lever, you are exerting a torque (a twist) on the three disks contained inside the machine.
These disks undergo angular acceleration—they begin turning toward you faster and faster as you complete the pull. When you stop pulling on the lever, the lever decouples itself from the disks and they continue to spin because of their rotational inertia alone—they are coasting. However, their bearings aren't very good and they experience frictional torques that gradually slow them down. They eventually stop turning altogether and then an electromechanical system determines whether you have won. Each disk is actually part of a complicated rotary switch and the positions of the three disks determine whether current can flow to various places on an electromechanical counter. That counter controls the release of coins—coins that are dropped one by one into a tray if you win. Sadly, computerized gambling machines are slowly replacing the beautifully engineered electromechanical ones. These new machines are just video games that handle money—they have little of the elegant mechanical and electromechanical physics that makes the real slot machines so interesting. You can tell how far away a lightning flash is by counting the time separating the flash from the thunderclap. Every five seconds is about a mile. The reason that this technique works is that light and sound travel at very different speeds. The light and sound are created simultaneously, but the light travels much faster than the sound. You see the flash almost immediately after it actually occurs, but the thunderclap takes time to reach your ears. You can determine how long it takes sound to travel from the lightning bolt to your ears by counting the seconds between the flash and the thunderclap. Since it takes sound about 5 seconds to travel a mile, you can determine the distance to the lightning bolt in miles by dividing the seconds of sound delay by 5. The atmosphere maintains a natural temperature gradient of about 10° C (which is equivalent to 18° F) per kilometer in dry air and about 6 or 7° C (which is equivalent to about 12° F) per kilometer in moist air. The higher you look in the lower atmosphere, the colder the air is. Because of this gradient, it may be 20° C (68° F) in the valley and 0° C (32° F) at the top of a 2,000 meter high mountain. This temperature gradient has its origin in the physics of gases—when a gas expands and does work on its surroundings, its temperature decreases. To see why this effect is important, imagine that you have a plastic bag that's partially filled with valley air. If you carry this bag up the side of the mountain, you will find that the bag's volume will gradually increase. That's because there will be less and less air overhead as you climb and the pressure that this air exerts on the bag will diminish. With less pressure keeping it small, the air in the bag will expand and the bag will fill up more and more. But for the bag's size to increase, it must push the air around it out of the way. Pushing this air away takes work and energy, and this energy comes from the valley air inside the bag. Since the valley air has only one form of energy it can give up—thermal energy—its temperature decreases as it expands. By the time you reach the top of the mountain, your bag of valley air will have cooled dramatically. If it started at 20° C, its temperature may have dropped to 0° C, cold enough for snow. If you now turn around and walk back down the mountain, the increasing air pressure will gradually squeeze your bag of valley air back down to its original size. 
In doing so, the surrounding air will do work on your valley air, giving it energy, and will increase that air's thermal energy—the valley air will warm up! When you reach the valley, the air in your bag will have returned to its original temperature. Air often rises and falls in the atmosphere and, as it does, it experiences these same changes in temperature. Air cools as it blows up into the mountains (often causing rain to form) and warms as it flows down out of the mountains (producing dry mountain winds). These effects maintain a temperature gradient in the atmosphere that allows snow to remain on mountaintops even when it's relatively warm in the valleys.

The red blood cells in your blood contain large amounts of a complicated and brightly colored molecule known as hemoglobin. This molecule's ability to bind and later release oxygen molecules is what allows blood to carry oxygen efficiently throughout your body. Each hemoglobin molecule contains four heme groups, the iron-containing structures that actually form the reversible bond with oxygen molecules and that also give the hemoglobin its color. However, this color depends on the oxygenation state of the heme group—red when the heme group is binding oxygen and blue-purple when the heme group is alone. That color difference explains why someone who is holding their breath may "turn blue"—their hemoglobin is lacking in oxygen. The clip you wore was analyzing the color of your blood to determine the extent of oxygenation in its hemoglobin. It measured your pulse rate by looking for periodic fluctuations in the opacity of your finger, brought on by changes in your finger's blood content with each heartbeat.

Once lightning strikes you, whether or not you are wearing rubber-soled shoes will make little difference. The voltages involved in lightning are so enormous (hundreds of millions of volts) that the insulating character of rubber soles will be completely overwhelmed. If the electric current can't pass through your rubber soles, it will simply form an electric arc around them or through them. However, I would guess that rubber-soled shoes provide some slight protection against being hit by lightning in the first place. Lightning tends to strike objects that have acquired an electric charge that is opposite that of the cloud overhead. This opposite charge naturally appears on grounded conducting objects because the cloud's charge pulls opposite charges up from the ground and onto the objects. Once this charging has taken place, the object is a prime target for a lightning strike. If you are standing alone and barefoot on the top of a mountain during a thunderstorm, the cloud will draw opposite charge up from the ground through your feet and you will become very highly charged. There are even photographs of people on mountaintops with their hair standing up because of this charging effect. Unfortunately, some of these people were struck by lightning shortly after experiencing this effect. If you ever experience it, run for your life down the mountain! It's possible that wearing rubber-soled shoes will prevent or delay this charging effect, and it might keep you from being struck by lightning. But I sure wouldn't count on it.

When the bottle is sealed, its contents are in equilibrium. In this context, equilibrium means that while carbon dioxide gas molecules are continuously shifting from solution in the water to independence in the gas underneath the cap, there is no net movement of gas molecules between the two places.
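The amount of carbon dioxide the water holds in that equilibrium is roughly proportional to the pressure of carbon dioxide gas above it (Henry's law). A rough sketch, with an assumed bottling pressure and an approximate solubility constant:

# Henry's law estimate: dissolved CO2 is proportional to the CO2 pressure above the liquid.
k_henry = 0.034        # mol/(L*atm) for CO2 in water near room temperature (approximate)
p_bottled = 3.0        # atm of CO2 above the liquid in a sealed bottle (assumed)
p_open = 0.0004        # atm, the tiny partial pressure of CO2 in ordinary air

print(round(k_henry * p_bottled, 3))   # about 0.102 mol/L dissolved while sealed
print(round(k_henry * p_open, 6))      # about 0.000014 mol/L once the water has gone fully "flat"

The enormous gap between the sealed and open equilibria is why an opened bottle keeps shedding gas until it has gone flat.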
Since the company that bottled the water put a great many gas molecules in the bottle, the concentration of dissolved molecules in the water is high and so is the density of molecules in the gas under the cap. This high density of gaseous carbon dioxide molecules under the cap makes the pressure inside the bottle quite high, which is why the bottle's surface is taut and hard. While you can't see it in this unopened bottle, there is activity both at the surface of the water and within the water. At the water's surface, carbon dioxide molecules are constantly leaving the water for the gas under the cap and returning from the gas under the cap to the water. The rates of departure and return are equal, so that nothing happens overall. Within the water, tiny bubbles are also forming occasionally. But these tiny bubbles, which nucleate through random fluctuations within the liquid or more often at defects in the bottle's walls, can't grow. Even though these bubbles contain gaseous carbon dioxide molecules, the molecules aren't dense enough to keep the bubbles from being crushed by the pressurized water. So these tiny bubbles form and collapse without ever becoming noticeable. However, once you remove the top from the bottle, everything changes. The bottle's contents are no longer in equilibrium. To begin with, carbon dioxide molecules that leave the surface of the water are no longer replaced by molecules returning to the liquid. That's one reason why an opened bottle of carbonated water begins to lose its dissolved carbon dioxide and become "flat." Secondly, without its trapped portion of dense carbon dioxide gas, the bottle is no longer pressurized and it stops being taut and hard (assuming that it's made of plastic rather than glass). Thirdly, with the loss of pressure, the water in the bottle stops crushing the tiny gas bubbles that form within it. In fact, once one of those bubbles forms, carbon dioxide molecules can enter it from the liquid just as they enter the gas at the top of the bottle. As a result, each bubble that forms grows larger and larger. Since the gas in a bubble is less dense than water, the bubble begins to float upward until it reaches the top of the bottle. Because the bottle is taller than a typical water glass, a bubble has more time to grow before reaching the top in the bottle than it would have in the glass. That's one reason why the bubbles in a bottle are larger than in a glass. Another reason is that the concentration of dissolved carbon dioxide molecules is higher while the water is in the bottle than it is by the time the water reaches the glass, so that bubbles grow faster in the bottle than in the glass.

An acetylene miner's lamp produces acetylene gas through the reaction of solid calcium carbide with water. An ingenious system allows the production of gas to self-regulate—the gas pressure normally keeps the water away from the calcium carbide so that gas is only generated when the lamp runs short on gas. In contrast, a propane lamp obtains its gas from pressurized liquid propane. Whenever the propane lamp runs short on gas, the falling gas pressure allows more liquid propane to evaporate. Only the propane lamp needs a mantle to produce bright light. That's because the hot gas molecules that are produced by propane combustion aren't very good at radiating their thermal energy as visible light.
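One way to see part of the problem is with Wien's law, which gives the wavelength at which an ideal hot object radiates most strongly. Even at flame temperatures, that peak sits well out in the infrared; the 2,000 kelvin flame temperature below is an assumed round number:

# Wien's displacement law: peak wavelength (in micrometers) = 2898 / temperature (in kelvins).
def peak_wavelength_um(temperature_k):
    return 2898.0 / temperature_k

print(round(peak_wavelength_um(2000), 2))   # about 1.45 micrometers: infrared
print(round(peak_wavelength_um(5800), 2))   # about 0.50 micrometers: visible, like the sun

Visible light runs from roughly 0.4 to 0.7 micrometers, so a flame-temperature source radiates mostly invisible infrared; the mantle's trick, described next, is that it manages to emit more visible light than this simple thermal picture suggests.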
The mantle extracts thermal energy from the passing gas molecules and becomes incandescent—it converts much of its thermal energy into thermal radiation, including visible light. Mantles are actually delicate ceramic structures consisting of metal oxides, including thorium oxide. Thorium is a naturally occurring radioactive element, similar to uranium, and lamp mantles are one of the few unregulated uses of thorium. The light emitted by these oxide mantles is shorter in average wavelength than can be explained simply by the temperature of the burning gases, so it isn't just thermal radiation at the ambient temperature. The mantle's unexpected light emission is called candoluminescence and is thought to involve non-thermal light emitted as the result of chemical reactions and radiative transitions involving the burning gases and the mantle oxides. In contrast, the acetylene miner's lamp works pretty well without a mantle. I think that's because the flame contains lots of tiny carbon particles that act as the mantle and emit an adequate spectrum of yellow thermal radiation. Many of these particles then go on to become soot. A candle flame emits yellow light in the same manner. One last feature of a properly constructed miner's lamp, a safety lamp, is that it can't ignite gases around it even if those gases are present in explosive concentrations. That's because the lamp's flame is surrounded by a fine metal mesh. This mesh draws heat out of any gas within its holes and thus prevents the flame inside the mesh from igniting any gas outside the mesh. No. If you are above the clouds, then the sky above you is free from droplets of condensed moisture. While that doesn't mean that there is no water overhead, that water must be entirely in the form of gaseous water molecules. Since rain forms when droplets of condensed moisture grow large enough to descend rapidly through the air, the absence of any condensed droplets makes it impossible for full raindrops to form. In short, no clouds overhead, no rain. An automatic transmission contains two major components: a fluid coupling that controls the transfer of torque from the engine to the rest of the transmission and a gearbox that controls the mechanical advantage between the engine and the wheels. The fluid coupling resembles two fans with a liquid circulating between them. The engine turns one fan, technically known as an "impeller," and this impeller pushes transmission fluid toward the second impeller. As the liquid flows through the second impeller, it exerts a twist (a "torque") on the impeller. If the car is moving or is allowed to move, this torque will cause the impeller to turn and, with it, the wheels of the car. If, however, the car is stopped and the brake is on, the transmission fluid will flow through the second impeller without effect. Overall, the fluid coupling allows the efficient transfer of power from the engine to the wheels without any direct mechanical linkage that would cause trouble when the car comes to a stop. Between the second impeller and the wheels is a gearbox. The second impeller of the fluid coupling causes several of the gears in this box to turn and they, in turn, cause other gears to turn. Eventually, this system of gears causes the wheels of the car to turn. Along with these gears are several friction plates that can be brought into contact with one another by the transmission to change the relative rotation rates between the second impeller and the car's wheels. 
These changes in relative rotation rate give the car the variable mechanical advantage it needs to be able to both climb steep hills and drive fast on flat roadways. Finally, some cars combine parts of the gearbox with the fluid coupling in what is called a "torque converter." Here the two impellers in the fluid coupling have different shapes so that they naturally turn at different rates. This asymmetric arrangement eliminates the need for some gears in the gearbox itself.

Unfortunately, the answer is no. The atmosphere is too complicated to be described by a simple formula or equation, although you can always fit a formulaic curve to measured pressure values if you make that formula flexible enough. The complications arise largely because of thermodynamic issues: air expands as it moves upward in the atmosphere and this expansion causes the air to cool. As a result of this cooling, the air in the atmosphere doesn't have a uniform temperature and, without a uniform temperature, the air's pressure is difficult to predict. Radiative heating of the greenhouse gases and phase changes in the air moisture content further complicate the atmosphere's temperature profile and consequently its pressure profile. If you want to know the air pressure at a specific altitude, you do best to look it up in a table.

Both of your observations are correct: short wavelength light, such as violet, carries more energy per particle (per "photon") than long wavelength light, such as red, and red light does appear "warmer" than blue light. But the latter observation is one of feelings and psychology, rather than of physics. It is ironic that colors we associate with cold and low thermal energies are actually associated with higher energy light particles than are colors we associate with heat and high thermal energies. First, an electromagnetic wave consists of an electric and a magnetic field. These two fields create one another as they change with time and they travel together through empty space. An electromagnetic wave of this sort carries energy with it because electric and magnetic fields both contain energy. That much was well understood by the end of the 19th century, but something new was discovered at the beginning of the 20th century: an electromagnetic wave cannot carry an arbitrary amount of energy. Instead, it can carry one or more units of energy, units that are commonly called "quanta." An electromagnetic wave that carries only one quantum of energy is called a "photon." The amount of energy that a photon carries depends on the frequency of that photon—the higher the frequency, the more energy. Photons of visible light carry enough energy to induce various changes in atoms and molecules, which is why they provide our eyes with such useful information about the objects around us—we see how this visible light is interacting with the world around us.

Radio waves are a class of electromagnetic waves, specifically the lowest frequency, longest wavelength electromagnetic waves. Actually, the electromagnetic waves used in cellular & PCS transmissions are technically known as microwaves because they have wavelengths of less than 1 meter, but there are no important differences between radio waves and microwaves. Like all electromagnetic waves, radio waves and microwaves consist of coupled electric and magnetic fields that sustain one another in stable structures that move rapidly through empty space.
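That dividing line of "wavelengths of less than a meter" follows directly from wavelength = speed of light / frequency. A quick sketch for representative frequencies; the exact frequencies vary by system, so these are only typical values:

# Wavelength of an electromagnetic wave: lambda = c / f.
c = 3.0e8   # m/s, speed of light

for name, frequency_hz in [("AM radio", 1.0e6), ("cellular", 850e6), ("PCS", 1.9e9)]:
    wavelength = c / frequency_hz
    print(name, round(wavelength, 2), "meters")

The AM signal's 300-meter waves are plainly radio waves, while the sub-meter cellular and PCS waves fall into the microwave category, even though they behave in the same way.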
Because an electromagnetic wave's electric field changes with time, it is able to create the wave's magnetic field and, because its magnetic field changes with time, that magnetic field is able to create the wave's electric field. Since they consist only of electric and magnetic fields, these waves cannot stay still—they must move (although you can trap them between mirrors so that they appear to stand in one place as they bounce back and forth). While they contain no true mass, they do contain energy and an electromagnetic wave carries energy from one place to another. Electromagnetic waves are created whenever electrically charged particles change speed or direction; whenever they accelerate. Since there are accelerating electric charges everywhere—thermal energy keeps them moving about—there are also electromagnetic waves everywhere. But the radio waves used in communications systems are generated deliberately by moving electric charges back and forth. When charges are sent up and down a radio antenna, these charges are accelerating and they form complicated electric and magnetic fields that include electromagnetic waves. Once launched, those electromagnetic waves propagate through space at approximately the speed of light. To send information with radio waves, a transmitter makes modifications in one or more the wave's characteristics. In an amplitude modulation scheme (AM), the transmitter changes the strength or "amplitude" of the wave to convey information—like sending radio smoke signals. In the frequency modulation scheme (FM), the transmitter changes the frequency of the wave to convey information—like whistling a tune with a complicated melody. The VCR Plus codes contain just enough information to tell the VCR what time and day a program starts, what channel that program is on, and how long it will last. What is remarkable about these codes is not that they exist, but that many of them are so short. A long number that contained the complete date, the entire channel number, and the length of the program in minutes would obvious fulfill the requirements, but the actually numbers are never that long. While I don't know the precise encoding scheme, the date is clearly compressed—a daily or weekly program is represented by a very small code—and so is the record time for programs with a common duration. The VCR Plus codes get significantly longer when they must represent one-time only shows and shows with complicated durations. Even then, the date is truncated so that there are no current codes to represent a show five years in the future. The rice cooker turns off when there is no longer enough liquid water on its heating element to keep that element's temperature at the boiling temperature of water (212° F or 100° C). As long as the element is covered with liquid water, it is hard for that element's temperature to rise above water's boiling temperature. That's because as the water boils, all of the thermal energy produced in the heating element is converted very efficiently into chemical potential energy in the resulting steam. In short, boiling water remains at 212° F even as you add lots of thermal energy into it. But as soon as the liquid water is gone (and, fortuitously, the rice is fully cooked), there is nothing left to keep the heating element's temperature from rising. As more electric energy enters the element and becomes thermal energy, the element gets hotter and hotter. 
A thermostat, probably a bimetallic strip like that used in most toasters, senses the sudden temperature rise. It releases a switch that turns off the electric power to the rice cooker. One of the principal observations of thermodynamics (and statistical mechanics, a related field) is that vast, complicated systems naturally evolve from relatively unlikely arrangements to relatively likely arrangements. This trend is driven by the laws of probability and the fact that improbable things don't happen often. Here's an example: consider your sock drawer, which contains 100 each of red and blue socks (it's a large drawer and you really like socks). Suppose you arrange the drawer so that all the red socks are on one side and all the blue socks are on the other. This arrangement is highly improbable—it didn't happen by chance; you caused it to be ordered. If you now turn out the light and randomly exchange socks within the drawer, you're awfully likely to destroy this orderly situation. When you turn the light back on, you will almost certainly have a mixture of red and blue socks on each side of the drawer. You could turn the light back out and try to use chance to return the socks to their original state, but your chances of succeeding are very small. Even though the system you are playing with has only 200 objects in it, the laws of probability are already making it nearly impossible to order it by chance alone. By the time you deal with bulk matter, which contains vast numbers of individual atoms or electrons or bits of energy, chance and the laws of probability dominate everything. Even when you try to impose order on a system, the laws of probability limit your success: there are no perfect crystals, perfectly clean rooms, flawless structures. These objects aren't forbidden by the laws of motion, they are simply too unlikely to ever occur. A fan and a propeller are actually the same thing. Both are rotating wings that push the air in one direction and experience a reaction force in the opposite direction as a result. Each experiences a "lift" force, typically called "thrust," in the direction opposite the airflow. If you put a strong fan on a low-friction cart or a good skateboard, it will accelerate forward as it pushes the air backward. Similarly, if you prevent a propeller plane from moving, its spinning blades will act as powerful fans. Most CD's are made from polycarbonate plastic (though other plastics with the same index of refraction are occasionally used). Polycarbonate is a pretty tough material, so it should survive most common stain or gum removing solvents. Try your favorite solvent on an unimportant CD first; such as one of the free discs that come occasionally in the mail. However, if the stain molecules have diffused into the plastic and have become trapped within the tangle of plastic molecules, you're probably out of luck. Removing such a stain will require wearing away some of the plastic. Since the disc's surface finish must remain smooth and the thickness of the disc shouldn't change much, serious resurfacing is likely to make the disc unplayable. Also, stay away from the printed side of the disc—it has only a thin layer of varnish protecting the delicate aluminum layer from injury. Solvents can wreck this side of the disc. Finally, if the stain is a white mark (or a scratch), you may be able to render the disc clear again by filling the tiny air gaps that make it white with another plastic. 
I'll bet that a clear furniture polish or liquid wax will soak into the white spot, replace the air, and render the disc clear and playable. Apparently there are conditions in which green light from the sun is bent by the atmosphere so that it is visible first as the sun begins to rise above the horizon. Instead of seeing the yellow edge of the sun peaking up from behind the water or land, you see a green edge that lasts a second or two before being replaced by the usual yellow. This green flash is the result of refraction (bending of light) and dispersion (color-dependent light-speed) in air and is discussed in considerable detail at http://www.isc.tamu.edu/~astro/research/sandiego.html. According to the author of that site, Andrew Young, given a low enough horizon, which is the primary consideration, and clear air, which is also important, and a little optical aid, which helps a lot, one can certainly see green flashes at most sunsets. When light passes into a material, it interacts primarily with the negatively charged electrons in that material. Since light consists in part of electric fields and electric fields push on charged particles, light pushes on electrons. If the electrons in a material can't move long distances and can't shift from one quantum state to another as the result of the light forces, then all that will happen to the light as it passes through the material is that it will be delayed and possibly redirected. But if the electrons in the material can move long distance or shift between states, then there is the chance that the light will be absorbed by the material and that the light energy will become some other type of energy inside the material. Which of these possibilities occurs in a particular organic material depends on the precise structure of that material. Carbon atoms can be part of transparent organic materials, such as sugar, or of opaque organic materials, such as asphalt. The carbon atoms and their neighbors determine the behaviors of their electrons and these electrons in turn determine the optical properties of the materials. While most microwave ovens operate at 2.45 GHz, that frequency is not a resonant frequency for the water molecule. In fact, using a frequency that water molecules responded to strongly (as in a resonance) would be a serious mistake—the microwaves would all be absorbed by water molecules at the surface of the food and the center of the food would remain raw. Instead, the 2.45 GHz frequency was chosen because it is absorbed weakly enough in liquid water (not free water molecules) that the waves maintain good strength even deep inside a typical piece of food. Higher frequencies would penetrate less well and cook less evenly. Lower frequencies would penetrate better, but would be absorbed so weakly that they wouldn't cook well. The 2.45 GHz frequency is a reasonable compromise between the two extremes. In three-phase power, the voltages of the three power wires fluctuate up and down cyclically so that they are "120 degrees" apart. By "120 degrees" apart, I mean that each wire reaches its peak voltage at a separate time—first the X wire, then the Y wire, and then the Z wire—with the Y wire reaching its peak 1/3 of the 360 degree cycle (or 120 degrees) after the X wire and the Z wire reaching its peak 1/3 of the 360 degree cycle (or 120 degrees) after the Y wire. 
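To put some numbers on that 120 degree relationship, here is a small sketch (mine, not part of the original answer) written in R. It assumes a 60 Hz system with 120 VAC between each phase and neutral, as in the Wye arrangement described below, and shows that the voltage between any two phases then comes out near 208 VAC:

# Sketch (assumed values): three 60 Hz phases, 120 degrees apart,
# each 120 VAC (RMS) relative to neutral.
t <- seq(0, 1/60, length.out = 1000)     # one full cycle, in seconds
Vpeak <- 120 * sqrt(2)                   # peak voltage of a 120 VAC RMS phase
x <- Vpeak * sin(2*pi*60*t)              # X phase
y <- Vpeak * sin(2*pi*60*t - 2*pi/3)     # Y phase, 1/3 of a cycle later
z <- Vpeak * sin(2*pi*60*t - 4*pi/3)     # Z phase, 2/3 of a cycle later
rms <- function(v) sqrt(mean(v^2))       # root-mean-square of a waveform
round(rms(x - y))                        # voltage between two phases: about 208
round(rms(y - z))                        # same for the other pairs
round(120 * sqrt(3))                     # the sqrt(3) factor gives 208 directly

The factor of sqrt(3) is simply what falls out of subtracting two equal sine waves that are 120 degrees apart, and it is the same arithmetic behind the 208 VAC figures quoted below.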
The specific voltages and their relationships with ground or a possible fourth "neutral" wire depend on the exact type of transformer arrangement that supplies your home or business. In the standard "Delta" arrangement (which you can find discussed at sites dealing with power distribution), the voltage differences between any pair of the three phases is typically 240 VAC. In the standard "Wye" arrangement, the typical voltage difference between any pair of phases is 208 VAC and the voltage difference between any single phase and ground is 120 VAC. And in the "Center-Tapped Grounded Delta" arrangement, the voltage difference between any pair of phases is 240 VAC and the voltage difference between a single phase and neutral is 120, 120, and 208 VAC respectively (yes, the three phases behave differently in this third arrangement). If you run a single-phase 220 VAC motor from two wires of a Delta arrangement power outlet, that motor will receive a little more voltage (240 VAC) than it was designed for and if you run it from two wires of a Wye arrangement outlet, it will receive a little less voltage (208 VAC) than appropriate. Still, the motor will probably run adequately and it's unlikely that you'll ever notice the difference. The magnetic interaction between the stator and the rotor is repulsive—the rotor is pushed around in a circle by the stator's magnetic field; it is not pulled. To see why this is so, imagine unwrapping the curved motor so that instead of having a magnetic field that circles around a circular metal rotor you have a magnet (or magnetic field) that moves along a flat metal plate. As you move this magnet across the plate, it will induce electric currents in that plate and the plate will develop magnetic poles that are reversed from those of the moving magnet-the two will repel one another. That choice of pole orientation is the only one consistent with energy conservation and is recognized formally in "Lenz's Law". For reasons having to do with resistive energy loss and heating, the repulsive forces in front of and behind the moving magnet don't cancel perfectly, leading to a magnetic drag force between the moving magnet and the stationary plate. This drag force tends to push the plate along with the moving magnet. In the induction motor, that same magnetic drag force tends to push the rotor around with the rotating magnetic field of the stator. In all of these cases, the forces involved are repulsive-pushes not pulls. If any current reaching the equipment through the three-phase power cord returns through that same power cord, then the net current in the cord is always exactly zero. Despite the complicated voltage and current relationships between the three power wires, one simple fact remains: the equipment can't store electric charge. As a result, any current that flows toward the equipment must be balanced by a current flowing away from the equipment, and if both flows are in the same power cord, they'll cancel perfectly. Since there is no net current flowing through the power cord, it develops no magnetic field and exhibits no inductive reactance or voltage drop. When a moving magnet generates electricity, it does transfer energy to the electric current. However, that energy comes from either the magnet's kinetic energy (its energy of motion) or from whatever is pushing the magnet forward. The magnet's magnetism is basically unchanged by this process. Nonetheless, a large permanent magnet isn't really permanent. 
The random fluctuations of thermal energy and the influences of passing magnetic fields gradually demagnetize large permanent magnets. However, good permanent magnets demagnetize so slowly that the changes are completely undetectable. You might have to wait a billion years to detect any significant weakening in the magnetic field around such a magnet. The best conventional conductors are silver, copper, gold, and aluminum. What makes them good conductors is that electrons move through them for relatively long distances without colliding with anything that wastes their energy. These materials become better conductors as their purities increase and as their temperatures decrease. A cold, near-perfect crystal is ideal, because all of the atoms are then neatly arranged and nearly motionless, and the electrons can move through them with minimal disruption. However, there is a class of even better conductors: the so-called "superconductors." These materials allow electric current to travel through them will absolutely no loss of energy. The carriers of electric current are no longer simply independent electrons; they are typically pairs of electrons. Still, superconductivity appears because the moving charged particles can no longer suffer collisions that waste their energy-they move with perfect ease. We would be using superconductors everywhere in place of copper or aluminum wires if it weren't for the fact that superconductors only behave that way at low temperatures. As for the best insulators, I'd vote for good crystals of salts like lithium fluoride and sodium chloride (table salt), and covalently-bound substances like aluminum oxide (sapphire) or diamond. All of these materials are pretty nearly perfect insulators. While metal detectors can easily distinguish between ferromagnetic metals such as steel and non-ferromagnetic metals such as aluminum, gold, silver, and copper, it is difficult for them to distinguish between the particular members of those two classes. Ferromagnetic metals are ones that have intrinsic magnetic structure and respond very strongly to outside magnetic fields. The non-ferromagnetic metals have no intrinsic magnetic structure but can be made magnetic when electric currents are driven through them. Good metal detectors produce electromagnetic fields that cause currents to flow through nearby metal objects and then detect the magnetism that results. Unfortunately, identifying what type of non-ferromagnetic metal is responding to a metal detector is hard. Mark Rowan, Chief Engineer at White's Electronics of Sweet Home, Oregon, a manufacturer of consumer metal detecting equipment, notes that their detectors are able to classify non-ferromagnetic metal objects based on the ratio of an object's inductance to its resistivity. They can reliably distinguish between all denominations of U.S. coins—for example, nickels are relatively more resistive than copper and clad coins, and quarters are more inductive than smaller dimes. The primary mechanism they use in these measurements is to look at the phase shift between transmitted and received signals (signals typically at, or slightly above, audio frequencies). However, they are unable to identify objects like gold nuggets where the size, shape, and alloy composition are unknown. No, I don't think that anti-gravity is possible. The interpretation of gravity found in Einstein's General Theory of Relativity is as a curvature of space-time around a concentration of mass/energy. 
That curvature has a specific sign, leading to what can be viewed as an attractive force. There is no mechanism for reversing the sign of the curvature and creating a repulsive force—anti-gravity. I know of only one case, involving a collision between two rapidly spinning black holes, in which two objects repel one another through gravitational effects. But that bizarre case is hardly the anti-gravity that people would hope to find. When carbon dioxide gas (CO2) dissolves in water (H2O), its molecules often cling to water molecules in such a way that they form carbonic acid molecules (H2CO3). Carbonic acid is a weak acid, an acid in which most molecules are completely intact at any given moment. But some of those molecules are dissociated and exist as two dissolved fragments: a negatively charged HCO3- ion and a positively charged H+ ion. The H+ ions are responsible for acidity—the higher their concentration in a solution, the more acidic that solution is. The presence of carbonic acid in carbonated water makes that water acidic—the more carbonated, the more acidic. What you're feeling when you drink a carbonated beverage is the moderate acidity of that beverage "irritating" your throat. A battery uses electrochemical processes to provide power to a current passing it. This statement means that if you send an electric charge through the battery in the normal direction, that charge will emerge from the battery with more energy than it had when it entered the battery. But while it might seem that the number of electric charges passing through the battery each second doesn't matter—that each charge will pick up the usual amount of extra energy during its passage—that's not always the case. To understand this fact, let's look at how charges "pass through" the battery and how they pick up energy. What's really happening is that electrochemical processes are spontaneously separating charges from one another inside the battery and placing those separated charges on the battery's terminals—the battery's negative terminal becomes negatively charged and its positive terminal becomes positively charged. This charge separating process proceeds in a random, statistical manner until enough charges accumulate on the terminals to prevent any further charge separation. Because like charges repel one another, sufficiently large accumulations of positive charges on the positive terminal and negative charges on the negative terminal stop further arrivals of those charges. But when you send a positive charge through a wire and onto the battery's negative terminal, you reduce the amount of negative charge there and weaken the repulsive forces. As a result, the chemicals in the battery separate another pair of charges. The battery's negative terminal returns to normal, but now there is an extra positive charge on the battery's positive terminal. This extra charge flows away through a wire. Overall, it appears that your positive charge "passed through" the battery—entering the battery's negative terminal and emerging from the positive terminal with more energy than it had when it arrived at the negative terminal. But what really happened was that the battery's chemicals separated another pair of charges. In a warm environment, the battery's chemicals can separate charges rapidly and can keep up with reasonably large currents of arriving charges. But in a cold battery, the electrochemical processes slow down and it becomes hard for the battery to keep up. 
If you try to send too much current through the battery while it's cold, it is unable to replace the charges on its terminals quickly enough and it voltage sags—it doesn't have enough separated charges on its terminals to give the charges "passing through" it their full increase in energy. If you use a battery while it's very cold, you should be careful not to send too much current through it because it will become inefficient and will provide less than its usual voltage. You have every reason to be skeptical about this sort of activity. Despite its length, I have included your entire question here because it gives me an opportunity to point out some of the differences between science and pseudo-science. You have written a wonderful survey of some of the quackery that exists in our society and have illustrated beautifully the widespread view that science is fundamentally nothing more than gibberish. I cringe as I read your review of "healing science" because in that description I see science, a field that has been developed with care by people I respect and admire, tossed cavalierly into the gutter by self-important know-nothings who aren't worth a moments notice. That these miserable individuals draw such attention, often at the expense of far more deserving real scientists—or worse, by "standing on the shoulders" of those real scientists—is a tragedy of modern society. It's just dreadful. Let me begin to pick up the pieces by pointing out that terms like "human energy field", "vibrational medicine", and "energy imbalance" are simply meaningless and that the use of "Einstein's Theory" to justify healing-at-a-distance is typical of people who don't have a clue about what science actually is. The meaningless misuse of scientific terms and the uninformed and careless misapplication of scientific techniques is an activity called pseudo-science. Pseudo-science may sound and look like science, but the two have almost nothing else in common. Among the benefits of a good college education is learning how vast is the world of human knowledge, recognizing how little you know of that world, discovering how much others have already thought about everything you can imagine, and finding out how dangerous it is to venture unprepared into any area you do not know well. Most of these pseudo-scientific quacks are either oblivious of their own ignorance or so arrogant that they dismiss the work of others as not worthy of their attention. Either way, they make terrible students and, consequently, useless teachers. You'll do best to leave their books on the shelves. Because real science is not buzzwords, simply stringing together the words of science does not make one a scientist. Science is an intense, self-reflective, skeptical, objective investigative process in which we try to form conceptual models for the universe and its contents, and try to test those models against the universe itself. We do this modeling and testing over and over again, improving and perfecting the models and discarding or modifying models that do not appear consistent with actual observations. Accurate models are valuable because they have predictive power—you can tell in advance how something will behave if you have modeled it correctly. In the course of these scientific investigations, concepts arise which deserve names and so we assign names to them. In that manner, words such as "energy" and "vibration" have entered our language. Each such word has a very specific meaning and applies only in a specific context. 
Thus the word "force" was assigned to the concept we commonly refer to as a "push" or a "pull" and applies in the context of interactions between objects. The expression "the force be with you" has nothing to do with physics—the word "force" in that phrase doesn't mean a push or a pull and has nothing to do with the interactions between objects. As you can see, taken out of its applicable context and used carelessly in another usually renders a scientific word completely meaningless. Alas, the average person doesn't understand science, doesn't speak its language, and cannot distinguish the correct use of the language of science from the meaningless gibberish of pseudo-science. As anyone who has spent time exploring the web ought to have discovered, highly polished prose and graphics is no guarantee of intelligent content. That's certainly true of what appears to be scientific material. I am further saddened to see that even the titles of academia are deemed fair game by the quacks. While the physics term "energy" and the biological word "medicine" can appear together in a sentence about cancer treatment or medical imaging, that's not what the person claiming to have a Ph.D. in "Energy Medicine" has in mind. That degree was probably granted by a group that understands neither physics nor medicine. There may be a place for non-traditional medicine because medicine is not an exact science—there is often more than one correct answer in medicine and there are poorly understood issues in medicine even at fairly basic levels. However, physics is an exact science, with mechanical predictability (within the limitations of quantum mechanics) and only one truly correct answer to each question. Its self-consistent and quantitative nature leaves physics with no room for conflicting explanations. Like most academic physicists, I occasionally receive self-published books and manuscripts from people claiming to have discovered an entirely new physics that is far superior to the current one. And like most academic physicists, I flip briefly through these unreviewed documents and then, with a moment's sadness that the authors have wasted so much time, effort, and money, I toss them into the recycling bin. It's not that we scientists are close minded medieval keepers of the dogma, it's that these "new physics" offerings are the works of ignorant people who don't know w
http://www.howeverythingworks.org/complete.html
The following tips and information focus on how to optimize aerodynamics. Depending on class rules, these suggestions may or may not be valid. Always check your regulations. Aerodynamics Design Tips A simple definition of aerodynamics is the study of the flow of air around and through a vehicle, primarily if it is in motion. To understand this flow, you can visualize a car moving through the air. As we all know, it takes some energy to move the car through the air, and this energy is used to overcome a force called Drag. Drag, in vehicle aerodynamics, is comprised primarily of two forces. Frontal pressure is caused by the air attempting to flow around the front of the car. As millions of air molecules approach the front grill of the car, they begin to compress, and in doing so raise the air pressure in front of the car. At the same time, the air molecules traveling along the sides of the car are at atmospheric pressure, a lower pressure compared to the molecules at the front of the car. Just like an air tank, if the valve to the lower pressure atmosphere outside the tank is opened, the air molecules will naturally flow to the lower pressure area, eventually equalizing the pressure inside and outside the tank. The same rules apply to cars. The compressed molecules of air naturally seek a way out of the high pressure zone in front of the car, and they find it around the sides, top and bottom of the car. See the diagram below. Rear vacuum (a non-technical term, but very descriptive) is caused by the "hole" left in the air as the car passes through it. To visualize this, imagine a bus driving down a road. The blocky shape of the bus punches a big hole in the air, with the air rushing around the body, as mentioned above. At speeds above a crawl, the space directly behind the bus is "empty" or like a vacuum. This empty area is a result of the air molecules not being able to fill the hole as quickly as the bus can make it. The air molecules attempt to fill in to this area, but the bus is always one step ahead, and as a result, a continuous vacuum sucks in the opposite direction of the bus. This inability to fill the hole left by the bus is technically called Flow detachment. See the diagram below. Flow detachment applies only to the "rear vacuum" portion of the drag equation, and it is really about giving the air molecules time to follow the contours of a car's bodywork, and to fill the hole left by the vehicle, it's tires, it's suspension and protrusions (ie. mirrors, roll bars). If you have witnessed the Le Mans race cars, you will have seen how the tails of these cars tend to extend well back of the rear wheels, and narrow when viewed from the side or top. This extra bodywork allows the air molecules to converge back into the vacuum smoothly along the body into the hole left by the car's cockpit, and front area, instead of having to suddenly fill a large empty space. The reason keeping flow attachment is so important is that the force created by the vacuum far exceeds that created by frontal pressure, and this can be attributed to the Turbulence created by the detachment. Turbulence generally affects the "rear vacuum" portion of the drag equation, but if we look at a protrusion from the race car such as a mirror, we see a compounding effect. For instance, the air flow detaches from the flat side of the mirror, which of course faces toward the back of the car. The turbulence created by this detachment can then affect the air flow to parts of the car which lie behind the mirror. 
Intake ducts, for instance, function best when the air entering them flows smoothly. Therefore, the entire length of the car really needs to be optimized (within reason) to provide the least amount of turbulence at high speed. See diagram below (light green indicates a vacuum-type area behind the mirror):

Lift (or Down Force)

One term very often heard in race car circles is Down force. Down force is the same as the lift experienced by airplane wings, only it acts to press down, instead of lifting up. Every object traveling through air creates either a lifting or down force situation. Race cars, of course, use things like inverted wings to force the car down onto the track, increasing traction. The average street car, however, tends to create lift. This is because the car body shape itself generates a low pressure area above itself. How does a car generate this low pressure area? According to Bernoulli, the man who defined the basic rules of fluid dynamics, for a given volume of air, the higher the speed the air molecules are traveling, the lower the pressure becomes. Likewise, for a given volume of air, the lower the speed of the air molecules, the higher the pressure becomes. This of course only applies to air in motion across a still body, or to a vehicle in motion moving through still air. When we discussed Frontal Pressure, above, we said that the air pressure was high as the air rammed into the front grill of the car. What is really happening is that the air slows down as it approaches the front of the car, and as a result more molecules are packed into a smaller space. Once the air stagnates at the point in front of the car, it seeks a lower pressure area, such as the sides, top and bottom of the car. Now, as the air flows over the hood of the car, it loses pressure, but when it reaches the windscreen, it again comes up against a barrier, and briefly reaches a higher pressure. The lower pressure area above the hood of the car creates a small lifting force that acts upon the area of the hood (sort of like trying to suck the hood off the car). The higher pressure area in front of the windscreen creates a small (or not so small) down force, akin to pressing down on the windscreen. Where most road cars get into trouble is the fact that there is a large surface area on top of the car's roof. As the higher pressure air in front of the windscreen travels over the windscreen, it accelerates, causing the pressure to drop. This lower pressure literally lifts on the car's roof as the air passes over it. Worse still, once the air makes its way to the rear window, the notch created by the window dropping down to the trunk leaves a vacuum, or low pressure space, that the air is not able to fill properly. The flow is said to detach, and the resulting lower pressure creates lift that then acts upon the surface area of the trunk. This can be seen in old 1950s racing sedans, where the driver would feel the car becoming "light" in the rear when traveling at high speeds. See the diagram below. Not to be forgotten, the underside of the car is also responsible for creating lift or down force. If a car's front end is lower than the rear end, then the widening gap between the underside and the road creates a vacuum, or low pressure area, and therefore "suction" that equates to down force. The lower front of the car effectively restricts the air flow under the car.
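To put rough numbers on this speed-pressure trade-off, here is a small sketch (not from the original article) using Bernoulli's relation for steady, incompressible flow, p + 0.5*rho*v^2 = constant along a streamline. The air density, speeds, and roof area below are illustrative assumptions only:

# Sketch with assumed numbers: where the air moves faster, its pressure is lower,
# since p1 + 0.5*rho*v1^2 = p2 + 0.5*rho*v2^2 along a streamline.
rho    <- 1.2      # air density in kg/m^3 (roughly sea level)
v_car  <- 30       # free-stream speed in m/s (about 108 km/h)
v_roof <- 36       # assumed faster flow over the roof, in m/s
dp <- 0.5 * rho * (v_roof^2 - v_car^2)   # pressure drop above the roof, in pascals
dp                                       # about 238 Pa of "suction"
dp * 2                                   # over 2 m^2 of roof: roughly 475 newtons of lift

Even a modest speed difference over the roof therefore adds up to a noticeable lifting force, which is why the down force aids discussed later in this article matter.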
So, as you can see, the airflow over a car is filled with high and low pressure areas, the sum of which indicates whether the car body naturally creates lift or down force. The shape of a car, as the aerodynamic theory above suggests, is largely responsible for how much drag the car has. Ideally, the car should:
- Have a small grill, to minimize frontal pressure.
- Have minimal ground clearance below the grill, to minimize air flow under the car.
- Have a steeply raked windshield to avoid pressure build up in front.
- Have a "Fastback" style rear window and deck, to permit the air flow to stay attached.
- Have a converging "Tail" to keep the air flow attached.
- Have a slightly raked underside, to create low pressure under the car, in concert with the fact that the minimal ground clearance mentioned above allows even less air flow under the car.
If it sounds like we've just described a sports car, you're right. In truth though, to be ideal, a car body would be shaped like a tear drop, as even the best sports cars experience some flow detachment. However, tear drop shapes are not conducive to the area where a car operates, and that is close to the ground. Airplanes don't have this limitation, and therefore teardrop shapes work. What all these "ideal" attributes stack up to is called the Drag coefficient (Cd). The best road cars today manage a Cd of about 0.28. Formula 1 cars, with their wings and open wheels (a massive drag component), manage a minimum of about 0.75. If we consider that a flat plate has a Cd of about 1.0, an F1 car really seems inefficient, but what an F1 car lacks in aerodynamic drag efficiency, it makes up for in down force and horsepower. Drag coefficient, by itself, is only useful in determining how "Slippery" a vehicle is. To understand the full picture, we need to take into account the frontal area of the vehicle. One of those new aerodynamic semi-trailer trucks may have a relatively low Cd, but when looked at directly from the front of the truck, you realize just how big the Frontal Area really is. It is by combining the Cd with the Frontal Area that we arrive at the actual drag induced by the vehicle. (A small worked example of this combination appears at the end of these tips.) Scoops, or positive pressure intakes, are useful when high volume air flow is desirable, and almost every type of race car makes use of these devices. They work on the principle that the air flow compresses inside an "air box" when subjected to a constant flow of air. The air box has an opening that permits an adequate volume of air to enter, and the expanding air box itself slows the air flow to increase the pressure inside the box. See the diagram below: NACA stands for "National Advisory Committee for Aeronautics". NACA is one of the predecessors of NASA. In the early days of aircraft design, NACA would mathematically define airfoils (example: NACA 071) and publish them in references, from which aircraft manufacturers would get specific applications. The purpose of a NACA duct is to increase the flowrate of air through it while not disturbing the boundary layer. When the cross-sectional flow area of the duct is increased, you decrease the static pressure and make the duct into a vacuum cleaner, but without the drag effects of a plain scoop. The reason why the duct is narrow, then suddenly widens in a graceful arc, is to increase the cross-sectional area slowly so that the airflow does not separate and cause turbulence (and drag). NACA ducts are useful when air needs to be drawn into an area which isn't exposed to the direct air flow the scoop has access to.
Quite often you will see NACA ducts along the sides of a car. The NACA duct takes advantage of the Boundary layer, a layer of slow moving air that "clings" to the bodywork of the car, especially where the bodywork flattens, or does not accelerate or decelerate the air flow. Areas like the roof and side body panels are good examples. The longer the roof or body panels, the thicker the layer becomes (a source of drag that grows as the layer thickens too). Anyway, the NACA duct scavenges this slower moving air by means of a specially shaped intake. The intake shape, shown below, drops in toward the inside of the bodywork, and this draws the slow moving air into the opening at the end of the NACA duct. Vortices are also generated by the "walls" of the duct shape, aiding in the scavenging. The shape and depth change of the duct are critical for good performance. Typical uses for NACA ducts include engine air intakes and cooling. Spoilers are used primarily on sedan-type race cars. They act like barriers to air flow, in order to build up higher air pressure in front of the spoiler. This is useful because, as mentioned previously, a sedan car tends to become "light" in the rear end as the low pressure area above the trunk lifts the rear end of the car. See the diagram below: Front air dams are also a form of spoiler, only their purpose is to restrict the air flow from going under the car. Probably the most popular form of aerodynamic aid is the wing. Wings perform very efficiently, generating lots of down force for a small penalty in drag. Spoilers are not nearly as efficient, but because of their practicality and simplicity, spoilers are used a lot on sedans. The wing works by differentiating pressure on the top and bottom surface of the wing. As mentioned previously, the higher the speed of a given volume of air, the lower the pressure of that air, and vice-versa. What a wing does is make the air passing under it travel a larger distance than the air passing over it (in race car applications). Because air molecules approaching the leading edge of the wing are forced to separate, some going over the top of the wing and some going under the bottom, they are forced to travel differing distances in order to "meet up" again at the trailing edge of the wing. This is part of Bernoulli's theory. What happens is that the lower pressure area under the wing allows the higher pressure area above the wing to "push" down on the wing, and hence the car it's mounted to. See the diagram below: Wings, by their design, require that there be no obstruction between the bottom of the wing and the road surface for them to be most effective, so mounting a wing above a trunk lid limits its effectiveness. Some general design tips follow.
- Cover Open Wheels. Open wheels create a great deal of drag and air flow turbulence, similar to the diagram of the mirror above. Full coverage bodywork is probably the best solution, if legal by regulations, but if partial bodywork is permitted, placing a converging fairing behind the wheel provides much of the same benefit.
- Minimize Frontal Area. It's no coincidence that Formula 1 cars are very narrow. It is usually much easier to reduce FA (frontal area) than the Cd (Drag coefficient), and top speed and acceleration will be that much better.
- Converge Bodywork Slowly. Bodywork which quickly converges or is simply truncated forces the air flow into turbulence and generates a great deal of drag. As mentioned above, it also can affect aerodynamic devices and bodywork further behind on the car body.
- Use Spoilers.
Spoilers are widely used on sedan type cars such as NASCAR stock cars. These aerodynamic aids produce down force by creating a "dam" at the rear lip of the trunk. This dam works in a similar fashion to the windshield, only it creates higher pressure in the area above the trunk.
- Use Wings. Wings are the inverted version of what you find on aircraft. They work very efficiently, and in less aggressive forms generate more down force than drag, so they are loved in many racing circles. Wings are not generally seen in concert with spoilers, as they both occupy similar locations and defeat each other's purpose.
- Use Front Air Dams. Air dams at the front of the car restrict the flow of air reaching the underside of the car. This creates a lower pressure area under the car, effectively providing down force.
- Use Aerodynamics to Assist Car Operation. Using car bodywork to direct airflow into side pods, for instance, permits more efficient (i.e. smaller FA) side pods. Quite often, with some forethought, you can gain an advantage over a competitor by these small dual purpose techniques. Another useful technique is to use the natural high and low pressure areas created by the bodywork to perform functions. For instance, Mercedes, back in the 1950s, placed radiator outlets in the low pressure zone behind the driver. The air inlet pressure which fed the radiator became less critical, as the low pressure outlet area literally sucked air through the radiator. A useful high pressure area is in front of the car, and to make full use of this area, the nose of the car is often slanted downward. This allows the higher air pressure to push down on the nose of the car, increasing grip. It also has the advantage of permitting greater driver visibility.
- Keep Protrusions Away From The Bodywork. The smooth airflow achieved by proper bodywork design can be messed up quite easily if a protrusion such as a mirror is too close to it. Many people will design very aerodynamic mounts for the mirror, but will fail to place the mirror itself far enough from the bodywork.
- Rake the Chassis. The chassis, as mentioned in the aerodynamics theory section above, can sit slightly lower to the ground in the front than in the rear. The lower "Nose" of the car reduces the volume of air able to pass under the car, and the higher "Tail" of the car creates a vacuum effect which lowers the air pressure.
- Cover Exposed Wishbones. Exposed wishbones (on open wheel cars) are usually made from circular steel tube, to save cost. However, these circular tubes generate turbulence. It would be much better to use oval tubing, or a tube fairing that creates an oval shape over top of the round tubing.
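As promised above, here is a small worked example (not from the original article) of combining the drag coefficient with the frontal area to get an actual drag force, using the standard relation drag = 0.5 * air density * speed^2 * Cd * frontal area. The speeds, areas, and the truck's Cd are assumed purely for illustration:

# Sketch with assumed values: drag force from Cd and frontal area.
drag_force <- function(v, Cd, A, rho = 1.2) {
  0.5 * rho * v^2 * Cd * A         # v in m/s, A in m^2, result in newtons
}
v <- 40                            # about 144 km/h
drag_force(v, Cd = 0.28, A = 2.0)  # sleek road car: roughly 540 N
drag_force(v, Cd = 0.75, A = 1.5)  # open-wheel racer: roughly 1080 N
drag_force(v, Cd = 0.60, A = 9.0)  # semi-trailer truck: roughly 5180 N

The truck's Cd is not terrible, but its huge frontal area dominates the result, which is exactly the article's point that Cd alone does not tell the whole story.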
http://up22.com/Aerodynamics.htm
A level plot is a type of graph that is used to display a surface in two rather than three dimensions – the surface is viewed from above, as if we were looking straight down – and is an alternative to a contour plot. Geographic data is an example of where this type of graph would be used. A contour plot uses lines to identify regions of different heights and the level plot uses coloured regions to produce a similar effect. To illustrate this type of graph we will consider some surface elevation data that is available in the geoR package. The data set in this package is called elevation and stores the elevation height in feet (as multiples of ten feet) for a grid region of x and y coordinates (recorded as multiples of 50 feet). To access this data we load the geoR package and then use the data function:

library(geoR)
data(elevation)

For some packages we need the call to the data function to make a set of data available for our use. The elevation object is not a data frame so our first step is to create our own data frame to be used to create the level plots using the different graphics packages.

elevation.df = data.frame(x = 50 * elevation$coords[,"x"], y = 50 * elevation$coords[,"y"], z = 10 * elevation$data)

We extract the x and y grid coordinates and the height values, multiplying them by 50 and 10 respectively to convert to feet for the graphs. Rather than trying to plot the individual values we need to create a surface to cover the whole grid region as the points themselves are too sparse. We make use of the loess function to fit a local polynomial trend surface (using weighted least squares) to approximate the elevation across the whole region. The function call for a local quadratic surface is shown below:

elevation.loess = loess(z ~ x*y, data = elevation.df, degree = 2, span = 0.25)

The next stage is to extract heights from this fitted surface at regular intervals across the whole grid region of interest – which runs from 10 to 300 feet in both the x and y directions. The expand.grid function creates an array of all combinations of the x and y values that we specify in a list. We choose values every foot from 10 to 300 feet to create a fine grid:

elevation.fit = expand.grid(list(x = seq(10, 300, 1), y = seq(10, 300, 1)))

The predict function is then used to estimate the surface height at all of these combinations of x and y coordinates covering our grid region. This is saved as an object z which will be used by the base graphics function:

z = predict(elevation.loess, newdata = elevation.fit)

The lattice and ggplot2 packages expect the data in a different format, so we make use of the as.numeric function to convert from a table of heights to a single column and append it to the object we created based on all combinations of x and y coordinates:

elevation.fit$Height = as.numeric(z)

The data is now in a format that can be used to create the level plots in the various packages. The function image in the base graphics package is the function we use to create a level plot. This function requires a list of x and y values that cover the grid of vertical values that will be used to create the surface. These heights are specified as a table of values, which in our case was saved as the object z during the calculations on the local trend surface. The text for the axis labels is specified by the xlab and ylab function arguments and the main argument determines the overall title for the graph.
The function call below creates the level plot:

image(seq(10, 300, 1), seq(10, 300, 1), z, xlab = "X Coordinate (feet)", ylab = "Y Coordinate (feet)", main = "Surface elevation data")
box()

After the image function is used we call the box function, mainly for aesthetic purposes, to ensure there is a line surrounding the level plot. The graph that is created is shown below: The default colour scheme used by the base graphics produces an attractive level plot graph where we can easily see the variation in height across the grid region. It is basically a fancy version of a contour plot where the regions between the contour lines are coloured with different shades indicating the height in those regions. The lattice graphics package provides a function levelplot for this type of graphical display. We use the data stored in the object elevation.fit to create the graph with lattice graphics.

levelplot(Height ~ x*y, data = elevation.fit, xlab = "X Coordinate (feet)", ylab = "Y Coordinate (feet)", main = "Surface elevation data", col.regions = terrain.colors(100))

The formula is used to specify which variables to use for the three axes and a data frame where the values are stored – as there are three dimensions, it is the z axis that is specified on the left hand side of the formula. The axis labels and title are specified in the same way as for the base graphics. The range of colours used in the lattice level plot can be specified as a vector of colours passed to the col.regions argument of the function. We make use of the terrain.colors function to create this vector, which is a range of 100 colours that are less striking than those used above with the base graphics. The level plot that we get is shown here: This is in general similar to the base graphics display but the actual plot region is a different shape, which makes things look slightly different. The ggplot2 package also provides facilities for creating a level plot, making use of the tile geom to create the desired graph. The function ggplot forms the basis of the graph and various other options are used to customise the graph:

ggplot(elevation.fit, aes(x, y, fill = Height)) + geom_tile() +
  xlab("X Coordinate (feet)") + ylab("Y Coordinate (feet)") +
  opts(title = "Surface elevation data") +
  scale_fill_gradient(limits = c(7000, 10000), low = "black", high = "white") +
  scale_x_continuous(expand = c(0,0)) + scale_y_continuous(expand = c(0,0))

The large number of options added to the graph changes various settings. The choice of colours for the heights used on the graph is selected by the scale_fill_gradient function, with colours ranging from black to white. The scale_x_continuous and scale_y_continuous options are used to stretch the tiles to cover the whole grid region, covering up the default gray background – this makes the graph more visually appealing. The graph that is produced is shown here: The graph from ggplot2 is visually as impressive as the other graphs – there is more smoothing between the colours, which blurs some of the lines seen on the other graphs, because of the type of colour gradient that was selected. This blog post is summarised in a pdf leaflet on the Supplementary Material page.
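For convenience, here is the base graphics part of the workflow gathered into a single script (this consolidation is an addition to the original post and assumes the geoR package is installed):

# Complete base graphics workflow from this post, collected in one place.
library(geoR)                        # provides the elevation data set
data(elevation)

# Convert the grid coordinates and heights to feet.
elevation.df = data.frame(x = 50 * elevation$coords[, "x"],
                          y = 50 * elevation$coords[, "y"],
                          z = 10 * elevation$data)

# Fit a local quadratic trend surface and predict it on a fine grid.
elevation.loess = loess(z ~ x * y, data = elevation.df, degree = 2, span = 0.25)
elevation.fit = expand.grid(list(x = seq(10, 300, 1), y = seq(10, 300, 1)))
z = predict(elevation.loess, newdata = elevation.fit)

# Draw the level plot with a box around it.
image(seq(10, 300, 1), seq(10, 300, 1), z,
      xlab = "X Coordinate (feet)", ylab = "Y Coordinate (feet)",
      main = "Surface elevation data")
box()

Running this from a clean session should reproduce the base graphics level plot described above.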
http://www.wekaleamstudios.co.uk/posts/displaying-data-using-level-plots/
This section contains a brief overview of digital electronics. It has enough information for you to complete the labs for this course, but is not meant to be all inclusive. If you need more information there are many books in the library that cover digital logic. Up until now the labs have dealt with electricity in its analog form, where a quantity is described by the amount of voltage, or current, or charge... expressed as a real number. However a large proportion of electronic equipment, including computers, uses digital electronics, where the quantities (usually voltage) are described by two states: on and off. These two states can also be represented by true and false, or 1 and 0, and in most physical systems are represented by the voltages 5V and 0V, or something close to that. While the restriction to two states seems limiting, it makes many things easier because problems due to noise are minimized. It is generally very easy to reliably distinguish between logic 1 and logic 0.

Since many quantities cannot be represented by two states, more than one binary digit can be used to represent a number. For example the number 25 (twenty five, base 10) can be represented by the binary number 11001 (base 2). It is easy to convert back and forth from binary to decimal by remembering that each digit in a binary number simply corresponds to a power of 2, just as every digit in a decimal number corresponds to a power of 10. Using the previous example:

decimal places: 10^1 10^0, with values (10) (1), digits 2 5
binary places: 2^4 2^3 2^2 2^1 2^0, with values (16) (8) (4) (2) (1), digits 1 1 0 0 1
2*10 + 5*1 = 1*16 + 1*8 + 0*4 + 0*2 + 1*1 = 25

In general an n digit binary number can represent numbers from 0 to 2^n - 1. For instance a byte is 8 bits and can represent numbers from 0 to 255 (2^8 - 1).

Another advantage of digital electronics is the ability to express signals in terms of logic equations using standard terms from logic: and, or, and not. These functions can be represented by truth tables as shown below, with A and B as inputs and C as output.

and            or             not
A B | C        A B | C        A | C
0 0 | 0        0 0 | 0        0 | 1
0 1 | 0        0 1 | 1        1 | 0
1 0 | 0        1 0 | 1
1 1 | 1        1 1 | 1

These logic functions can be represented using a shorthand notation: and is represented by "." or "&", or is represented by "+" or "#", and not is represented by "~" or "!" (there are also other conventions; the most common is to put a bar over the variable). Thus the equation "D equals A and B, or not C" can be represented as

D = A.B + !C    or by    D = A & B # !C

Obviously this equation has different meanings depending on whether the and or the or function is performed first, and parentheses can be used in the normal way to get rid of the ambiguity:

D = (A.B) + !C

Other functions that are common are nand and nor. The nand function is an and function followed by a not; nor is an or function followed by a not. The symbols used in schematics for these functions are given below. Logic equations, like any other, can get complicated quickly. To simplify logic equations a system called Boolean algebra (after the mathematician George Boole) was developed. A short selection of its theorems is listed below.

(1) A.0 = 0      (6) A+1 = 1              (11) (A.B).C = A.(B.C)
(2) A.1 = A      (7) A+!A = 1             (12) A.(B+C) = A.B + A.C
(3) A.!A = 0     (8) A+A = A              (13) A+(B.C) = (A+B).(A+C)
(4) A+0 = A      (9) A+B = B+A            (14) !(A+B) = !A.!B
(5) A.A = A      (10) (A+B)+C = A+(B+C)   (15) !(A.B) = !A + !B

Some of these rules are quite obvious. For example if we write out the truth table for rule (1) we get:

A | A.0
0 | 0
1 | 0

Some of the other rules are not so obvious.
For example rule 14 yields the truth table shown below:

A B | !A !B | A+B | !(A+B) | !A.!B
0 0 |  1  1 |  0  |   1    |   1
0 1 |  1  0 |  1  |   0    |   0
1 0 |  0  1 |  1  |   0    |   0
1 1 |  0  0 |  1  |   0    |   0

The truth table shows that rule 14, !(A+B) = !A.!B, is correct: the last two columns agree for every input. These theorems can be used to simplify equations. For example if we start off with the expression D = (A.B + (A + C.!C)).A + B, we can apply the rules in turn to simplify it.

D = (A.B + (A + C.!C)).A + B    apply 3 (and 4)
D = (A.B + A).A + B             apply 2
D = (A.B + A.1).A + B           apply 12
D = (A.(B + 1)).A + B           apply 6 (and 2)
D = A.A + B                     apply 5
D = A + B

As with the algebra you learned in elementary school, this kind of simplification gets tedious, and messy, quickly. Luckily there is a graphical shortcut to doing logic minimizations called a Karnaugh map. This introduction will only cover Karnaugh maps with up to four variables, though the technique can be generalized to larger systems - such systems are usually simplified using computers. Consider the equation given above, shown here in the form of a truth table and a three variable Karnaugh map: One way to get a solution is simply to write an expression for each true result in the table. For example, the lower-leftmost true result from the Karnaugh map represents the case where A=1 and B=0 and C=0, and it can be written as A.!B.!C. The true result next to it can be written as A.!B.C. Now to develop a logic expression, we would just or together all of these terms. Thus our result (including an expression for each true term -- 6 in all) is:

!A.B.!C + !A.B.C + A.!B.!C + A.!B.C + A.B.!C + A.B.C

This is called the sum-of-products form. Although this expression is correct, it is also unwieldy. We could use the theorems of Boolean algebra to simplify the expression, which is often difficult and does not guarantee a best solution. However we can use a visual technique based on Karnaugh maps to develop a minimal sum-of-products solution. To get the simplified equation one takes the table and encircles as many 1's as possible in rectangular groups that have 1, 2, 4, or 8 elements (i.e., any power of 2). The idea is to make the groupings as large as possible. For the example above this can be accomplished with 2 groupings: If you examine these groupings carefully you can see that the red grouping has no dependence on the variables A or C, and is totally described by the statement B=1. The blue group on the other hand has no dependence on B or C and is described by the statement A=1. Therefore to include the elements from both groups we can use the equation A+B. If you had used smaller groups you would have obtained an equivalent, though more complicated, expression. Try it. This graphical method is clearly easier than the technique used earlier that employed algebraic simplifications. You should examine the map shown above and convince yourself that any grouping of a single element needs all three variables to describe it. For instance the uppermost "1" on the right hand side is described by !A.B.C. A grouping of two gets rid of the dependence on one of the variables (the two rightmost ones have no dependence on A and are given by B.C). A group of four, as you have seen, depends only on one variable. Therefore by choosing the smallest number of groups (i.e., the largest groups), you will come up with the minimal equation describing those groups. The result obtained with the Karnaugh map is called the minimal sum of products form because it generates the smallest number of product (anded) terms.
Also, if you look at the table again you can convince yourself that it is possible to "wrap around" the ends of the table, as shown. The two groups are represented by B.C (the red group) and the blue group that wraps around by A.C. This technique also works for two variables (trivial), four variables (shown below), and even more (though this gets complicated and will not be described here). A typical four-variable map and its groupings are shown here, which simplifies to: A.B.D.C + !A.B.D + !A.C + B.C. Prove it to yourself.

(Figure captions: "Months in school" truth table and Karnaugh map; alternate solution (others possible).)

The digital logic described thus far is called combinatorial logic because the output depends solely upon the presently existing combination of the inputs; past values of the inputs are not important. Sequential logic deals with the issue of time dependence and can get much more complicated than combinatorial logic -- much in the same way that differential equations are more difficult than algebraic equations. The fundamental building block of sequential circuits is the flip-flop (which flips and flops between different logic states), of which there are several. The simplest flip-flop is the R-S, or Set-Reset, flip-flop, which is made up of two gates. Note that flip-flops are often quite complicated at the gate level and are frequently represented by a "black box" with inputs and outputs, as shown at right.

Let's see how this device operates by examining the four possible inputs. If S,R = 1,0 then Q=1 and !Q=0; therefore S=1 sets Q. If S,R = 0,1 then !Q=1 and Q=0; that is, R=1 resets Q. If S,R = 1,1 then Q=0 and !Q=0, a result that doesn't seem to make sense, but we will deal with this soon. If S,R = 0,0 then you can convince yourself that the outputs are indeterminate -- that is, you cannot figure out what they are. This is where the time dependence of the circuit is important: if S,R goes from the 1,0 state to the 0,0 state then Q=1 and !Q=0; if S,R goes from 0,1 to 0,0 then Q=0 and !Q=1; but if S,R goes from 1,1 to 0,0 then Q and !Q are still indeterminate, and so we will call 1,1 the disallowed state and design our circuit so this state is not used. The 0,0 state is called the hold state. The truth table for this circuit is given below. Note that the variables have a subscript associated with them. This subscript denotes time, so that if S,R = 0,0 at time n then the output Q retains the value it had at time n-1.

There are several other types of flip-flops, but the most popular are the D and J-K flip-flops, which have the truth tables and circuit symbols shown below. Note that these flip-flops have another input called the "clock." The transition from time n-1 to time n occurs at some transition of the clock; usually when the clock goes from low to high (rising edge) or from high to low (falling edge), but it is sometimes level sensitive (for instance, the output may do what is required by the input while the clock is high and hold the last value when the clock is low). In addition to the clock there are sometimes reset inputs to clear the output to logic 0, and preset inputs to set the output to logic 1.

To further understand these devices consider the circuit shown below (actually a simplified portion of an integrated circuit, the 7493). Note: the inverted outputs of the flip-flops aren't shown since we aren't using them.
Assume that the flip-flops are falling-edge triggered and that the outputs are initially all 0. Then, if a series of clock pulses is fed into the Clock input, you should be able to convince yourself that the output is given by the timing diagram shown. Thus Qa is half the frequency of the clock, Qb half that of Qa, and Qc half that of Qb, making Qc 1/8 the frequency of the clock. Also, if you use Qa through Qc to represent a number (Qa = least significant bit), then the output of this circuit cycles repetitively through eight states representing the numbers 0₁₀ to 7₁₀ (0₂ to 111₂). Therefore this circuit is known as a divide-by-eight counter. The actual 7493 is a little more complicated and includes a reset signal so that you can reset the counter to 0 at any time. We will use counters in the next lab. Thus ends the introduction to digital logic.
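If you want to check the counting sequence without wiring up the circuit, the following minimal Python sketch (not part of the original handout; the function name and printout are my own, and ideal zero-delay flip-flops are assumed) chains three falling-edge-triggered toggle flip-flops in the way described above.

```python
# Minimal sketch of the divide-by-eight ripple counter described above,
# assuming ideal falling-edge-triggered toggle flip-flops (J-K with J=K=1).
# Each flip-flop toggles when its clock input falls from 1 to 0; Qa is
# clocked by the external clock, Qb by Qa, and Qc by Qb.

def ripple_counter(num_clock_pulses=16):
    Qa = Qb = Qc = 0
    print("pulse  Qc Qb Qa  value")
    for pulse in range(num_clock_pulses):
        # A full clock pulse ends with a falling edge, which toggles Qa.
        old_Qa, Qa = Qa, Qa ^ 1
        if old_Qa == 1 and Qa == 0:        # falling edge on Qa toggles Qb
            old_Qb, Qb = Qb, Qb ^ 1
            if old_Qb == 1 and Qb == 0:    # falling edge on Qb toggles Qc
                Qc ^= 1
        value = Qc * 4 + Qb * 2 + Qa       # Qa is the least significant bit
        print(f"{pulse + 1:5d}   {Qc}  {Qb}  {Qa}   {value}")

ripple_counter()
```

Running it prints the values 1 through 7 and then 0, repeating every eight clock pulses, with Qa toggling twice as often as Qb and four times as often as Qc.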
http://www.swarthmore.edu/NatSci/echeeve1/Ref/Digital/DigitalIntro.html
Labor Movement in the United States Jewish American women have played a central role in the American labor movement since the beginning of the twentieth century. As women, they brought to trade unions their sensibilities about the organizing process and encouraged labor to support government regulation to protect women in the workforce. As Jews who emerged from a left-wing cultural tradition, they nurtured a commitment to social justice, which would develop into what is often called “social unionism.” From their position as an ethnic and religious minority, as well as from their position as women, they helped to shape the direction of the mainstream labor movement. Jewish mass immigration reached the United States just as the ready-made clothing industry hit its stride at the turn of the century—a circumstance that provided men and women with unprecedented incentives to unionize. In the Old Country, where jobs were scarce, daughters were married off as fast as possible. But many immigrant women had learned to sew in the workshops of Russian and Polish towns, and in America, where families counted on their contributions, they were expected to work. Girls sometimes immigrated as teenagers, seeking out an uncle or older sister who might help them to find a first job so that some part of their wages could be sent back to Europe. The wages of other young women helped to pay the rent, to buy food and clothing, to bring relatives to America, and to keep brothers in school. When they married, young women normally stopped working in the garment shops. But, much as in the Old Country, they were still expected to contribute to family income, sometimes by taking in clothing to sew at home. In early twentieth-century New York, Philadelphia, Boston, and other large cities, only the exceptional unmarried woman did not operate a sewing machine in a garment factory for part of her young adult life. Factory or sweatshop work before marriage and the expectation of some form of paid labor afterward fostered a continuing set of ties to the garment industry for this first generation of urban Jewish American women, encouraging a community of understanding around its conditions and continuing support for the ever-changing stream of workers who entered it. Women who worked long hours at extraordinarily low wages, in unsanitary and unsafe working conditions, and faced continual harassment to boot, found succor in their communities and benefited from a class-conscious background. To be sure, competitive individualism and the desire to make it in America could hinder unionization efforts. But a well-developed ethic of social justice played an equally important role, producing perhaps the most politically aware of all immigrant groups. Socialist newspapers predominated in New York’s Yiddish-speaking Lower East Side. Jews were well represented in the Socialist Party after 1901. Though their unions were weak, Jews were among the best organized of semiskilled immigrants. In the immigrant enclaves of America’s large cities, as in Europe, women benefited from the shared sense that women worked for their families, as they absorbed much of their community’s concern for social justice. This did not deflect the desire of working women to get out of the shops, but it did contribute to the urge to make life in them better. Still, even under these circumstances, it could not have been easy for a woman to become politically active. A number of small unions dotted New York’s Lower East Side at the turn of the century. 
There, where the boss was almost always an immigrant like oneself and sometimes a relative, and where shops were small and vulnerable, unions sprang up and flowered, or withered, as trade picked up and declined. Organized largely by the International Ladies Garment Workers Union (ILGWU) and members of a socialist umbrella organization called the United Hebrew Trades, they reflected the weakness of a labor movement rooted in an industry of poorly paid workers and undercapitalized ventures. The men in these unions did not, at first, welcome the women, placing them in a dilemma. In order to improve their working lives, incipient activists found themselves choosing between a conservative trade union movement hostile to women in the workforce and a women's movement whose members did not work for wages. A young and inexperienced Rose Schneiderman, for example, was sent away by the cap-makers' union to solicit the signatures of twenty-five of her coworkers before the union would acknowledge them or provide aid. Her friend Pauline Newman recalled that when she and her friends "organized a group, we immediately called the union so that they would take the members in." Despite the early devotion of women—despite the fact that women were often baited, beaten, and arrested on picket lines—the ILGWU insisted that women, destined for marriage, were unorganizable. Even those who managed to enter were badly treated. During the 1905 cap-makers' strike, for example, married men got strike benefits but even women who were supporting widowed mothers and young siblings received nothing. Discouraged by their union brothers, recognizing their issues as different from those of male workers, women turned to other women for help with their work-related problems. They developed solidarity and loyalty with women workers from other industries and areas, sometimes striking when their sisters were attacked and resisting separation from one another when they were jailed. They also sought and received help from the Women's Trade Union League, an organization that brought together wealthy upper-middle-class women with women wage earners. To be sure, solidarity was limited by class as well as by ethnic divisions. Jewish women thought they were superior unionists and often treated "American-born" women, as well as Polish and Italian colleagues, as suspiciously as they themselves were treated. But, isolated as they were from the mainstream of the labor movement and divided from other working women who came from less class-conscious backgrounds, Jewish women had little choice but to accept financial help and moral support from wherever it came. There was inevitable tension here. Jewish women had been nurtured in the cradle of socialism. For them, alliances with other women were largely ways of achieving a more just society. Even their middle-class friends viewed labor organization among women as a way of transcending class lines in the service of feminist interests. Early organizers like Newman, Schneiderman, and Clara Lemlich [Shavelson] spent enormous amounts of time and energy reconciling the seemingly divergent interests of the male-dominated skilled labor movement, working women, and middle-class allies. A major strike of shirtwaist makers, in the winter of 1909–1910, dramatically altered the position of women and their role in the trade union movement.
Popularly called the Uprising of the 20,000, the strike began when a tiny local, consisting of about 100 Jewish shirtwaist makers, veterans of a series of small but brutal strikes for better working conditions, called a meeting of workers in the industry. Thousands of women turned out for the November 22 meeting, surprising the reluctant leaders of the ILGWU into supporting a general strike. The event and its aftermath turned out to have significance not only for Jewish women but also for the entire trade union movement. The young—often teenage—garment workers who walked the picket lines in the cold winter months that followed provided their own organizational strategies, solicited funds, and set goals alien to men. They introduced into the movement qualities of idealism, self-sacrifice, and commitment that appeased skeptical male leaders and stirred what they called “spirit” into a revitalized movement. Refusing to settle for small increases in wages, and even for shorter hours (the bread-and-butter issues that traditional craft unions thought crucial), women demanded union recognition. In the end, most settled for less, but in the process they inserted into the negotiations a range of issues that translated into dignity, honor, and justice. The strikers insistently emphasized the discrepancy between the starvation wages and harsh working conditions they endured on the one hand, and their ability to fulfill widespread expectations of future marriage and motherhood on the other. Pressured and unsanitary conditions, they argued, yielded little room for personal needs or safety. Malnourished and exhausted, they could hardly protect their virtue, much less the health and stamina they would require to nurture the next generation. These arguments won the grudging respect of a hostile public and energized young women whose resistance to unionization often came from their expectation of marriage. Moral outrage, not economic pressure, became their trump card, and public support, indignation, and protest was their path to victory. Relying on moral suasion created bonds among women that courageous and innovative organizers quickly reinforced with new organizing strategies. Women organizers urged individual women who would remain only briefly in the labor force to fight for the women who would follow them. Fired by idealism, women did not hesitate to put themselves in positions where they forced the authorities to violate convention, as when they were beaten by police or thrown into jail with prostitutes. They welcomed the efforts of their more affluent allies to publicize brutal incidents by joining them on picket lines or sponsoring public meetings. By such extreme measures, women dramatized the injustices of their daily treatment in the shop and factory. Addressing a nation committed to the rhetoric of chivalry, motherhood, and the idealization of the weaker sex, they demonstrated the brutal treatment that daily violated ideas about womanhood inside as well as outside the workplace. These tactics worked. In the big New York strike of 1909–1910, and in a wave of garment workers’ strikes that followed, women members became the heart of a revivified union movement. Unionized women introduced into their new unions energy and ideas alien to those of male craft unions that aimed to raise wages by restricting the labor supply. 
They brought notions of self-sacrifice for the future, a recognition of women’s particular needs, and special attention to sanitation and cleanliness, as well as traditional demands for higher wages and shorter hours. They wanted time to attend to family and personal needs: to launder, to cook, to help out at home. They wanted sufficient wages to dress decently and contribute to family support. Above all, they wanted education, opportunities to learn not just about trade union matters but about the world around them. To foster loyalty and solidarity, they pushed for a broader vision of unionism, one that incorporated what union officials called “spirit” but that contemporaries sometimes called “soul.” As Fannia Cohn put the issue, “I do not see how we can get girls to sacrifice themselves unless we discuss something besides trade matters . . . there must be something more than the economic question, there must be idealism.” The “girls local”—Local 25 of the ILGWU—sponsored dances, concerts, lectures, and entertainments. They suffered the derision of male colleagues who asked, “What do the girls know? Instead of a union, they want to dance.” But the women persisted. In 1916, Fannia Cohn persuaded the ILGWU to create the first education department in an international union, and soon, it organized a network of “unity centers” in public schools across New York City. These centers offered night-school classes in literature, music, economics, and public policy. Local 25 also bought an old country house that they turned into a vacation retreat for members, spawning a network of such houses by other locals until the International bought them out in 1921. These innovations, inspired and fought for by women, became the basis of a new social unionism, which garment unions sustained in the threatening atmosphere of the 1920s, which flourished in the 1930s, and which has sparked the imagination of the trade union movement since. The pattern established by the women of the ILGWU was expanded and developed by women who worked in the men’s clothing trades. The history of their union can be traced back to a 1910 protest by a small group of Jewish women who worked at the big Hart, Schaffner, and Marx factory in Chicago. After several years of trying to work effectively with more established unions, this group, led by Bessie Abramowitz [Hillman], formed their own union, the Amalgamated Clothing Workers of America (ACWA). Abramowitz later married Sidney Hillman, who became the new union’s president. By the 1920s, workers in the men’s clothing industry were a heterogeneous group, and women in the Amalgamated faced the challenging task of convincing potential members of the values of social unionism. Under Hillman’s leadership, and supported by an active group of Jewish women who turned the union’s women’s bureau into a lively discussion center, the Amalgamated pioneered such social programs as unemployment insurance for seasonal workers, vacations with pay, and retirement pensions. What unions could not do, or could do only for their members, Jewish women in the labor movement encouraged others to do for them through protective labor laws. In a period when the labor movement as a whole remained suspicious of government intervention, and the passage of laws on behalf of workers was largely supported by middle-class reformers, Jewish women trade unionists helped to educate the labor movement in the value of legislation. 
After the devastating Triangle Shirtwaist Fire in 1911, they turned to the state for protective laws that would establish industrial safety standards and regulate sanitary conditions. They joined with middle-class men and women in the National Consumers’ League to agitate for legislation that would restrict the numbers of hours women could work; place a floor under their wages; establish standards of cleanliness in workrooms and factories; mandate clean drinking water, washrooms, dressing rooms, and toilets; and provide seats and ventilation at work. Insisting that the quality of women’s lives was as important as the wages they earned, Jewish women, like Rose Schneiderman, served on state-level investigative commissions and offered testimony that pressured state legislatures into providing new ground rules for women at work. In her capacity as president of the New York Women’s Trade Union League (WTUL), Schneiderman not only played an important role in achieving a fifty-four-hour work week for women but also provided an important bridge to help craft unions accept the principle of protective labor legislation for women only. Arguably, at least, these women opened the eyes of male unionists to the positive aspects of government regulation. By the 1920s, probably 40 percent of all unionized women in the country were garment workers—most of them Jews in the ILGWU and the Amalgamated Clothing Workers of America. The garment industry remained the only place where Jewish women were organized in large numbers. Yet the male leadership persistently discouraged women’s efforts to expand their voices within unions. Women were recruited, sometimes reluctantly, as dues-paying members, tolerated as shop-level leaders, and occasionally advanced to become business agents and local officers. Only rarely did women of exceptional promise, like Fannia Cohn, Dorothy Jacobs Bellanca, and Rose Pesotta, reach the status of international officers. Where they could have fostered harmony, cooperation, and a sense of belonging, the garment unions instead mistrusted their female members, creating friction, resentment, and defensiveness among them, reducing their value, and undermining their ability to do good work. As far as the male leaders were concerned, women remained outsiders who threatened to undermine the wages of men and for whom labor legislation, rather than unionization, increasingly seemed the appropriate strategy. As if this were not enough, the social unionist strategies that women had initiated and deeply valued became implicated in the strife-filled politics of the early 1920s. Deeply divided by efforts of the newly formed Communist Party to seek power inside unions, male leaders identified women’s demands for education and democratic participation as threats, seeing a fundamental conflict between the search for “soul” in the union and their own restrictive solidarity around wage issues. Rather than risk the possibility that the women might ally with Communist Party supporters, they chose to eliminate or take over most of the women’s programs, including their vacation houses, unity centers, and educational initiatives. Women simply dropped out, leaving their unions in droves. A last-ditch effort to restore morale in the ACWA by reestablishing a women’s bureau failed to stanch the flow. Even as the Jewish labor movement was being wracked by political divisions in the early 1920s, Jewish women began to move outside its confines in order to provide benefits for women workers. 
Schneiderman increasingly turned to the WTUL, which she urged into the movement for protective labor legislation and which she persuaded to found a school for women workers. Cohn, distressed at new union limits on the broad educational opportunities that had so inspired women, turned to a more ambitious program of labor education. In 1921, she borrowed money to establish Brookwood Labor College, where Rose Pesotta was an early student. Brookwood inspired another experiment: the Bryn Mawr Summer School for Women Workers, which became the first of a series of schools to provide residential programs for women, many of them garment workers. Its faculty included economist Theresa Wolfson. Some of these schools, like Brookwood, were open to workers of both sexes, but in the 1920s, women remained their major clientele. In all these experiments, women in the rank and file of unions, especially Jewish women, remained among the most ardent supporters of workers’ education. They kept the flame alive until, in the 1930s, a revivified labor movement once again began to sponsor its own coeducational schools. The Depression of the 1930s restored union activism, giving voice to women’s earlier demands for community within unions and institutionalizing many of their visions in the social programs of the New Deal. Not surprisingly, Sidney Hillman and the Amalgamated Clothing Workers, with its heavily female membership, played a key role in extending social unionism into the national legislative arena. Like the ILGWU, the Amalgamated had developed a health-care program for its own members in the 1920s. The Amalgamated also initiated old-age pensions and unemployment insurance. But these had limited scope and rarely covered any but long-term workers. Under the pressure of the Depression, Hillman, abetted by his wife, Bessie Abramowitz, and vice president Dorothy Jacobs Bellanca, agitated for and helped to draft bills for unemployment compensation and fair labor standards. He also played a key role in the development of the Congress of Industrial Organizations (CIO), which, in the 1930s, promised to harbor the residues of the social unionist tradition. Jewish women unionists reaped some rewards in these days: tied to Frances Perkins and Eleanor Roosevelt through the New York Women’s Trade Union League, they entered the federal bureaucracy, where they worked to soften the edges of the new legislative agenda. Under the impetus of a dramatic upswing in female membership sparked by industrywide strikes in 1933, the trade union movement regained some of its vigorous support for the cultural and recreational programs it had earlier abandoned. Drawing on its earlier tradition, unions sponsored gym classes, athletic teams, dramatic clubs, choral groups, orchestras, and, of course, educational opportunities of all kinds. The ILGWU’s long-running Broadway hit, Pins and Needles, testifies to the value of these programs, both for developing the morale of members and for reaching out into the community to expand understanding and support. Still, the union movement remained uncomfortable with the leadership style of women. Neither in the garment industry nor in the other unions that Jewish women now began to join were women welcomed as leaders. As the only female vice president of the ILGWU in the 1930s, Rose Pesotta adapted the unorthodox female style that had started a generation earlier to become perhaps its most successful organizer. 
She created pleasant headquarters, threw parties for members and potential members, and devised style shows, festivals, and dances to attract women to the union. In Los Angeles, she constructed a picket line of women in gowns to embarrass factory owners attending a charity event. Before Easter, she demanded and got special strike allowances for mothers to buy holiday clothes for their children. Nobody questioned her success: She became the union's most important troubleshooter, traveling far afield, to places like Puerto Rico, Seattle, Akron, Boston, and Montreal to resolve outstanding disputes. But neither the leaders of the ILGWU nor those of other unions understood these tactics. David Dubinsky regularly berated Pesotta for extravagance and mistrusted techniques that appealed to women's interest in human welfare as much as their interest in wages. These issues emerged in other arenas where women began to organize in the heyday of the Depression. Jews had begun to enter school teaching, social work, retail sales, and office work in the 1920s. Though by no means dominant in any of these arenas, Jewish women played prominent roles in the organizational battles of all of them in the 1930s. Sometimes the daughters of unionists and often the heirs of the Jewish tradition of social justice, these women were drawn to the humane vision then offered by the Communist Party. Inevitably, this created conflicts with the traditional bread-and-butter image of many male and non-Jewish unionists. Ann Prosten, who in the 1930s organized office staff for the United Office and Professional Workers Association, spoke for many Jewish union women when she said, "the basic problems of women will not be solved until we have socialism." The battles among schoolteachers for union control illustrate the point. Although there were ephemeral teachers' unions even before World War I, professional identification prevented large numbers, even of Jewish women, from joining. Nor were Jewish women encouraged by the fact that the largely native-born leadership of the two national unions, the American Federation of Teachers (AFT) and the National Education Association, remained wedded to narrow strategies for improving the status of the teaching profession. Still, by the mid-1920s, in large cities like New York and Chicago, where the administrative and teaching staffs of public schools remained predominantly Irish Catholic, the majority of local union members were Jews, most of them women. By 1925, Jewish women made up half of the New York Teachers' Union executive board. The onset of the Depression coincided with the entry of the children of immigrants into the field, and, inspired by the spirit of the times, more and more teachers sought the protection of unionism. Jewish women rapidly became the majority of union activists in every major city. Like Rebecca Coolman Simonson, who came from a family of trade unionists and socialists, and who presided over the noncommunist Teachers' Guild, which split from the Teachers' Union in 1935, these women carried their traditions with them. But to be a Jewish unionist in an occupation still largely non-Jewish was to be associated with left-wing policies. From the beginning, unionists refused to hew a narrow line, fighting not only for higher salaries but also for a range of socioeconomic programs. In the schools, they wanted better facilities, relief from overcrowding, and child-welfare provisions like free cooked lunches.
But they also advocated policies to aid unemployed teachers, to democratize the schools, and to support progressive education. To pay for these programs, they demanded federal and state aid to schools. Organized teachers opposed efforts to remove married women from the classroom. When fascism became an overriding issue, they attempted to stymie racial and religious bigotry by stepping up a long-standing campaign for academic and religious freedom. If women's vision of social justice seemed to find a natural ally in the communist vision of the 1930s, the alliance did little to enhance women's voices. The leadership of the teachers' unions remained male. Lillian Herstein failed to take over an important Chicago teachers' local in 1937. In New York, women members opted to support the communist leaders of Teachers' Union, Local 5, of the American Federation of Teachers, believing that radical ideas harbored the best hope for opening up the union movement, as well as for social change. But more conservative AFT leaders could not tolerate dissent and retaliated by expelling the communist-led locals. Jewish women would not play a major role in the AFT again until the 1960s, when they slowly reentered union leadership positions. By then, their influence was once again important in the local arena where women like New York's Sandra Feldman rose through the ranks to become president first of New York City's United Federation of Teachers in 1986 and then of its parent body, the AFT, in 1997. Nor did Jewish women have much influence on the CIO, which in its early days espoused the rhetoric of radical change. Its leadership remained relentlessly male. A tiny proportion of delegates to its first convention were women, and by 1946, it could boast only twenty women among its six hundred convention delegates. Those who did attend reflected the diversity of the two million women who had joined union ranks in the 1930s. They were Jewish and Catholic, white ethnic and African American. Among them was Ruth Young, a member of the executive board of the United Electrical Workers, and daughter-in-law of Clara Lemlich, who had sparked the Uprising of the 20,000. Women's enhanced loyalty to communism undermined long-standing alliances with women's groups, and particularly with the Women's Trade Union League. Under Rose Schneiderman's leadership, the WTUL straddled a delicate line between its traditional support for the AFL and its sympathy for the goals of the new CIO. But neither Schneiderman nor the WTUL's leaders could tolerate communist influence, and here the WTUL drew a line in the sand. Schneiderman alienated many women (Jews and non-Jews) when she refused to help the militant female members of a United Electrical, Radio and Machine Workers local in a bitter 1937 strike. And many more, like Boston's Rose Norwood, abandoned her as she passively watched the AFL bully local leagues into expelling members from CIO unions. Norwood resigned from the leadership of the Boston WTUL to work for the CIO. By the end of World War II, three million women workers belonged to trade unions, and half a million more would join union ranks in the two years that followed. Among union members, Jewish women now formed a tiny minority, their presence minimal even in the unions they had once deeply influenced. Most new recruits joined unions out of necessity, not because they chose to do so.
And though, on paper, every national union admitted women, a sex-segregated labor market ensured that most would have precious few women members. Under these circumstances, the role of Jewish women became less distinctive. As leadership remained either firmly tied to men or passed to a second generation of women, many of them organized in the 1930s, the battleground shifted. Both inside and outside unions, women joined together to extend the limited coverage of New Deal legislation beyond traditional manufacturing sectors to the domestic workers, public sector workers, retail clerks, and part-time employees who had been left out. They fought now for equal pay for equal work and to push back the barriers of occupational segregation. Younger women in the labor movement began to protest the restrictions imposed by the protective labor legislation that Schneiderman had supported all her life. They wanted to guarantee equality with an Equal Rights Amendment. Schneiderman vacillated, finally compromising on night work. Her old friend Pauline Newman refused to budge. Newman’s partner Frieda Segelke Miller pushed for an equal pay bill. Within unions, the influence of Jewish women eroded as they were expelled or marginalized for their earlier communist sympathies. Some, like Betty Friedan, a staff writer for the United Electrical Workers Union, quit to raise a family when their left-leaning unions came under attack. Yet many, like Friedan herself, abandoned union activism only to find other ways of organizing women. A younger generation of women unionists, including many African Americans, replaced them to battle for an equality that Jewish women had only imagined. Their struggle, begun in the 1950s, prefigured the feminist movement of the late 1960s and 1970s. Often, they allied themselves with the incipient civil rights movement in an effort to draw attention to race as well as to sex discrimination. Organizing efforts turned to the public sector: Teachers, nurses, and clerical workers were their new targets. The legacy of Jewish women remained merely an echo. Two events symbolized the change. The first was the death of the Women’s Trade Union League in 1950. The league had been the vehicle of partnership between Jewish immigrant women and the trade union movement. It provided a place where women could develop their skills, draw support and sustenance, and find responses to creative ideas. It served to remind men that women had special needs and interests that the labor movement would need to fill if it were to retain their loyalty. Yet it also maintained the legislative pressure that enabled men to marginalize women in the movement. The second was the conflict that erupted in 1968 in the Ocean Hill–Brownsville section of New York City. Jewish teachers were faced with a largely African-American school district, which had been authorized to modify its public schools to conform to its own sense of priorities. African-American community representatives decided to replace some of the teachers with new recruits of their own choosing. The teachers, mostly Jewish women led by a Jewish male, found themselves protecting their union rights against the community rights of African Americans. In the bitter struggle that followed, one irony was lost on the participants. 
The descendants of the Jewish women who had fought to incorporate their vision of justice and dignity in a reluctant union movement half a century earlier were now using a trade union to prevent another disadvantaged group from fostering its own sense of dignity and community.
http://jwa.org/encyclopedia/article/labor-movement-in-united-states
Bayesian refers to any method of analysis that relies on Bayes' equation. Developed by Thomas Bayes, the equation assigns a probability to a hypothesis directly - as opposed to a normal frequentist statistical approach, which can only return the probability of a set of data (evidence) given a hypothesis. In order to translate the probability of data given a hypothesis into the probability of a hypothesis given the data, it is necessary to use prior probability and background information. Bayesian approaches essentially attempt to link known background information with incoming evidence to assign probabilities. Since the goal of science is to determine the probability of hypotheses, Bayesian approaches to analysis are a much more direct link to what we really want to know. They also help to elucidate the assumptions that go into scientific reasoning and skepticism. There is also increasing evidence that the human brain uses something very similar to Bayesian inference.

Bayes and probability

Probability of the hypothesis versus the data

The main focus of probability theory is assigning a probability to a statement. However, probabilities cannot be assigned in isolation. Probabilities are always assigned relative to some other statement. The sentence "the probability of winning the lottery is 1 in 100 million" is actually a fairly meaningless sentence. For example, if I never buy a lottery ticket my probability is significantly different from that of someone who buys 10 every week. A meaningful "sentence" in probability theory must be constructed with both the statement we seek to assign a probability to and the background information used to assign that probability. The essential form is "the probability of x given y is z." The probability-calculus shorthand for this sentence is P(x | y) = z.

When we seek answers to a question or understanding of a phenomenon, we usually start by forming a hypothesis; data is then collected that seeks to provide information about the reality of that hypothesis. Ultimately we seek to know: what is the probability that our hypothesis is correct? In this case our background information is the relevant data we have collected. So in probability speak we seek to know what P(h | d) is equal to, where h is our hypothesis and d is our data.

Things start to get a little complicated from here. Let's imagine we are asking a simple question, like "is this coin I have fairly weighted?" We hypothesize that if it is fairly weighted, flipping it should result in equal numbers of heads and tails. We flip it 10 times and come up with 6 heads and 4 tails. With this information, can we answer the question of what P(h | d) equals? The answer is that we cannot. The only question we can answer is what P(d | h) is equal to. This is a subtle but very important point. Given only a hypothesis and some relevant data, we can only ever answer how probable our data is given our hypothesis, not the other way around. These two pieces of information are not equal. To see why, let's form some probability sentences with everyday concepts and see what happens when we reverse them:

- The probability that it is cloudy outside given that it is raining does not equal the probability that it is raining given it is cloudy outside.
- The probability that someone is drunk given they consumed 10 beers is not equal to the probability that someone consumed 10 beers given that they are drunk.
- The probability that the Antichrist is coming given that it's in the Bible does not equal the probability that the Antichrist is in the Bible given that he is coming.

So given that P(d | h) does not equal P(h | d), what sorts of answers can we get about P(d | h)? This is the classic approach to statistics that has come to be called the frequentist approach.

Frequentist approaches

The frequentist approach is the standard statistical model taught in most high schools, colleges and graduate school programs. It seeks to find the answer to what P(d | h) equals. Based on our example of a coin being flipped, the frequentist essentially asks: if one did a whole lot of sets of 10 flips, how often would one get a distribution of 6 heads and 4 tails? The answer of course depends on whether the coin is fairly weighted or not. The frequentist will usually ask how likely that distribution is to appear if the coin is weighted fairly and if it is not weighted fairly. Since being weighted fairly means that the result of heads or tails is essentially random, that question can be generalized to asking what the distribution is if we assume our results are random. This is known as the null hypothesis and is the "omnibus" test for frequentist statisticians. You can take any data distribution and ask: what are the chances of this data distribution appearing given it is caused at random? If there is a high chance that it can appear, then we say that it appeared at random. If there is a low chance it appeared, we say that something had to cause it. Usually some sort of percentage cut-off is used, the standard being 5 percent, meaning that there must be less than a 1/20 chance of forming a given distribution assuming random cause before we are willing to say there must be a non-random cause. This is called "significance". There are many complicated statistical procedures that can be used to further differentiate causes beyond just "random" or "non-random", but all of them rest on the same basic idea of providing some sort of arbitrary statistical cut-off where we assume something is unlikely enough that it has to be something else.

This, however, is not really the case. As previously stated, we cannot actually assign a probability to our hypothesis. If our data only has a 1 percent chance of appearing if the cause is random, this does not mean that there is only a 1 percent chance that the hypothesis that our cause is random is true, or that there is a 99 percent chance that our data is caused by something non-random. The frequentist approach, while providing valuable information and hinting at the relationships between hypotheses, cannot tell us the probability of a hypothesis being true. To understand why this is the case, and to understand how we can do this, we must turn to Bayesian inference.

Bayes' Equation

We have generated our data, run all the stats on it, and come up with the encouraging result that our data should only appear 1 percent of the time if it's randomly caused. Why then are we not safe in assuming, at the very least, that it is not randomly caused and that our hypothesis is more likely than random chance? The answer to this question has to do with the prior probabilities of each hypothesis, or in Bayesian parlance just "priors." Let's illustrate this with a little story: You're walking down the road when you hear a whisper in the alleyway calling you over. Curious, you enter. A stranger is standing against the wall. He tells you that he can predict any series of numbers that will be chosen by man or machine.
This includes tonight's lottery numbers, and he is more than willing to tell you what they will be in exchange for $1000 in cash. That would certainly be a good deal... if his story is true. You, however, are skeptical of his claim for many obvious reasons and ask him to prove it. The man agrees and asks you to pick a number between 1 and 5; you do so, and seconds later he tells you the exact number you picked. Would you hand over the thousand dollars now? Most people would not, for such a feat is not that impressive. Let's instead say he told you to choose a number between 1 and 100 and guessed it exactly. While this is more intriguing, it is probably not worth the $1000. What about 1 in 1,000, 1 in 100,000, 1 in 1,000,000, 1 in 100,000,000? Eventually we will reach a point where we are convinced enough to turn over our money.

Now let's look at a different story. You enter a novelty shop at the local mall and spot a package of dice whose label tells you they are weighted to roll a 6 every time. Intrigued, you open a package and roll a die. Sure enough, it comes up 6. Are you willing to say that these dice are probably weighted? Maybe you will roll a second time, but how many people will remain unconvinced after the 2nd or 3rd roll? Not many.

So in the first scenario most people are willing to ascribe the result to random chance when there was only a 1 in 100 or 1 in 1,000 likelihood, while in the second a 1 in 6 or 1 in 36 chance was all it took to convince people that it was not random. What is the difference? It is the prior likelihood for each hypothesis that separates these out. In the first scenario nothing makes sense: everyone knows psychic abilities have never been demonstrated, so why is this guy in an alley, and why is he selling a $100 million lottery ticket for $1000? Sure, it might be true, but the chances are tiny. In the second scenario you are in a respected shop looking at a commercial good with clearly labeled and professional packaging, so it's probably telling you the truth about being weighted.

Let's break this down into something a little more quantifiable. For the sake of argument, let's assume that the chance the guy in the alley is telling the truth is 1 in 10,000,000. What is more likely: the 1 in 10,000,000 chance that he is telling the truth, or the 1 in 100 chance that he guessed your number randomly? In the second scenario, let's say that the chance the package is lying about the dice is 1 in 100, so the chance that it's lying and you randomly rolled a 6 is 1 in 600, while the chance that the package is telling the truth is 99/100 (with a 100 percent chance you will roll a 6). In this case, the probability of a single roll of a 6 with a die in a wrongly labeled package is far lower than the probability of the package being labeled correctly.

As these examples hopefully show, the only way to move from P(d | h) to P(h | d) is to take into account our prior beliefs about the probability of each hypothesis. The Bayes equation is the equation that relates P(d | h) and our priors, and calculates what P(h | d) equals. The equation is simple and is made up of three parts: the first is the prior probability, which we just talked about, P(h); the second is called the likelihood, which is simply P(d | h); and the last is called the posterior probability, which is P(h | d). Bayes' equation then looks like:

P(h | d) = P(d | h) × P(h) / P(d)

where P(d) is the overall probability of the data. Since the posterior probability is the holy grail of most questions humanity has asked, understanding the Bayes equation and its parts and how they relate to each other can tell us much about the optimal way of gaining and testing knowledge about the world.
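To make the arithmetic behind these two stories concrete, here is a small Python sketch of Bayes' equation using the illustrative numbers quoted above (the helper function and its name are mine, not part of the original article).

```python
def posterior(prior, p_data_given_h, p_data_given_not_h):
    """P(h | d) via Bayes' equation, with P(d) expanded over h and not-h."""
    p_data = p_data_given_h * prior + p_data_given_not_h * (1 - prior)
    return p_data_given_h * prior / p_data

# Alley psychic: prior that he is genuine is taken to be 1 in 10,000,000.
# If genuine he always guesses right; if not, a 1-in-100 guess succeeds by luck.
print(posterior(prior=1e-7, p_data_given_h=1.0, p_data_given_not_h=1 / 100))
# ~1e-5 -- one correct guess barely moves the needle.

# Novelty-shop dice: prior that the package is honest is 99 in 100.
# An honestly labeled die always rolls 6; a mislabeled fair die rolls 6 one time in six.
print(posterior(prior=0.99, p_data_given_h=1.0, p_data_given_not_h=1 / 6))
# ~0.998 -- a single 6 is already convincing.
```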
Bayes and Science

Science is predominantly interested in comparing various hypotheses to determine which are the more probable among a set. This means that, with very rare exceptions, the probability researchers want is P(h | d). However, the vast majority of statistics used in modern publications are based on frequentist approaches that can only return P(d | h). Since frequentist approaches cannot tell you directly about the probability of a hypothesis, various ad hoc approaches have been attempted to force the results to fit this mold. The most frequently used device is the concept of statistical significance. This approach assigns an often arbitrary cut-off to P(d | h) such that when the probability is below the threshold, the result is "significant" and supports the hypothesis, and when it is above the threshold, the hypothesis is rejected and random chance is favored. While this approach is immensely popular and dominates current reporting techniques in most peer-reviewed journals, it is fraught with problems (see: statistical significance for a discussion of these issues).

Bayesian approaches offer a solution to many of the problems exhibited by frequentist methods. Since Bayes' equation returns P(h | d) directly, there is no need for arbitrary devices such as statistical significance. For these reasons, there is a growing group of researchers who advocate the use of Bayesian statistics in reporting on scientific findings.

However, Bayesian statistics is not without its own issues. The most prominent issue surrounds the construction of priors. Since priors are fundamental to any Bayesian approach, they must be considered carefully. If two different researchers used two different priors, then the results of the statistics would be very different. Since posterior probabilities from previous work can be used as priors for future tests, the real issue centers around the initial priors, before very much information is available. Some feel that setting priors is so arbitrary that it negates any benefit of using Bayesian approaches. If no information is available when setting initial priors, most people will use what is referred to as a uniform prior. A uniform prior merely sets all possible hypotheses to an equal initial prior probability. Another approach is to use a reference prior, which is often a complex distribution created specifically to eliminate as much of the role of the prior in calculating a posterior as possible. However, this method is criticized as essentially eliminating any gain from using a Bayesian approach at all.

Bayesian reasoning and the rational mind

For many years, the social sciences used the concept that humans are inherently rational to guide predictive models of social, political and economic interactions. This concept is often labeled Homo economicus and has come under fire for a myriad of reasons, not the least of which is that people do not appear to behave rationally at all. A great amount of evidence in both economics and psychology has shown what appears to be consistent sub-optimal and irrational reasoning in laboratory experiments.
Several of the more widely cited examples are described below.

Co-variation analysis

When attempting to gather information about whether two variables correlate with each other, there are four frequencies that can be gathered:

- A is present and B is absent
- B is present and A is absent
- A is present and B is present
- A is absent and B is absent

Each of these should a priori carry the same weight when assessing correlation, but people give far more weight to the case when both are present, and the least weight to the case when both are absent.

Another related and classic task is the Wason selection task, where subjects are asked to test a conditional hypothesis "if p then q". Subjects are asked which cards they would turn over to test whether the given rule holds. For example, the hypothesis might be "If there is a vowel on one side, then there is an even number on the other side"; four cards are then displayed such that two cards show p and q and two cards show not-p and not-q, for example A, K, 2, 7. In classic reasoning the cards to turn over are p and not-q (A, 7), as these are the only ones that could falsify the rule. Fewer than ten percent of people will follow this pattern, and most will instead turn over the p and q cards (A, 2). This is viewed as classic irrationality. Surprisingly, people fare a lot better on this task when the problem is formulated in terms of cheating in a social exchange, e.g. "A child can eat the dessert only if s/he ate the dinner."

Framing effects

Descriptions of events can often be phrased in more than one logically equivalent way, but are often viewed differently. One common example is reporting survival statistics after diagnosis of disease. People will feel more optimistic when told that they have a "75 percent chance of survival" than when told they have a "25 percent chance of dying." These statements are logically equivalent and should not invoke different responses, but they clearly do. This has also been assessed many times in risk tasks, where people are told they have a "75 percent chance of winning points" versus a "25 percent chance of losing points." People will select the task phrased in positive language and not select the task when phrased in negative language.

Bayes to the rescue

These examples, plus others, have been used to argue that people are actually irrational actors. However, these tasks are fairly contrived in the laboratory setting. The major missing ingredient in all of this is that people do not make choices in a vacuum. Prior information and experience can alter what choice is the most optimal. Reasoning that takes into account prior information along with basic logical and likelihood information is inherently Bayesian. A significant amount of evidence has emerged in the fields of cognitive psychology and neuroscience that humans use Bayesian approaches to assess their environment and make predictions. This occurs both at a low level in sensory processing, where neuron firing patterns seem to encode the probability distributions and handle the computations, and at a higher level of thought, with people making executive decisions. When Bayesian approaches are used to analyze the above tasks, it turns out people are performing fairly optimally. Subject performance in the co-variation and card-selection tasks above makes a lot more sense when you consider that most conditional hypotheses are formulated for rare events rather than common events.
For example, if we are testing the hypothesis that smoking causes cancer, and we know that smoking is relatively rare and that cancer is relatively rare, then the four types of frequencies should no longer be equally weighted. Since there are a lot of people who do not smoke and a lot who do not have cancer, finding someone who does not smoke and does not have cancer is going to happen far more frequently by random chance than finding people who smoke and have cancer. This is exactly the distribution of weighting we see in the laboratory tests.

For framing effects, it's important to realize that information in the real world is rarely communicated completely accurately. Social psychologists have found that people choose to phrase things in an optimistic or pessimistic fashion not randomly, but in a predictable way. People will describe a glass as "half-full" if it was empty and they watched it being filled halfway, but describe it as "half-empty" if it was full and was emptied by half. Therefore, taking into account social information and communication as background information, Bayesian reasoning may treat seemingly equivalent logical statements as actually conveying different information. Once again, this is how people behave in the laboratory. The evidence, then, seems to point to the fact that people are Bayesian reasoners who can perform far more optimally and rationally than they are usually given credit for.

Bayes and illusions

In addition to the evidence that higher-order cognitive functioning follows a Bayesian approach, there is a lot of evidence that lower-order systems in the subconscious also use an analogous method. It appears that neurons encode and analyze sensory information using firing patterns to form probability distributions. This means that both a likelihood and priors are calculated for any stimulus. This may help explain many interesting aspects of cognitive processing. One recent example is this block of text, which has been circulating around the internet:

"Aoccdrnig to rscheearch at Cmabrigde Uinervtisy, it deosn't mttaer in waht oredr the ltteers in a wrod are, the olny iprmoatnt tihng is taht the frist and lsat ltteer be at the rghit pclae. The rset can be a toatl mses and you can sitll raed it wouthit a porbelm. Tihs is bcuseae the huamn mnid deos not raed ervey lteter by istlef, but the wrod as a wlohe."

The ability to read this paragraph so easily can be attributed to the Bayesian nature of cognitive processing. The letters present, as well as the anchoring of the first and last letter, provide fodder for a calculation of word likelihood. Having learned the priors after exposure to many sentences over a lifetime, it is very easy to reconstruct the actual meaning, even though the input is jumbled.

Bayesian analysis of sensory input may also be the cause of many optical illusions. One example is illusions that are based on concavity and convexity. Below are two copies of the same image, rotated 180 degrees with respect to each other. The shift in concavity relies on a prior assumption that the light source comes from above. This is a perfectly sane and rational prior to build into the sensory system and would work under most conditions. But when there is actually no light source, and the difference in shades of color is real, the Bayesian mind winds up with the "wrong" hypothesis and hence the illusion. The "light from above" prior is one of the more specific Bayesian priors for optical illusions.
Another very powerful illusion created by the assumption of a light source is Adelson's shadowed checkerboard illusion, shown below. Take a close look at the squares marked A and B; they are actually exactly the same shade of grey. The illusion is caused because the various squares around A and B are shaded to create the image of a shadow. The prior assumption is that the difference in perceived shading is really due to a shadow cast by an unseen light source. The perceived brightness of the squares is then adjusted under the assumption of light and shadow.
One of the first optical illusions to be tackled in psychology is the famous Gestalt triangle shown below. People tend to see a triangle occluding the other objects, rather than the other objects simply lacking those pieces. Gestalt psychologists explained this as people processing objects as a whole rather than by pieces. The assumption is that there are three spheres and a dark-lined triangle. The only way to make sense of the image then is to perceive an occluding second triangle. Knowing that the sensory system is Bayesian, this illusion can be explained more fully. A circle with a sector missing is a very rare occurrence. The prior for any such shape is that it is a whole sphere or circle. Additionally, the likelihood that all such objects with missing parts would accidentally line up to create the shape of a triangle is very small. When these elements are combined in a Bayesian fashion, the hypothesis with the greatest probability is that there is an occluding triangle.
Sometimes the information a Bayesian analysis returns is ambiguous, with two or more competing hypotheses having about equal posterior probabilities. In this case, slight perturbations in the sensory stream (random noise, slight changes in the perceptual environment, etc.) will cause the brain to switch back and forth between perceptions. This is probably the cause of the shifting in the perceived depth of the Necker cube shown below. Optical illusions like the Necker cube that rely on ambiguous posterior probabilities can be highly sensitive to prior expectations. If two hypotheses for the perceived image are equal under uniform priors, then setting one prior higher than the other should force the illusion to at least initially appear in that direction. Here is a test: this is an image of a rabbit.
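A toy illustration of that last point, not drawn from the original article: if two perceptual hypotheses explain the image equally well, the posterior simply mirrors the prior, so a nudged prior (such as being told the image is a rabbit) decides which interpretation appears first.

```python
# Two competing perceptual hypotheses, H1 and H2, that fit the sensory data equally well.
def posterior_h1(prior_h1, likelihood_h1, likelihood_h2):
    """Posterior probability of H1 by Bayes' theorem."""
    prior_h2 = 1 - prior_h1
    evidence = prior_h1 * likelihood_h1 + prior_h2 * likelihood_h2
    return prior_h1 * likelihood_h1 / evidence

like = 0.8                              # both interpretations explain the image equally well
print(posterior_h1(0.5, like, like))    # 0.5 -- genuinely ambiguous, so perception can flip
print(posterior_h1(0.7, like, like))    # 0.7 -- a biased prior ("this is a rabbit") breaks the tie
```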
http://rationalwiki.org/wiki/Bayesian
How can you find out what the other planets are like by just observing them carefully from the Earth? Most of the information comes in the form of electromagnetic radiation but we also have little chunks of rock, called meteorites, that give other clues. The image below compares the apparent sizes of the planets. The outer planets are shown at their closest approach to us and the two inner planets are shown at various distances from us (but all are with the same magnification). Before you can do any sort of comparison of the planets, you need to know how far away they are. Once you know their distances, you can determine basic properties of the planets such as mass, size, and density. To establish an absolute distance scale, the actual distance to one of the planets had to be measured. Distances to Venus and Mars were measured from the parallax effect by observers at different parts of the Earth when the planets were closest to the Earth. Knowing how far apart the observers were from each other and coordinating the observation times, astronomers could determine the distance to a planet. The slight difference in its position on the sky due to observing the planet from different positions gave the planet's distance from trigonometry. The state-of-the-art measurements still had a large margin of uncertainty. The last major effort using these techniques was in the 1930's. Parallax observations of an asteroid, called Eros, passing close to Earth were used to fix the value of the astronomical unit at 150 million kilometers. With the invention of radar, the distance to Venus could be determined very precisely. By timing how long it takes the radar beam travelling at the speed of light to travel the distance to an object and back, the distance to the object can be found from distance = (speed of light) × (total time)/2. The total time is halved to get just the distance from the Earth to the object. Using trigonometry, astronomers now know that the astronomical unit = 149,597,892 kilometers. This incredible degree of accuracy is possible because the speed of light is known very precisely and very accurate clocks are used. You cannot use radar to determine the distance to the Sun directly because the Sun has no solid surface to reflect the radar efficiently. Isaac Newton used his laws of motion and gravity to generalize Kepler's third law of planet orbits to cover any case where one object orbits another. He found for any two objects orbiting each other, the sum of their masses, planet mass + moon mass = (4π²/G) × [(their distance apart)³ / (their orbital period around each other)²]. Newton's form of Kepler's third law can, therefore, be used to find the combined mass of the planet and the moon from measurements of the moon's orbital period and its distance from the planet. You can usually ignore the mass of the moon compared to the mass of the planet because the moon is so much smaller than the planet, so Kepler's third law gives you the planet's mass directly. Examples are given in the Newton's Law of Gravity chapter. The one noticeable exception is Pluto and its moon, Charon. Charon is massive enough compared to Pluto that its mass cannot be ignored. The two bodies orbit around a common point that is proportionally closer to the more massive Pluto. The common point, called the center of mass, is 7.3 times closer to Pluto, so Pluto is 7.3 times more massive than Charon.
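Both relations just described, radar ranging and Newton's form of Kepler's third law, can be checked with a few lines of code. This is only an illustrative sketch: the Earth-Moon figures and the 280-second Venus echo time below are round numbers assumed for the example, not measurements quoted in the text.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 299_792_458        # speed of light, m/s

def radar_distance_km(round_trip_seconds):
    # distance = (speed of light) x (round-trip time) / 2
    return c * round_trip_seconds / 2 / 1000

def combined_mass_kg(separation_m, period_s):
    # Newton's form of Kepler's third law: M_planet + M_moon = 4*pi^2 * a^3 / (G * P^2)
    return 4 * math.pi**2 * separation_m**3 / (G * period_s**2)

# Earth-Moon system with round numbers (a ~ 384,400 km, P ~ 27.32 days); the Moon's
# mass is small enough that the result is essentially the Earth's mass, ~6.0e24 kg.
print(f"Earth + Moon mass ~ {combined_mass_kg(384_400e3, 27.32 * 24 * 3600):.2e} kg")

# A radar echo from Venus near closest approach (~0.28 au) returns in roughly 280 s.
print(f"Distance for a 280 s echo ~ {radar_distance_km(280):,.0f} km")
```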
Before the discovery of Charon in 1978, estimates for Pluto's mass ranged from 10% of the Earth's mass to much greater than the Earth's mass. After Charon's discovery, astronomers found that Pluto is only 0.216% of the Earth's mass - less massive than the Earth's Moon! For planets without moons (Mercury and Venus), you can measure their gravitational pull on other nearby planets to derive an approximate mass or, for more accurate results, measure how quickly spacecraft are accelerated when they pass close to the planets. If you know how far away a planet is from you, you can determine its linear diameter D. The diameter of a planet D = 2π × (distance to the planet) × (the planet's angular size in degrees)/360°, where the symbol π is a number approximately equal to 3.14 (your calculator may say 3.141592653...). The figure above explains where this formula comes from. This technique is used to find the actual diameters of other objects as well, like moons, star clusters, and even entire galaxies. How do you do that? As the planets orbit the Sun, their distance from us changes. At "opposition" (when they are in the direction directly opposite the Sun in our sky) a planet gets closest to us. These are the best times to study a planet in detail. The planet Mars reaches opposition every 780 days. Because of their elliptical orbits around the Sun, some oppositions are more favorable than others. Every 15 to 17 years Mars is at a favorable opposition and approaches within 55 million kilometers of the Earth. At that time its angular size across its equator is 25.5 arc seconds. In degrees this is 25.5 arc seconds × (1 degree/3600 arc seconds) = 0.00708 degrees, cancelling out arc seconds top and bottom. Its actual diameter = (2π × 55,000,000 km × 0.00708°)/360° = 6800 kilometers. Notice that you need to convert arc seconds to degrees to use the angular size formula. Little Pluto is so small and far away that its angular diameter is very hard to measure. Only a large telescope above the Earth's atmosphere (like the Hubble Space Telescope) can resolve its tiny disk. However, the discovery in 1978 of a moon, called Charon, orbiting Pluto gave another way to measure Pluto's diameter. Every 124 years, the orientation of Charon's orbit as seen from the Earth is almost edge-on, so you can see it pass in front of Pluto and then behind Pluto. This favorable orientation lasts about 5 years and, fortunately for us, it occurred from 1985 to 1990. When Pluto and Charon pass in front of each other, the total light from the Pluto-Charon system decreases. The length of time it takes for the eclipse to happen and the speed that Charon orbits Pluto can be used to calculate their linear diameters. Recall that the distance travelled = speed × (time it takes). Pluto's diameter is only about 2270 kilometers (about 65% the size of our Moon!) and Charon is about 1170 kilometers across. This eclipsing technique is also used to find the diameters of very far away stars in a later chapter. Pluto's small size and low mass (see the previous section) have some astronomers calling it an "overgrown comet" instead of a planet, and it was recently re-classified as a "dwarf planet". Another way to specify a planet's size is to use how much space it occupies, i.e., its volume. Volume is important because it and the planet's composition determine how much heat energy a planet retains after its formation billions of years ago.
Also, in order to find the important characteristic of density (see the next section), you must know the planet's volume. Planets are nearly perfect spheres. Gravity compresses the planets to the most compact shape possible, a sphere, but the rapidly-spinning ones bulge slightly at the equator. This is because the inertia of a planet's material moves it away from the planet's rotation axis and this effect is strongest at the equator where the rotation is fastest (Jupiter and Saturn have easily noticeable equatorial bulges). Since planets are nearly perfect spheres, a planet's volume can be found from volume = (π/6) × diameter³. Notice that the diameter is cubed. Even though Jupiter has "only" 11 times the diameter of the Earth, over 1300 Earths could fit inside Jupiter! On the other end of the scale, little Pluto has a diameter of just a little more than 1/6th the diameter of the Earth, so almost 176 Plutos could fit inside the Earth. An important property of a planet that tells what a planet is made of is its density. A planet's density is how much material it has in the space the planet occupies: density = mass/volume. Planets can have a wide range of sizes and masses but planets made of the same material will have the same density regardless of their size and mass. For example, a huge, massive planet can have the same density as a small, low-mass planet if they are made of the same material. I will specify the density relative to the density of pure water because it has an easy density to remember: 1 gram/centimeter³ or 1000 kilograms/meter³. The four planets closest to the Sun (Mercury, Venus, Earth, Mars) are called the terrestrial planets because they are like the Earth: small rocky worlds with relatively thin atmospheres. Terrestrial (Earth-like) planets have overall densities = 4-5 (relative to the density of water) with silicate rocks on the surface. Silicate rock has density = 3 (less than the average density of a terrestrial planet) and iron has a density = 7.8 (more than the average density of a terrestrial planet). Since terrestrial planets have average densities greater than that for the silicate rocks on their surface, they must have denser material under the surface to make the overall average density what it is. Iron and nickel are present in meteorites (chunks of rock left over from the formation of the solar system) and the presence of magnetic fields in some of the terrestrial planets shows that they have cores of iron and nickel. Magnetic fields can be produced by the motion of liquid iron and nickel. Putting these facts together leads to the conclusion that the terrestrial planets are made of silicate rock surrounding an iron-nickel core. The four giant planets beyond Mars (Jupiter, Saturn, Uranus, Neptune) are called the jovian planets because they are like Jupiter: large, mostly liquid worlds with thick atmospheres. Jovian (Jupiter-like) planets have overall densities = 0.7-1.7 (relative to the density of water) with light gases visible on top. Gases and light liquids (like hydrogen and helium) have densities lower than water. Using reasoning similar to before, you conclude that the jovian planets are made of gaseous and liquid hydrogen, helium and water surrounding a possible, relatively small rocky core. Spectroscopy says the jovian planets have hydrogen, helium, methane, ammonia, and water gas in their thick atmospheres so the predictions are not too far off track. The properties determined for each planet are given in the Planet Properties table.
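The chain of relations in the last two passages, from angular size and distance to diameter, and from diameter to volume and density, is short enough to check numerically. In the minimal sketch below, the Mars opposition figures (55 million km, 25.5 arc seconds) come from the text above, while the Earth's diameter and mass (about 12,742 km and 5.97 × 10^24 kg) are standard values assumed for the density example rather than numbers taken from the text.

```python
import math

def linear_diameter_km(distance_km, angular_size_arcsec):
    # D = 2*pi * distance * (angular size in degrees) / 360, as quoted above
    angle_deg = angular_size_arcsec / 3600.0
    return 2 * math.pi * distance_km * angle_deg / 360.0

def volume_km3(diameter_km):
    # Volume of a sphere written in terms of its diameter: V = (pi/6) * d^3
    return math.pi / 6 * diameter_km**3

def density_rel_water(mass_kg, diameter_km):
    # Density relative to water (1000 kg per cubic meter)
    return mass_kg / (volume_km3(diameter_km) * 1e9) / 1000.0

# Mars at a favorable opposition: 55 million km away, 25.5 arc seconds across.
print(f"Mars diameter ~ {linear_diameter_km(55_000_000, 25.5):.0f} km")       # ~6800 km

# Jupiter's diameter is about 11 times Earth's, so roughly 11^3 = 1331 Earths fit inside.
print(f"{volume_km3(11.0) / volume_km3(1.0):.0f} Earth volumes fit in Jupiter")

# Earth: diameter ~12,742 km and mass ~5.97e24 kg give a density about 5.5 times water.
print(f"Earth density ~ {density_rel_water(5.97e24, 12_742):.1f} (water = 1)")
```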
http://www.astronomynotes.com/solarsys/s2.htm
Purpose and origins
The Shulba Sutras are part of the larger corpus of texts called the Shrauta Sutras, considered to be appendices to the Vedas. They are the only sources of knowledge of Indian mathematics from the Vedic period. Unique fire-altar shapes were associated with unique gifts from the gods. For instance, "he who desires heaven is to construct a fire-altar in the form of a falcon"; "a fire-altar in the form of a tortoise is to be constructed by one desiring to win the world of Brahman"; and "those who wish to destroy existing and future enemies should construct a fire-altar in the form of a rhombus". The four major Shulba Sutras, which are mathematically the most significant, are those composed by Baudhayana, Manava, Apastamba and Katyayana, about whom very little is known. The texts are dated by comparing their grammar and vocabulary with the grammar and vocabulary of other Vedic texts. The texts have been dated from around 800 BCE to 200 CE, the oldest being the sutra attributed to Baudhayana, composed around 800 BCE to 600 BCE. There are competing theories about the origin of the geometry that is found in the Shulba Sutras, and of geometry in general. According to the theory of the ritual origins of geometry, different shapes symbolized different religious ideas, and the need to manipulate these shapes led to the creation of the pertinent mathematics. Another theory is that the mystical properties of numbers and geometry were considered spiritually powerful and, consequently, led to their incorporation into religious texts.

Pythagorean theorem
The sutras contain discussion and non-axiomatic demonstrations of cases of the Pythagorean theorem and Pythagorean triples. The theorem is also implied, and cases presented, in the earlier work of Apastamba and Baudhayana, although there is no consensus on whether or not Apastamba's rule is derived from Mesopotamia. In Baudhayana, the rules are given as follows:
1.9. The diagonal of a square produces double the area [of the square].
1.12. The areas [of the squares] produced separately by the length and the breadth of a rectangle together equal the area [of the square] produced by the diagonal.
1.13. This is observed in rectangles having sides 3 and 4, 12 and 5, 15 and 8, 7 and 24, 12 and 35, 15 and 36.
The Satapatha Brahmana and the Taittiriya Samhita were probably also aware of the Pythagorean theorem. Seidenberg (1983) argued that either "Old Babylonia got the theorem of Pythagoras from India or that Old Babylonia and India got it from a third source". Seidenberg suggested that this source might be Sumerian and may predate 1700 BC. Staal (1999) illustrates an application of the Pythagorean theorem in the Shulba Sutras to convert a rectangle to a square of equal area.

Pythagorean triples
The sutras give Pythagorean triples such as (3, 4, 5), (5, 12, 13), (8, 15, 17) and (12, 35, 37), used to construct right angles with cords of those lengths. However, since these triples are easily derived from an old Babylonian rule, Mesopotamian influence is not unlikely.

The Baudhayana Shulba Sutra gives the construction of geometric shapes such as squares and rectangles. It also gives, sometimes approximate, geometric area-preserving transformations from one geometric shape to another. These include transforming a square into a rectangle, an isosceles trapezium, an isosceles triangle, a rhombus, and a circle, and transforming a circle into a square. In these texts approximations, such as the transformation of a circle into a square, appear side by side with more accurate statements. As an example, the statement of circling the square is given in Baudhayana as:
2.9. If it is desired to transform a square into a circle, [a cord of length] half the diagonal [of the square] is stretched from the centre to the east [a part of it lying outside the eastern side of the square]; with one-third [of the part lying outside] added to the remainder [of the half diagonal], the [required] circle is drawn.
and the statement of squaring the circle is given as:
2.10. To transform a circle into a square, the diameter is divided into eight parts; one [such] part, after being divided into twenty-nine parts, is reduced by twenty-eight of them and further by the sixth [of the part left] less the eighth [of the sixth part].
2.11. Alternatively, divide [the diameter] into fifteen parts and reduce it by two of them; this gives the approximate side of the square [desired].
The constructions in 2.9 and 2.10 give a value of π as 3.088, while the construction in 2.11 gives π as 3.004.

Square roots
Altar construction also led to an estimation of the square root of 2, as found in three of the sutras. In the Baudhayana sutra it appears as:
2.12. The measure is to be increased by its third and this [third] again by its own fourth less the thirty-fourth part [of that fourth]; this is [the value of] the diagonal of a square [whose side is the measure].
which leads to the value of the square root of two as
√2 ≈ 1 + 1/3 + 1/(3·4) − 1/(3·4·34) = 577/408 ≈ 1.414216,
accurate to five decimal places. One conjecture about how such an approximation was obtained is that it was taken from the formula
√(a² + r) ≈ a + r/(2a) − (r/(2a))² / (2(a + r/(2a))), with a = 4/3 and r = 2/9,
which, expressed in the sexagesimal system, is likewise accurate up to 5 decimal places (after rounding). Indeed, an early method for calculating square roots can be found in some sutras; the method involves the recursive formula √x ≈ √(x − 1) + 1/(2√(x − 1)) for large values of x, which rests on the non-recursive identity √(a² + r) ≈ a + r/(2a) for values of r extremely small relative to a.
Before the period of the Sulbasutras was at an end, the Brahmi numerals had definitely begun to appear (c. 300 BCE), and the similarity with modern-day numerals is clear to see. Even more important was the development of the concept of decimal place value. Certain rules given by the famous Indian grammarian Pāṇini (c. 500 BCE) add a zero suffix (a suffix with no phonemes in it) to a base to form words, and this can be said, in some sense, to imply the concept of the mathematical zero. It has sometimes been suggested that the sutras contain knowledge of irrationality and irrational numbers.

List of Shulba Sutras
The following Shulba Sutras exist in print or manuscript:
- Maitrayaniya (somewhat similar to the Manava text)
- Varaha (in manuscript)
- Vadhula (in manuscript)
- Hiranyakeshin (similar to the Apastamba Shulba Sutras)

Further reading
- Parameswaran Moorthiyedath, "Sulbasutra"
- Seidenberg, A. (1983). "The Geometry of the Vedic Rituals." In The Vedic Ritual of the Fire Altar. Ed. Frits Staal. Berkeley: Asian Humanities Press.
- Sen, S.N., and A.K. Bag (1983). The Sulbasutras. New Delhi: Indian National Science Academy.

References
- Plofker, Kim (2007). "Mathematics in India". The Mathematics of Egypt, Mesopotamia, China, India, and Islam: A Sourcebook. Princeton University Press. ISBN 978-0-691-11485-9.
- Boyer, Carl B. (1991). A History of Mathematics (2nd ed.). John Wiley & Sons, Inc. ISBN 0-471-54397-7.
- Cooke, Roger (1997). The History of Mathematics: A Brief Course. Wiley-Interscience. ISBN 0-471-18082-3.
- Cooke, Roger (2005). The History of Mathematics: A Brief Course. New York: Wiley-Interscience. 632 pages. ISBN 0-471-44459-6.
- Staal, Frits (1999). "Greek and Vedic Geometry". Journal of Indian Philosophy (Kluwer Academic Publishers) 27: 105-127.

Citations and footnotes
- Plofker, Kim (2007). p. 387. "Certain shapes and sizes of fire-altars were associated with particular gifts that the sacrificer desired from the gods: "he who desires heaven is to construct a fire-altar in the form of a falcon"; "a fire-altar in the form of a tortoise is to be constructed by one desiring to win the world of Brahman"; "those who wish to destroy existing and future enemies should construct a fire-altar in the form of a rhombus" [Sen and Bag 1983, 86, 98, 111]. The Sulbasutra texts are associated with the names of individual authors, about whom very little is known. Even their dates can only be roughly estimated by comparing their grammar and vocabulary with the more archaic language of earlier Vedic texts and with later works written in so-called "Classical" Sanskrit. The one we shall look at is the oldest according to these criteria, composed by one Baudhayana probably around 800-600 BCE. It tells the priests officiating at sacrifices how to construct certain shapes using stakes and marked cords. [...] Many of the altar constructions involve area-preserving transformations, such as making a square altar into a circular or oblong rectangular one of the same size. We don't know how these geometric procedures originally came to be associated with sacrificial rituals. Various theories of the "ritual origin of geometry" infer that the geometrical figures symbolized religious ideas, and the need to manipulate them ritually inspired the development of the relevant mathematics. It seems at least equally plausible, though, that the beauty and mystery of independently discovered geometric facts were considered spiritually powerful (perhaps like the concepts of number and divisibility mentioned above), and were incorporated into religious ritual on that account."
- Boyer (1991). "China and India". p. 207. "we find rules for the construction of right angles by means of triples of cords the lengths of which form Pythagorean triads, such as 3, 4, and 5, or 5, 12, and 13, or 8, 15, and 17, or 12, 35, and 37. However all of these triads are easily derived from the old Babylonian rule; hence, Mesopotamian influence in the Sulvasutras is not unlikely. Apastamba knew that the square on the diagonal of a rectangle is equal to the sum of the squares on the two adjacent sides, but this form of the Pythagorean theorem also may have been derived from Mesopotamia. [...] So conjectural are the origin and period of the Sulbasutras that we cannot tell whether or not the rules are related to early Egyptian surveying or to the later Greek problem of altar doubling. They are variously dated within an interval of almost a thousand years stretching from the eighth century B.C. to the second century of our era."
- The rule in the Apastamba cannot be derived from Old Babylon (cf. Bryant 2001:263).
- Plofker, Kim (2007). pp. 388-389.
- Seidenberg 1983. Bryant 2001:262.
- Seidenberg 1983, 121.
- Joseph, G. G. (2000). The Crest of the Peacock: The Non-European Roots of Mathematics. Princeton University Press. 416 pages. ISBN 0-691-00659-8. p. 229.
- Plofker, Kim (2007). pp. 388-391.
- Plofker, Kim (2007). p. 391.
"The "circulature" and quadrature techniques in 2.9 and 2.10, the first of which is illustrated in figure 4.4, imply what we would call a value of π of 3.088, [...] The quadrature in 2.11, on the other hand, suggests that π = 3.004 (where s = 2r·13/15), which is already considered only "approximate." In 2.12, the ratio of a square's diagonal to its side (our is considered to be 1 + 1/3 + 1/(3·4) - 1/(3·4·34) = 1.4142.]" Missing or empty - Cooke (1997). "The Mathematics of the Hindus". p. 200. "The Hindus had a very good system of approximating irrational square roots. Three of the Sulva Sutras contain the approximation for the diagonal of a square of side 1 (that is ). [...] We can only conjecture how such an approximation was obtained. One guess is the approximation with a = 4/3 and r = 2/9. This approximation follows a rule given by the twelfth century Muslim mathematician Al-Hassar." - Neugebauer, O. and A. Sachs. 1945. Mathematical Cuneiform Texts, New Haven, Connecticut, Yale University Press. p. 45. - (Cooke 2005, p. 200) - Boyer (1991). "China and India". p. 208. "It has been claimed also that the first recognition of incommensurables is to be found in India during the Sulbasutra period,"
http://en.wikipedia.org/wiki/Shulba_Sutras
Johannes Kepler (1571 - 1630)
So wrote Johannes Kepler in 1605 to his friend Fabricius. The letter concerned the elliptical nature of the orbit of Mars. By this time, Kepler was a confirmed Copernican. And he was in Prague, an associate of the famous Danish astronomer Tycho Brahe. Tycho was the best star watcher and recorder of star positions of that time. When Tycho died, Kepler inherited his voluminous records of star and planet positions. This was a great help in his work. Above is part of a letter from Kepler to David Fabricius, who was an independent-minded thinker, but who kept contradicting Kepler on his discovery. Like Copernicus, Fabricius preferred that Mars have a circular orbit, not an elliptical one, and that the red planet's velocity in space never changed. Kepler knew better. Kepler had carefully studied the orbit of Mars and had logged the positions of Mars in its present orbit. From his studies, and supported by Tycho's records as a second opinion, Kepler concluded that Mars advanced more slowly in one part of its orbit, near its aphelion, and faster when nearing its perihelion. Kepler had calculated and determined that the Martian orbit was an ellipse; it was not a circular orbit as Copernicus and Fabricius had assumed. This was the start of his scientific breakthrough. Kepler was well on his way to codifying his famous three laws of planetary motion. This codification, published in 1626, twelve years later, was his biggest achievement - and the most painful to the more numerous astronomers, flat Earth folk, who preferred Ptolemy, and who preferred having the Sun orbit the Earth. In producing his three laws of planetary motion in 1626, his Rudolphine Tables, Kepler became the father of celestial mechanics and of modern astronomy. This publication in 1626 became one of the strongest of a long series of birth pangs that birthed modern science. [n1], [n2]
It is to be observed that Kepler's work featured a study of the orbit of Mars in the modern era. That orbit is now known to have an eccentricity of .0933865, a perihelion of 128,409,085 miles and an aphelion of 154,862,861 miles. Its perihelion has a longitude of 335°, which places its semi-major axis in space. Its modern period is 686.978839+ days, a period which began in 701 B.C.E. This present research is a study, including calculations, of the likely orbit of Mars in the Catastrophic Era. Shortly, the energy calculations and the shifts in energy for the ancient orbit of Mars will be presented. In the next chapter, the simultaneous angular momentum shifts and their calculations are presented. Both address the heart of celestial mechanics. The key date is March 20-21, 701 B.C.E. This was the night of the Earth's final waltz with Mars. This occasion, as recorded by Isaiah, was in the 15th year of the reign of King Hezekiah. Edwin R. Thiele has produced a classic in dating the era of the kings of Judah and Israel. In his historical analysis, The Mysterious Numbers of the Hebrew Kings, Thiele concludes the year of the key flyby, Hezekiah's 15th, was 701 B.C.E. [n3] Our astronomical study, based on retro-calculations of the orbits of Mars, Jupiter and Saturn, independently places Mars there at this time. Retro-calculations for 701 B.C.E. also place Jupiter in the feared and significant Capricorn, and Saturn at 180° opposite, in the feared and significant zone of Cancer.
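As a quick check on these orbital figures, the eccentricity of an ellipse follows from its perihelion q and aphelion Q as e = (Q - q)/(Q + q). The short sketch below applies this to the modern perihelion and aphelion just quoted, and to the Catastrophic Era values proposed later in this chapter (64,353,000 and 228,805,000 miles):

```python
def eccentricity(perihelion_miles, aphelion_miles):
    # e = (Q - q) / (Q + q) for an ellipse with perihelion q and aphelion Q
    return (aphelion_miles - perihelion_miles) / (aphelion_miles + perihelion_miles)

# Modern Mars orbit (figures quoted above):
print(f"{eccentricity(128_409_085, 154_862_861):.4f}")    # ~0.0934

# The "Catastrophic Era" orbit proposed by this model:
print(f"{eccentricity(64_353_000, 228_805_000):.3f}")     # ~0.561
```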
If planetary catastrophists were unable to demonstrate logically - and could not model - how Mars shifted from its “catastrophic orbit” to its modern orbit, reservations and indeed full-blown disbelief about planetary catastrophism would be appropriate. If, on the other hand, a model can be developed wherein energy shifts and angular momentum realignments are solved simultaneously, and work in harmony, then it can be called a “scientific model.” Then, tit for tat, reservations and full disbelief about the gradualist system would be appropriate. Models are two-edged swords. This study is about the Catastrophic Era orbit of Mars and the change of that orbit. Its orbital eccentricity was approximately .561; its perihelion was about 64,353,000 miles and its aphelion about 228,805,000 miles. Its longitude of perihelion was 105° and its period was 723.26 days (modern days, not Catastrophic Era days). In that era, this analysis models the orbital period of Mars as having been in the following resonances: 1:2 with the Earth, 1:24 with the Moon, and 6:1 with Jupiter. In addition, there was a near-perfect ratio of the orbits of Mars and Saturn, very close to 15:1. It was an age of orbital “harmony in the heavens,” an ancient phrase used notably in the Book of Job. More appropriately, the former age was an age of orbital harmonics. All this ended in the year 701 B.C.E.

How Did Mars Achieve Its Modern Orbit?
If the shift in the orbit of old Mars to modern Mars, herein advocated, cannot be demonstrated, any and all discussions of Mars-Earth catastrophism are cloudy and uncertain. Therefore, for science, the next four chapters, 9, 10, 11 and 12, are the most important chapters in this volume. The heart of these chapters is Tables XI, XII and XIII, which are named our “Rudolphine Tables”. In chapters 11 and 12, twelve clues of scientific support are presented. The aim is to present a triad of truth - energy shifts, angular momentum shifts and solid supporting clues. Tables XI and XII of the Rudolphine Tables are condensations. Table XIII, the most detailed, addresses those ancient orbital shifts in 24 categories of astronomical data. This data defines precisely what happened. It is incumbent on serious students of science and cosmology to delve into this data - statistics of orbits and of planet masses that comprise planet energies and planet angular momenta for the four planets involved. For those of non-mathematical backgrounds, the significance of the ensuing data may be lost in “boring” numbers. In such an event, skipping over the upcoming statistical material in favor of the more descriptive material is appropriate, but with the recommendation that Tables XI, XII and XIII be reexamined later. It might be, as all gradualists assume, that Mars has been only a slightly reddish pinpoint of light in the nocturnal heavens since the beginning of time. It might be that Mars never has come closer to our planet than 33,900,000 miles. Moreover, it might be that Mars never orbited out further, into the asteroid belt, some 230,000,000 miles distant from the Sun. If so, why are 93% of its craters on one side, and how did it capture its little satellites on the fly? If that were the case, this volume and even this series might have to be classified with such books as Alice in Wonderland, The Wizard of Oz and the Star Trek themes. Fantasy. Historical fantasy and science fiction. However, the converse also is true.
If this model can be defended successfully in celestial mechanics, it logically follows that some version of the data herein is solid history - solid history of the solar system, solid cosmology. Involved are the changes to orbits for Venus, Mars, the Earth-Moon system, Jupiter and ... Astra. Like the dogma of Claudius Ptolemy, the 18th-19th-20th century theories and dogmas of gradualism would then become recognized as fantasy, imaginative and just so much science fiction. There is no mistaking it; at least one of these two competing paradigms of solar system history is applesauce. Fifty years ago, virtually all academic astronomers and geologists classified themselves as millions of years “gradualists.” However, times do change. Photographic evidence has come in showing heavy catastrophism of the surfaces of Mercury, Venus, the Moon, Mars, Deimos, Phobos, Io, Europa, Ganymede, Callisto, Titan, the Rings of Saturn, and other satellites more distant. In general, these surfaces reveal surprising spasms of catastrophism, whatever may be the era. The evidence dictates that some events there were rather energetic. Due perhaps to the planetary missions of the last 30 years, now, in the 1990's, almost all astronomers and geologists prefer to be known as “Millions of Years Catastrophists.” “Catastrophism” has become the latest vogue, an acceptable label of academic fashion. This is a 180-degree change in the preference of styles over the last 40 years, since the first space probes. In the 1940's and 1950's, catastrophism had about as much respect as prostitution. But the changes that have occurred may have been ones of style, not of substance, not a change in paradigm. Judging from the scenario of planetary catastrophism, many “lately styled catastrophists” actually cling to most of the traditional assumptions and dogmas of gradualism, whether or not they are aware of it. There hasn't been a competing paradigm, except perhaps among the followers of Immanuel Velikovsky. With the model herein, the dogma of “Millions of Years For Catastrophism” has competition. Not only is the nature of catastrophic processes at issue; so is the timing. In geology, like astronomy, the new style of the 1980's and 1990's also has become catastrophism. There are paleo-catastrophists, meso-catastrophists and ceno-catastrophists, in addition to astronomy, where there are “asteroid catastrophists” and “comet catastrophists”. To repeat, in 1996 hyphenated catastrophism is in vogue, but no basic paradigms have changed. Until paradigms shift, the change toward “catastrophism” is more one of style than of substance. Some still assume what has been so widely taught, that the Sun had a hiccup or two, and a cough or two, for reasons unknown, four billion years ago. Expelled were the makings of a planet or two, or three or four, or six or eight, even nine. Yet the planets we encounter have spin rates, often in pairs, craters, satellite systems, etc., all unexplained. Such is the dysfunctional, traditional menu of gradualism. There is evidence that planetary catastrophism has assaulted Venus, the Earth and Mars, and has destroyed Astra. In addition, as is pointed out in our “The Recent Organization of the Solar System”, there are four levels of evidence that planetary catastrophism has also extended to the heart of this solar system, to the Sun itself. The only conclusion we can draw from the evidence is that the Sun itself had a nova recently, as astronomers chart time.
The Sun is still cooling and shrinking, recovering from the adjustments of that crisis. As to asteroid catastrophism, asteroids typically are .0000000002 of the mass of the Earth, about one-fifth of a billionth of the mass of the Earth. Such masses can create surface craters of various sizes. But a single asteroid collision with the Earth cannot create a spin axis shift, an orbital shift, a geomagnetic field dynamo (or generator), a fragmentation (icy or rocky), or a paleomagnetic polarity reversal, much less sudden crustal tears and upthrusts, a hemisphere-sized flash flood and sudden sedimentary strata. Our planet, the Earth, has experienced all of these. Evidence indicates the timing thereof has been “recent”. Comets typically are icier and smaller than asteroids. Typically, they are tiny, dirty snowballs with orbits of high eccentricity. Were a small ice ball to enter the Earth's atmosphere, like the Tunguska Bolide over Northern Siberia, there would be considerable damage on a local scale. The Tunguska Bolide, June 30, 1908, 7:17 a.m., was just such an event. It created a flash explosion, with a shock wave heard for over 1,000 miles. It decimated forests, blasting them down and burning them up, all within a 10-mile radius. But a colliding comet could never cause (1) a spin axis shift of several degrees, (2) an orbital relocation, (3) a paleomagnetic polarity reversal, (4) a hemisphere-wide flood or several other phenomena associated with ancient planetary catastrophes. Never. These scales are entirely different. In geology, all newly styled catastrophists have to decide if they are paleo-catastrophists, meso-catastrophists or ceno-catastrophists. But they have yet to identify the causing agent, or the scope, or the timing of these catastrophes. They have yet to consider Mars making a planetary flyby as close as 100,000 miles or (heaven forbid) 27,000 miles to our planet. Thus, planetary catastrophism is neither asteroid catastrophism nor comet catastrophism. Not in agent of cause, not in scope, not in timing. The energy exchanges in planetary catastrophism far exceed such minuscule explanations. And the timing of these upheavals is much too recent. To reiterate, if planetary catastrophists cannot demonstrate logically - if it cannot be modeled - how Mars shifted from its “catastrophic orbit” to its modern orbit, then this model of planetary catastrophism is in trouble. Also, conversely, if it can be sensibly modeled, 18th century gradualistic cosmology is in trouble. Chapters 9, 10, 11 and 12 are dedicated to this task, and this achievement. For many general readers, who are comfortable with popularized materials, the discussion will now move into an area with which they are unfamiliar, celestial mechanics. However, to accent the positive, mind-bending material can be good; it can open up new vistas of thought and new perspectives of events in ancient history. This is the first of four chapters that address how and why the orbit of Mars shifted from its Catastrophic Third Orbit to its Modern Era Orbit. This chapter addresses energy shifts. Chapter 10 addresses angular momentum shifts between Mars and Venus, the Earth-Moon system, and Jupiter. It is found that the two energy shifts were less than 60 days apart. The last of the Mars-Venus polkas briefly preceded (by 58 days) the last of the Mars-Earth waltzes. Chapters 11 and 12 concern clues, twelve in number. In murder mysteries, clues are left at the scene and sometimes elsewhere.
In the death of the Catastrophic Third Orbit of Mars, twelve clues exist, and they can be sifted from the ancient “wreckage.” Each clue points to the crime, its timing, and to its perpetrator, the red planet. Many physical scars on the surfaces of Mars, the Earth and the asteroids have been discussed in Chapters 1 to 8. More are discussed in Chapters 9 to 11. In a change of direction, Chapter 12 discusses psychological scars that still remain with us, 2,700 years later. It discusses over 350 of them.

Energy and Angular Momentum
ENERGY. In a closed system, energy can be given up by one body to another, and it can change its form as well. One planetary body can acquire (or lose) a specific amount of energy. But at the same time, another planet must give up (or acquire) an equal amount. Such is the first law of thermodynamics. Such is the principle of the conservation of energy. Nature knows no exception in a closed system, such as our solar system. The total amount of energy in a closed system must always total the same - at the beginning, at the end of each stage in between, and at the end of an entire series of stages. This process is what happened to four planets, rapidly, late in the 8th century B.C.E. Because no “exchange” is perfectly efficient, some energy always is transformed into heat, tides, etc. This is insignificant both in billiards and in planets where calculations are taken to six decimals.
ANGULAR MOMENTUM. Similarly, in a closed system, angular momentum also can shift from one Solar System body to another nearby planet. But at the same time, what one body gains in angular momentum, another planet must lose, and this is also true at the end of each and every stage. Again, as with energy changes, nature knows no exceptions. Perhaps one can visualize the Solar System, looking down on it from above, viewing it from the North Star. In this format, for a planet in orbit, one can visualize ENERGY as “vertical motion”. This is a measure of its long axis multiplied by planet mass. This is the “x” axis of its orbit. Energy changes relate to any and all vertical changes along this axis. Changes can be in length, or in direction, or both. Energy is a total property of a body that relates to its mass, its speed, its temperature, etc., but not to its direction of travel. In this same context, one can view ANGULAR MOMENTUM as “horizontal motion,” or sideways movement. Its changes affect only an orbit's “y” axis, the measure of an orbit's widest dimension. It is called the minor axis. Momentum relates not only to a planet's mass and its velocity but also to its direction. To the mathematician, energy is a scalar value, while angular momentum is a vector. There is a similarity between shifts (a) of energy and (b) of angular momentum among the planets and (c) the completion of a crossword puzzle. Crossword puzzles are composed of both horizontal words and vertical words. To solve one correctly, all the letters in vertical words must agree with all letters in the horizontal words. Otherwise, the solution for the puzzle is flawed. Energy can be likened to the vertical words of a crossword puzzle. Angular momentum can be likened to words on the horizontal axis. In a correct solution, all words and letters must agree, harmonize and integrate. If both of these two issues are found to be in agreement, simultaneously, it can be concluded that the puzzle is solved (or nearly solved). If one issue is not addressed, or is not in agreement, the puzzle is not solved.
Like crossword puzzles, there is one totally correct solution, and only one. Put another way, if the two issues above are found to be in agreement, it can be said that it is a completed model that conforms to the laws of conservation of energy and angular momentum. It is a scientific model. It can be verified by known data, equations, procedures and testing. On the other hand, if a model of catastrophism does not conform to these laws, it can be said that such a model is imaginary, wishful thinking, and/or speculative. It could even be the stuff from which fantasies, wild speculations, fairy tales, false superstitions and science fiction plots are spun.

The Circularization Of The Orbit Of Mars

Issue 1. Mars And Astra Distances
There are five prominent issues involved in this model, and in understanding the shift of Mars from its catastrophic orbit into its modern orbit. The first issue concerns whether or not it was the Roche Limit of Mars - and hence the mass of Mars - that caused little Astra to fragment into asteroids. If this was the case, then when Astra fragmented, Mars must have been some 225,000,000 miles distant from the Sun, where the average perihelion of the asteroids is. This is contrary to traditional theory. Mars nevertheless was there. The red planet was 70,000,000 miles farther out, or more, than is the modern orbit of Mars. Probably, its ancient aphelion was even somewhat farther out. If Mars indeed was there, its energy level would have to be maintained within a few percent. The center of its orbit would be not far different from now. With this in mind, if Mars went out 70 or 80,000,000 miles farther, it also would have had to come in much closer. Its orbital length and its eccentricity would have to have been greater. In so doing, Mars would approach closer to the Sun than the Earth, and even closer than Venus, in a long narrow orbit. The central issue then is identified as the old eccentricity of the orbit of Mars. Its modern eccentricity is .093387. A perfect circle has an eccentricity of .000000. THE CENTRAL ISSUE THEN IS WHETHER OR NOT THE ORBIT OF MARS EXPERIENCED A SUDDEN AND SUBSTANTIAL SHIFT IN ITS FORMER ECCENTRICITY. Its ancient orbital eccentricity would have had to be greater than .400000 in order for its perihelion to be within the orbit space of the Earth-Moon system. And even more, its ancient orbital eccentricity would have had to be over .500000 in order for it to orbit within Venus' orbit space. Its modern eccentricity is under .10, at .093. Therefore, this model requires a sudden change from an ancient eccentricity in excess of .400. The issue is not one of a vast shift in the total orbit of Mars. It is much less; it is a vast shift only in its eccentricity, or oblateness. This model has the ancient orbit of Mars, from perihelion to aphelion, with a length of 293,158,820 miles. Its modern orbit from perihelion to aphelion is 283,271,946 miles. In this model, the length of the semi-major axis of Mars has diminished, but only 3.37%. Mars lost some energy and, in so doing, the length of its “x” axis was reduced by 9,886,874 miles. The Martian energy diminished from -1.345017 to -1.391961. What was the cause of such a reduction? Did another planet's energy similarly increase? Was that planet the Earth? Did ancient calendars of 360 days suddenly become obsolete, requiring a higher day count per year, reflecting an increase in energy? An increase from 360 to 365.256 is an increase of about 1.5%. What is this story?
This staff also has calculated that the probable former orbital eccentricity of Mars was .561. Its orbit was lower in eccentricity than Halley's Comet, but nevertheless, it had a high eccentricity. This compares to the modern eccentricity of the Martian orbit at .093. It is a shift downward of .468. It follows that in the catastrophes of the late 8th century B.C.E., the width of the “y” axis, the minor axis of Mars, increased. This model indicates the length of its minor axis (the “y” axis) increased by 19,672,856 miles as its orbit rounded out. This is why Mars no longer bothers Venus, and it no longer bothers the Earth-Moon system. It was the increase in angular momentum that rounded out the orbit of Mars, with the very significant result that celestial peace came to the inner solar system. Thus it is seen that the long axis of the orbit of Mars was shortened, but not a lot. But its short axis, the “y” axis, increased by 16.2%. In this process, the period of ancient Mars was reduced from 723.26 new days down to the current 686.98 new days. Does the shortening of the semi-major axis of Mars integrate well with the evidence that the Earth's ancient period increased from 360 to 365.256 days? Of course. Does an increase in its semi-minor axis and its rounding out agree with the observation that Mars no longer threatens either the Earth or Venus? Again, of course. The Rudolphine Tables, XI, XII and XIII, indicate that the Earth's semi-major axis did increase, from 92,339,242 miles to the modern 92,955,807 miles. It was an increase of 0.668%. One of the results was that all of the old 360-day calendars used by numerous ancient societies suddenly became obsolete. Calendars of 365, 365.25 or 365.25+ days were needed in replacement. Did this happen in the 7th and 6th centuries B.C.E.? Most certainly, according to ancient testimonies from a wide variety of sources.

Issue 2. The Moon And An Ancient 30-Day Orbit
The second issue concerns the Moon and whether or not some 12 to 15 ancient lunar calendars, each containing 30.00 days for the old lunar orbit, were accurate and hence useful in the ancient era. If that were not the case, why are there so many ancient calendars from diverse peoples and places all making the same mistake? After all, ancient calendar makers were not Neanderthals; they may have been basically more intelligent than modern men. If the period of the Moon's orbit shifted, some event such as a planetary flyby must have occurred, and affected both the Earth and the Moon simultaneously. It reduced the Moon's period from about 30.00 old days (sidereal) down to 29.53 new days, its modern period. If ancient calendars contained 30 days, not 29.53 days, the Moon's orbital radius necessarily was decreased from about 241,500 miles to the modern distance, 238,900 miles. If so, this was a shrinking of 1.08% in the radius of the Moon's orbit. Could such a strange change happen if Mars passed between the Earth and the Moon when at full? Yes, it could. The Earth would be pulled outward while Mars would be pulled inward. How could the expansion of the Earth's orbit be related simultaneously to the contraction of the orbits both of the Moon and of Mars? Does this mean that the Earth gained energy or that the red planet lost energy? Yes for both. Both of these issues are easily answered IF MARS “BUZZED THE EARTH” FROM THE NIGHT SIDE OF THE EARTH, SOMEWHERE BETWEEN THE EARTH AND THE MOON, that is, DURING A FULL MOON.
The ancient Hebrew “Passover” was always recorded to have occurred on the night of the first full moon of Nisan (or our March). The “Passover” of -701 featured just such a full moon. On their Mosaic calendar, the Passover was Nisan 13, always a Friday night in the Hebrew calendar; it was the “unlucky thirteenth”. (It corresponds to the night of March 20 on our modern calendar.) This night, once every century or so, actually involved a devastating Mars flyby. Research to be published later indicates repeated Mars flybys in 108-year cycles. Ancient traditions of a catastrophically unlucky Friday the thirteenth are a vestige of Hebrew traditions from times when, sometimes, Passovers on that date were very destructive. The Romans have a similar tradition, their Tubilustrium, a time of trouble, also March 20, appropriately in a month they named after Mars. The Irish, on the other hand, are not to be outdone. There were October 24 flybys also, which were even more destructive to the Eastern Hemisphere than were the March flybys. Our modern Halloween themes are a carry-over, a vestige of ancient Celtic traditions about destructions coming visibly from the heavens, in late October, at the time of a full moon. The comet-like tail of Mars was interpreted as the witch's broomstick on which she flew. It is the Irish interpretation, comparable to the Greek “Fleece of Aries”. Occasionally the cosmos of Celtic Ireland was fraught with celestial danger - something witchy, something moving across the heavens, making screechy celestial noises, travelling with an orbit and a cometary tail instead of by broomstick. That something was close enough and massive enough to produce loud noises from lightning, shock waves, earthquakes, and weird noises at subliminal levels, possibly due to uniting planetary magnetic fields. In the Hebrew calendar, March 20 or Nisan 13 always was a full moon ... and Friday the 13th - of the month of Nisan. If the Moon always was full on Nisan 13, and the ancient Hebrews had twelve 30-day months, Mars had to be in some kind of orbital resonance with the Earth. It had to be. Therefore it is important to note how many other societies around the Earth also had 360-day calendars (and 360-degree circles). This will be addressed in chapter 12. In the modern era, lunar calendars don't work well for anniversaries. Passover nights, on full moons, are always on a different day each year, like the dates for Easter. In this age, Passover celebrations and Easter Sundays are linked to the first full moon AFTER the vernal equinox. In that era, the full moon of March/Nisan always was the vernal equinox. Ancient Hebrew calendars portray this. In this age, 2,700 years after the change in orbit occurred, lunar calendars still are employed to determine Easter and Passover, a vestige of ancient 30-day calendars. Periodically, repeatedly, in 108-year cycles, the Destructive Angel of the Lord arrived with “pestilences”, possibly involving geomagnetic waves, accompanied by cosmic lightning discharges, fire from heaven, earthquakes, deforming crustal bulges, volcanic upheavals, oceanic tides, and a scintillating geomagnetic show. Talmudic sources identify the night of the Final Flyby as March 20-21. Our research indicates March 20-21 of the year 701 B.C.E. The word “Passover” is, and was, quite appropriate. [n4]

Issue 3. The Earth And An Ancient 360-Day Year
The third issue concerns whether or not the Earth's orbit suddenly expanded from a former 360 day count per year to a modern 365.256 day count.
Such a shift would involve a sudden expansion of the length of the year by about 1.003%. This would necessarily involve an expansion of the average radius of the Earth's orbit from approximately 92,339,242 miles to the modern value of 92,955,807 miles. This is an expansion of 616,000+ miles in the radius, 0.668%. If catastrophism happened as this model describes, that perturbation by Mars was more than a mere tweak on the Earth's orbit. It was a strong yank on our orbit, with, simultaneously, a strong gyroscopic torque on our planet's spin axis. [n5], [n6] If this occurred during the last Martian flyby event, it could occur only if Mars passed on the Earth's outside, which is what Isaiah and Talmudic source material both report to have been the case. Mars in orbit would have to have lapped the Earth-Moon system, somewhere between the Earth and the Moon. And the Moon would have to have been at or near full, as on “a Passover night”. Such an orbit expansion for the Earth and orbit contraction for the Moon could happen, but only if that planetary flyby was between the two, and much closer to the Earth than to the Moon. The Moon's mass is 1.23% of the Earth's mass, about one part in 81.5. Mars is 10.74% of the Earth's mass. The Moon is 11.45% of the mass of Mars. Thus, roughly, the Earth is nine times the mass of Mars, and Mars is nine times the mass of the Moon. In order to create these kinds of orbital perturbations, Mars would have to go on the night side of the Earth, during a full moon. And the red planet's distance to the Earth at its closest works best if it was between one-eighth and one-ninth of the Earth's distance to the Moon. This means that Mars, at its closest, was between 26,500 miles and 30,000 miles from the Earth, center to center. And Mars was, at its closest, 215,000 miles from the Moon, again center to center. Orbit perturbations (directional changes or shifts) caused by one planet on another follow an equation that states that changes are proportional to the masses of the two planets, and inversely proportional to the square of the flyby distance. This means that, during flybys, each time the distance between Mars and the Earth was halved, as from 240,000 to 120,000 miles, orbit shifts and energy shifting, whatever the amount, increased fourfold. Thus, at a flyby of 27,000 miles, the orbit perturbation was four times as much as at 54,000 miles. And perturbations would be sixteen times as much as at flybys of 108,000 miles. And 64 times as much as at 216,000 miles. During a CLOSE, OUTSIDE flyby, Mars could shift its direction - somewhat - its orbital direction now could pivot on the Earth. Such a close outside flyby by Mars would produce a push or a torque on the Earth's spin axis, producing a minute increase in the rate of its rotation. Such a close flyby could, and apparently did, increase the Earth's spin rate, however minutely. The model presented in the next three chapters maintains there was an increase in spin rate due to the Final Flyby of Mars, and its torque. That speed-up is calculated at approximately 0.452+%. Thus, it is modeled that the spin rate increased so that 360 old days correspond to 361.628 new days, an increase of 1.628 days, or 0.452+%. Simultaneously the orbit expanded. The combination of the new orbit and the new spin rate produced an increase from 360 old days (= 361.628 new days) to 365.256365 days per orbit.
Of the combination producing the new increase in day count, this model proposes about 31% was due to the tiny increase in spin rate, and the rest, about 69%, was due to the expansion of the orbit, an expansion of over 600,000 miles in the semi-major axis. Today's spin rate relative to the fixed stars is one rotation in 1,436 minutes. This is “sidereal time”. All gradualists suppose that this spin rate has been unchanged in 4.6 billion years. Our model suggests that before 701 B.C.E., the Earth's slightly slower spin rate was between 1,442 and 1,443 minutes (sidereal). This is an increase in spin rate of 0.452%, or one part in 221. The ancient increase in day count per year to 365.256 days per orbit is the product of a slight increase in spin, accounting for some 31% of the increase in day count per year, and a significant increase in the Earth's distance from the Sun, accounting for an estimated 69% of the increase in day count per year. As the Earth's distance from the Sun increased, there was a corresponding slight decrease in the Earth's velocity around the Sun. The increase in spin rate seems to have been 0.452%. The increase in orbital radius, the semi-major axis, seems to have been 0.668%. The increase in day count per year, a combination of the two preceding factors, has totaled about 1.46% according to this model. This means 360 old days were equal to 361.628 modern days. The old spin rate was a minute bit slower. This analysis is in harmony with a unique ancient account of the change in the Earth's day count per year, given by a mathematical philosopher, Plutarch, for his Roman audience. Plutarch said and wrote that at the end of the heroic age, the era of catastrophes, there had been a “celestial crap game”. It was between Hermes, the Earth and the Moon. Plutarch, or his translators, apparently confused Hermes with Ares. In that celestial crap shoot, the Moon lost 1/70th of her “holdings”, or its period, while the “winner,” the Earth, gained a similar 1/70th of its day count per orbit, an addition to its former orbit period. The Moon's modern period is 29.53 days. 30 days minus one part in 70 is 29.57. Plutarch was within .04 day of being exactly correct for the new lunar period. The Earth's new period, 365.256 days, less one part in 70, results in an earlier period of 360.038 days. Here again Plutarch's explanation was within .04 of a day of being exactly correct. Plutarch's ancient Greek sources were solid, and his explanation for the new conditions satisfied his Roman audiences.

Issue 4. Mars And An Ancient 720-Day Year
The model here proposes that Mars had an ancient 720-day orbit (or 723.257 new days). Mars was in 1:2 resonance with the Earth, and it was in 6:1 resonance with Jupiter. Further, it was in a near 15:1 resonance with Saturn. The general pattern of perturbations of Mars by the Earth and Jupiter requires this resonance of periods. Interestingly, James Frazer noted in “The Golden Bough” that the Gonds, a tribe in Southern India, still worship Jupiter and hold celebrations for Jove every twelfth year. This relic of ancient planet worship persists despite the fact that the Earth has not been in a 12:1 resonance with Jupiter for 2,700 years. A contraction or shrinkage of the orbit of Mars from 720 old days (or 723.26 new days) down to the modern 687 new days per year is a change in the right direction. The Earth gained in day count per orbit - 5.256 old days - while Mars, one ninth of the mass of the Earth, lost 36 days from its orbit.
What one planet gained in energy (and in angular momentum), the other lost, a classical exchange of energy. Some have superficially assumed that this is a three-body problem, involving just Mars, the Earth and the Sun. No. It is a five-body problem, involving Mars, the Earth, the Sun, Venus and Jupiter. The model presented below portrays the spasms of catastrophism in 701 B.C.E. as a celestial double header. First there was an expansion of the Martian orbit which caused a close, inside flyby of Venus. Tables XI and XIII indicate Mars gained from Venus .086479 energy units. Some 56 days later, the red planet encountered the Earth and lost .133424 energy units to the Earth-Moon system. For Mars in 701 B.C.E., it was a net loss of .046944 energy units. There was also a loss in angular momentum. The net result for Mars of the two 701 B.C.E. crises was a contraction in the period of Mars. Measured in terms of days, it was 36 days - about 5.016%. In terms of its average distance to the Sun, the two planetary flybys of Mars in 701 B.C.E. resulted in a net contraction in distance of 3.37%. Again see Tables XI and XIII. If Mars passed close by, on the outside of the Earth, then because of the Earth's much greater mass, our planet would (and we affirm did) pivot and pull Mars inward eight or nine times as powerfully as Mars could pull the Earth outward. The pattern is consistent. Ancient calendars indicate something very much like this happened. Ancient folklore involving Earth upheavals together with the planet Mars (by whatever name) occurs on at least five continents. As was mentioned earlier, Plutarch, a first century AD philosopher, taught the rich, gambling, semi-barbarian Romans that the Earth and the Moon once had entered into a celestial crap game with Hermes. Actually Plutarch made a misidentification; it was with Ares, not Hermes (Mercury). Plutarch taught that the Earth gained one-seventieth of its period in that celestial crap game. 365.256 x 69 / 70 = 360.038 old days. That was good enough for the "undereducated" (barbarian) Romans. Plutarch also taught that in the same celestial crap game, the Moon lost one-seventieth of its period. 30 days x 69 / 70 = 29.5714. The Moon's modern period is 29.5306 days - its "lunation" or synodic month. Again, Plutarch's explanation was good enough to satisfy Roman standards about both celestial mathematics and gambling. [n7]
Energy And Its Equation
The equation of energy for any orbiting planet involves four factors. Two factors are constants, two are not. The first constant is (a) two Pi squared, which is 19.739209. The second constant is (b) a multiplying factor needed to express a planet's energy in usable measures. Of the two variable factors, the third is (c) the masses of the two celestial bodies involved in the perturbation. The masses of the relevant planets, in Earth masses, are approximately: Venus, 0.815; the Earth, 1.000; the Earth-Moon system, 1.012; Mars, 0.107. The fourth factor is orbital. It is (d) the length of the "semi-major axis" of these four planets, formerly and in the current age. The semi-major axis is half of the length of the long axis, the "x" axis; equivalently, it is the distance from either aphelion or perihelion to the center of the orbit. The semi-major axes of the relevant planets are given in "a.u." One a.u. is 92,955,807 miles, the average distance between the Sun and the Earth. The energy lost by the Moon, involved in the shrinking of its orbit some 3,000 miles, can also be calculated. But the calculations involved require going up to twelve decimal places.
To employ such figures for the Moon's energy and the contraction in its orbit would give a false impression of extreme accuracy. Therefore, for both energy balances and for angular momentum considerations, changes in the length of the axis of the Moon's orbit are so slight that they are set aside. On the other hand, the mass of the Moon is .012303 of the Earth, about one part in eighty-one. The Earth's mass is 1.000000 Earth masses, the measure of one unit of mass. Together, the mass of the Earth-Moon system is 1.012303. The addition of the lunar mass to the Earth's mass is significant and necessary for purposes of calculating perturbations for both Mars and the Earth.
Issue 5. Astronomical Support Data
There are several statistical categories of astronomical data that support the model. One can retro-calculate the position of Venus to determine whether or not it was where our model requires it was on January 24, 701 B.C.E. Retro-calculations are made on the orbits of the Moon, Venus, Mars and Jupiter: for Venus, its location on January 24, 701 B.C.E.; for the Moon and Mars, their locations on the night of March 20, 701 B.C.E. See Chapter 11. If they retro-calculate well, they will comprise four clues of support, evidence to indict Mars as the ancient "bane of mortals". If retro-calculations are unsuccessful, they will invalidate this model. Retro-calculations can be made from any modern date, back to March 20, 701 B.C.E. Were the Moon, Venus and Mars where our model claims they must have been? Was there a Mars-Venus interaction about January 24, just before the Final Flyby? Can the orbit of Mars, and the position of Mars therein, be traced back to March 20, 701 B.C.E.? For technical reasons, the orbital energy of a planet is always cited as a negative number. Thus, the larger the magnitude of the negative number, the LOWER the energy level, and vice versa; the closer the number is to zero, the greater the planet's energy. Negative numbers for describing energy are helpful in science but can confuse the non-scientific reader. Among Kepler's works are Mysterium Cosmographicum (1596), Astronomia Nova (1609) and Harmonice Mundi (1619). Kepler is considered the founder of physical astronomy because of his demonstration that the planes of all planetary orbits pass through the center of the Sun. He recognized the Sun as the moving power of the planetary system. In his first teaching position at Graz, Johannes Kepler undertook extra-curricular nocturnal studies of the successive positions, hence of the movements, of Mars. Years later in Prague, this Mars data, gathered 400 years ago, led to his monumental scientific breakthrough. All planetary orbits are ellipses that follow three laws of planetary motion. Kepler discovered those laws and published them; the first two appeared in Astronomia Nova and the third in Harmonice Mundi, and all three underlie his Rudolphine Tables. Emperor Rudolph of Prague was his benefactor, and it was to him that Kepler dedicated those tables. It is to Johannes Kepler that our Tables XI, XII and XIII are dedicated; they are our "Rudolphine Tables." Table XI is a resume on energy changes. Note that Jupiter's energy did not change very much. Table XII in the next chapter is a resume on angular momentum changes. There, Jupiter's angular momentum does shift a bit. Table XIII, also in the next chapter, is an in-depth analysis. Table XIII lists 24 astronomical categories, involving the four planets and the various measurements of change from the Catastrophic Third Orbit of Mars to the Serene Fourth Orbit. These shifts were what produced the modern orbit of Mars.
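The energy bookkeeping described in the preceding paragraphs can be reproduced if one assumes the "energy unit" is E = -2(pi squared) x m / a, with m in Earth masses and a in astronomical units; the text names exactly these factors (2 pi squared = 19.739209, the masses, and the semi-major axes), but the exact form of constant (b) is our assumption. The Python sketch below is a reconstruction under that assumption, using the semi-major-axis figures quoted in this chapter; it comes out close to, though not exactly on, the table values.

```python
import math

# Assumed form of the text's "energy unit": E = -2*pi^2 * m / a,
# with m in Earth masses and a in astronomical units (a.u.).
AU_MILES = 92_955_807.0
TWO_PI_SQ = 2 * math.pi ** 2          # 19.739209...

def energy_units(mass_earths, a_miles):
    """Orbital energy in the chapter's units (always negative)."""
    return -TWO_PI_SQ * mass_earths / (a_miles / AU_MILES)

# Earth-Moon system (1.012303 Earth masses): 92,339,242 mi -> 92,955,807 mi.
gain_earth = energy_units(1.012303, 92_955_807) - energy_units(1.012303, 92_339_242)
# Mars (0.1074 Earth masses): 146,579,410 mi -> 141,635,973 mi, net of both flybys.
net_mars = energy_units(0.1074, 141_635_973) - energy_units(0.1074, 146_579_410)

print(f"Earth-Moon gain: {gain_earth:+.6f} units (table: +0.133424)")
print(f"Mars net change: {net_mars:+.6f} units (table: -0.046944)")
```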
Table XI - The Energy Exchange of 701 B.C.E.
An Analysis Of Flyby Distances
In this model of Mars catastrophism, Mars and Venus did a passionate polka on or about January 24, +/- one day. This polka in late January 701 B.C.E. was, in a gravitational sense, passionate. In an electromagnetic sense, it was also intense. Ardent as the Ares-Venus dance was, at .086479 energy units, it was only about 65% as passionate as was the last waltz between Mars and the Earth some 56 days later. The last waltz, even more passionate, was at .133424 energy units. It had been a long time since the Solar System had experienced anything as dramatic as these events in 701 B.C.E. This polka left Venus closer to the Sun by 262,467 miles. And the last waltz left the Earth-Moon system farther out by 616,565 miles. The combination of these two dances, plus modest effects by Jupiter, left little Ares closer in to the Sun by 4,943,437 miles. See Table XIII. Table XI, on further analysis, indicates that the last Mars-Earth waltz, at its closest and most passionate, was at an estimated distance of 27,000 miles, planet center to center. (Since Mars has a radius of 2,100 miles and the Earth 3,950, the two closest surfaces were about 21,000 miles distant.) The last Mars-Venus polka was only about 65% as passionate as was the last Mars-Earth waltz. Venus is smaller than the Earth; it weighs 81% as much. The Mars-Venus flyby distance, planet center to center, is also an estimate: 35,000 miles. The radius of Venus is 3,800 miles and that of Mars is the aforementioned 2,100 miles. Thus, the nearest distance between the surfaces of these two planets at the closest moment (peri-Venus) was an estimated 29,100 miles (35,000 - 5,900 miles). Is it any wonder that recent photos from space missions reveal a violent, recent physical geography on the Venusian surface? The damage appears as if it happened only yesterday, and to astronomers, a time span of 3,700 years or 12,000 years ago is only "yesterday". Hesiod and Isaiah both saw the two flybys. Hesiod saw them from Thebes, Greece. Isaiah saw them from Jerusalem. Hesiod wrote that first Ares had a tangle with Pallas Athene (Venus), and next Ares had a cosmic encounter with the Earth. It brought much havoc, lightning, earthquakes, volcanism, etc. He mentions the steeds of Ares, Deimos and Phobos, 17 times in 486 lines. Hesiod is in disagreement with modern astronomers that Mars, with Deimos and Phobos, has always been 30,000,000 miles or more distant from the Earth over the last 4 billion years. They were close enough to be seen, and as we shall see in chapter 11, their distances were noted in Mars diameters. Swinging with different partners for different dances often involves swinging in different directions on the dance floor. So it was with these two planetary flings of Mars; they were also in opposite directions (with respect to the Sun). In the Mars-Venus polka, little Mars was on the inside, being pulled outward by Venus. Venus, on the outside, was pulled inward, somewhat closer to the Sun. As is mentioned above, Table XIII indicates that Venus ended up shifting some 262,467 miles closer to the Sun, according to calculations of our staff. As a consequence, Venus acquired a permanent increase of roughly 1% in solar radiation. Already hot, Venus became hotter still, by an estimated 0.36%. Also as a consequence, the Earth, now at 92,956,000 miles instead of 92,339,000 miles, receives about 1.3% less solar radiation than it did during the Catastrophic Era.
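The surface-to-surface separations and the solar-flux changes mentioned in this passage follow from simple arithmetic; the sketch below redoes it with the radii and distances quoted above. The flux change uses the inverse-square law, and the ~67.2-million-mile radius for the Venus orbit is our own assumed baseline, not a figure stated in this chapter.

```python
# Closest-approach geometry and inverse-square flux changes, using the
# radii, flyby distances and orbit radii quoted in this chapter.
R_MARS, R_EARTH, R_VENUS = 2_100, 3_950, 3_800          # miles

print("Mars-Earth surfaces :", 27_000 - (R_MARS + R_EARTH), "miles apart")   # ~21,000
print("Mars-Venus surfaces :", 35_000 - (R_MARS + R_VENUS), "miles apart")   # ~29,100

def flux_change(r_old_miles, r_new_miles):
    """Fractional change in received sunlight when the orbit radius changes."""
    return (r_old_miles / r_new_miles) ** 2 - 1.0

# Earth moved outward: 92,339,242 -> 92,955,807 miles (receives less sunlight).
print(f"Earth: {flux_change(92_339_242, 92_955_807) * 100:+.2f}% solar radiation")
# Venus moved inward by 262,467 miles from an assumed ~67.2-million-mile orbit.
print(f"Venus: {flux_change(67_200_000, 67_200_000 - 262_467) * 100:+.2f}% solar radiation")
```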
In the process, Mars also attracted and pulled the on-looking, nearby full Moon inward by some 3,070 miles, or 1.28% of its orbit radius. Its period came in from 30.103 new days (30 old days, one-twelfth of the Earth's old period) to 29.53. One result is that the new lunar tides in the Earth's oceans and on its sea shores increased slightly, some 3+%. Another result was that the ancient 30-day lunar calendar no longer served the ancients at all well, nor would it serve modern persons well. The old Catastrophic Era calendars had to be changed. Every year they fell another five-plus days out of synchronization with the Sun, and the error kept adding up. For farmers and sailors, the old calendar soon became intolerable. Crops had to be planted correctly, and voyages dated realistically.
One-Ninths --- Three Times
A) Mars, at .107 Earth masses, is about one-ninth the mass of the Earth. The Moon, at a mass of .0123, is 1/81st of the mass of the Earth. B) Thus the Moon also happens to be about one-ninth of the mass of Mars, one part in 8.7. In the Catastrophic Era, the model calculates, the Moon was about 241,900 miles from the Earth. This relationship was invaded, and permanently changed, during the last Mars flyby. However, according to this model, Mars passed through the Earth-Moon system much closer (about eight times closer) to the Earth than to the Moon. Resonance studies indicate that in the Catastrophic Era, except for the last flyby, all of the 100+ Mars flybys were on the inside, or sunny side, of the Earth's orbit. Only the last Mars flyby was unique, on the outside, or night side. C) It calculates out that, planet center to center, at its closest, Mars was about 27,000 miles from the Earth, and simultaneously 214,000 miles from the Moon. Thus, the Mars flyby was at about one-ninth of the distance between the Earth and the Moon. The ancients noticed these changed conditions after 701 B.C.E., and soon realized their 30-day lunar calendars were next to useless. Despite the evidence, the leaders of 20th century gradualism have rejected the idea of ancient 360-day calendars. This is because they would also have to accept a sudden change in the distance of the Earth to the Sun, and its causative factor, a Mars flyby. For the same reason, those leaders cannot accept a change in the Moon's ancient distance from our planet. It would open the gate to further changes and developments. The system of gradualism was conceived by various anti-clerical thinkers such as Kant and Laplace in astronomy, and Charles Lyell in geology. With such a predisposition, those philosophers would give no credit to the Bible for its history, much less for its ethics and theology. Now, were they to change, gradualists would need to go back and read the Bible for its history and geography. They might also gain greater respect for its ethics and theology. Then they would need to read and assess Plutarch, Apollodorus, Hesiod, Homer, Heraclitus, etc. No, it is easier to simply reject, and reject again, and be stylishly "politically correct" in an age of compromises of all kinds. In ancient society after society, there was a clamor by farmers, sailors and mathematicians for corrected calendars. There was a need in Egypt to revise the calendar. The Egyptian astronomical society met at Canopus in 238 B.C.E. to create a better calendar. Their conclusion was called the Canopus Decree, and it led to a revised 365-day calendar for Egypt. It was a revision of two earlier Egyptian calendars. One earlier calendar was related to the Earth's spin rate, their 360-day calendar.
The other ancient Egyptian calendar was related to the orbit of Venus, in an era when five Earth orbits equaled precisely eight Venus orbits. The Venus calendar of the ancient Egyptians had been the more accurate of the two Egyptian calendars in that era. It is to be noted that the old Egyptian Venus period is cited at .625 of the Earth's period in Tables XI and XIII; .625 corresponds to a perfect 8:5 resonance. The old Venus calendar of ancient Egypt had been a very useful one, and it may be that the ancient Mayas had a Venus calendar somewhat similar to the Egyptian Venus calendar. For the Hebrews, a simple method was developed to revise the old calendar and still retain its now imperfect twelve-month scenario. It was merely to add five days to the twelfth month of the year, which was named "Adar". This was done, and the additional five-day period was called "Veadar," the new, or "the second," Adar. Veadar made its appearance years after 701 B.C.E. Their Veadar calendar also implies much the same thing as Plutarch's later assessment of the ancient order. However, such a simple adjustment as adding five days per year still had an imperfection. With this change, there came to be 36,500 days per century, but the Earth's orbit required 36,525.6365 (new) days per century. The imperfect Veadar calendar was still off by 25+ days per century. So there was a need for 25 leap days added per century, one every four years. But that still wasn't quite good enough for astronomers. So analysis eventually required that one leap day be skipped each century (except every fourth century). The old calendar of the earlier era had major problems; the new 365-day calendar had minor ones, unless perfected. ELECTRICAL EFFECTS. Another effect of these two swinging events involved the ancient planetary magnetic fields in space. There is no dynamo in the inner core, as most scientists suppose. In a future work of this series, it will be demonstrated that Mars flybys were the dynamo of our planet's geomagnetic field. And it will be demonstrated that the geomagnetic field is housed in the cool iron of the Earth's crust. Our planet no longer has its old dynamo. As a consequence, its geomagnetic field, like any old, once-magnetized iron nail, is in the process of magnetic decay. The Earth's geomagnetic field decays with a half-life of 1,350 years, as the short calculation following this passage illustrates. The Earth's geomagnetic field strength now is .3 Gauss. In 701 B.C.E. it was 1.2 Gauss. In 4700 AD it will be .075 Gauss. Ultimately it will become so weak as to no longer shield the Earth from the Sun's short-wave (actinic) radiation. This is a major long-term problem for mankind, and for the future of all flora and fauna. Physicists need to analyze its present decay effects on the biosphere. They need to predict the future effects of the decay. And if possible, they need to find a method of planetary remagnetization. If they don't, fauna and flora on this planet's surface are doomed within five to seven thousand years. The ancients in their literature discussed dramatically how destructive Mars flybys were. But Mars flybys were a two-sided coin. On the positive side, Mars flybys created strata on continents with sudden oceanic floods; they created suddenly uplifted mountain systems from sudden magmatic crises. In the magma they created mixes of water, silanes and carbides to convert into (1) more silica, or crust, as a chemical byproduct, and (2) more subsurface petroleum.
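The decay figures quoted above follow a simple half-life law. The sketch below merely evaluates that law with the chapter's own numbers (1.2 gauss in 701 B.C.E., a 1,350-year half-life); it illustrates the arithmetic of the claim, not an endorsement of it.

```python
def field_strength(year, b0=1.2, start_year=-701, half_life_years=1350):
    """Geomagnetic field strength in gauss under the chapter's half-life model.
    Negative years are B.C.E.; the model starts at 1.2 gauss in 701 B.C.E."""
    elapsed = year - start_year
    return b0 * 0.5 ** (elapsed / half_life_years)

for yr in (-701, 649, 1999, 3349, 4699):
    label = f"{abs(yr)} {'B.C.E.' if yr < 0 else 'C.E.'}"
    print(f"{label:>12}: {field_strength(yr):.3f} gauss")
```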
Also, paleomagnetic polarity reversals occurred, as is evident in basalt formations, and spin axis relocations (shifts) occurred, as is evident in climatology and ancient literature. Isaiah recorded one of them for history, a shift of 10°, and Joshua wrote about another. Gravitationally, there was a strong mutual attraction between the two bodies, Mars and the Earth, drawn passionately together in space during these swinging events. The gravity of Mars created immense tides in the Earth's oceans and in its sub-crustal magma. Volcanism suddenly erupted from dormant craters on the Earth, and nine times as much volcanism, or more, erupted from the gigantic volcanoes of Mars. The Earth in close flybys created massive eruptions on the surface of Mars. In this long, eruptive process, one of the Martian volcanoes eventually attained an elevation of 13 miles (Olympus Mons). Its cone covers 110,000 sq. miles and contains 500,000 cubic miles of lava. (If there were 250 flybys, that would average 2,000 cubic miles of erupted lava per flyby.) And that is merely the largest of the Martian volcanoes. Others rise to 35,000, 40,000 and 50,000 feet. That lava was pumped up by the gravities of the Earth and Venus during ancient flybys. Electrically, too, it was an attraction, not gravitational combat. The two planetary magnetic fields of old Earth and old Mars BRIEFLY UNITED IN SPACE for ten to fifteen hours. Soon they were torn apart ... as Mars parted company with the Earth ... and each planet experienced a reversed polarity of its magnetic field. The south magnetic pole and the north magnetic pole were simply and suddenly reversed. The spin axis sometimes relocated also. Major league baseball schedules used to feature double headers. The year 701 B.C.E. featured a cosmic double header, two shows in the theater of the cosmos in sixty days. Admission was free --- for the survivors. The first of the two in that remarkable year was "The Last Mars-Venus War." The second feature, even more dramatic (from any perspective), was "The Last Mars-Earth War." Both celestial cinemas were filled with orbit-changing action. Electrically, it resembled a brief tryst in space, as Hesiod described. Ares had a sometime "date" with Pallas Athene; this would be the last of them. There was (a) an ion exchange as well as (b) an energy exchange. In addition, there was (c) an angular momentum exchange. As has been mentioned, this one, the last polka of Mars with Venus, was only a prelude to the event 56 days later: its last waltz ever, the red planet's last dance with the Earth. Both Hesiod and Isaiah saw the two flyby scenes. As was mentioned, Hesiod viewed the scenes from the Greek Thebes, a place some 40 miles outside the city of Venus (Athens, named after Pallas Athene, a cosmic deity, was the largest city of the province of Attica). The other viewer-reporter, Isaiah, saw this celestial show, perhaps from a parapet on the city wall of Jerusalem. Each reported and recorded the events he saw for posterity. We, both 20th century catastrophists and gradualists, are that posterity. But they wrote in terms understandable and acceptable to their fellow citizens.
On The Scene Reporting
The opening caption of this chapter briefly cited a section from Hesiod's "The Shield of Herakles." This was a rich reporting job, less than 500 lines of verse about one of the most spectacular scenarios in the history of our planet. The second of the last two wars of Mars was a close brush with the Earth.
As Mars approached the Earth, for a few minutes that night Mars was fully reflective, a "full Mars," with our Moon, also at full, in the background. When full, our Moon, diameter 2,160 miles, occupies over a half degree (.515°) in the night-time skies. Mars has a diameter of 4,212 miles, almost double that of the Moon. At 240,000 miles, Mars occupied 1.0057° in the night-time sky. The Moon's reflectivity (albedo) is 7% while the reflectivity of Mars was 16%. Thus at similar distances, and if both were full, Mars was nine times brighter than the Moon (an apparent-area ratio of nearly 4, times 16% over 7%). When Mars was threatening the Earth at 240,000 miles distant, causing widespread terror, the red planet covered one full degree in the night-time skies. It was seven hours (and 213,000 miles) from perigee. At 120,000 miles Ares covered 2°, both horizontally and vertically, and it was only three hours from perigee. The tocsin alarms were sounded; gongs in Japan and India began to be struck (54 times, once every few minutes). Trumpets of warning were blown in most cities of the Near East. Elsewhere other tocsin sirens were rung or blown, warnings to the populace to find their shelter quickly, in caves, cellars, fox holes, etc. It was three hours to perigee (climax). At 60,000 miles the diameter of Indra-Horus-Baal-Enlil covered 4° as more populations on this planet became frantic, terrorized, and most had run to their prepared places of refuge. It was a universal epidemic: Mars-phobia. It was now one hour and a few minutes to perigee (maximum upheaval). Mega-Richter-scale earthquakes rattled; dormant volcanoes began to re-erupt. When within 30,000 miles (planet center to center), Mars was some 6 minutes from climax. That night, at climax, Mars' diameter covered over 8° of the night-time skyscape. Its shining area at climax appeared to be roughly 300 times greater than that of the full Moon. Its reflection of sunlight was roughly 680 times the brightness of a full moon. This was the "Disc of Mars" (The Shield of Hercules) as it was seen on that night. A round disc was the shape of Mars. Its coloring was the color of electrum, a shiny mixture of silver and gold. When Hesiod titled his work, to him and his fellow Greeks it meant "The Shining, Electrum-colored Disc of Mars". Translators of Hesiod have lacked the perspective of planetary catastrophism, and so they have missed the point in Hesiod's titling of this saga of catastrophism. The "Shield of Hercules" was "The Disc of Mars". The bright, light-orange surface of Mars reflects sunlight almost 2.3 times more efficiently than does the duller, darker surface of the Moon. In astronomical parlance, surface reflectance of a planet is "albedo." The albedo of the Moon that night was 0.07. The albedo of Mars was 0.16. The silvery, orange planet reflects a full 16% of its received sunlight. The intensity of tides follows the principle of mass over the distance cubed. At 240,000 miles, Mars' apparent diameter was 1.94 times that of the Moon. On the Earth's sea shores, tides generated by Mars were nine times the normal lunar tide. At 120,000 miles, four hours closer, Ares' apparent diameter was 3.9 times that of the Moon. Its area was over fifteen times that of the Moon. Its reflectance, if full, was 35 times that of the Moon. This was about three hours before midnight, Athens and Jerusalem time. At that moment, tides on the ocean shores caused by Mars were 68 times the normal lunar tides. And Mars was still coming in.
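The apparent-size and brightness figures in this passage follow from simple geometry: angular diameter scales as physical diameter over distance, apparent area as its square, and the text multiplies the area ratio by the albedo ratio (16% versus 7%) to compare brightness. The short sketch below redoes that arithmetic with the diameters and albedos quoted above; the 238,900-mile mean lunar distance is our assumed baseline.

```python
import math

MOON_DIAM, MARS_DIAM = 2_160.0, 4_212.0      # miles, as quoted above
MOON_DIST = 238_900.0                        # miles, assumed mean Earth-Moon distance
ALBEDO_MOON, ALBEDO_MARS = 0.07, 0.16

def angular_deg(diameter, distance):
    """Apparent angular diameter in degrees (small-angle approximation)."""
    return math.degrees(diameter / distance)

moon_deg = angular_deg(MOON_DIAM, MOON_DIST)  # ~0.52 degrees
for d in (240_000, 120_000, 60_000, 30_000, 27_000):
    mars_deg = angular_deg(MARS_DIAM, d)
    area_ratio = (mars_deg / moon_deg) ** 2
    brightness = area_ratio * ALBEDO_MARS / ALBEDO_MOON
    print(f"Mars at {d:>7,} mi: {mars_deg:5.2f} deg across, "
          f"{area_ratio:6.0f} x Moon's area, ~{brightness:6.0f} x full-Moon brightness")
```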
At 60,000 miles, one hour from maximum, Mars-Enlil's apparent diameter was 7.8 times that of the Moon. Its area was 60 times that of the Moon. Its reflectance, if full, was almost 135 times the full Moon. It was barely one hour from midnight. As was mentioned above, tides in the oceans and in the magma increase according to the equation, mass over distance cubed. Halving the distance produces tides EIGHT TIMES greater in intensity. The following is an indication of tidal intensities as Mars approached, relative to the normal lunar tide:
Mars at 480,000 miles - 1.06
Mars at 240,000 miles - about 9
Mars at 120,000 miles - about 68
Mars at 60,000 miles - about 550
Mars at 30,000 miles - about 4,400
Mars at 27,000 miles (perigee) - about 5,900
(The Mediterranean Sea is almost a tideless sea since it is almost entirely enclosed from the Atlantic except for some 8 miles, the narrowest part of the Strait of Gibraltar, or, to the Greeks, the Pillars of Hercules. Thus it was that the open oceanic shores of the five oceans were treated far, far worse by tides than were the shores of the Mediterranean "Lake".)
At 30,000 miles, yet closer, Mars-Indra's apparent diameter was 15.5 times that of the Moon. Its area was about 240 times that of the Moon. Its reflectance, if full, was roughly 550 times the Moon's brightness. This time is estimated as approaching midnight for our two reporters, Isaiah and Hesiod. (Talmudic material indicates the climax was around midnight.) At 27,000 miles, perigee according to this analysis, the diameter of Bel/Baal/Horus was 17.24 times that of the Moon. Its area covered roughly 297 times (17.24 squared) as much skyscape as the Moon. Its reflectance was roughly 680 times the brightness of the Moon if both were full. This was the perigee of Mars that night, perhaps at a few minutes after midnight. Oceanic tides and subcrustal magmatic tides were some 5,900 times normal Moon-created tides, hence the crescendo of massive earthquake and volcanic activity. Some were wondering if this would be the end of the world. Or of the Earth. No. But had they understood Roche's Limit, Mars had come within 16,000 miles of shattering and being turned into countless asteroid fragments. (Roche's Limit for fragmentation is about 2.44 radii, and the Earth's radius is about 3,950 miles.) Had Mars come within 11,000 or 12,000 miles of the Earth, it would have fragmented just like Astra, many millennia earlier. Thus Ares, "bane of mortals" (Homer), made a close flyby, rapidly closing in on the Earth. (To the Greeks, "It" was a "He", Ares, the celestial warrior. Athena, or Venus, was feminine.) Mars was accompanied by its celestial steeds, Deimos and Phobos, on full display, just as Hesiod and others reported. Hesiod referred to one or both of the celestial steeds of Mars, Deimos and/or Phobos, in one form or another 17 times in only 486 lines. These include lines 97, 144, 154 (Onrush and Backrush; Battle noise and Panic), 195 (Terror and Panic), 347, 370, 372 (fluttering-maned horses) and 463 (Panic and Terror). Tiny Deimos has fragment diameters of only 6 x 7.5 x 10 miles. Phobos was a little larger fragment, with a maximum diameter of 17 miles. Hesiod viewed the orbiting of these tiny asteroids as the turning of the wheels of the chariot of Mars. Phobos has a period of 7 hours 39 minutes; Deimos, 30 hours 18 minutes. (Gradualists note: Hesiod could not have seen them if Mars were 2,000,000 miles distant.) For posterity (including us), Isaiah and Hesiod reported the events of that night. It was a scary, close, midnight passover of Mars overhead, above a surrounded, very frightened Jerusalem, a city that happened to be crammed with refugees. Assyrian assault armies were waiting, a mile or so beyond the city walls.
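The tidal multiples quoted through this passage follow the mass-over-distance-cubed rule the text states. The sketch below evaluates that rule with the chapter's own mass ratio (Mars at 0.107 Earth masses versus the Moon at 0.0123) and reproduces the 1.06, ~9, ~68 and ~5,900 figures; the intermediate entries in the list above were filled in the same way.

```python
# Tidal intensity of Mars relative to the normal lunar tide,
# using the text's rule: tide ~ mass / distance^3.
MASS_MARS, MASS_MOON = 0.107, 0.0123     # in Earth masses, as quoted
MOON_DIST = 238_900.0                    # miles, assumed mean lunar distance

def mars_tide_multiple(mars_distance_miles):
    mars_tide = MASS_MARS / mars_distance_miles ** 3
    moon_tide = MASS_MOON / MOON_DIST ** 3
    return mars_tide / moon_tide

for d in (480_000, 240_000, 120_000, 60_000, 30_000, 27_000):
    print(f"Mars at {d:>7,} mi: {mars_tide_multiple(d):8.1f} x the normal lunar tide")
```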
Sennacherib and his star-gazer advisors viewed this coming scene as an opportunity rather than as a catastrophe. Lightning might hit Jerusalem, fry it and blow it down from the inside. Or, more likely, earthquakes might compromise the integrity of its walls, and thereby simplify the coming brutal assault. From such a sensational starry scene, including a visible Deimos and Phobos, a person might succumb to either delirium tremens, cometophobia (about which more is said in chapter 1), or "Ares-phobia." (Deimos and Phobos were two words among some two dozen Greek words for different kinds or categories of fear.) These events were both foreseen and reported by Isaiah in the Book of Isaiah in the Bible, chapters 1 to 38. They were also reported in Hesiod's rich 486-line account, which he titled "The Shield of Herakles (Hercules)." They are also reported in Ginzberg's "The Legends of the Jews". Incidentally, the title of Hesiod's work seems to have suffered from mistranslation by later scholars, who never imagined planetary flybys. As was mentioned above, a more accurate title, which Hesiod probably had in mind, was "The Disc of Ares". A shield was a disc, and Hercules was among the archetype names of Ares, as were Phaethon, Gorgon, Lotan, Medusa, Perseus, etc. In Greek mythology, Hercules was very strong; he was one of many archetypes of Mars in Greek literature. Ares may have gotten a new nickname with each flyby occasion. Other archetypes included the swift Perseus, the evil Medusa, the hideous Cyclops, the even uglier Gorgon, the ravaging Lotan, etc. Essays by David Talbott and Ev Cochrane consider that Ares had 110 archetypes or "nicknames" in Greek literature. [n8], [n9] Lotan, one of the Greek archetypes of Mars, is derived from the Chaldean language. This Chaldean word was borrowed or inherited by the ancient Hebrews and appears in the Bible as Leviathan, the celestial dragon in the Book of Job, ch. 41. With some 100+ archetype words, all descriptive words for Mars in Greek, this is a measure of the swiftness, the luster, the terror and the repeated damage wrought by numerous close Mars flybys in ancient times. In another volume, evidence will be cited that the Mars flybys came in cycles, 108-year periodic cycles, periodic to the day and virtually to the hour. The great monsters or dragons of the celestial deep were known by the Hebrews, as by the Chaldeans, under several names. Those names included Leviathan, Behemoth, Tiamat, Asp, and Rahab. Their assaults on the Earth go back to the era when these celestial assaults began, when the Earth's geomagnetic field was created, and when paleomagnetic polarity reversals began to appear in lava flow sequences. Leviathan, in our analysis of Chaldean thought, appeared in October flybys, while Behemoth was seen and feared in Chaldean March flybys, or "Passovers". The scenery, that frightening night, stirred both Isaiah and Hesiod to record it. Since modern translators have no idea of planetary catastrophism, they have chosen lesser, somewhat softer, non-astronomical terms for celestial scenes they could not understand. As was mentioned earlier, round, electrum-colored discs became shields. They can be reflective if made of iron or bronze and polished. Electrum was a shiny alloy of silver and gold. The shield of Mars, according to Hesiod, had the coloring of an electrum disc. Shields were used in earthly combat. Circular shields were involved in celestial combat.
This night a round, shield-like disc was involved in intense celestial combat with the Earth. It was the disc Hesiod described: For all about the circle of it, with enamel and with pale ivory, and with electrum it shone, and with gold glowing it was bright, and there were folds of cobalt driven upon it. In the middle was a face of Panic, not to be spoken of, glaring on the beholder with eyes full of fire glinting, and the mouth of it was full of teeth, terrible, repugnant, and glittering white, while over the lowering forehead hovered a figure of Hate, marshaling the slaughter of fighting men, cruel spirit, who took the senses and perception out of those fighters who tried to fight in the face of Zeus' son, the War God. [n10] As the catastrophe deepened, there was a thick smoke-and-ash smog in the air from burning forest fires and volcanic eruptions. The next morning the smoke and ash smog in the atmosphere altered the color of this light, silvery-orange disc into a deep red. Forest firefighters readily understand this coloring: a thick haze of smoke in the atmosphere changes yellow sunlight into a red glow. It was a night to be remembered, and so Hesiod wrote, in order that it could be remembered by future generations. He had no idea this flyby would be the last of a long series of waltzes of Ares with Hera. So it is that modern translators of Hesiod, unfamiliar with planetary catastrophism and the unruly Ares, have had to struggle in translating the scenes Hesiod described. Interestingly, in his report, Hesiod portrayed Mars as first tangling with "gray-eyed" Pallas Athene - Venus. Apparently, interplanetary discharges were generated by fierce friction in the subcrustal tides of magma. Between Ares and Pallas Athene, the bolts of lightning lit up the cosmos, including lighting up the cometary tail of Ares. This was some 56 days before the Final Flyby. The phenomenon of discharges on Venus lighting up the cometary tail of Ares was a rare sight for the ancient Greeks. Mars-Venus flybys occurred rarely, but when they did, cosmic lightning lit up the cometary tail of Mars like a Christmas tree. It was an "aegis". Modern translators of Hesiod into English have struggled to find a word for Ares' cometary tail. It was a reflective, silvery, effervescing celestial gauze, reflecting cosmic lightning, interplanetary Venus-Mars discharges. In English there is no word other than the Greek "aegis" for this phenomenon when lit up. Otherwise, the icy, cometary tail of Mars was known as the "Fleece of Aries." It was a wonder to look at, even for Zeus deep-thundering, through whose counsels Hephaistos had made the shield, great and massive, fitting it with his hands. And now the powerful son of Zeus swung it with full control, and leaped down from the horse-chariot like a lightning-flash from the hand of his father, Zeus of the aegis, stepping light on his feet, and his charioteer, strong Iolaos, standing firm on its floor steered the curved chariot. Meanwhile, the goddess, Athene of the gray eyes, came and stood close beside them ... [n11] So modern translators settled upon the ancient Greek term, the "aegis" of Ares. Only, no translator today can define "aegis" except that it may have had something to do with Pallas Athene and the warlike son of Zeus. In addition, Hesiod, in his brief 486 lines in "The Shield of Herakles," did not overlook the appearance of Deimos and Phobos as the two black celestial steeds of Ares.
Adjective and noun phrases used by Hesiod, and now translated, include such terms as "Shaker of the Earth" (line 104), "the War God" (150), "Manslaughter" (155), "Grim-faced Ares" (191), "Death mist" (264), "Man-slaughtering Ares" (334), "Lord of Battles" (371), "Son of Zeus" (391), "Stout-hearted son of Zeus" (424), "Man-slaughtering Ares" (425). These black steeds were described drawing the shining, warlike, sword-swinging chariot of Ares across the celestial scene, the cosmos. As was mentioned, Deimos and Phobos, and words for them, occur some 17 times or more in 486 lines: Panic and Terror. Swift-footed horses. Fast horses with reins slackened. The horses that drew his chariot. Horses on either side. Fast-footed horses. Fluttering-maned horses. Two of them.
Isaiah and Sennacherib
Isaiah, like Hesiod, was a reporter of the last cosmic waltz. Among his predictions were forecasts of [high voltage] crashing cosmic lightning discharges, gigantic killer shock waves, terrible claps of thunder [heard for thousands of miles], rocking earthquakes, vigorous volcanic eruptions and an unsettling of the cardinal directions, hence a spin axis relocation. All of these had occurred during the flyby of 756 B.C.E., perhaps when Isaiah was a child observer. The earth is utterly broken down, the earth is clean dissolved, the earth is moved exceedingly. The earth shall reel to and fro like a drunkard and shall be removed like a [summer] cottage. Isaiah 24: 19-20 Drunkards reeled; so could the spin axis of the Earth. Summer cottages could be easily relocated with a pair of staves and a sextet of pole carriers. So also, the North Pole, the South Pole and the equator were relocated. Isaiah had it right. It was somewhat of a repeat of the 756 B.C.E. catastrophe, except that it was on a Passover anniversary in the spring, and it was at night rather than during the daytime. But his language was good for the Hebrews of his era, though it was not well tailored for modern science. Isaiah first predicted, and then, on the day after the Final Flyby, reported, that the spin axis had shifted. Isaiah reported it in a factual manner for the citizens of Jerusalem. He recorded that the shadow on the Jerusalem sundial had shortened "10°." Someone had measured the new shadow against the previous shadow, probably as cast at noon, at the Sun's highest point. The standard of measure he used was lost by the Jews after their Babylonian Captivity, when they lost their Hebrew language. So translators, in Hebrew, Latin and English, have guessed and selected the English word "degrees." Probably the original term related to a measurement of angles. The new angle was measured against the old, and the shadow was shorter. Apparently the shadow was ten "short cubits" shorter, our educated guess. And the obelisk was perhaps 75 feet high, again a guess. Angles and degrees are related, and this job of translation is fairly good. Sennacherib's astrologers, close to the throne, had advised him that the Assyrian army ought to be stationed outside Jerusalem at least by our date of March 20. On that night, earthquakes would test the defensive walls, and perhaps shatter them, or at least damage them. And one bolt of cosmic lightning from the Assyrian Nergal (the Chaldean Bel, the Greek Ares) might turn the holy city into a flaming holocaust. Thus, Sennacherib's astrologers recommended taking Jerusalem the easy way, enlisting assistance from Nergal-on-high.
With this vast army Sennacherib hastened onward, in accordance with the disclosures of the astrologers, who warned him that he would fail in his object of capturing Jerusalem if he arrive there later than the day set by them. [n12] The Jewish Talmud also records the scenery and the dramatic history of that unusual and pivotal night. Sennacherib wanted to capture Jerusalem, either by surrender or by sack. If by sack, it meant the total slaughter of Jerusalem's inhabitants, a holocaust, Nazi-like, including all stray pets. If Jerusalem was taken by surrender, it meant deportation of the surviving populace to a cold, remote northern land, the land of the Volga in southern Russia some 1,000 miles away, or elsewhere in cities of remote Inner Asia. King Hezekiah anguished over Sennacherib's terms of surrender, and asked for Isaiah's advice. Isaiah advised faith and resistance; the Lord would deliver Jerusalem. Taking Isaiah's advice, Hezekiah resisted, a chancy decision indeed in light of the record of Assyrian annihilation of cities elsewhere in the Near East. The "Senna" part of Sennacherib's name means "the Moon." The "cherib" part of his name in Semitic languages is another word, as shall be demonstrated, for "the cosmic marauder, or Mars." The translation in English is "cherub," which was a destructive angel or messenger of the Lord. This is discussed in more detail in Chapter 12. Those same two celestial bodies (Mars and the Moon) "by chance" were the two bodies whose orbits were reduced in diameter that very night. That night, March 20, 701 B.C.E., Sennacherib's well-armed panzer, with their ample iron armor, were on schedule, and on duty, camped outside the western wall of the city, some 250,000 strong. Sennacherib's terms, as usual, were "either-or". It was either (1) to surrender the city and accept deportation of the entire populace into a faraway land, abandoning forever the land of Israel, or (2) to suffer total annihilation. No more children of Israel. It was reminiscent of Hitler's style. In the Bible, the events of that night are found in the Book of Isaiah, chapter 37, and in two other places of the Old Testament, in II Kings and in II Chronicles. Further description at greater length has been recorded in the Talmud, cited as follows. The archangel Gabriel, sent by God to ripen the fruits of the field, was charged to address himself to the task of making away with the Assyrians, ... The death of the Assyrians happened when the angel permitted them to hear the "song of the celestials." Their souls were burnt, though their garments remained intact. [n13] Hezekiah and Isaiah were in the Temple when the host of the Assyrians approached Jerusalem; a fire arose from amidst them, which burned Sennacherib and consumed his host. See Tehillim 22, 180. The burning of Sennacherib is not to be taken literally. [n14] "That day" was the night of March 20-21, when a close flyby of Mars would create subcrustal tides, crustal earth shocks, volcanic upheaval and interplanetary electrical discharges of many thousands of volts on the Earth's surface. Sennacherib accepted the advice of his astrologers and was there on schedule, with his abundant arsenal of armor, much of it made of Hittite iron. Iron is superior in hardness to other metals, important for military purposes. But, as Benjamin Franklin demonstrated, iron also attracts lightning, and therefore it is an excellent material for manufacturing lightning rods. An arsenal of iron armor makes an even better attraction than does a handful of iron weapons.
Iron's electrical and magnetic properties were not understood by the Assyrians, the Hebrews, the Greeks, the Trojans or the Romans. They are understood only by modern scientists and their students. The ancient rabbinical commentators wrote for the benefit of later scholars, and for our modern age of gradualism. Later Hebrews lost the ancient literal perspective of catastrophes, having never seen one, and having lost the original Hebrew language. Never would they experience electrical discharges even on the scale of 220 volts, much less interplanetary discharges on scales of 10,000 to 50,000 volts. Therefore they offered the view that the burning of Sennacherib on one side of his face "was not to be taken literally." They were mistaken. This blast in some ways was like a small atomic bomb. There were some eight or nine kinds of electrical effects and phenomena that occurred during March flybys. The Earth's geomagnetic field swept across the cosmos and briefly, for 10 to 20 hours, united in space with the planetary magnetic field of Ares, or Mars. [n15] The ancient, but post-Babylonian, rabbinical commentators were trying admirably, but came not even close to an understanding. Thus they rationalized that the lightning blast, which originated from the surfaces of Mars and the Earth, instead originated in the Earth's troposphere, or its stratosphere. Actually their ancient, original sources had been quite correct, and they ought to have been taken literally. One such note reads: Sanhedrin 95b, and similarly Jerome on Is. 10.3. The latter states that Jewish tradition considers Hamon, "noise" (comp. Is. 33.3), to be the name of the angel Gabriel. This is corroborated by Aggadat Shir 5, 39. According to Sanhedrin, the angel clapped together his wings, and the noise caused by it was so terrific that the Assyrians gave up their ghosts. Another view given in Sanhedrin is that the angel blew out the breath of the Assyrians. This means that he took their souls without injuring their bodies. [n16] In response to the high voltage lightning strike, there was a killer shock wave. Apparently there was a flash somewhat comparable to a small atomic bomb. Sennacherib himself was reported as being a survivor, but with burns - flash burns, perhaps on one side, like Hiroshima. Immediately following the blast of lightning came the killer shock wave. Traveling in the Earth's atmosphere, it was heard for thousands of miles, perhaps as far as the Western Hemisphere. The celestial lightning had struck, but not in the citadel of Jerusalem, as it might have; it struck Assyrian heavy iron armor. In three separate citations in the Bible (Isaiah 37, II Kings 19 and II Chronicles 32), the mortality count from that one monumental, massive, memorable nocturnal blast just outside Jerusalem was recorded - 185,000 dead Assyrian troops. What was left of the scared and scorched Assyrian army beat a hasty retreat to Nineveh. There, Sennacherib, burned and a big loser, was assassinated. Thus it was that the "angel Gabriel," a destructive and dreaded messenger of the Lord indeed, delivered Jerusalem. It had happened once again. Something similar had occurred on the Long Day of Joshua, some 700 years earlier [October, 1404 B.C.E.]. Something similar had happened in the years 1296, 1188, 1080, 972, 864 and 756 B.C.E., and a half cycle later, in March of 701 B.C.E. Catastrophes came in 108-year cycles, as if they had something to do with Jupiter as well as with Mars. (The Earth was in 12:1 resonance with Jupiter, and 108 old years is nine 12-year Jupiter periods.)
Thus Jupiter was always at the same place in the cosmos when catastrophes hit. The above citations, and other uncited ones in the Talmud, indicate there was a record of the "deliverances," although not a well understood one. Sennacherib's song, a rabbinical theological version of the deliverance of Jerusalem, and the final flyby are all one, describing the last waltz of Mars and the Earth. The scientific version of 20th century planetary catastrophism is that, first of all, Sennacherib had no business invading Judah or Egypt. He had no business uprooting and deporting entire populations, ethnic cleansing, which had occurred to Northern Israel and to other populations elsewhere in the Assyrian Empire. But that is by our contemporary standards, not his. Further, if he went ahead anyhow, Sennacherib had no business ever centralizing his iron armor in his encampment. Especially, he should not have centralized his iron armor anywhere in a year, a month and a week when the Assyrian "angel of the Lord," Nergal by name, was to make a close flyby. Clearly the approaching "angel" or "messenger of the Lord" was visible, and it was on schedule according to both Assyrian astrologers and Isaiah. It was on the anniversary of the Hebrew Passover and of earlier catastrophes, such as the Exodus event and the Sodom-Gomorrah event. Sennacherib's basic problem appears to have been lust and egotism; he wanted to destroy the armies of Egypt and to plunder the great riches of the land of the Nile. Jerusalem was merely a burr in his saddle on the way, but what a burr. In our era, ongoing tidal friction creates discharges between Jupiter and the surface of nearby Io, 260,000 miles away. A constant flow of electricity occurs in voltages up to 400,000 - and some 5,000,000 amps - 2,000,000,000,000 (two trillion) watts. This power flow between Io and Jupiter equals 70 times all of the human electrical generating capacity on the Earth. By comparison, that night Mars was distributing (by Io standards) electrical discharges across only some 27,000 to 50,000 miles of space. The discharges were scattered broadly across the face of the Eastern Hemisphere that night. Perhaps those discharges were a mere 50,000 or 75,000 volts, or perhaps they were of a higher voltage. That Troy was hit at some time is established by the way the mortar in its city walls is fused. No other explanation suffices. Until more research is done, the various voltages are unknown. Sennacherib's army was a casualty. It was much like the Greek army, some 163 years earlier, which had suffered heavy casualties from celestial fire just outside the walls of Troy, on schedule, on October 24, 864 B.C.E. Perhaps this particular bolt in 701 B.C.E. was destined generally to discharge somewhere in the land of Israel, possibly even near Jerusalem. But (thanks be to Sennacherib) it homed in on the fine Assyrian concentration of Hittite iron armor instead, an arsenal of lightning rods. THE AFTERMATH. In the aftermath of the discharge, the first job was to bury the dead, a big job. Next, the good news, very good news indeed, was sent out far and wide. Scientific types of the late 8th century B.C.E. and early 7th century B.C.E. came from far and wide to inspect the meltdown of iron armor and the crater left at the point of the discharge. There was a small raised mound left at the very center of where the lightning discharged. (So it is with craters on Venus today.) Isaiah became famous as a great prophet, one of the greatest Israel ever had.
Hezekiah acquired the reputation of a great and wise king, which he was. Among the visiting scientific groups was one from Babylon, some 700 miles to the east. The chairman was one Merodach-baladan, the son of the king of Babylon, Baladan. Baladan, the king, was named after Bel, the Chaldean Ares. Merodach was named after the Babylonian deity Marduk, Jupiter. Thus the name of the prince, Merodach-baladan, means "Jupiter-Mars," or perhaps "Jupiter, father of Mars." King Belshazzar and the studious Belteshazzar had been named after Mars; Nebuchadnezzar was named after fast-moving Mercury. In Chaldea, naming royalty after the planets was in vogue in the Catastrophic Era. In this age of gradualism in the cosmos, nobody names their son Mercury, Mars or Jupiter. Only rarely in this culture is a baby daughter named "Venus." But when this does happen, the position of Venus in the cosmos is hardly what the parents have in mind. In the centuries after this event, some, like Herodotus, explained the Assyrian debacle as a sudden plague of flea-infested mice or rats, bringing biological disaster to the Assyrian camp in a single night. It was a sincere try, but not a very good one. Many theologians stick with Herodotus and his mice and fleas with bubonic plague for their explanations. None of them have had the proper base of knowledge to comment. A few months, even years, before the Final Flyby, there were recorded vignettes of what the prophet Isaiah had foreseen and forecast: Thou shalt be visited of the Lord of hosts with thunder, and with earthquake, and great noise, with storm and tempest, and the flame of devouring fire. - Isaiah 29:6 I will sweep with the besom of destruction, saith the Lord of hosts. - Isaiah 14:23 It is well known that Isaiah credited the Lord, God of Israel, with their collective good fortune. Apparently the Lord had provided Isaiah with some kind of precognition of what would happen, and of why Hezekiah should refuse to surrender. Isaiah told his king that danger indeed was coming, but it was danger to the Assyrians, not a danger to the Hebrews. Isaiah's precognition was shades of Noah's precognition, almost 1,800 years earlier. A logical pattern of energy shifts is offered for the energy conversion of Mars from its Catastrophic Era orbit to its modern orbit. Story 26 is that the year 701 B.C.E. FEATURED TWO FLYBYS, FIRST, MARS-VENUS, AND SHORTLY THEREAFTER, MARS-EARTH. The first flyby, of Venus by Mars, was on about January 24, 701 B.C.E. The second flyby was on the night of March 20-21, also 701 B.C.E. This was only some 56 days later. The energy Mars gained from the first flyby is why Mars went into an orbit farther out, and why it went on the wrong side of the Earth during the second flyby. The first flyby was approximately 65% as energetic as the second. This first time, Mars gained energy; it was an exchange. Venus lost an equal amount of energy. Story 27 is that of the second flyby, THE FINAL WALTZ, WHICH WAS THE MORE ENERGETIC OF THE TWO FLYBYS. Mars passed between the Earth and the full Moon, on the Earth's outside, or dark side. This occasion increased the Earth's orbital distance from the Sun by 616,565 miles. Our planet's spin rate increased 0.452% (360 to 361.628). Its period increased 1.003% (361.628 to 365.256). The combination of the two resulted in an increase of 1.46% in day count per year (360 to 365.256). After this siege of catastrophism, the Earth no longer was in 12:1 resonance with Jupiter's orbit, nor was Mars in 6:1 resonance.
The orbit of Venus shrank from .625 to .6152 of the Earth's period. Venus left an orbit that had been in 8:5 resonance with the Earth. As is shown in Table XIII, chapter 10, the period of Venus diminished by 1.32 days. Story 28 is that THE MOON ALSO DEPARTED FROM ITS EARLIER 12:1 RESONANCE AND 30-DAY PERIOD. Its new orbit became 29.53 new days rather than the old count of 30. This chapter addresses the shortening of the SEMI-MAJOR axis of the three bodies, Venus, Mars and the Moon. In three stages, the "x" axis of Mars was shortened from 146,579,410 miles to 141,635,973 miles. The latter figure is the new semi-major axis, the "x" axis, the longest axis of the Martian orbit. The new period of Mars became 687 days, down from an old 720 (old count). The "y" axis of an orbit, the SEMI-MINOR axis, measures the width of an orbit. Changes in the "y" axis measure changes in angular momentum. Chapter 9 addresses the changes, in three steps, of the "y" axis of Mars. It has been deduced that the orbit of Mars rounded out; Table XIII will explain why. Story 29 demonstrates HOW AND WHEN THE EARTH'S ORBIT ENLARGED FROM A 360-DAY OLD ORBIT CONDITION TO A 365.256-DAY NEW ORBIT CONDITION. Simultaneously, the Moon's lunation, or synodic period, shrank from a 30-day period to 29.53 days. Tables XI, in this chapter, and XII and XIII, in chapter 11, indicate that these shifts were in accord with celestial mechanics. Such information would have brought a smile to the face of Johannes Kepler (1571-1630), the father of celestial mechanics. No doubt Isaac Newton, among others in London's John Bull club, also would have smiled. So would Jonathan Swift, Edmund Halley, John Arbuthnot, William Whiston and others. Chapter 10 indicates why Arbuthnot, Swift and others would have smiled if they had had this information. Story 30 demonstrates HOW THE PERIOD OF THE ORBIT OF MARS SHRANK FROM THE FORMER 720-OLD-DAY CONDITION (723.257 NEW DAYS) TO THE MODERN 686.979-DAY CONDITION. This is a major reduction in period of 5.016%. This reduction in the energy of Mars was attributable primarily to the Earth. The three planets causing the new orbit of Mars, and their shares of its net energy change, were: Venus, -184.22%; the Earth, +284.22%; Jupiter, roughly 0%. This is an indication of how intense, or astronomically passionate, the final Mars-Earth waltz was. By comparison, the Earth's orbit enlarged from 361.628 new days (or 360 old days) to 365.256 new days, an increase of only 1.003%. Mars, in stage # 2, lost just as much energy as the Earth gained. The Earth's orbit expanded 616,565 miles in its "x" axis. In the Martian "x" axis, that exchange alone produced a reduction of 15,015,507 miles from the enlarged post-Venus orbit (the net reduction over both flybys was 4,943,437 miles). The disproportion arose because tiny Mars has only about one-ninth of the Earth's mass. The orbit of Venus shrank from 226.018 days to 224.700 days, a shrinkage in period of 0.58%. The Mars-Venus flyby featured Mars on the inside. Venus is smaller than the Earth, and the flyby of Mars wasn't quite as close. The Mars-Venus polka was only 64.82% as energetic, as passionate astronomically, as was the last Mars-Earth waltz. Half of this story of exchanges is now completed, the energy exchanges. The other half, the angular momentum shifts, is in chapter 10. It is an analysis of the changes of the "y" axes of these four planets. The "y" axis, the semi-minor axis, measures an orbit's maximum width. If the totals of both energy shifts and angular momentum exchanges agree, and agree simultaneously at all stages, this theory will constitute a viable theory of catastrophic cosmology and of Earth history.
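The period bookkeeping running through these "stories" can be tabulated directly from the figures quoted in this chapter. The sketch below only restates those figures and computes the ratios; the old periods are the model's proposed values, not observed ones, and the 30.103-day figure for the old lunation is taken from earlier in this chapter.

```python
# Old (Catastrophic Era) vs. modern periods, in new days, as quoted in this chapter.
old = {"Earth": 361.628, "Mars": 723.257, "Venus": 226.018, "Moon": 30.103}
new = {"Earth": 365.256, "Mars": 686.979, "Venus": 224.700, "Moon": 29.531}

print("Old ratios: Mars/Earth = %.3f (2:1), Venus/Earth = %.4f (5:8)"
      % (old["Mars"] / old["Earth"], old["Venus"] / old["Earth"]))
print("New ratios: Mars/Earth = %.3f, Venus/Earth = %.4f"
      % (new["Mars"] / new["Earth"], new["Venus"] / new["Earth"]))
for body in ("Earth", "Mars", "Venus", "Moon"):
    change = (new[body] - old[body]) / old[body] * 100
    print(f"{body:>5}: {old[body]:8.3f} -> {new[body]:8.3f} days ({change:+.2f}%)")
```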
Such agreement would indicate that the popular menu for Solar System genesis and history, gradualism, is the wrong menu. Unfortunately, at stake is the turf (paradigm) which has been taught as truth to 99.9+% of 19th and 20th century astronomers when they were undergraduates. Without realizing it, they themselves became victims of these doctrines and dogmas that were developed (in an anti-clerical atmosphere) two centuries ago. Each day, unwittingly, these persons, once undergraduates, now perpetuate the foolishness of gradualism, and create more victims. The teacher, at least in science, is always thought to be right, that is, except for the mistaken teachers of the Ptolemaic system 400 years ago. Some of them wanted both Copernicus and Kepler to be burned at the stake, and Galileo as well. So most embrace these dubious dogmas of the anti-clericalists of 200 years ago. For some, questioning these 200-year-old "givens" is painful. So it is that both teachers and students affirm that Mars has always been only a tiny pinpoint of light in the night-time sky for the past 4+ billion years; the red planet, they hold, never ever has been closer to the Earth-Moon system than the modern 33,000,000 miles. These scholars, however, pay no attention to ancient history, to ancient calendars, to ancient literature or to the repeated accounts of planetary catastrophism in the Old Testament. In so doing, they have no concept of how much they have missed. Story 31 is that both the Book of Isaiah and Hesiod's "The Shield of Herakles" WERE WRITTEN IN THE AFTERMATH OF BOTH THE FINAL FLING OF MARS WITH VENUS, AND THE FINAL WALTZ OF MARS WITH THE EARTH. Add to this Hesiod's "Theogony" and the writings of Apollodorus, plus those of the "fire and brimstone prophets" Amos, Joel, Micah, etc. Other materials include Homer, Ovid, Plutarch and others not discussed here in detail. Only vignettes from Hesiod and Isaiah have been incorporated here, to widen the perspective of general readers, scientists, historians, geologists, archaeologists and theologians, among others. Story 32 is THE NATURE OF THE DEMISE OF SENNACHERIB'S ARMY, AND OF THE SURVIVAL OF JERUSALEM. Jerusalem survived the onslaught of the Assyrians, spearheaded by Sennacherib's panzer of 701 B.C.E. Loosely speaking, the crisis night was a near carbon copy of goings-on during the Long Day of Joshua, 700 years earlier. At that time, also via catastrophism, the Hebrews had been spared and had triumphed over the larger, well-armed armies of a Canaanite alliance. The 20th century, and soon the 21st century, is fortunate to have the records of two such eyewitness reporters of that celestial scene. One was from just outside Athens, and the other from inside Jerusalem. The reports of both deserve careful analysis. As was mentioned above, other writers of the era, Greek, Hebrew and beyond, who described catastrophism also merit attention. Story 33 is that THERE ALSO WAS A MODEST, INWARD SHIFT IN THE ORBIT OF VENUS DURING LATE JANUARY OF THE CATASTROPHIC YEAR 701 B.C.E. The inward shift of Venus plus the outward shift of the Earth spelled the doom of the ancient Egyptian Venus calendar. The slight increase in the Earth's spin rate, coupled with an enlarged orbit, ensured the doom of the 360-day calendars. With story 33, the reader now is a full 70% of the way to understanding the catastrophic scenery of ancient times. The view of both history and the cosmos keeps getting better and better, richer and more spectacular.
With respect to Plutarch, Hesiod, Homer, Apollodorus, Isaiah, Joel, Amos, Hosea, Jonah and others, in modern cosmology the score now is: Ancient literary catastrophism - 4, Modern 20th century gradualism - 0.
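The text appeals repeatedly to Kepler and to "celestial mechanics," and its quoted Mars figures can at least be spot-checked against Kepler's third law (T² proportional to a³). The short Python sketch below does that check; it assumes a modern value for the Earth's semi-major axis of roughly 92,956,000 miles, a figure not given in the text itself.

```python
# Minimal spot-check of the quoted Mars figures against Kepler's third law.
# Assumption: Earth's semi-major axis is taken as ~92,956,000 miles (about 1 AU),
# a standard modern value that the text above does not state.

EARTH_A_MILES = 92_956_000       # assumed semi-major axis of the Earth, in miles
EARTH_PERIOD_DAYS = 365.256      # Earth's period, as quoted in the text

def period_from_semi_major_axis(a_miles):
    """Kepler's third law: T^2 / a^3 is the same for every planet orbiting the Sun."""
    return EARTH_PERIOD_DAYS * (a_miles / EARTH_A_MILES) ** 1.5

# The "new" (modern) Martian semi-major axis quoted in the text
mars_a_new = 141_635_973
print(round(period_from_semi_major_axis(mars_a_new), 2))   # ~687 days

# The modern Martian period quoted in the text is 686.979 days, so the quoted
# semi-major axis and period are mutually consistent under Kepler's third law.
```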
http://www.creationism.org/patten/PattenMarsEarthWars/PattenMEW09.htm
Michael Fowler, University of Virginia

Photons and Electrons

We have seen that electrons and photons behave in a very similar fashion—both exhibit diffraction effects, as in the double slit experiment, and both have particle-like or quantum behavior. We can in fact give a complete analysis of photon behavior—we can figure out how the electromagnetic wave propagates, using Maxwell's equations, and then find that the probability that a photon is in a given small volume of space dx dy dz is proportional to |E|² dx dy dz, the energy density. On the other hand, our analysis of the electron's behavior is incomplete—we know that it must also be described by a wave function ψ(x, y, z, t) analogous to E, such that |ψ(x, y, z, t)|² dx dy dz gives the probability of finding the electron in a small volume dx dy dz around the point (x, y, z) at the time t. However, we do not yet have the analog of Maxwell's equations to tell us how ψ varies in time and space. The purpose of this section is to give a plausible derivation of such an equation by examining how the Maxwell wave equation works for a single-particle (photon) wave, and constructing parallel equations for particles which, unlike photons, have nonzero rest mass.

Maxwell's Wave Equation

Let us examine what Maxwell's equations tell us about the motion of the simplest type of electromagnetic wave—a monochromatic wave in empty space, with no currents or charges present. First, we briefly review the derivation of the wave equation from Maxwell's equations in empty space. To derive the wave equation, we take the curl of the equation ∇ × E = −∂B/∂t, together with the vector operator identity ∇ × (∇ × E) = ∇(∇·E) − ∇²E; since ∇·E = 0 in empty space, this gives ∇²E = (1/c²) ∂²E/∂t². For a plane wave moving in the x-direction this reduces to ∂²E/∂x² = (1/c²) ∂²E/∂t². The monochromatic solution to this wave equation has the form E(x, t) = A e^{i(kx − ωt)}. (Another possible solution is proportional to cos(kx − ωt). We shall find that the exponential form, although a complex number, proves more convenient. The physical electric field can be taken to be the real part of the exponential for the classical case.) Applying the wave equation differential operator to our plane wave solution gives (−k² + ω²/c²) E = 0. If the plane wave is a solution to the wave equation, this must be true for all x and t, so we must have ω = ck. This is just the familiar statement that the wave must travel at c.

What does the Wave Equation tell us about the Photon?

We know from the photoelectric effect (and from Compton scattering) that the photon energy and momentum are related to the frequency and wave number by E = ħω and p = ħk. Notice, then, that the wave equation tells us that ω = ck and hence E = cp. To put it another way, if we think of e^{i(kx − ωt)} as describing a particle (photon), it would be more natural to write the plane wave as A e^{i(px − Et)/ħ}, that is, in terms of the energy and momentum of the particle. In these terms, applying the (Maxwell) wave equation operator to the plane wave yields E² = c²p². The wave equation operator applied to the plane wave describing the particle propagation yields the energy-momentum relationship for the particle.

Constructing a Wave Equation for a Particle with Mass

The discussion above suggests how we might extend the wave equation operator from the photon case (zero rest mass) to a particle having rest mass m₀. We need a wave equation operator that, when it operates on a plane wave, yields E² = c²p² + m₀²c⁴. Writing the plane wave function ψ(x, t) = A e^{i(px − Et)/ħ}, where A is a constant, we find we can get E² = c²p² + m₀²c⁴ by adding a constant (mass) term to the differentiation terms in the wave operator: −ħ² ∂²ψ/∂t² = −ħ²c² ∂²ψ/∂x² + m₀²c⁴ ψ. This wave equation is called the Klein-Gordon equation and correctly describes the propagation of relativistic particles of mass m₀.
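For reference, the chain of results just described can be collected in standard notation (ψ, A and m₀ as in the text, with the photon energy written E_ph to keep it distinct from the electric field E); this is a compact summary sketch of the same steps.

```latex
% Photon: wave equation, plane-wave solution, dispersion relation
\[
  \frac{\partial^{2} E}{\partial x^{2}} \;=\; \frac{1}{c^{2}}\,\frac{\partial^{2} E}{\partial t^{2}},
  \qquad
  E(x,t) \;=\; A\,e^{\,i(kx-\omega t)}
  \;\;\Longrightarrow\;\; \omega = ck .
\]
% With the photon relations \(E_{\mathrm{ph}} = \hbar\omega\) and \(p = \hbar k\),
% the dispersion relation \(\omega = ck\) is just \(E_{\mathrm{ph}} = cp\),
% i.e. \(E_{\mathrm{ph}}^{2} = c^{2}p^{2}\).

% Massive particle: adding a rest-mass term gives the Klein--Gordon equation,
% whose plane-wave solutions reproduce the relativistic energy-momentum relation.
\[
  -\hbar^{2}\frac{\partial^{2}\psi}{\partial t^{2}}
   \;=\; -\hbar^{2}c^{2}\frac{\partial^{2}\psi}{\partial x^{2}} + m_{0}^{2}c^{4}\,\psi,
  \qquad
  \psi(x,t) = A\,e^{\,i(px-Et)/\hbar}
  \;\;\Longrightarrow\;\; E^{2} = c^{2}p^{2} + m_{0}^{2}c^{4}.
\]
```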
However, it is a bit inconvenient for nonrelativistic particles, like the electron in the hydrogen atom, just as E² = m₀²c⁴ + c²p² is less useful than E = p²/2m for this case.

A Nonrelativistic Wave Equation

Continuing along the same lines, let us assume that a nonrelativistic electron in free space (no potentials, so no forces) is described by a plane wave: ψ(x, t) = A e^{i(px − Et)/ħ}. We need to construct a wave equation operator which, applied to this wave function, just gives us the ordinary nonrelativistic energy-momentum relationship, E = p²/2m. The p² obviously comes as usual from differentiating twice with respect to x, but the only way we can get E is by having a single differentiation with respect to time, so this looks different from previous wave equations: iħ ∂ψ/∂t = −(ħ²/2m) ∂²ψ/∂x². This is Schrödinger's equation for a free particle. It is easy to check that if ψ has the plane wave form given above, the condition for it to be a solution of this wave equation is just E = p²/2m. Notice one remarkable feature of the above equation—the i on the left means that ψ cannot be a real function.

How Does a Varying Potential Affect a de Broglie Wave?

The effect of a potential on a de Broglie wave was considered by Sommerfeld in an attempt to generalize the rather restrictive conditions in Bohr's model of the atom. Since the electron was orbiting in an inverse square force, just like the planets around the sun, Sommerfeld couldn't understand why Bohr's atom had only circular orbits, no Kepler-like ellipses. (Recall that all the observed spectral lines of hydrogen were accounted for by energy differences between these circular orbits.) De Broglie's analysis of the allowed circular orbits can be formulated by assuming that at some instant in time the spatial variation of the wave function on going around the orbit includes a phase term of the form e^{ipq/ħ}, where here the parameter q measures distance around the orbit. Now for an acceptable wave function, the total phase change on going around the orbit must be 2nπ, where n is an integer. For the usual Bohr circular orbit, p is constant on going around, and q changes by 2πr, where r is the radius of the orbit, giving 2πrp/ħ = 2nπ, that is, rp = nħ, the usual angular momentum quantization. What Sommerfeld did was to consider a general Kepler ellipse orbit, and visualize the wave going around such an orbit. Assuming the usual relationship λ = h/p, the wavelength will vary as the particle moves around the orbit, being shortest where the particle moves fastest, at its closest approach to the nucleus. Nevertheless, the phase change on moving a short distance Δq should still be pΔq/ħ, and requiring the wave function to link up smoothly on going once around the orbit gives the condition ∮ p dq = nh. Thus only certain elliptical orbits are allowed. The mathematics is nontrivial, but it turns out that every allowed elliptical orbit has the same energy as one of the allowed circular orbits. This is why Bohr's theory gave all the energy levels. Actually, this whole analysis is old-fashioned (it's called the “old quantum theory”), but we've gone over it to introduce the idea of a wave with variable wavelength, changing with the momentum as the particle moves through a varying potential.

Schrödinger's Equation for a Particle in a Potential

Let us consider first the one-dimensional situation of a particle going in the x-direction subject to a “roller coaster” potential. What do we expect the wave function to look like? We would expect the wavelength to be shortest where the potential is lowest, in the valleys, because that's where the particle is going fastest—maximum momentum.
Perhaps slightly less obvious is that the amplitude of the wave would be largest at the tops of the hills (provided the particle has enough energy to get there) because that's where the particle is moving slowest, and therefore is most likely to be found. With a nonzero potential present, the energy-momentum relationship for the particle becomes the energy equation E = p²/2m + V(x). We need to construct a wave equation which leads naturally to this relationship. In contrast to the free particle cases discussed above, the relevant wave function here will no longer be a plane wave, since the wavelength varies with the potential. However, at a given x, the momentum is determined by the “local wavelength”, that is, p(x) = h/λ(x). It follows that the appropriate wave equation is: iħ ∂ψ/∂t = −(ħ²/2m) ∂²ψ/∂x² + V(x)ψ(x, t). This is the standard one-dimensional Schrödinger equation. In three dimensions, the argument is precisely analogous. The only difference is that the square of the momentum is now a sum of three squared components, for the x, y and z directions, so ∂²/∂x² is replaced by the Laplacian ∇² = ∂²/∂x² + ∂²/∂y² + ∂²/∂z², and the equation is: iħ ∂ψ/∂t = −(ħ²/2m) ∇²ψ + V(x, y, z)ψ. This is the complete Schrödinger equation.
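As a quick consistency check on the free-particle equation and its extension to a potential, the sketch below uses the sympy library (an assumption about the available tooling; any computer algebra system would do) to verify that the plane wave A e^{i(px − Et)/ħ} solves the one-dimensional equation with a constant potential V₀ exactly when E = p²/2m + V₀; setting V₀ = 0 recovers the free-particle case.

```python
# Symbolic check of the one-dimensional Schrodinger equation for a plane wave.
# Assumes sympy is installed; V0 is a constant potential, so the plane wave is
# still an exact solution, and V0 = 0 gives the free-particle case above.
import sympy as sp

x, t, p, E, m, hbar, V0 = sp.symbols('x t p E m hbar V0', positive=True)

psi = sp.exp(sp.I * (p * x - E * t) / hbar)      # plane wave (the amplitude A cancels)

lhs = sp.I * hbar * sp.diff(psi, t)              # i*hbar * d(psi)/dt
rhs = -hbar**2 / (2 * m) * sp.diff(psi, x, 2) + V0 * psi

# The two sides agree for all x and t only for one value of E:
print(sp.solve(sp.Eq(sp.simplify((lhs - rhs) / psi), 0), E))
# prints [p**2/(2*m) + V0] (up to term ordering), i.e. E = p^2/2m + V0
```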
http://galileo.phys.virginia.edu/classes/252/wave_equations.html
Solid geometry is concerned with three-dimensional shapes. Some examples of three-dimensional shapes are cubes, rectangular solids, prisms, cylinders, spheres, cones and pyramids. We will look at the volume formulas and surface area formulas of these solids. We will also discuss some nets of solids.

A cube is a three-dimensional figure with six matching square sides. The figure above shows a cube; the dotted lines indicate edges hidden from your view. If s is the length of one of its sides, then the volume of the cube is s × s × s. The area of each side of a cube is s². Since a cube has six square-shaped sides, its total surface area is 6 times s².

Volume of cube = s³
Surface area of cube = 6s²

A rectangular solid is also called a rectangular prism or a cuboid. In a rectangular solid, the length, width and height may be of different lengths. The volume of the rectangular solid is the product of its length, width and height. The total area of the top and bottom surfaces is lw + lw = 2lw, and similarly for the other two pairs of faces.

Volume of rectangular solid = lwh
Surface area of rectangular solid = 2lw + 2lh + 2wh = 2(lw + lh + wh)

A prism is a solid that has two congruent parallel bases that are polygons. The polygons form the bases of the prism, and the length of the edge joining the two bases is called the height. The diagrams above show two prisms: one with a triangle-shaped base, called a triangular prism, and another with a pentagon-shaped base, called a pentagonal prism. A rectangular solid is a prism with a rectangle-shaped base and can be called a rectangular prism. The volume of a prism is given by the product of the area of its base and its height. The surface area of a prism is equal to 2 times the area of the base plus the perimeter of the base times the height.

Volume of prism = area of base × height
Surface area of prism = 2 × area of base + perimeter of base × height

A cylinder is a solid with two congruent circles joined by a curved surface. In the figure above, the radius of the circular base is r and the height is h. The volume of the cylinder is the area of the base × height, that is, πr²h. The net of a solid cylinder consists of 2 circles and one rectangle; the curved surface opens up to form a rectangle.

Volume of cylinder = πr²h
Surface area of cylinder = 2 × area of circle + area of rectangle = 2πr² + 2πrh = 2πr(r + h)

A sphere is a solid with all its points the same distance from the center. If r is the radius of the sphere:

Volume of sphere = (4/3)πr³
Surface area of sphere = 4πr²

A circular cone has a circular base, which is connected by a curved surface to its vertex.
A cone is called a right circular cone if the line from the vertex of the cone to the center of its base is perpendicular to the base. The net of a solid cone consists of a small circle and a sector of a larger circle; the arc of the sector has the same length as the circumference of the smaller circle.

Surface area of cone = area of sector + area of circle = πrl + πr², where r is the radius of the base and l is the slant height.

A pyramid is a solid with a polygon base connected by triangular faces to its vertex. A pyramid is a regular pyramid if its base is a regular polygon and the triangular faces are all congruent isosceles triangles.

An area of study closely related to solid geometry is nets of a solid. Imagine making cuts along some edges of a solid and opening it up to form a plane figure. The plane figure is called the net of the solid. The figures above show two possible nets for the cube.
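The volume and surface-area formulas above translate directly into code. The Python sketch below collects them as small helper functions; the cone and pyramid volumes use the standard one-third-of-base-area-times-height rule, which this excerpt's exercises cover but which is not written out above, and the pyramid helper assumes a regular pyramid with a square base.

```python
# Volume and surface-area helpers for the solids discussed above.
# Cone and pyramid volumes use the standard (1/3) * base area * height rule.
import math

def cube(s):
    return s**3, 6 * s**2                          # (volume, surface area)

def rectangular_solid(l, w, h):
    return l * w * h, 2 * (l*w + l*h + w*h)

def prism(base_area, base_perimeter, height):
    return base_area * height, 2 * base_area + base_perimeter * height

def cylinder(r, h):
    return math.pi * r**2 * h, 2 * math.pi * r * (r + h)

def sphere(r):
    return (4/3) * math.pi * r**3, 4 * math.pi * r**2

def cone(r, h):
    slant = math.hypot(r, h)                       # slant height l = sqrt(r^2 + h^2)
    return (1/3) * math.pi * r**2 * h, math.pi * r * (r + slant)

def square_pyramid(base_side, height):
    face_slant = math.hypot(base_side / 2, height) # slant height of a triangular face
    return (1/3) * base_side**2 * height, base_side**2 + 2 * base_side * face_slant

print(cube(2))          # (8, 24)
print(cylinder(3, 5))   # (~141.37, ~150.80)
```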
http://www.onlinemathlearning.com/solid-geometry.html
Quadratic Trinomials, Quadratic Equations, and the Quadratic Formula Help

Introduction to Quadratic Trinomials and Equations

A quadratic trinomial contains an x² term as well as an x term. For example, x² – 6x + 8 is a quadratic trinomial. You can factor quadratic trinomials by using the FOIL method in reverse. Let's factor x² – 6x + 8. Start by looking at the last term in the trinomial: 8. Ask yourself, "What two integers, when multiplied together, have a product of positive 8?" Make a mental list of these integers: 1 × 8, –1 × –8, 2 × 4, –2 × –4. Next look at the middle term of the trinomial: –6x. Choose the two factors from the list you just made that also add up to the coefficient –6: –2 and –4. Now write the factors using –2 and –4: (x – 2)(x – 4). Use the FOIL method to double-check your answer: (x – 2)(x – 4) = x² – 6x + 8. You can see that the answer is correct.

A quadratic equation is an equation that does not graph into a straight line; the graph will be a smooth curve. An equation is a quadratic equation if the highest exponent of the variable is 2. Here are some examples of quadratic equations: x² + 6x + 10 = 0 and 6x² + 8x – 22 = 0. A quadratic equation can be written in the form ax² + bx + c = 0. The a represents the number in front of the x² variable. The b represents the number in front of the x variable, and c is the constant. For instance, in the equation 2x² + 3x + 5 = 0, the a is 2, the b is 3, and the c is 5. In the equation 4x² – 6x + 7 = 0, the a is 4, the b is –6, and the c is 7. In the equation 5x² + 7 = 0, the a is 5, the b is 0, and the c is 7. In the equation 8x² – 3x = 0, the a is 8, the b is –3, and the c is 0. Is the equation 2x + 7 = 0 a quadratic equation? No! The equation does not contain a variable with an exponent of 2. Therefore, it is not a quadratic equation.

Solving Quadratic Equations Using Factoring

Why is the equation x² = 4 a quadratic equation? It is a quadratic equation because the variable has an exponent of 2. To solve a quadratic equation, first make one side of the equation zero. Let's work with x² = 4. Subtract 4 from both sides of the equation to make one side of the equation zero: x² – 4 = 4 – 4. Now, simplify: x² – 4 = 0. The next step is to factor x² – 4. It can be factored as the difference of two squares: (x – 2)(x + 2) = 0. If ab = 0, you know that either a or b or both factors have to be zero, because a times b = 0. This is called the zero product property, and it says that if the product of two numbers is zero, then one or both of the numbers have to be zero. You can use this idea to help solve quadratic equations with the factoring method. Use the zero product property, and set each factor equal to zero: (x – 2) = 0 and (x + 2) = 0. When you use the zero product property, you get linear equations that you already know how to solve.

Solve the equation: x – 2 = 0. Add 2 to both sides of the equation: x – 2 + 2 = 0 + 2. Now, simplify: x = 2. Solve the equation: x + 2 = 0. Subtract 2 from both sides of the equation: x + 2 – 2 = 0 – 2. Simplify: x = –2. You got two values for x. The two solutions for x are 2 and –2. All quadratic equations have two solutions. The exponent 2 in the equation tells you that the equation is quadratic, and it also tells you that you will have two answers.

Tip: When both your solutions are the same number, this is called a double root. You will get a double root when both factors are the same.

Before you can factor an expression, the expression must be arranged in descending order.
An expression is in descending order when you start with the largest exponent and descend to the smallest, as shown in this example: 2x² + 5x + 6 = 0.

All quadratic equations have two solutions. The exponent of 2 in the equation tells you to expect two answers.
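Since the lesson's method is mechanical (list the integer factor pairs of c, pick the pair that sums to b, then apply the zero product property), it is easy to mirror in a short Python sketch; the function names here are illustrative and not part of the lesson.

```python
# Factor x^2 + b*x + c over the integers and solve by the zero product property.
# Mirrors the reverse-FOIL search described above; names are illustrative only.

def factor_trinomial(b, c):
    """Return integers (p, q) with p*q == c and p + q == b, or None if no pair works."""
    for p in range(-abs(c), abs(c) + 1):
        if p != 0 and c % p == 0 and p + c // p == b:
            return p, c // p
    return None

def solve_by_factoring(b, c):
    pair = factor_trinomial(b, c)
    if pair is None:
        return None
    p, q = pair
    # x^2 + bx + c = (x + p)(x + q), so the zero product property gives x = -p and x = -q
    return -p, -q

print(factor_trinomial(-6, 8))    # (-4, -2): x^2 - 6x + 8 = (x - 4)(x - 2)
print(solve_by_factoring(-6, 8))  # (4, 2): the two solutions
print(solve_by_factoring(0, -4))  # (2, -2): solves x^2 - 4 = 0, i.e. x^2 = 4
```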
http://www.education.com/study-help/article/qs-quadratic-trinomials-quadratic-equations/
Believe it or not, fractions, decimals, and percents are all related! If you say a number in the form of a decimal, like .35, it can also be said in the form of a fraction and in the form of a percent. Therefore, it is important that you learn how to convert one to the other, and vice versa. This page will help you convert between fractions, decimals, and percents, so that you can express your numbers however you need to. First, we’re going to take a look at converting fractions to decimals. In order to do this conversion, we’re going to need to use long division. If you don’t remember how to do long division, you can refresh your memory by reading the page on Long Division. Otherwise, you’re ready to learn how to change fractions into decimals. Remember that a fraction indicates division. For example, if you have 3/5 (three fifths), this actually means "three divided by five." Of course, if you want your answer in the form of a fraction, you leave it as 3/5. However, if you want to figure out the decimal number, you will perform that division, like this: Notice that we put our divisor, 5, on the outside of the division sign, and our dividend, 3, on the inside of the division sign. The numerator of the fraction will always be placed on the inside, as the dividend, and the denominator of the fraction will always be placed on the outside, as the divisor. Then, we performed normal long division. Since we know we cannot put 5 into 3, we added a decimal point, and a zero (0) and then we placed a decimal point in our answer, so that we wouldn’t forget that it’s part of the answer. Then, we continued with division and ended up with 0.6 as our answer. This means that 3/5 can also be written as 0.6—they mean the exact same thing. Therefore, 3/5 = 0.6 Let’s try one more. Let’s say you have the fraction 16/20 and you want to change it into a decimal. Set up long division, as normal, like this: Thus, 16/20 can also be written as a decimal, which is 0.8. Therefore, 16/20 = 0.8 Just as we can change fractions into decimals, we can also change decimals into fractions! In order to change decimals into fractions, you need to remember place value. The first decimal place after the decimal is the "tenths" place. The second decimal place is the "hundredths" place. The third decimal place is the "thousandths" place, the fourth decimal place is the "ten thousandths" place, and the fifth decimal place is the "hundred thousandths" place, and so on. It’s important to remember, and know, how many decimal places you have before you can convert decimals to fractions. For example, let’s say we have the decimal number 0.45 and we want to change it into a fraction. The first step is to figure out what decimal place value you’re working with. In our example, there are two decimal place values filled, which means it is filled to the hundredths place value. Now, we can write our decimal as a fraction. The numerator is the decimal number we see, so in this example the numerator would be 45. The denominator is the place value reached in the decimal, so for this example, since the decimal reaches the hundredths place value, we use 100 as our denominator. Thus, our fraction is 45/100. The last step of this process is to make sure the fraction is reduced (simplified) all the way. Our fraction is not reduced, so we need to reduce it. Here is the work for reducing our fraction: Thus, our final answer is 9/20. We know that 9/20 cannot be reduced any further, because there are no common factors (besides 1) between 9 and 20. 
Therefore, we end with 9/20 as our answer. Let’s try this one more time. Now your decimal is 0.535, and you want to change it into a fraction. Try it on your own, and then we’ll go through the problem so you can check your answer. First, you need to figure out how many decimal place values are filled. You see that there are 3 place values filled, so you know that the thousandths place value is filled. Next, you take the decimal number you see, and convert it into the numerator. In this case, your numerator is 535. Finally, you take the place value number, in this case, it’s 1,000, and use it as the denominator. Thus, your fraction is 535/1,000. Did you remember to reduce it? This one can be reduced, like this: Thus, your final answer is 107/200. Converting decimals to percents is all about moving decimal places. For example, let’s say that you have the decimal number 0.55, and you want to change it to a percent. In order to do this, you are simply going to move the decimal point 2 places to the right. Thus, once you move the decimal, you will have 55%. Moving the decimal looks like this: After you move the decimal point two places to the right, you simply add a percent sign (%) to the end of the number. One thing to note is that when changing decimals to percents, you will always move the decimal two spaces to the right. This will not change, no matter what kind of number you have. Let’s try this one more time. Our new decimal is 0.79, and we want to change it into a percent. Try it on your own, and then keep reading for the answer. Did you get 79%? If you did, you’re right! You move the decimal point two spaces to the right, which would put it after the 9. Then, you just add a percent sign to the end of the number, which makes the final answer 79%. Converting percents to decimals is also largely about moving the decimal places. Basically, we’re going to do the "opposite" of what we did in order to convert decimals to percents. For example, let’s say that you have 81%, and you want to change it into a decimal. In order to do this, you are going to get rid of the percent sign (%) and then you are going to move the decimal point two places to the left. Thus, once you move the decimal, you will have .81. Moving the decimal looks like this: Thus, your final answer is 0.81. Let’s try one more. Your new percent is 90%. Follow the above directions and then compare your answer with ours. Solution: did you get 0.90? If you did, you’re right! First, you get rid of the percent sign. Then, you move the decimal point two places to the left; so, in this case, it’s going from after the zero to before the 9, to make your final answer 0.90! Converting percents to fractions is very similar to converting decimals to fractions; in fact, you will follow almost the exact same steps. First, you need to get rid of the percent sign. Then, you will take that number and use it as your numerator. Last, you will use 100 as your denominator; there you have it—a fraction! Let’s say you have 95% and you want to change it to a fraction. First, you would get rid of the percent sign (%) so that you just have 95. You will use 95 as your numerator. Next, you will use 100 as your denominator. So far, your fraction is 95/100. Your last step, as always, is to check and make sure it is reduced all the way. This fraction needs to be reduced, which can be done like this: There are two ways to convert fractions into percents. The first way is using a proportion. 
The second way is by changing the fraction into a decimal, and then converting the decimal into a percent. They both work the same way; it does not matter which method you use, you’ll get the same answer. Use whichever method makes more sense to you. The first method we’ll go through is using a proportion. We’ll set up the proportion, and then use cross multiplication and a variable to solve for the percent. First, we need to set up the proportion. We’ll start with the fraction 3/5. Next, we set 3/5 equal to x/100. When written out, it looks like this: Now, you’re going to cross multiply in order to get an equation. Cross multiplying means you are going to multiply the numbers that are diagonally across from each other. For this example, you would be multiplying 3*100, and 5*x (we’re using the * here to mean multiplication, so that it doesn’t get confused with the variable x). Once we multiply those numbers together, we are going to set the products equal to each other, like this: Now, we would begin solving the equation for x. Since we’re multiplying 5 times x, we’re going to divide to solve the equation. To get x by itself, we’re going to divide each side by 5. That step looks like this: Now that each side is divided by 5, we only have x left on the left side of the equation. On the right side, we have 300 divided by 5, which is 60. Therefore, we have x = 60. Now, we’re almost done! We just need to finish up one more step in making this fraction a percent. You need to take the number 60, and add a percent sign after it, like this: 60%. Then, you’re done! You’ve solved the problem by changing the fraction 3/5 into a percent, which is 60%. There is another way to solve this problem that doesn’t involve a proportion or cross multiplication. In fact, it uses two conversions we’ve learned already: converting a fraction to a decimal, and converting a decimal to a percent. Just to make sure this makes sense, we’ll go through one to see what it would look like: Take our fraction from before, 3/5. In order to convert it to a decimal, we need to do long division. Set up and work through the long division; when you’re done, check it with our answer: We came up with an answer of 0.6—now we need to convert that decimal into a percent. In order to convert a decimal into a percent, we need to move the decimal point two places to the right. For this problem, moving the decimal looks like this: Notice that we had to add a zero after the 6 in order to move the decimal place. You may have to do this to any numbers that are single digits, as .6 is. Now, we have 60. so all we have to do is add a percent sign to the end, like this: 60%. There you go! Your fraction is now a percent. And, this answer is the same as the answer from the proportion problem. Either method you use will give you the same answer.
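The conversions walked through above can be bundled into a few lines of Python. The sketch below uses the standard-library fractions module (an implementation choice, not something the lesson prescribes) so that reducing to lowest terms happens automatically, and it reproduces the worked examples: 3/5 = 0.6 = 60%, 0.535 = 107/200, and 95% reduced to 19/20.

```python
# Fraction / decimal / percent conversions from the worked examples above.
# Uses the standard-library fractions module so results come out fully reduced.
from fractions import Fraction

def fraction_to_decimal(num, den):
    return num / den                    # 3/5 -> 0.6

def decimal_to_fraction(text):
    return Fraction(text)               # "0.45" -> 9/20 (reduced automatically)

def decimal_to_percent(d):
    return d * 100                      # 0.55 -> 55, read as 55%

def percent_to_decimal(p):
    return p / 100                      # 81 -> 0.81

def percent_to_fraction(p):
    return Fraction(p, 100)             # 95 -> 19/20

def fraction_to_percent(num, den):
    return num / den * 100              # 3/5 -> 60, read as 60%

print(fraction_to_decimal(3, 5))        # 0.6
print(decimal_to_fraction("0.535"))     # 107/200
print(percent_to_fraction(95))          # 19/20, the reduced form of 95/100
print(fraction_to_percent(3, 5))        # 60.0
```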
http://www.wyzant.com/help/math/elementary_math/conversions
German idealism is the name of a movement in German philosophy that began in the 1780s and lasted until the 1840s. The most famous representatives of this movement are Kant, Fichte, Schelling, and Hegel. While there are important differences between these figures, they all share a commitment to idealism. Kant's transcendental idealism was a modest philosophical doctrine about the difference between appearances and things in themselves, which claimed that the objects of human cognition are appearances and not things in themselves. Fichte, Schelling, and Hegel radicalized this view, transforming Kant's transcendental idealism into absolute idealism, which holds that things in themselves are a contradiction in terms, because a thing must be an object of our consciousness if it is to be an object at all. German idealism is remarkable for its systematic treatment of all the major parts of philosophy, including logic, metaphysics and epistemology, moral and political philosophy, and aesthetics. All of the representatives of German idealism thought these parts of philosophy would find a place in a general system of philosophy. Kant thought this system could be derived from a small set of interdependent principles. Fichte, Schelling, and Hegel were, again, more radical. Inspired by Karl Leonhard Reinhold, they attempted to derive all the different parts of philosophy from a single, first principle. This first principle came to be known as the absolute, because the absolute, or unconditional, must precede all the principles which are conditioned by the difference between one principle and another. Although German idealism is closely related to developments in the intellectual history of Germany in the eighteenth and nineteenth centuries, such as classicism and romanticism, it is also closely related to larger developments in the history of modern philosophy. Kant, Fichte, Schelling, and Hegel sought to overcome the division between rationalism and empiricism that had emerged during the early modern period. The way they characterized these tendencies has exerted a lasting influence on the historiography of modern philosophy. Although German idealism itself has been subject to periods of neglect in the last two hundred years, renewed interest in the contributions of the German idealists has made it an important resource for contemporary philosophy.

Table of Contents
- Historical Background
- Metaphysics and Epistemology
- Moral and Political Philosophy
- Reception and Influence
- References and Further Reading
- Editions and Translations of Other Primary Sources
- Other Works on German Idealism

German idealism can be traced back to the “critical” or “transcendental” idealism of Immanuel Kant (1724-1804). Kant's idealism first came to prominence during the pantheism controversy in 1785-1786. When the controversy arose, Kant had already published the first (A) edition of the Critique of Pure Reason (1781) and the Prolegomena to Any Future Metaphysics (1783). Both works had their admirers, but they received unsympathetic and generally uncomprehending reviews, conflating Kant's “transcendental” idealism with Berkeley's “dogmatic” idealism (Allison and Heath 2002, 160-166). Thus, Kant was taken to hold that space and time are “not actual” and that the understanding “makes” the objects of our cognition (Sassen 2000, 53-54). Kant insisted that this reading misrepresented his position. While the dogmatic idealist denies the reality of space and time, Kant takes space and time to be forms of intuition.
Forms of intuition are, for Kant, the subjective conditions of the possibility of all of our sense perception. It is only because space and time are a priori forms that determine the content of our sensations that Kant thinks we can perceive anything at all. According to Kant, “critical” or “transcendental” idealism serves merely to identify those a priori conditions, like space and time, that make experience possible. It certainly does not imply that space and time are unreal or that the understanding produces the objects of our cognition by itself. Kant hoped to enlist the support of famous German philosophers like Moses Mendelssohn (1729-1786), Johan Nicolai Tetens (1738-1807), and Christian Garve (1742-1798) in order to refute the “dogmatic” idealist interpretation of his philosophy and win a more favorable hearing for his work. Unfortunately, the endorsements Kant hoped for never arrived. Mendelssohn, in particular, was preoccupied with concerns about his health and the dispute that had arisen between him and Friedrich Heinrich Jacobi (1743-1819) about the alleged Spinozism of his friend Gotthold Ephraim Lessing (1729-1781). This dispute came to be known as the pantheism controversy, because of Spinoza's famous equivocation between God and nature. During the controversy, Jacobi charged that any attempt to demonstrate philosophical truths was fatally flawed. Jacobi pointed to Spinoza as the chief representative of the tendency toward demonstrative reason in philosophy, but he also drew parallels between Spinozism and Kant's transcendental idealism throughout On the Doctrine of Spinoza (1785). In 1787, the same year Kant published the second (B) edition of the Critique of Pure Reason, Jacobi published David Hume on Faith or Realism and Idealism, which included a supplement On Transcendental Idealism. Jacobi concluded that transcendental idealism, like Spinozism, subordinates the immediate certainty, or faith, through which we know the world, to demonstrative reason, transforming reality into an illusion. Jacobi later called this “nihilism.” Kant's views were defended by Karl Leonhard Reinhold (1757-1823) during the pantheism controversy. Reinhold thought Kant's philosophy could refute skepticism and nihilism and provide a defense of morality and religion which was not to be found in the rationalism of the Leibnizian-Wolffian philosophy. The publication of Reinhold's Letters on the Kantian Philosophy, first in Der Teutsche Merkur in 1786-1787 and then again in an enlarged version in 1790-1792, helped make Kant's philosophy one of the most influential, and most controversial, philosophies of the period. Jacobi remained a thorn in the side of the Kantians and the young German idealists, but he was unable to staunch interest in philosophy in general or idealism in particular. In 1787, Reinhold assumed a position at the university in Jena, where he taught Kant's philosophy and began developing his own ideas. While Reinhold's thought continued to be influenced by Kant, he also came to believe that Kant had failed to provide philosophy with a solid foundation. According to Reinhold, Kant was a philosophical genius, but he did not have the “genius of system” that would allow him to properly order his discoveries.
Reinhold’s Elementarphilosophie (Elementary Philosophy/Philosophy of Elements), laid out in his Essay Towards a New Theory of the Faculty of Representation (1789), Contribution to the Correction of the Previous Misunderstandings of the Philosophers (1790), and On the Foundation of Philosophical Knowledge (1791), was intended to address this shortcoming and show that Kant’s philosophy could be derived from a single foundational principle. Reinhold called this principle “the principle of consciousness” and states that “in consciousness, representation is distinguished by the subject from subject and object and is referred to both.” With this principle, Reinhold thought he could explain what is fundamental to all cognition, namely, that 1) cognition is essentially the conscious representation of an object by a subject and 2) that representations refer to both the subject and object of cognition. When Reinhold left Jena for a new position in Kiel in 1794, his chair was given to Johann Gottlieb Fichte (1762-1814), who quickly radicalized Kant’s idealism and Reinhold’s attempts to systematize philosophy. In response to a skeptical challenge to Reinhold’s Elementarphilosophie, raised anonymously by Gottlob Ernst Schulze (1761-1833) in his work Aenesidemus (1792), Fichte asserted that the principle of representation was not, as Reinhold had maintained, a fact (Tatsache) of consciousness, but rather an act (Tathandlung) whereby consciousness produces the distinction between subject and object by positing the distinction between the I and not-I (Breazeale, 1988, 64). This insight became the foundation of Fichte’s Wissenschaftslehre (Doctrine of Science/Doctrine of Scientific Knowledge) which was first published in 1794. It was soon followed by Fichte’s Foundations of Natural Right (1797) and the System of Ethics (1798). In later years, Fichte presented a number of substantially different versions of the Wissenschaftslehre in lectures in Berlin. When, as a result of a controversy concerning his religious views, Fichte left Jena in 1799, Friedrich Wilhelm Joseph von Schelling (1775-1854) became the most important idealist in Jena. Schelling had arrived in Jena in 1798, when he was only 23 years old, but he was already an enthusiastic proponent of Fichte’s philosophy, which he defended in early works like On the I as Principle of Philosophy (1795). Schelling had also established close relationships with the Jena romantics, who, despite their great interest in Kant, Reinhold, and Fichte, maintained a more skeptical attitude towards philosophy than the German idealists. Although Schelling did not share the romantics’ reservations about idealism, the proximity between Schelling and the romantics is evident in Schelling’s writings on the philosophy of nature and the philosophy of art, which he presented in his Ideas for a Philosophy of Nature (1797), System of Transcendental Idealism (1800), and Philosophy of Art (1802-1803). Georg Wilhelm Friedrich Hegel (1770-1831) had been Schelling’s classmate in Tübingen from 1790-1793. Along with the poet Friedrich Hölderlin (1770-1843), the two had collaborated on The Oldest Program for a System of German Idealism (1796). After following Schelling to Jena in 1801, Hegel published his first independent contributions to German idealism, The Difference Between Fichte’s and Schelling’s System of Philosophy (1801), in which he distinguishes Fichte’s “subjective” idealism from Schelling’s “objective” or “absolute” idealism. 
Hegel’s work documented the growing rift between Fichte and Schelling. This rift was to expand following Hegel’s falling-out with Schelling in 1807, when Hegel published his monumental Phenomenology of Spirit (1807). Although Hegel only published three more books during his lifetime, Science of Logic (1812-1816), Encyclopedia of the Philosophical Sciences (1817-1830), and Elements of the Philosophy of Right (1821), he remains the most widely-read and most influential of the German idealists. The German idealists have acquired a reputation for obscurity, because of the length and complexity of many of their works. As a consequence, they are often considered to be obscurantists and irrationalists. The German idealists were, however, neither obscurantists nor irrationalists. Their contributions to logic are earnest attempts to formulate a modern logic that is consistent with the idealism of their metaphysics and epistemology. Kant was the first of the German idealists to make important contributions to logic. In the Preface to the second (B) edition of the Critique of Pure Reason, Kant argues that logic has nothing to do with metaphysics, psychology, or anthropology, because logic is “the science that exhaustively presents and strictly proves nothing but the formal rules of all thinking” (Guyer and Wood 1998, 106-107/Bviii-Bix). Kant came to refer to this purely formal logic as “general” logic, which is to be contrasted with the “Transcendental Logic” that he develops in the second part of the “Transcendental Doctrine of Elements” in the Critique of Pure Reason. Transcendental logic differs from general logic because, like the principles of a priori sensibility that Kant presents in the “Transcendental Aesthetic” of the Critique of Pure Reason, transcendental logic is part of metaphysics. Transcendental logic also differs from general logic because it does not abstract from the content of cognition. Transcendental logic contains the laws of pure thinking as they pertain to the cognition of objects. This does not mean that transcendental logic is concerned with empirical objects as such, but rather with the a priori conditions of the possibility of the cognition of objects. Kant’s famous “Transcendental Deduction of the Pure Concepts of the Understanding” is meant to demonstrate that the concepts the transcendental logic presents as the a priori conditions of the possibility of the cognition of objects do, in fact, make the cognition of objects possible and are necessary conditions for any and all cognition of objects. In The Foundation of Philosophical Knowledge, Reinhold objects that Kant’s transcendental logic presupposed general logic, because transcendental logic is a “particular” logic from which general logic, or “logic proper, without surnames,” cannot be derived. Reinhold insisted that the laws of general logic had to be derived from the principle of consciousness if philosophy was to become systematic and scientific, but the possibility of this derivation was contested by Schulze in Aenesidemus. Schulze’s critique of Reinhold’s Elementarphilosophie focuses on the priority Reinhold attributes to the principle of consciousness. Because the principle of consciousness has to be consistent with basic logical principles like the principle of non-contradiction and the principle of the excluded middle, Schulze concluded that it could not be regarded as a first principle. 
The laws of general logic were, it seemed, prior to the principle of consciousness, so that even the Elementarphilosophie presupposed general logic. Fichte accepted many aspects of Schulze’s critique of Reinhold, but, like Reinhold, he thought it was crucial to demonstrate that the laws of logic could be derived from “real philosophy” or “metaphysics.” In his Personal Meditations on the Elementarphilosophie (1792-1793), his essay Concerning the Concept of the Wissenschaftslehre (1794), and then again in the Wissenschaftslehre of 1794, Fichte argued that the act that posits the distinction between the I and not-I determines consciousness in a way that makes logical analysis possible. Logical analysis is always undertaken reflectively, according to Fichte, because it presupposes that consciousness has already been determined in some way. So, while Kant maintains that transcendental logic presupposes general logic, Reinhold attempts to derive the laws of general logic from the principle of consciousness, and Schulze shows Reinhold to presuppose the same principles, Fichte forcefully asserts that logic presupposes the determination of thought “as a fact of consciousness,” which itself depends upon the act through which consciousness is originally determined. Hegel’s contributions to logic have been far more influential than those of Reinhold or Fichte. His Science of Logic (also known as the “Greater Logic”) and the Logic that constitutes the first part of the Encyclopedia of the Philosophical Sciences (also known as the “Lesser Logic”) are not contributions to earlier debates about the priority of general logic. Nor do they accept that what Kant called “general” logic and Reinhold called “logic proper, without surnames” is purely formal logic. Because Hegel was convinced that truth is both formal and material, and not one or the other, he sought to establish the dialectical unity of the formal and the material in his works on logic. The meaning of the word “dialectical” is, of course, much debated, as is the specific mechanism through which the dialectic produces and resolves the contradictions that move thought from one form of consciousness to another. For Hegel, however, this process accounts for the genesis of the categories and concepts through which all cognition is determined. Logic reveals the unity of that process. German idealism’s contributions to logic were largely dismissed following the rise of empiricism and positivism in the nineteenth century, as well as the revolutions in logic that took place at the beginning of the twentieth century. Today, however, there is a renewed interest in this part of the idealist tradition, as is evident in the attention which has been paid to Kant’s lectures on logic and the new editions and translations of Hegel’s writings and lectures on logic. German idealism is a form of idealism. The idealism espoused by the German idealists is, however, different from other kinds of idealism with which contemporary philosophers may be more familiar. While earlier idealists maintained that reality is ultimately intellectual rather than material (Plato) or that the existence of objects is mind-dependent (Berkeley), the German idealists reject the distinctions these views presuppose. 
In addition to the distinction between the material and the formal and the distinction between the real and the ideal, Fichte, Schelling, and Hegel also reject the distinction between being and thinking, further complicating the German idealists' views on metaphysics and epistemology. Kant's idealism is, perhaps, the most moderate form of idealism associated with German idealism. Kant holds that the objects of human cognition are transcendentally ideal and empirically real. They are transcendentally ideal, because the conditions of the cognition human beings have of objects are to be found in the cognitive faculties of human beings. This does not mean the existence of those objects is mind-dependent, because Kant thinks we can only know objects to the extent that they are objects for us and, thus, as they appear to us. Idealism with respect to appearances does not entail the mind-dependence of objects, because it does not commit itself to any claims about the nature of things in themselves. Kant denies that we have any knowledge of things in themselves, because we do not have the capacity to make judgments about the nature of things in themselves based on our knowledge of things as they appear. Despite our ignorance of things in themselves, Kant thought we could have objectively valid cognition of empirically real objects. Kant recognized that we are affected by things outside ourselves and that this affection produces sensations. These sensations are, for Kant, the “matter” of sensible intuition. Along with the pure “forms” of intuition, space and time, sensations constitute the “matter” of judgment. The pure concepts of the understanding are the “forms” of judgment, which Kant demonstrates to be the conditions of the possibility of objectively valid cognition in the “Deduction of the Pure Concepts of the Understanding” in the Critique of Pure Reason. The synthesis of matter and form in judgment therefore produces objectively valid cognition of empirically real objects. To say that the idealism of Fichte, Schelling, and Hegel is more radical than Kant's idealism is to understate the difference between Kant and the philosophers he inspired. Kant proposed a “modest” idealism, which attempted to prove that our knowledge of appearances is objectively valid. Fichte, however, maintains that the very idea of a thing in itself, a thing which is not an object for us and which exists independently of our consciousness, is a contradiction in terms. There can be no thing in itself, Fichte claims, because a thing is only a thing when it is something for us. Even the thing in itself is, in fact, a product of our own conscious thought, meaning the thing in itself is nothing other than a postulation of our own consciousness. Thus, it is not a thing in itself, but just another object for us. From this line of reasoning, Fichte concludes that “everything which occurs in our mind can be completely explained and comprehended on the basis of the mind itself” (Breazeale 1988, 69). This is a much more radical form of idealism than Kant maintained. For Fichte holds that consciousness is a circle in which the I posits itself and determines what belongs to the I and what belongs to the not-I. This circularity is necessary and unavoidable, Fichte maintains, but philosophy is a reflective activity in which the spontaneous positing activity of the I and the determinations of the I and not-I are comprehended.
Schelling defended Fichte’s idealism in On the I as Principle of Philosophy, where he maintained that the I is the unconditioned condition of both being and thinking. Because the existence of the I precedes all thinking (I must exist in order to think) and because thinking determines all being (A thing is nothing other than an object of thought), Schelling argued, the absolute I, not Reinhold’s principle of consciousness, must be the fundamental principle of all philosophy. In subsequent works like the System of Transcendental Idealism, however, Schelling pursued a different course, arguing that the essential and primordial unity of being and thinking can be understood from two different directions, beginning either with nature or spirit. It could be deduced from the absolute I as Fichte had done, but it could also arise from the unconscious but dynamic powers of nature. By showing how these two different approaches complemented one another, Schelling thought he had shown how the distinction between being and thinking, nature and spirit, could be overcome. Fichte was not pleased with the innovations of Schelling’s idealism, because he initially thought of Schelling as a disciple and a defender of his own position. Fichte did not initially respond to Schelling’s works, but, in an exchange that began in 1800, he began to argue that Schelling had confused the real and the ideal, making the I, the ideal, dependent upon nature, the real. Fichte thought this violated the principles of transcendental idealism and his own Wissenschaftslehre, leading him to suspect that Schelling was no longer the disciple he took him to be. Intervening on Schelling’s behalf as the dispute became more heated, Hegel argued that Fichte’s idealism was “subjective” idealism, while Schelling’s idealism was “objective” idealism. This means that Fichte considers the I to be the absolute and denies the identity of the I and the not-I. He privileges the subject at the expense of the identity of subject and object. Schelling, however, attempts to establish the identity of the subject and object by establishing the objectivity of the subject, the I, as well as the subjectivity of the object, nature. The idealism Schelling and Hegel defend recognizes the identity of subject and object as the “absolute,” unconditioned first principle of philosophy. For that reason, it is often called the philosophy of identity. It is clear that by the time he published the Phenomenology of Spirit, Hegel was no longer interested in defending Schelling’s system. In the Phenomenology, Hegel famously calls Schelling’s understanding of the identity of subject and object “the night in which all cows are black,” meaning that Schelling’s conception of the identity of subject and object erases the many and varied distinctions which determine the different forms of consciousness. These distinctions are crucial for Hegel, who came to believe that the absolute can only be realized by passing through the different forms of consciousness which are comprehended in the self-consciousness of absolute knowledge or spirit (Geist). 
Contemporary scholars like Robert Pippin and Robert Stern have debated whether Hegel’s position is to be regarded as a metaphysical or merely epistemological form of idealism, because it is not entirely clear whether Hegel regarded the distinctions that constitute the different forms of consciousness as merely the conditions necessary for understanding objects (Pippin) or whether they express fundamental commitments about the way things are (Stern). However, it is almost certainly true that Hegel’s idealism is both epistemological and metaphysical. Like Fichte and Schelling, Hegel sought to overcome the limits Kant’s transcendental idealism had placed on philosophy, in order to complete the idealist revolution he had begun. The German idealists agreed that this could only be done by tracing all the different parts of philosophy back to a single principle, whether that principle is the I (in Fichte and the early Schelling) or the absolute (in Hegel). The moral and political philosophy of the German idealists is perhaps the most influential part of their legacy, but it is also one of the most controversial. Many appreciate the emphasis Kant placed on freedom and autonomy in both morality and politics; yet they reject Kant’s moral and political philosophy for its formalism. Fichte’s moral and political philosophy has only recently been studied in detail, but his popular and polemical writings have led some to see him as an extreme nationalist and, perhaps, a precursor to fascism. Hegel is, by some accounts, an apologist for the totalitarian “absolute state.” In what follows, a more even-handed assessment of their views and their merits is developed. Kantian moral philosophy has been an important part of moral theory since the nineteenth century. Today, it is commonly associated with deontological moral theories, which emphasize duty and obligation, as well as constructivism, which is concerned with the procedures through which moral norms are constructed. Supporters of both approaches frequently refer to the categorical imperative and the different formulations of that imperative which are to be found in Kant’s Groundwork of the Metaphysics of Morals (1785) and the Critique of Practical Reason (1788). They often take the categorical imperative, or one of its formulations, as a general definition of the right or the good. The categorical imperative served a slightly different purpose for Kant. In the Groundwork, Kant uses the categorical imperative to define the form of the good will. Kant thought moral philosophy was primarily concerned with the determination of the will. The categorical imperative shows that, in order to be good, the will must be determined according to a rule that is both universal and necessary. Any violation of this rule would result in a contradiction and, therefore, moral impossibility. The categorical imperative provides Kant with a valid procedure and a universal and necessary determination of what is morally obligatory. Yet in order to determine the will, Kant thought human beings had to be free. Because freedom cannot be proven in theoretical philosophy, however, Kant says that reason forces us to recognize the concept of freedom as a “fact” of pure practical reason. Kant thinks freedom is necessary for any practical philosophy, because the moral worth and merit of human beings depends on the way they determine their own wills. Without freedom, they would not be able to determine their own wills to the good and we could not hold them responsible for their actions. 
Thus freedom and autonomy are absolutely crucial for Kant’s understanding of moral philosophy. The political significance of autonomy becomes apparent in some of Kant’s late essays, where he supports a republican politics of freedom, equality, and the rule of law. Kant’s moral philosophy affected Fichte profoundly, especially the Critique of Practical Reason. “I have been living in a new world ever since reading the Critique of Practical Reason,” Fichte reports, “propositions which I thought could never be overturned have been overturned for me. Things have been proven to me which I thought could never be proven, e.g., the concept of absolute freedom, the concept of duty, etc., and I feel all the happier for it” (Breazeale 1988, 357). His passion for Kant’s moral philosophy can be seen in the Aenesidemus review, where Fichte defends the “primacy” of practical reason over theoretical reason, which he takes to be the foundation of Kant’s “moral theology.” Despite his admiration for Kant’s moral philosophy, Fichte thought he could go beyond Kant’s formalism. In his essay Concerning the Concept of Wissenschaftslehre, Fichte describes the second, practical part of his plan for Wissenschaftslehre, in which “new and thoroughly elaborated theories of the pleasant, the beautiful, the sublime, the free obedience of nature to its own laws, God, so-called common sense or the natural sense of truth” are laid out, but which also contains “new theories of natural law and morality, the principles of which are material as well as formal” (Breazeale 1988, 135). Unlike Kant, in other words, Fichte would not simply determine the form of the good will, but the ways in which moral and political principles are applied in action. Fichte’s interest in the material principles of moral and political philosophy can be seen in his Foundations of Natural Right and System of Ethics. In both works, Fichte emphasizes the applicability of moral and political principles to action. But he also emphasizes the social context in which these principles are applied. While the I posits itself as well as the not-I, Fichte thinks the I must posit itself as an individual among other individuals, if it is to posit itself “as a rational being with self-consciousness.” The presence of others checks the freedom of the I, because the principles of morality and natural right both require that individual freedom cannot interfere with the freedom of other individuals. Thus the freedom of the I and the relations between individuals and members of the community are governed by the principles of morality and right, which may be applied to all their actions and interactions. Hegel was also concerned about the formalism of Kant’s moral philosophy, but Hegel approached the problem in a slightly different way than Fichte. In the Phenomenology of Spirit, Hegel describes the breakdown of the “ethical life” (Sittlichkeit) of the community. Hegel understands ethical life as the original unity of social life. While he thinks the unity of ethical life precedes any understanding of the community as a free association of individuals, Hegel also thinks the unity of ethical life is destined to break down. As members of the community become conscious of themselves as individuals, through the conflicts that arise between family and city and between religious law and civil law, ethical life becomes more and more fragmented and the ties that bind the community become less and less immediate. 
This process is illustrated, in the Phenomenology, by Hegel’s famous – if elliptical – retelling of Sophocles’ Antigone. Hegel provides a different account of ethical life in the Foundations of the Philosophy of Right. In this work, he contrasts ethical life with morality and abstract right. Abstract right is the name Hegel gives to the idea that individuals are the sole bearers of right. The problem with this view is that it abstracts right from the social and political context in which individuals exercise their rights and realize their freedom. Morality differs from abstract right because morality recognizes the good as something universal rather than particular. Morality recognizes the “common good” of the community as something that transcends the individual; yet it defines the good through a purely formal system of obligations, which is, in the end, no less abstract than abstract right. Ethical life is not presented as the original unity of the habits and customs of the community, but, rather, as a dynamic system in which individuals, families, civil society, and the state come together to promote the realization of human freedom. Traditional accounts of Hegel’s social and political philosophy have seen Hegel’s account of ethical life as an apology for the Prussian state. This is understandable, given the role the state plays in the final section of the Philosophy of Right on “World History.” Here Hegel says “self-consciousness finds in an organic development the actuality of its substantive knowing and willing” in the Germanic state (Wood 1991, 379-380). To see the state as the culmination of world history and the ultimate realization of human freedom is, however, to overlook several important factors, including Hegel’s personal commitments to political reform and personal freedom. These commitments are reflected in Hegel’s defense of freedom in the Philosophy of Right, as well as the role he thought the family and especially civil society played in ethical life. The German idealists’ interest in aesthetics distinguishes them from other modern systematic philosophers (Descartes, Leibniz, Wolff) for whom aesthetics was a matter of secondary concern at best. And while there was, to be sure, considerable disagreement about the relationship between art, aesthetics, and philosophy among the German idealists, the terms of their disagreement continue to be debated in philosophy and the arts. For most of his career, Kant regarded aesthetics as an empirical critique of taste. In lectures and notes from the 1770s, several of which were later incorporated into Kant’s Logic (1800), Kant denies that aesthetics can be a science. Kant changed his mind in 1787, when he told Reinhold he had discovered the a priori principles of the faculty of feeling pleasure and displeasure. Kant laid out these principles in the first part of the Critique of the Power of Judgment (1790), where he characterizes aesthetic judgment as a “reflective” judgment, based on “the consciousness of the merely formal purposiveness in the play of the cognitive powers of the subject with regard to the animation of its cognitive powers” (Guyer and Matthews 2000, 106-107). According to Kant, it is the free yet harmonious play of our cognitive faculties in aesthetic judgment that is the source of the feeling of pleasure that we associate with beauty. Reinhold and Fichte had little to say about art and beauty, despite Fichte’s promise to deal with the subject in the second, practical part of his Wissenschaftslehre.
Aesthetics was, however, of critical importance for Schelling, Hegel, and Hölderlin. In the Oldest Program for a System of German Idealism, they write that beauty is “the idea that unites everything” and “the highest act of reason” (Bernstein 2003, 186). Thus they insist that the “philosophy of spirit” must also be an “aesthetic” philosophy, uniting the sensible and the intellectual as well as the real and the ideal. It was Schelling, rather than Hegel or Hölderlin, who did the most to formulate this “aesthetic” philosophy in the years following his move to Jena. In the System of Transcendental Idealism and Philosophy of Art, Schelling argues that the absolute is both revealed by and embodied in works of art. Art is, for Schelling, “the only true and eternal organ and document of philosophy” (Heath 1978, 231). Art is of “paramount” importance to the philosopher, because it opens up “the holy of holies, where burns in eternal and original unity, as if in a single flame, that which is rent asunder in nature and history and that which, in life and action, no less than in thought, must forever fly apart” (Heath 1978, 231). Hegel would later contest Schelling’s characterization of the artwork and its relation to philosophy in his Lectures on Fine Arts. According to Hegel, art is not the revelation and embodiment of philosophy, but an alienated form of self-consciousness. The greatest expression of spirit is not to be found in the work of art, as Schelling suggested, but in the “idea.” Beauty, which Hegel calls “the sensuous appearance of the idea,” is not an adequate expression of the absolute, precisely because it is a sensuous appearance. Nevertheless, Hegel acknowledges that the alienated and sensuous appearance of the idea can play an important role in the dialectical process through which we become conscious of the absolute in philosophy. He distinguishes three kinds of art, symbolic art, classical art, and romantic art, corresponding to three different stages in the development of our consciousness of the absolute, which express different aspects of the idea in different ways. Hegel argues that the kind of art that corresponds to the first stage in the development of our understanding of spirit, symbolic art, fails to adequately represent the idea, but points to the idea as something beyond itself. This “beyond” cannot be captured by images, plastic forms, or words and therefore remains abstract for symbolic art. However, the art corresponding to the second stage in the development of our understanding of spirit, classical art, strives to reconcile the abstract and the concrete in an individual work. It aims to present a perfect, sensible expression of the idea and, for that reason, represents the “ideal” of beauty for Hegel. Yet the problem remains, inasmuch as the idea which is expressed by classical art is not, in itself, sensible. The sensible presentation of the idea remains external to the idea itself. Romantic art calls attention to this fact by emphasizing the sensuousness and individuality of the work. Unlike symbolic art, however, romantic art supposes that the idea can be discovered within and through the work of art. In effect, the work of art tries to reveal the truth of the idea in itself. Yet when the idea is grasped concretely, in itself, rather than through the work of art, we have achieved a philosophical understanding of the absolute, which does not require the supplement of sensible appearance. 
For this reason, Hegel speculated that the emergence of philosophical self-consciousness signaled the end of art. “The form of art,” he says, “has ceased to be the supreme need of spirit” (Knox 1964, 10). Hegel’s thesis concerning the “end” of art has been widely debated and raises many important questions. What, for example, are we to make of developments in the arts that occurred “after” the end of art? What purpose might art continue to serve, if we have already achieved philosophical self-consciousness? And, perhaps most importantly, has philosophy really achieved absolute knowledge, which would render any “sensuous appearance” of the idea obsolete? These are important questions, but they are difficult to answer. Like Kant’s and Schelling’s, Hegel’s views on aesthetics were part of his philosophical system, and they served a specific purpose within that system. To question the end of art in Hegel is, for that reason, to question the entire system and the degree to which it presents a true account of the absolute. Yet that also is why aesthetics and the philosophy of art allow us important insight into Hegel’s thought and the thought of the German idealists more generally. Fichte, Hegel, and Schelling ended their careers in the same chair in Berlin. Fichte spent his later years reformulating the Wissenschaftslehre in lectures and seminars, hoping to finally find an audience that understood him. Hegel, who was called to take Fichte’s chair after Fichte’s death, lectured on the history of philosophy, the philosophy of history, the philosophy of religion, and the philosophy of fine art (his lectures on these subjects have been no less influential than his published works). Hegel gained a considerable following among both conservatives and liberals in Berlin, who came to be known as “right” (or “old”) and “left” (or “young”) Hegelians. Schelling’s views seem to have changed the most between the turn of the century and his arrival in Berlin. The “positive” philosophy he articulated in his late works is no longer idealist, because Schelling no longer maintains that being and thinking are identical. Nor does the late Schelling think that thought can ground itself in its own activity. Instead, thought must find its ground in “the primordial kind of all being.” Arthur Schopenhauer (1788-1860), Søren Kierkegaard (1813-1855), and Karl Marx (1818-1883) all witnessed the decline of German idealism in Berlin. Schopenhauer had studied with Schulze in Göttingen and attended Fichte’s lectures in Berlin, but he is not considered a German idealist by many historians of philosophy. Some, like Günter Zöller, have argued against this exclusion, suggesting that the first edition of The World as Will and Representation is, in fact, “the first completely executed post-Kantian philosophical system” (Ameriks 2000, 101). Whether or not this system is really idealist is, however, a matter of some dispute. Claims that Schopenhauer is not an idealist usually take as their starting point the second part of The World as Will and Representation, where Schopenhauer claims that the representations of the “pure subject of cognition” are grounded in the will and, ultimately, in the body. It is easier to distinguish Kierkegaard and Marx from the German idealists than Schopenhauer, though Kierkegaard and Marx are perhaps as different from one another as they could possibly be. Kierkegaard studied with the late Schelling, but, like Jacobi, rejected reason and philosophy in the name of faith.
Many of his works are elaborate parodies of the kind of reasoning to be found in the works of the German idealists, especially Hegel. Marx, along with another one of Schelling’s students, Friedrich Engels (1820-1895), came to deride idealism as the “German ideology.” Marx and Engels charged that idealism had never really broken with religion, that it comprehended the world through abstract, logical categories, and that, finally, it mistook mere ideas for real things. Marx and Engels promoted their own historical materialism as an alternative to the ideology of idealism. There is a tendency to overemphasize figures like Schopenhauer, Kierkegaard, and Marx in the history of philosophy in the nineteenth century, but this distorts our understanding of the developments taking place at the time. It was the rise of empirical methods in the natural sciences and historical-critical methods in the human sciences, as well as the growth of Neo-Kantianism and positivism, that led to the eclipse of German idealism, not the blistering critiques of Schopenhauer, Kierkegaard, Marx, and Nietzsche. Neo-Kantianism, in particular, sought to leave behind the speculative excesses of German idealism and extract from Kant those ideas that were useful for the philosophy of the natural and human sciences. In the process, the Neo-Kantians established their school as the dominant philosophical movement in Germany at the end of the nineteenth century. Despite its general decline, German idealism remained an important influence on the British idealism of F.H. Bradley (1846-1924) and Bernard Bosanquet (1848-1923) at the beginning of the twentieth century. The rejection of British idealism was one of the common features of early analytic philosophy, though it would be wrong to suppose that Bertrand Russell (1872-1970), G.E. Moore (1873-1958), and others rejected idealism for purely philosophical reasons. The belief that German idealism was at least partly responsible for German nationalism and aggression was common among philosophers of Russell’s generation and only became stronger after World War I and World War II. The famous depiction of Hegel as an “enemy of liberty” and a “totalitarian” in The Open Society and its Enemies (1946) by Karl Popper (1902-1994) builds upon this view. And while it would be difficult to prove that any particular philosophy was responsible for German nationalism or the rise of fascism, it is true that the works of Fichte and Hegel were, like those of Nietzsche, favorite references for German nationalists and, later, the Nazis. The works of the German idealists, especially Hegel, became important in France during the 1930s. Alexandre Kojève’s (1902-1968) lectures on Hegel influenced a generation of French intellectuals, including Georges Bataille (1897-1962), Jacques Lacan (1901-1981), and Jean-Paul Sartre (1905-1980). Kojève’s understanding of Hegel is idiosyncratic, but, together with the works of Jean Wahl (1888-1974), Alexandre Koyré (1892-1964), and Jean Hyppolite (1907-1968), his approach remains influential in continental European philosophy. Objections to the anthropocentrism of German idealism can usually be traced back to this tradition and especially to Kojève, who saw Hegel’s dialectic as a historical process through which the problems that define humanity are resolved. The end of this process is, for Kojève, the end of history, an idea popularized by Francis Fukuyama (1952-) in The End of History and the Last Man (1992).
Charges that German idealism is dogmatic, rationalist, foundationalist, and totalizing in its attempt to systematize, and ultimately an egocentric “philosophy of the subject,” which are also common in continental philosophy, merit more serious concern, given the emphasis Fichte, Schelling, and Hegel place on the “I” and the extent of their philosophical ambitions. Yet even these charges have been undermined in recent years by new historical scholarship and a greater understanding of the problems that actually motivated the German idealists. There has been considerable interest in German idealism in the last twenty years, as hostility waned in analytic philosophy, traditional assumptions faded in continental philosophy, and bridges were built between the two approaches. Philosophers like Richard Bernstein and Richard Rorty, inspired by Wilfrid Sellars, may be credited with re-introducing Hegel to analytic philosophy as an alternative to classical empiricism. Robert Pippin later defended a non-metaphysical reading of Hegel, which has been a subject of intense debate, but which has also made Hegel relevant to contemporary debates about realism and anti-realism. More recently, Robert Brandom has championed the “normative” conception of rationality that he finds in Kant and Hegel, and which suggests that concepts function as rules regulating judgment rather than as mere representations. Some, like Catherine Malabou, have even attempted to apply the insights of the German idealists to contemporary neuroscience. Finally, it would be remiss not to mention the extraordinary historical-philosophical scholarship, in both German and English, that has been produced on German idealism in recent years. The literature listed in the bibliography has not only enriched our understanding of German idealism with new editions, translations, and commentaries, it has also expanded the horizons of philosophical scholarship by identifying new problems and new solutions to problems arising in different traditions and contexts.

- Weischedel, Wilhelm, ed. Kants Werke in sechs Bänden. Wiesbaden: Insel Verlag, 1956-1962.
- Kants Gesammelte Schriften, herausgegeben von der Preussischen Akademie der Wissenschaften. Berlin: Walter de Gruyter, 1902.
- Bowman, Curtis, Guyer, Paul, and Rauscher, Frederick, trans. and Guyer, Paul, ed. Immanuel Kant: Notes and Fragments. Cambridge: Cambridge University Press, 2005.
- Allison, Henry and Heath, Peter, eds. Immanuel Kant: Theoretical Philosophy After 1781. Cambridge: Cambridge University Press, 2002.
- Guyer, Paul and Matthews, Eric, trans. and eds. Immanuel Kant: Critique of the Power of Judgment. Cambridge: Cambridge University Press, 2000.
- Zweig, Arnulf, trans. and ed. Immanuel Kant: Correspondence. Cambridge: Cambridge University Press, 1999.
- Guyer, Paul and Wood, Allen W. Immanuel Kant: Critique of Pure Reason. Cambridge: Cambridge University Press, 1998.
- Heath, Peter and Schneewind, Jerome B., trans. and eds. Lectures on Ethics. New York: Cambridge University Press, 1997.
- Ameriks, Karl and Naragon, Steve, trans. and eds. Immanuel Kant: Lectures on Metaphysics. Cambridge: Cambridge University Press, 1997.
- Gregor, Mary, trans. and ed. Immanuel Kant: Practical Philosophy. Cambridge: Cambridge University Press, 1996.
- Wood, Allen W. and di Giovanni, George, trans. and eds. Immanuel Kant: Religion and Rational Theology. Cambridge: Cambridge University Press, 1996.
- Walford, David and Meerbote, Ralf, trans. and eds. Immanuel Kant: Theoretical Philosophy, 1755-1770. Cambridge: Cambridge University Press, 1992.
- Young, J. Michael, trans. and ed. Immanuel Kant: Lectures on Logic. Cambridge: Cambridge University Press, 1992.
- Kemp Smith, Norman, trans. The Critique of Pure Reason. London: Palgrave Macmillan, 2003.
- Pluhar, Werner, trans. Critique of Judgment, Including the First Introduction. Indianapolis: Hackett Publishing, 1987.
- Allison, Henry E., trans. The Kant-Eberhard Controversy. Baltimore: Johns Hopkins University Press, 1973.
- Fichte, Immanuel Hermann, ed. Fichtes Werke. Berlin: Walter de Gruyter, 1971.
- Lauth, Reinhard, Gliwitzky, Hans, and Jacob, Hans, eds. J.G. Fichte: Gesamtausgabe der Bayerischen Akademie der Wissenschaften. Stuttgart-Bad Cannstatt: Frommann-Holzboog Verlag, 1962.
- Green, Garrett, trans. and Wood, Allen, ed. Attempt at a Critique of All Revelation. Cambridge: Cambridge University Press, 2010.
- Breazeale, Daniel and Zöller, Günter. The System of Ethics According to the Principles of the Wissenschaftslehre. Cambridge: Cambridge University Press, 2005.
- Neuhouser, Frederick and Baur, Michael, trans. and eds. Foundations of Natural Right. Cambridge: Cambridge University Press, 2000.
- Breazeale, Daniel, trans. and ed. Introductions to the Wissenschaftslehre and Other Writings. Indianapolis: Hackett Publishing, 1994.
- Breazeale, Daniel, trans. and ed. Foundations of the Transcendental Philosophy (Wissenschaftslehre Nova Methodo, 1796-1799). Ithaca: Cornell University Press, 1992.
- Breazeale, Daniel, trans. and ed. Early Philosophical Writings. Ithaca: Cornell University Press, 1988.
- Preuss, Peter, trans. The Vocation of Man. Indianapolis: Hackett Publishing, 1987.
- Heath, Peter and Lachs, John, trans. Science of Knowledge. Cambridge: Cambridge University Press, 1982.
- Jones, R. F. and Turnbull, George Henry, trans. Addresses to the German Nation. New York: Harper & Row, 1968.
- Moldenhauer, Eva and Michel, Karl Markus, eds. Georg Wilhelm Friedrich Hegel: Werke. Frankfurt am Main: Suhrkamp, 1971-1979.
- Hoffmeister, Johannes, ed. Briefe von und an Hegel. Hamburg: Meiner, 1969.
- Deutsche Forschungsgemeinschaft in Verbindung mit der Rheinisch-Westfälischen Akademie der Wissenschaften, ed. Hegels Gesammelte Werke. Kritische Ausgabe. Hamburg: Meiner Verlag, 1968.
- Di Giovanni, George, trans. and ed. The Science of Logic. Cambridge: Cambridge University Press, 2010.
- Brinkmann, Klaus and Dahlstrom, Daniel O., trans. and eds. Encyclopaedia of the Philosophical Sciences in Basic Outline, Part 1, Logic. Cambridge: Cambridge University Press, 2010.
- Bowman, Brady and Speight, Allen. Heidelberg Writings. Cambridge: Cambridge University Press, 2009.
- Nisbet, H.B., trans. Wood, Allen, ed. Elements of the Philosophy of Right. Cambridge: Cambridge University Press, 1991.
- Geraets, Theodore F., Harris, H.S., and Suchting, Wallis Arthur, trans. The Encyclopedia Logic. Indianapolis: Hackett Publishing, 1991.
- Brown, Robert, ed. Lectures on the History of Philosophy. Berkeley: University of California Press, 1990.
- Burbidge, John S., trans. The Jena System 1804/1805: Logic and Metaphysics. Montreal: McGill/Queen’s University Press, 1986.
- Miller, A.V., trans. George, Michael and Vincent, Andrew, eds. The Philosophical Propaedeutic. Oxford: Blackwell, 1986.
- Hodgson, Peter and Brown, R. F., trans. Lectures on the Philosophy of Religion. Berkeley: University of California Press, 1984-1986.
- Dobbins, John and Fuss, Peter, trans. Three Essays 1793-1795. South Bend: University of Notre Dame Press, 1984.
- Cerf, Walter and Harris, H.S., trans. System of Ethical Life and First Philosophy of Spirit. Albany: State University of New York Press, 1979.
- Petry, Michael John, trans. and ed. Hegels Philosophie des subjektiven Geistes/Hegel’s Philosophy of Subjective Spirit. Dordrecht: Reidel, 1978.
- Miller, A.V., trans. Phenomenology of Spirit. Oxford: Oxford University Press, 1977.
- Cerf, Walter and Harris, H.S., trans. The Difference Between Fichte’s and Schelling’s System of Philosophy. Albany: State University of New York Press, 1977.
- Cerf, Walter and Harris, H.S., trans. Faith and Knowledge. Albany: State University of New York Press, 1977.
- Nisbet, H.B., trans. Lectures on the Philosophy of World History: Introduction. Cambridge: Cambridge University Press, 1975.
- Wallace, William, trans. Hegel’s Philosophy of Mind. Oxford: Oxford University Press, 1971.
- Miller, A.V., trans. Philosophy of Nature. Oxford: Oxford University Press, 1970.
- Miller, A.V., trans. Science of Logic. London: George Allen & Unwin, 1969.
- Knox, T.M., trans. Hegel’s Aesthetics. Oxford: Clarendon Press, 1964.
- Frank, Manfred and Kurz, Gerhard, eds. Materialien zu Schellings philosophischen Anfängen. Frankfurt: Suhrkamp, 1995.
- Jacobs, Wilhelm G., Krings, Hermann, and Zeltner, Hermann, eds. F.W.J. von Schelling: Historisch-kritische Ausgabe. Stuttgart-Bad Cannstatt: Frommann-Holzboog, 1976-.
- Fuhrmans, Horst, ed. Schelling: Briefe und Dokumente. Bonn: Bouvier, 1973.
- Love, Jeff and Schmitt, Johannes, trans. Philosophical Investigations into the Essence of Human Freedom. Albany: State University of New York Press, 2007.
- Matthews, Bruce, trans. The Grounding of Positive Philosophy. Albany: State University of New York Press, 2007.
- Richey, Mason and Zisselsberger, Markus, trans. Historical-Critical Introduction to the Philosophy of Mythology. Albany: State University of New York Press, 2007.
- Peterson, Keith R., trans. and ed. First Outline of a System of the Philosophy of Nature. Albany: State University of New York Press, 2004.
- Steinkamp, Fiona, trans. Clara, or On Nature’s Connection to the Spirit World. Albany: State University of New York Press, 2002.
- Wirth, Jason M., trans. The Ages of the World. Albany: State University of New York Press, 2000.
- Bowie, Andrew, trans. On the History of Modern Philosophy. Cambridge: Cambridge University Press, 1994.
- Pfau, Thomas, trans. and ed. Idealism and the Endgame of Theory: Three Essays by F.W.J. Schelling. Albany: State University of New York Press, 1994.
- Stott, Douglas W., trans. The Philosophy of Art. Minneapolis: University of Minnesota Press, 1989.
- Gutmann, James, trans. Philosophical Inquiries into the Nature of Human Freedom. La Salle: Open Court, 1989.
- Harris, Errol and Heath, Peter, trans. Ideas for a Philosophy of Nature. Cambridge: Cambridge University Press, 1988.
- Vater, Michael G., trans. Bruno, or On the Natural and the Divine Principle of Things. Albany: State University of New York Press, 1984.
- Marti, Fritz, trans. and ed. The Unconditional in Human Knowledge: Four Early Essays. Lewisburg: Bucknell University Press, 1980.
- Heath, Peter, trans. System of Transcendental Idealism. Charlottesville, VA: University Press of Virginia, 1978.
- Morgan, E. S. and Guterman, Norbert, trans. On University Studies. Athens: Ohio University Press, 1966.
- Hammacher, Klaus and Jaeschke, Walter, eds. Friedrich Heinrich Jacobi: Werke. Hamburg: Meiner Verlag, 1998.
- Di Giovanni, George, trans. and ed. Friedrich Heinrich Jacobi: The Main Philosophical Writings and the Novel Allwill. Montreal: McGill/Queen’s University Press, 1994.
- Köppen, Friedrich and von Roth, Friedrich, eds. Friedrich Heinrich Jacobi: Werke. Darmstadt: Wissenschaftliche Buchgesellschaft, 1968.
- Hebbeler, James, trans., and Ameriks, Karl, ed. Letters on the Kantian Philosophy. Cambridge: Cambridge University Press, 2005.
- Fabbianelli, Faustino, ed. Beiträge zur Berichtigung bisheriger Missverständnisse der Philosophen. Hamburg: Meiner Verlag, 2003.
- Di Giovanni, George and Harris, H.S. Between Kant and Hegel: Texts in the Development of Post-Kantian Idealism. Indianapolis: Hackett Publishing, 2000.
- Beissner, Friedrich, ed. Hölderlin: Sämtliche Werke, Große Stuttgarter Ausgabe. Stuttgart: Cotta, 1943-85.
- Pfau, Thomas, trans. and ed. Essays and Letters on Theory. Albany: State University of New York Press, 1988.
- Cappelørn, N.J., et al., eds. Søren Kierkegaards Skrifter. Copenhagen: Gad, 1997.
- Hong, Howard V. and Hong, Edna H., eds. Kierkegaard’s Writings. Princeton: Princeton University Press, 1983-2009.
- Pascal, Roy, ed. The German Ideology. New York: International Publishers, 1947.
- Ryazanov, David and Adoratskii, Vladimir Viktorovich, eds. Karl Marx und Friedrich Engels: Historisch-Kritische Gesamtausgabe. Berlin: Dietz Verlag, 1956.
- Janaway, Christopher, Norman, Judith, and Welchman, Alistair, trans. and eds. The World as Will and Representation. Cambridge: Cambridge University Press, 2010.
- Aquila, Richard and Carus, David, trans. The World as Will and Presentation. New York: Pearson Longman, 2008.
- Payne, Eric F. and Zöller, Günter, trans. Prize Essay on the Freedom of the Will. Cambridge: Cambridge University Press, 1999.
- Payne, Eric F., trans. On the Fourfold Root of the Principle of Sufficient Reason. La Salle: Open Court, 1989.
- Payne, Eric F., trans. The World as Will and Representation. New York: Dover, 1974.
- Hübscher, Arthur, ed. Sämtliche Werke. Mannheim: Brockhaus, 1988.
- Allison, Henry. Kant’s Transcendental Idealism (2nd Edition). New Haven: Yale University Press, 2004.
- Allison, Henry. Idealism and Freedom. Cambridge: Cambridge University Press, 1996.
- Ameriks, Karl, ed. The Cambridge Companion to German Idealism. Cambridge: Cambridge University Press, 2000.
- Ameriks, Karl. Kant and the Fate of Autonomy: Problems in the Appropriation of the Critical Philosophy. Cambridge: Cambridge University Press, 2000.
- Avineri, Shlomo. Hegel’s Theory of the Modern State. Cambridge: Cambridge University Press, 1972.
- Baur, Michael and Dahlstrom, Daniel, eds. The Emergence of German Idealism. Washington, DC: Catholic University of America Press, 1999.
- Beiser, Frederick. Hegel. London: Routledge, 2005.
- Beiser, Frederick, ed. The Cambridge Companion to Hegel. Cambridge: Cambridge University Press, 1993.
- Beiser, Frederick. Enlightenment, Revolution, and Romanticism: The Genesis of Modern German Political Thought. Cambridge: Harvard University Press, 1992.
- Beiser, Frederick. The Fate of Reason: German Philosophy from Kant to Fichte. Cambridge: Harvard University Press, 1987.
- Breazeale, Daniel and Rockmore, Thomas, eds. Fichte: Historical Contexts/Contemporary Controversies. Atlantic Highlands: Humanities Press, 1997.
- Bowie, Andrew. Aesthetics and Subjectivity: From Kant to Nietzsche (2nd Edition). Manchester: Manchester University Press, 2000.
- Bowie, Andrew. Schelling and Modern European Philosophy. London: Routledge, 1993.
- Cassirer, Ernst. Kant’s Life and Thought, trans. James Haden. New Haven: Yale University Press, 1981.
- Croce, Benedetto. What is Living and What is Dead in the Philosophy of Hegel, trans. Douglas Ainslie. New York: Russell & Russell, 1969.
- Di Giovanni, George, ed. Essays on Hegel’s Logic. Albany: State University of New York Press, 1990.
- Findlay, J.N. Hegel: A Re-examination. London: George Allen and Unwin, 1958.
- Forster, Michael. Hegel’s Idea of a Phenomenology of Spirit. Chicago: University of Chicago Press, 1998.
- Forster, Michael. Hegel and Skepticism. Cambridge, MA: Harvard University Press, 1989.
- Guyer, Paul, ed. The Cambridge Companion to Kant. Cambridge: Cambridge University Press, 1992.
- Hammer, Espen, ed. German Idealism: Contemporary Perspectives. London: Routledge, 2007.
- Harris, H.S. Hegel’s Development: Night Thoughts. Oxford: Oxford University Press, 1983.
- Harris, H.S. Hegel’s Development: Toward the Sunlight. Oxford: Oxford University Press, 1972.
- Henrich, Dieter. Between Kant and Hegel: Lectures on German Idealism, ed. David Pacini. Cambridge: Harvard University Press, 2003.
- Houlgate, Stephen, ed. Hegel and the Arts. Evanston: Northwestern University Press, 2007.
- Houlgate, Stephen. The Opening of Hegel’s Logic. West Lafayette: Purdue University Press, 2006.
- Houlgate, Stephen, ed. Hegel and the Philosophy of Nature. Albany: State University of New York Press, 1998.
- Hyppolite, Jean. Genesis and Structure of the Phenomenology of Spirit, trans. S. Cherniak and R. Heckmann. Evanston, IL: Northwestern University Press, 1974.
- Inwood, Michael. Hegel. London: Routledge, 1983.
- Kojève, Alexandre. Introduction to the Reading of Hegel, trans. J. H. Nichols. New York: Basic Books, 1960.
- Kuehn, Manfred. Kant: A Life. Cambridge: Cambridge University Press, 2000.
- Longuenesse, Béatrice. Hegel’s Critique of Metaphysics. Cambridge: Cambridge University Press, 2007.
- Martin, Wayne. Idealism and Objectivity: Understanding Fichte’s Jena Project. Stanford: Stanford University Press, 1997.
- Neuhouser, Frederick. Fichte’s Theory of Subjectivity. Cambridge: Cambridge University Press, 1990.
- D’Hondt, Jacques. Hegel in His Time, trans. John Burbidge. Peterborough: Broadview Press, 1988.
- Pinkard, Terry. German Philosophy 1760-1860: The Legacy of Idealism. Cambridge: Cambridge University Press, 2002.
- Pinkard, Terry. Hegel: A Biography. Cambridge: Cambridge University Press, 2000.
- Pinkard, Terry. Hegel’s Phenomenology: The Sociality of Reason. Cambridge: Cambridge University Press, 1994.
- Pippin, Robert. Hegel on Self-Consciousness: Desire and Death in the Phenomenology of Spirit. Princeton: Princeton University Press, 2010.
- Pippin, Robert. Hegel’s Practical Philosophy: Rational Agency as Ethical Life. Cambridge: Cambridge University Press, 2008.
- Pippin, Robert. Hegel’s Idealism: The Satisfactions of Self-Consciousness. Cambridge: Cambridge University Press, 1989.
- Priest, Stephen, ed. Hegel’s Critique of Kant. Oxford: Oxford University Press, 1987.
- Redding, Paul. Analytic Philosophy and the Return to Hegelian Thought. Cambridge: Cambridge University Press, 2010.
- Ritter, Joachim. Hegel and the French Revolution. Cambridge: MIT Press, 1982.
- Rockmore, Tom. Before and After Hegel: A Historical Introduction to Hegel’s Thought. Berkeley: University of California Press, 1993.
- Sedgwick, Sally, ed. The Reception of Kant’s Critical Philosophy: Fichte, Schelling, and Hegel. Cambridge: Cambridge University Press, 2000.
- Snow, Dale. Schelling and the End of Idealism. Albany: State University of New York Press, 1996.
- Solomon, Robert C. and Higgins, Kathleen M., eds. The Age of German Idealism. London: Routledge, 1993.
- Stern, Robert. Hegelian Metaphysics. Oxford: Oxford University Press, 2009.
- Taylor, Charles. Hegel. Cambridge: Cambridge University Press, 1975.
- Westphal, Kenneth. Hegel’s Epistemological Realism: A Study of the Aim and Method of Hegel’s Phenomenology of Spirit. Dordrecht: Kluwer, 1989.
- White, Allen. Schelling: Introduction to the System of Freedom. New Haven: Yale University Press, 1983.
- Wirth, Jason M., ed. Schelling Now: Contemporary Readings. Bloomington: Indiana University Press, 2004.
- Wood, Allen. Kant’s Ethical Thought. Cambridge: Cambridge University Press, 1999.
- Wood, Allen. Hegel’s Ethical Thought. Cambridge: Cambridge University Press, 1990.
- Zöller, Günter. Fichte’s Transcendental Philosophy: The Original Duplicity of Intelligence and Will. Cambridge: Cambridge University Press, 1998.
University of Tennessee Knoxville, U.S.A.
http://www.iep.utm.edu/germidea/
On a previous episode of The MythBusters, Adam and Jamie made a lead balloon float. I was impressed. Anyway, I decided to give a more detailed explanation of how this happens. Using the thickness of foil they had, what is the smallest balloon that would float? If the one they created were filled all the way, how much could it lift? First, how does stuff float at all? There are many levels at which this question could be answered. I could go all the way down to the nature of pressure, but maybe I will save that for another day. So, let me just start with pressure. The reason a balloon floats is that the air pressure (from the air outside the balloon) is greater on the bottom of the balloon than on the top. This pressure differential creates a force pushing up that can cause the balloon to float. Why is the pressure greater on the bottom? Think of air as a whole bunch of small particles (which it basically is). These particles have two interactions: they are interacting with other gas particles and they are being pulled down by the Earth’s gravity. All the particles would like to fall down to the surface of the Earth, but the more particles that are near the surface, the more collisions they will have that will push them back up. Instead of me explaining this any more, the best thing for you to do is look at a great simulator (that I did not make). When you run the simulator (a Java applet) you will need to add some gas in the chamber by moving the handle on the pump. When you do, you will see that there are many more gas particles at the bottom of the container than at the top. If you look at the balloon inside the chamber, there will be more particles hitting the balloon from the bottom than from the top. Since there are more collisions on the bottom, this creates a total force from the collisions pushing the balloon up. How would one calculate how much this force is? Well, the simplest and sneakiest way is the following: suppose I did not have a balloon there at all, but there was just more air. What would that air do? It would just float there. Here is a force diagram for some of that air: just gravity pulling down and the force from the collisions (also called the buoyancy force) pushing up. So, the forces have to be the same. If these forces were not the same, this section of air would accelerate up or down. Yes, the density of this air is not constant, but that doesn’t matter. Thus (I like saying thus) the buoyancy force must be equal to the weight of this air. Now put a balloon (or any object – like a block of pudding) in that same space. The gas around it will still have the same collisions, resulting in the same buoyancy force. This is where Archimedes’ principle comes from, which says “The buoyancy force is equal to the weight of the fluid (or air) displaced.” This principle can be written as the following formula: F_buoyancy = ρgV, where ρ is the density of the stuff the object is in (in this case it would be air), g is the local gravitational constant (that turns mass into weight), and V is the volume of the object. Here is the data from the MythBusters’ balloon. I wrote down the dimensions of the huge (ginormous) balloon from the last episode. Here is what I have to start with:
- mass of lead used = 11 kg
- surface area of lead used = 640 ft² = 59.5 m² (from Google calculator – just type “640 ft^2 in m^2”)
- Also, they say it will have 30 kg of lift (which isn’t technically a proper thing to say, but if I take this to mean 30 kg × 9.8 N/kg = 294 Newtons – then ok)
- They also claim the balloon will be a 10 ft by 10 ft by 10 ft cube.
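Before getting into the foil, here is a quick numerical check of the buoyancy formula for the claimed 10 ft cube. This is my own sketch, not part of the original post; the air density (1.3 kg/m³) and g (9.8 N/kg) are the same values used later in the calculation, and the cubic-feet conversion factor is the standard one.

# Quick check of F_buoyancy = rho * g * V for the claimed 10 ft x 10 ft x 10 ft cube.
FT3_TO_M3 = 0.0283168          # cubic feet to cubic meters
rho_air = 1.3                  # kg/m^3, approximate air density used in this post
g = 9.8                        # N/kg

V = 1000 * FT3_TO_M3           # 1000 ft^3 is about 28.3 m^3
F_buoy = rho_air * g * V       # buoyancy force in newtons
print(f"V = {V:.1f} m^3, buoyant force = {F_buoy:.0f} N (~{F_buoy / g:.0f} kg of 'lift')")

Running this gives about 361 N, which matches the "weight of 36 kg" figure that comes up below.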
If that was the case, it would have a surface area of 10 × 10 × 6 = 600 ft². I guess the extra 40 square feet is from overlapping material. How thick is the foil? The density of lead is 11,340 kg/m³. Here they have a thin rectangular solid of area A and thickness t, such that it has a volume of V = A·t. I know the area already. The volume can be found from the mass (and the fact that it is lead). Density is defined as mass/volume, so V = m/ρ. This would mean that the thickness would be t = V/A = m/(ρ·A) = (11 kg)/((11,340 kg/m³)(59.5 m²)) ≈ 1.6 × 10⁻⁵ m, or about 0.016 mm. That’s pretty thin. This is thin even compared to aluminum foil. (According to Wikipedia, the source of truthiness, aluminum foil typically ranges from 0.2 mm to 0.006 mm. Of course aluminum is stronger than lead.) How much could their balloon have lifted? If they filled their balloon with pure helium (which they didn’t), how much would it lift? Well, there are essentially two forces acting on it: the buoyancy force and the weight of the stuff. In this case the stuff is the helium and the lead. (Just as a side note: the helium doesn’t make it float. The purpose of the helium is to keep the walls of the balloon from collapsing. If you could make a material strong enough that it wouldn’t collapse (and be light enough) you could make it float with nothing inside. If you used some other gas to fill it up (like argon), it would just add too much weight.) For the MythBusters’ balloon, the lead weighs 11 kg. There is 1000 cubic feet of helium (10 × 10 × 10). 1000 cubic feet is 28.3 m³. The density of helium (He) is 0.1786 kg/m³. So the mass of the helium is m_He = ρ_He·V = (0.1786 kg/m³)(28.3 m³) ≈ 5.1 kg. This would make a weight (force) of W_He = m_He·g ≈ (5.1 kg)(9.8 N/kg) ≈ 50 N. I also need to include the weight of the lead: W_lead = (11 kg)(9.8 N/kg) ≈ 108 N. And now, the buoyancy force (the density of air is 1.3 kg/m³): F_buoyancy = ρ_air·g·V = (1.3 kg/m³)(9.8 N/kg)(28.3 m³) ≈ 361 N. Compare this to the claim from the MythBusters that it would have 30 kg of lift (361 Newtons on the surface of the Earth could be the weight of 36 kg – of course I rounded in some areas). Thus the MBs (MythBusters) were only talking about the lift of the shape, not the amount the object could lift. The total force on this lead balloon would be F_net = F_buoyancy − W_He − W_lead ≈ 361 N − 50 N − 108 N ≈ 203 N. So, you could add another 45 lbs of weight and it would still float. This is assuming it was filled with helium (they used a mixture) AND that it was filled all the way (which they didn’t). The lead foil would probably tear if they filled it all the way up. How small could they have made the balloon? Clearly their balloon was huge. Their first attempt at a balloon was much smaller, but it did not float. The MythBusters showed a quick picture of why they had to make it bigger. Basically, the weight of the lead is proportional to the surface area (since it is a constant thickness). The buoyancy force is proportional to the volume. So, if you make a cube twice as wide, what happens? Here is a generic cube: it has sides of length d. The volume of this cube will be V = (d)(d)(d) = d³. The surface area of this cube (a cube has 6 sides) is SA = 6(d)(d) = 6d². So, if I look at the ratio of volume to surface area, I have V/SA = d³/(6d²) = d/6. The key point is that if I double the length of the side of the cube, I increase the volume (and lift) by a factor of (2)(2)(2) = 8. I increase the mass of the lead by (2)(2) = 4. So, I gain lifting ability. (Well, the balloon does.) What would be the smallest size balloon (cube) that one could make with that thickness of foil and have it float? Let me start with a cube of dimension d and calculate the lift. The point is to make the net force (weight of helium, plus weight of lead, plus buoyancy force) equal to zero. Here is the weight of the lead: W_lead = ρ_lead(6d²t)g. Note that the volume of the foil is 6d²t, where t is the thickness of the foil.
And the weight of the helium: W_He = ρ_He·d³·g. And the buoyancy force: F_buoyancy = ρ_air·d³·g. This makes the total force (remember the buoyancy is pushing up and the two weights are pushing down): F_net = ρ_air·d³·g − ρ_He·d³·g − ρ_lead(6d²t)g. Now, I simply need to set this total force to zero Newtons and solve for d: d = 6ρ_lead·t/(ρ_air − ρ_He) = 6(11,340 kg/m³)(1.6 × 10⁻⁵ m)/(1.3 kg/m³ − 0.1786 kg/m³) ≈ 1 m. I neglected to take into account the mass of the tape to hold the foil sheets together. So, if the MythBusters made a cube-shaped balloon that was about 1 meter on each side, it should float. Of course the ginormous balloon they built was totally awesome and what makes the MythBusters the MythBusters. My hat’s off to you, Adam and Jamie.
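To make all of the arithmetic above easy to re-check, here is a short Python sketch of my own (not from the original post) that reproduces the three estimates: the foil thickness, the leftover lift of the fully inflated 10 ft cube, and the smallest cube that would still float. The density values are the ones used in the post, and g is taken as 9.8 N/kg.

# Re-checking the lead-balloon numbers worked out above (a sketch, not the show's or the post's code).
rho_lead = 11340.0    # kg/m^3, density of lead
rho_he = 0.1786       # kg/m^3, density of helium
rho_air = 1.3         # kg/m^3, density of air
g = 9.8               # N/kg

m_lead = 11.0         # kg of lead foil used
A_foil = 59.5         # m^2 of foil (640 ft^2)
V_cube = 28.3         # m^3, volume of the 10 ft x 10 ft x 10 ft cube

# Foil thickness from V_foil = A * t and rho = m / V
t = m_lead / (rho_lead * A_foil)

# Net upward force on the fully inflated, helium-filled cube
F_buoy = rho_air * g * V_cube
W_he = rho_he * V_cube * g
W_lead = m_lead * g
F_net = F_buoy - W_he - W_lead

# Smallest cube of side d that still floats:
# rho_air*d^3*g = rho_he*d^3*g + rho_lead*(6*d^2*t)*g  =>  d = 6*rho_lead*t / (rho_air - rho_he)
d_min = 6 * rho_lead * t / (rho_air - rho_he)

print(f"foil thickness: {t * 1000:.3f} mm")
print(f"net lift of the 10 ft cube: {F_net:.0f} N (about {F_net / 4.448:.0f} lbs)")
print(f"smallest floating cube: about {d_min:.2f} m on a side")

The printed values (roughly 0.016 mm, 200 N or 45 lbs of spare lift, and a cube just under 1 m on a side) agree with the estimates above.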
http://scienceblogs.com/dotphysics/2009/12/26/rp-5-mythbusters-how-small-cou/
Vectors and matrices can be added, subtracted, multiplied, and divided; see Basic Arithmetic. The | [vconcat] command “concatenates” two vectors into one. For example, after [ 1 , 2 ] [ 3 , 4 ] |, the stack will contain the single vector ‘[1, 2, 3, 4]’. If the arguments are matrices, the rows of the first matrix are concatenated with the rows of the second. (In other words, two matrices are just two vectors of row-vectors as far as | is concerned.) If either argument to | is a scalar (a non-vector), it is treated like a one-element vector for purposes of concatenation: 1 [ 2 , 3 ] | produces the vector ‘[1, 2, 3]’. Likewise, if one argument is a matrix and the other is a plain vector, the vector is treated as a one-row matrix. The H | [append] command concatenates two vectors without any special cases. Both inputs must be vectors. Whether or not they are matrices is not taken into account. If either argument is a scalar, the append function is left in symbolic form. The I | and H I | commands are similar, but they use their two stack arguments in the opposite order. Thus I | is equivalent to <TAB> |, but possibly more convenient and also a bit faster. The v d [diag] function builds a diagonal square matrix. The optional numeric prefix gives the number of rows and columns in the matrix. If the value at the top of the stack is a vector, the elements of the vector are used as the diagonal elements; the prefix, if specified, must match the size of the vector. If the value on the stack is a scalar, it is used for each element on the diagonal, and the prefix argument is required. To build a constant square matrix, e.g., a 3x3 matrix filled with ones, use 0 M-3 v d 1 +, i.e., build a zero matrix first and then add a constant value to that matrix. (Another alternative would be to use v b and v a; see below.) The v i [idn] function builds an identity matrix of the specified size. It is a convenient form of v d where the diagonal element is always one. If no prefix argument is given, this command prompts for one. In algebraic notation, ‘idn(a,n)’ acts much like ‘diag(a,n)’, except that ‘a’ is required to be a scalar (non-vector) quantity. If ‘n’ is omitted, ‘idn(a)’ represents ‘a’ times an identity matrix of unknown size. Calc can operate algebraically on such generic identity matrices, and if one is combined with a matrix whose size is known, it is converted automatically to an identity matrix of a suitable matching size. The v i command with an argument of zero creates a generic identity matrix, ‘idn(1)’. Note that in dimensioned Matrix mode (see Matrix Mode), generic identity matrices are immediately expanded to the current default dimensions. The v x [index] function builds a vector of consecutive integers from 1 to n, where n is the numeric prefix argument. If you do not provide a prefix argument, you will be prompted to enter a suitable number. If n is negative, the result is a vector of negative integers from n to -1. With a prefix argument of just C-u, the v x command takes three values from the stack: n, start, and incr (with incr at top-of-stack). Counting starts at start and increases by incr for successive vector elements. If start or n is in floating-point format, the resulting vector elements will also be floats. Note that start and incr may in fact be any kind of numbers or formulas. When start and incr are specified, a negative n has a different interpretation: It causes a geometric instead of arithmetic sequence to be generated.
For example, ‘index(-3, a, b)’ produces ‘[a, a b, a b^2]’. If you omit incr in the algebraic form, ‘index(n, start)’, the default value for incr is one for positive n or two for negative n. The v b [cvec] function builds a vector of n copies of the value on the top of the stack, where n is the numeric prefix argument. In algebraic formulas, ‘cvec(x,n,m)’ can also be used to build an n-by-m matrix of copies of x. (Interactively, just use v b twice: once to build a row, then again to build a matrix of copies of that row.) The v h [head] function returns the first element of a vector. The I v h [tail] function returns the vector with its first element removed. In both cases, the argument must be a non-empty vector. The v k [cons] function takes a value h and a vector t from the stack, and produces the vector whose head is h and whose tail is t. This is similar to |, except that if h is itself a vector, | will concatenate the two vectors, whereas cons will insert h at the front of the vector t. Each of these three functions also accepts the Hyperbolic flag [rcons], in which case t instead represents the last single element of the vector, with h representing the remainder of the vector. Thus the vector ‘[a, b, c, d] = cons(a, [b, c, d]) = rcons([a, b, c], d)’. Also, ‘head([a, b, c, d]) = a’, ‘tail([a, b, c, d]) = [b, c, d]’, ‘rhead([a, b, c, d]) = [a, b, c]’, and ‘rtail([a, b, c, d]) = d’.
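Calc’s own notation is the keystroke and algebraic syntax shown above. Purely as an illustration of the semantics just described (and not as Calc code), here is a small Python sketch of how index, cons, and rcons behave, including the geometric-sequence case for a negative count; the function names simply mirror the algebraic names used in the text.

# Python illustration (not Calc code) of the vector-building behaviour described above.
def index(n, start=None, incr=None):
    """Mimic 'index': 1..n (or n..-1) with no start; an arithmetic sequence for
    positive n; a geometric sequence when n is negative and start is given."""
    if start is None:
        return list(range(1, n + 1)) if n >= 0 else list(range(n, 0))
    if incr is None:
        incr = 1 if n >= 0 else 2          # the documented defaults
    if n >= 0:
        return [start + i * incr for i in range(n)]
    return [start * incr ** i for i in range(-n)]

def cons(h, t):
    return [h] + list(t)                   # h becomes the first element, even if h is a vector

def rcons(h, t):
    return list(h) + [t]                   # t becomes the last single element

print(index(4))            # [1, 2, 3, 4]
print(index(-3))           # [-3, -2, -1]
print(index(-3, 2, 3))     # [2, 6, 18], like index(-3, a, b) = [a, a b, a b^2]
print(cons(1, [2, 3]))     # [1, 2, 3]
print(rcons([1, 2], 3))    # [1, 2, 3]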
http://www.gnu.org/software/emacs/manual/html_node/calc/Building-Vectors.html
History of radar The history of radar starts with experiments by Heinrich Hertz in the late 19th century that showed that radio waves were reflected by metallic objects. This possibility was suggested in James Clerk Maxwell's seminal work on electromagnetism. However, it was not until the early 20th century that systems able to use these principles became widely available, and it was the German inventor Christian Hülsmeyer who first used them to build a simple ship detection device intended to help avoid collisions in fog (Reichspatent Nr. 165546). Numerous similar systems were developed over the next two decades. The term RADAR was coined in 1940 by the United States Navy as an acronym for radio detection and ranging; this was a cover for the highly secret technology. Thus, a true radar system must both detect and provide range (distance) information for a target. Before 1934, no single system gave this performance; some systems were omni-directional and provided ranging information, while others provided rough directional information but not range. A key development was the use of pulses that were timed to provide ranging, which were sent from large antennas that provided accurate directional information. Combining the two allowed for accurate plotting of targets. In the 1934–1939 period, eight nations developed, independently and in great secrecy, systems of this type: the United States, Great Britain, Germany, the USSR, Japan, the Netherlands, France, and Italy. In addition, Great Britain had shared its basic information with four Commonwealth countries: Australia, Canada, New Zealand, and South Africa, and these countries had also developed indigenous radar systems. During the war, Hungary was added to this list. Progress during the war was rapid and of great importance, probably one of the decisive factors for the victory of the Allies. By the end of hostilities, the United States, Great Britain, Germany, the USSR, and Japan had a wide diversity of land- and sea-based radars as well as small airborne systems. After the war, radar use was widened to numerous fields including: civil aviation, marine navigation, radar guns for police, meteorology and even medicine. The place of radar in the larger story of science and technology is argued differently by different authors. On the one hand, radar contributed very little to theory, which was largely known since the days of Maxwell and Hertz. Therefore radar did not advance science, but was simply a matter of technology and engineering. Maurice Ponte, one of the developers of radar in France, states: The fundamental principle of the radar belongs to the common patrimony of the physicists: after all, what is left to the real credit of the technicians is measured by the effective realisation of operational materials. But others point out the immense practical consequences of the development of radar. Far more than the atomic bomb, radar contributed to Allied victory in World War II. Robert Buderi states that it was also the precursor of much modern technology. From a review of his book: ... radar has been the root of a wide range of achievements since the war, producing a veritable family tree of modern technologies.
Because of radar, astronomers can map the contours of far-off planets, physicians can see images of internal organs, meteorologists can measure rain falling in distant places, air travel is hundreds of times safer than travel by road, long-distance telephone calls are cheaper than postage, computers have become ubiquitous and ordinary people can cook their daily dinners in the time between sitcoms, with what used to be called a radar range. Heinrich Hertz In 1887 the German physicist Heinrich Hertz (1857–1894) began experimenting with electromagnetic waves in his laboratory. He found that these waves could be transmitted through different types of materials, and were reflected by others, such as conductors and dielectrics. The existence of electromagnetic waves was predicted earlier by the Scottish physicist James Clerk Maxwell (1831–79), but it was Hertz who first succeeded in generating and detecting what were soon called radio waves. Guglielmo Marconi The development of the wireless or radio is often attributed to Guglielmo Marconi (1874–1937). Although he was not the first to "invent" this technology, it might be said that he was the greatest early promoter of practical radio systems and their applications. In a paper read before the Institution of Electrical Engineers in London on March 3, 1899, Marconi described radio beacon experiments he had conducted in Salisbury Plain. Concerning this lecture, in a 1922 paper he wrote: I also described tests carried out in transmitting a beam of reflected waves across country ... and pointed out the possibility of the utility of such a system if applied to lighthouses and lightships, so as to enable vessels in foggy weather to locate dangerous points around the coasts ... It [now] seems to me that it should be possible to design [an] apparatus by means of which a ship could radiate or project a divergent beam of these rays in any desired direction, which rays, if coming across a metallic object, such as another steamer or ship, would be reflected back to a receiver screened from the local transmitter on the sending ship, and thereby immediately reveal the presence and bearing of the other ship in fog or thick weather. This paper and a speech presenting the paper to a joint meeting of the Institute of Radio Engineers and the American Institute of Electrical Engineers in New York City on June 20, 1922, is often cited as the seminal event which began widespread interest in the development of radar. Christian Hülsmeyer In 1904 Christian Hülsmeyer (1881–1957) gave public demonstrations in Germany and the Netherlands of the use of radio echoes to detect ships so that collisions could be avoided. His device consisted of a simple spark gap used to generate a signal that was aimed using a dipole antenna with a cylindrical parabolic reflector. When a signal reflected from a ship was picked up by a similar antenna attached to the separate coherer receiver, a bell sounded. During bad weather or fog, the device would be periodically "spun" to check for nearby ships. The apparatus detected presence of ships up to 3 km, and Hülsmeyer planned to extend its capability to 10 km. It did not provide range (distance) information, only warning of a nearby object. He patented the device, called the telemobiloscope, but due to lack of interest by the naval authorities the invention was not put into production. Hülsmeyer also received a patent amendment for estimating the range to the ship. 
Using a vertical scan of the horizon with the telemobiloscope mounted on a tower, the operator would find the angle at which the return was the most intense and deduce, by simple triangulation, the approximate distance. This is in contrast to the later development of pulsed radar, which determines distance directly. Nikola Tesla One of the hundreds of concepts generated by Nikola Tesla (1856–1943) included principles regarding frequency and power levels for primitive radio-location units. In an interview published in Century Illustrated Magazine, June 1900, Tesla gave the following: For instance, by their [standing electromagnetic waves] use we may produce at will, from a sending station, an electrical effect in any particular region of the globe; [with which] we may determine the relative position or course of a moving object, such as a vessel at sea, the distance traversed by the same, or its speed. In 1917, at the height of World War I, Tesla proposed that radio location techniques might help find submerged submarines with a fluorescent screen indicator. While radar would eventually be capable of detecting submarines on the surface, the required radio frequencies are quickly attenuated in water, making this technique ineffective for detecting submerged submarines. United States In the United States, both the Navy and Army needed means of remotely locating enemy ships and aircraft. In 1930, both services initiated the development of radio equipment that could meet this need. There was little coordination of these efforts; thus, they will be described separately. In the autumn of 1922, Albert H. Taylor and Leo C. Young at the U.S. Naval Aircraft Radio Laboratory were conducting communication experiments when they noticed that a wooden ship in the Potomac River was interfering with their signals. They prepared a memorandum suggesting that this might be used for ship detection in a harbor defense, but their suggestion was not taken up. In 1930, Lawrence A. Hyland working with Taylor and Young, now at the U.S. Naval Research Laboratory (NRL) in Washington, D.C., used a similar arrangement of radio equipment to detect a passing aircraft. This led to a proposal and patent for using this technique for detecting ships and aircraft. A simple wave-interference apparatus can detect the presence of an object, but it cannot determine its location or velocity. That had to await the invention of pulsed radar, and later, additional encoding techniques to extract this information from a CW signal. When Taylor's group at the NRL were unsuccessful in getting interference radio accepted as a detection means, Young suggested trying pulsing techniques. This would also allow the direct determination of range to the target. In 1924, Hyland and Young had built such a transmitter for Gregory Breit and Merle A. Tuve at the Carnegie Institution of Washington for successfully measuring the height of the ionosphere. Robert Morris Page was assigned by Taylor to implement Young's suggestion. Page designed a transmitter operating at 60 MHz and pulsed 10 μs in duration and 90 μs between pulses. In December 1934, the apparatus was used to detect a plane at a distance of one mile (1.6 km) flying up and down the Potomac. Although the detection range was small and the indications on the oscilloscope monitor were almost indistinct, it demonstrated the basic concept of a pulsed radar system. Based on this, Page, Taylor, and Young are usually credited with building and demonstrating the world’s first true radar. 
An important subsequent development by Page was the duplexer, a device that allowed the transmitter and receiver to use the same antenna without overwhelming or destroying the sensitive receiver circuitry. This also solved the problem associated with synchronization of separate transmitter and receiver antennas which is critical to accurate position determination of long-range targets. The experiments with pulsed radar were continued, primarily in improving the receiver for handling the short pulses. In June 1936, the NRL's first prototype radar system, now operating at 28.6 MHz, was demonstrated to government officials, successfully tracking an aircraft at distances up to 25 miles (40 km). Their radar was based on low frequency signals, at least by today's standards, and thus required large antennas, making it impractical for ship or aircraft mounting. Antenna size is inversely proportional to the operating frequency; therefore, the operating frequency of the system was increased to 200 MHz, allowing much smaller antennas. The frequency of 200 MHz was the highest possible with existing transmitter tubes and other components. The new system was successfully tested at the NRL in April 1937. That same month, the first sea-borne testing was conducted. The equipment was temporarily installed on the USS Leary, with a Yagi antenna mounted on a gun barrel for sweeping the field of view. Based on success of the sea trials, the NRL further improved the system. Page developed the ring oscillator, allowing multiple output tubes and increasing the pulse-power to 15 kW in 5-µs pulses. A 20-by-23 ft (6 x 7 m), stacked-dipole “bedspring” antenna was used. In laboratory tests during 1938, the system, now designated XAF, detected planes at ranges up to 100 miles (160 km). It was installed on the battleship USS New York for sea trials starting in January 1939, and became the first operational radio detection and ranging set in the U.S. fleet. In May 1939, a contract was awarded to RCA for production. Designated CXAM, deliveries started in May 1940. The acronym RADAR was coined from "Radio Detection And Ranging" as a cover reference to this highly secret technology. One of the first CXAM systems was placed aboard the USS California, a battleship that was sunk in the Japanese attack on Pearl Harbor on December 7, 1941. United States Army As the Great Depression started, economic conditions led the U.S. Army Signal Corps to consolidate its widespread laboratory operations to Fort Monmouth, New Jersey. On June 30, 1930, these were designated the Signal Corps Laboratories (SCL) and Lt. Colonel (Dr.) William R. Blair was appointed the SCL Director. Among other activities, the SCL was made responsible for research in the detection of aircraft by acoustical and infrared radiation means. Blair had performed his doctoral research in the interaction of electromagnetic waves with solid materials, and naturally gave attention to this type of detection. Initially, attempts were made to detect infrared radiation, either from the heat of aircraft engines or as reflected from large searchlights with infrared filters, as well as from radio signals generated by the engine ignition. Some success was made in the infrared detection, but little was accomplished using radio. In 1932, progress at the Naval Research Laboratory (NRL) on radio interference for aircraft detection was passed on to the Army.
While it does not appear that any of this information was used by Blair, the SCL did undertake a systematic survey of what was then known throughout the world about the methods of generating, modulating, and detecting radio signals in the microwave region. The SCL's first definitive efforts in radio-based target detection started in 1934 when the Chief of the Army Signal Corps, after seeing a microwave demonstration by RCA, suggested that radio-echo techniques be investigated. The SCL called this technique radio position-finding (RPF). Based on the previous investigations, the SCL first tried microwaves. During 1934 and 1935, tests of microwave RPF equipment resulted in Doppler-shifted signals being obtained, initially at a distance of only a few hundred feet and later at greater than a mile. These tests involved a bi-static arrangement, with the transmitter at one end of the signal path and the receiver at the other, and the reflecting target passing through or near the path. Blair was evidently not aware of the success of a pulsed system at the NRL in December 1934. In an internal 1935 note, Blair had commented: "Consideration is now being given to the scheme of projecting an interrupted sequence of trains of oscillations against the target and attempting to detect the echoes during the interstices between the projections." In 1936, W. Delmar Hershberger, SCL's Chief Engineer at that time, started a modest project in pulsed microwave transmission. Lacking success with microwaves, Hershberger visited the NRL (where he had earlier worked) and saw a demonstration of their pulsed set. Back at the SCL, he and Robert H. Noyes built an experimental apparatus using a 75 watt, 110 MHz (2.73 m) transmitter with pulse modulation and a receiver patterned on the one at the NRL. A request for project funding was turned down by the War Department, but $75,000 for support was diverted from a previous appropriation for a communication project. In October 1936, Paul E. Watson became the SCL Chief Engineer and led the project. A field setup near the coast was made with the transmitter and receiver separated by a mile. On December 14, 1936, the experimental set detected aircraft flying in and out of New York City at ranges up to 7 mi (11 km). Work then began on a prototype system. Ralph I. Cole headed receiver work and William S. Marks led transmitter improvements. Separate receivers and antennas were used for azimuth and elevation detection. Both the receiving and transmitting antennas used large arrays of dipole wires on wooden frames. The system output was intended to aim a searchlight. The first demonstration of the full set was made on the night of May 26, 1937. A bomber was detected and then illuminated by the searchlight. The observers included the Secretary of War, Henry A. Woodring; he was so impressed that the next day orders were given for the full development of the system. Congress gave an appropriation of $250,000. The frequency was increased to 200 MHz (1.5 m). The transmitter used 16 tubes in a ring oscillator circuit (developed at the NRL), producing about 75 kW peak power. Major James C. Moore was assigned to head the complex electrical and mechanical design of lobe-switching antennas. Engineers from Western Electric and Westinghouse were brought in to assist in the overall development. Designated SCR-268, a prototype was successfully demonstrated in late 1938 at Fort Monroe, Virginia. Production of SCR-268 sets was started by Western Electric in 1939, and it entered service in early 1941.
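Lobe switching, mentioned above in connection with the SCR-268 antenna design (and encountered again below in German and British sets), obtains a fine bearing estimate by comparing echo strength in two beams squinted slightly either side of the antenna axis; the antenna is steered until the two returns balance. The sketch below is a generic illustration under an assumed Gaussian beam shape with invented beamwidth and squint values; it is not a model of the SCR-268 itself.

```python
import math

# Assumed, illustrative antenna parameters (not SCR-268 values).
BEAMWIDTH_DEG = 10.0   # half-power beamwidth of each lobe
SQUINT_DEG = 3.0       # offset of each lobe from the antenna axis

def lobe_gain(offset_deg: float) -> float:
    """Idealized one-way beam pattern: Gaussian, falling to 0.5 at half the beamwidth."""
    return math.exp(-2.773 * (offset_deg / BEAMWIDTH_DEG) ** 2)

def steering_error(target_deg: float, antenna_deg: float) -> float:
    """Normalized difference of the two lobes' returns.
    Positive means the target lies toward the 'plus' lobe, so steer that way."""
    plus = lobe_gain(target_deg - (antenna_deg + SQUINT_DEG))
    minus = lobe_gain(target_deg - (antenna_deg - SQUINT_DEG))
    return (plus - minus) / (plus + minus)

# A target 2 degrees off the antenna axis gives a clearly signed error;
# the error vanishes when the antenna points straight at the target.
print(steering_error(target_deg=2.0, antenna_deg=0.0))   # positive
print(steering_error(target_deg=0.0, antenna_deg=0.0))   # ~0.0
```

Because the comparison is made between two nearly equal signals, the bearing can be read far more finely than the width of either lobe alone, which is why the technique recurs in several of the systems described in this article.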
Even before the SCR-268 entered service, it had been greatly improved. In a project led by Major (Dr.) Harold A. Zahl, two new configurations evolved – the SCR-270 (mobile) and the SCR-271 (fixed-site). Operation at 106 MHz (2.83 m) was selected, and a single water-cooled tube provided 8 kW (100 kW pulsed) output power. Westinghouse received a production contract, and started deliveries near the end of 1940. The Army deployed five of the first SCR-270 sets around the island of Oahu in Hawaii. At 7:02 on the morning of December 7, 1941, one of these radars detected a flight of aircraft at a range of 136 miles (219 km) due north. The observation was passed on to an aircraft warning center where it was misidentified as a flight of U.S. bombers known to be approaching from the mainland. The alarm went unheeded, and at 7:48, the Japanese aircraft first struck at Pearl Harbor. United Kingdom In 1915, Robert Watson Watt joined the Meteorological Office as a meteorologist, working at an outstation at Aldershot in Hampshire. Over the next 20 years, he studied atmospheric phenomena and developed the use of radio signals generated by lightning strikes to map out the position of thunderstorms. The difficulty in pinpointing the direction of these fleeting signals led to the use of rotatable directional antennas, and in 1923 the use of oscilloscopes in order to display the signals. An operator would periodically rotate the antenna and look for "spikes" on the oscilloscope to find the direction of a storm. The operation eventually moved to the outskirts of Slough in Berkshire, and in 1927 formed the Radio Research Station (RRS), Slough, an entity under the Department of Scientific and Industrial Research (DSIR). Watson Watt was appointed the RRS Superintendent. As war clouds gathered over Great Britain, the likelihood of air raids and the threat of invasion by air and sea drove a major effort in applying science and technology to defence. In November 1934, the Air Ministry established the Committee for Scientific Survey of Air Defence (CSSAD) with the official function of considering "how far recent advances in scientific and technical knowledge can be used to strengthen the present methods of defence against hostile aircraft." Commonly called the “Tizard Committee” after its Chairman, Sir Henry Tizard, this group had a profound influence on technical developments in Great Britain. H. E. Wimperis, Director of Scientific Research at the Air Ministry and a member of the Tizard Committee, had read about Nikola Tesla's claim of inventing a 'death ray.' Watson Watt, Superintendent of the RRS, Slough, was now well established as an authority in the field of radio, and in January 1935, Wimperis contacted him asking if radio might be used for such a device. After discussing this with his scientific assistant, Arnold F. 'Skip' Wilkins, Watson Watt wrote back that this was unlikely, but added the following comment: "Attention is being turned to the still difficult, but less unpromising, problem of radio detection and numerical considerations on the method of detection by reflected radio waves will be submitted when required." Over the following several weeks, Wilkins considered the radio detection problem. He outlined an approach and backed it with detailed calculations of necessary transmitter power, reflection characteristics of an aircraft, and needed receiver sensitivity. 
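The kind of power-budget estimate Wilkins carried out is summarized today by what became known as the radar range equation. The monostatic form below is given purely to illustrate the reasoning involved; it is a modern statement, not Wilkins's own formulation:

\[
P_r = \frac{P_t \, G_t \, G_r \, \lambda^{2} \, \sigma}{(4\pi)^{3} R^{4}}
\]

Here \(P_t\) is the transmitted power, \(G_t\) and \(G_r\) are the transmitting and receiving antenna gains, \(\lambda\) is the wavelength, \(\sigma\) is the radar cross-section of the target, and \(R\) is the range. Setting \(P_r\) equal to the smallest signal the receiver can detect and solving for \(R\) shows that the maximum detection range grows only as the fourth root of the transmitted power, which is why estimates of this kind pointed toward transmitters far more powerful than the communication sets then in use.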
Watson Watt sent this information to the Air Ministry on February 12, 1935, in a secret report titled "The Detection of Aircraft by Radio Methods." Reflection of radio signals was critical to the proposed technique, and the Air Ministry asked if this could be proven. To test this, Wilkins set up receiving equipment in a field near Upper Stowe, Northamptonshire. On February 26, 1935, a Handley Page Heyford bomber flew along a path between the receiving station and the transmitting towers of a BBC shortwave station in nearby Daventry. The aircraft reflected the 6 MHz (49 m) BBC signal, and this was readily detected by Doppler-beat interference at ranges up to 8 mi (13 km). This convincing test, known as the Daventry Experiment, was witnessed by a representative from the Air Ministry, and led to the immediate authorization to build a full demonstration system. Based on pulsed transmission as used for probing the ionosphere, a preliminary system was designed and built at the RRS by the team. Their existing transmitter had a peak power of about 1 kW, and Wilkins had estimated that 100 kW would be needed. Edward George Bowen was added to the team to design and build such a transmitter. Bowen's transmitter operated at 6 MHz (50 m), had a pulse-repetition rate of 25 Hz, a pulse width of 25 μs, and approached the desired power. Orfordness, a narrow, 19-mile (31 km) peninsula in Suffolk along the coast of the North Sea, was selected as the test site. Here the equipment would be openly operated in the guise of an ionospheric monitoring station. In mid-May 1935, the equipment was moved to Orfordness. Six wooden towers were erected, two for stringing the transmitting antenna, and four for the corners of crossed receiving antennas. In June, general testing of the equipment began. On June 17, the first target was detected—a Supermarine Scapa flying boat at 17 mi (27 km) range. It is historically correct that on June 17, 1935, radio-based detection and ranging was first demonstrated in Great Britain. Watson Watt, Wilkins, and Bowen are generally credited with initiating what would later be called radar in this nation. In December 1935, the British Treasury appropriated £60,000 for a five-station system called Chain Home (CH), covering approaches to the Thames Estuary. The secretary of the Tizard Committee, Albert Percival Rowe, coined the acronym RDF as a cover for the work, meaning Range and Direction Finding but suggesting the already well-known Radio Direction Finding. In 1940, John Randall and Harry Boot developed the cavity magnetron, which made ten-centimetre radar a reality. This device, the size of a small dinner plate, could be carried easily on aircraft, and the short wavelength meant that the antenna would also be small and hence suitable for mounting on aircraft. The short wavelength and high power made it very effective at spotting submarines from the air. Air Ministry In March 1936 the work at Orfordness was moved to Bawdsey Manor, nearby on the mainland. Until this time, the work had officially still been under the DSIR, but was now transferred to the Air Ministry. At the new Bawdsey Research Station, the CH equipment was assembled as a prototype. There were equipment problems when the Royal Air Force (RAF) first exercised the prototype station in September 1936. These were cleared by the next April, and the Air Ministry started plans for a larger network of stations.
Initial hardware at CH stations was as follows: The transmitter operated on four pre-selected frequencies between 20 and 55 MHz, adjustable within 15 seconds, and delivered a peak power of 200 kW. The pulse duration was adjustable between 5 and 25 μs, with a repetition rate selectable as either 25 or 50 Hz. For synchronization of all CH transmitters, the pulse generator was locked to the 50 Hz of the British power grid. Four 360-foot (110 m) steel towers supported transmitting antennas, and four 240-foot (73 m) wooden towers supported cross-dipole arrays at three different levels. A goniometer was used to improve the directional accuracy from the multiple receiving antennas. By the summer of 1937, 20 initial CH stations were in check-out operation. A major RAF exercise was performed before the end of the year, and was such a success that £10,000,000 was appropriated by the Treasury for an eventual full chain of coastal stations. At the start of 1938, the RAF took over control of all CH stations, and the network began regular operations. In May 1938, Rowe replaced Watson Watt as Superintendent at Bawdsey. In addition to the work on CH and successor systems, there was now major work in airborne RDF equipment. This was led by E. G. Bowen and centered on 200-MHz (1.5 m) sets. The higher frequency allowed smaller antennas, appropriate for aircraft installation. From the initiation of RDF work at Orfordness, the Air Ministry had kept the British Army and the Royal Navy generally informed; this led to both of these forces having their own RDF developments. British Army In 1931, at the Woolwich Research Station of the Army's Signals Experimental Establishment (SEE), W. A. S. Butement and P. E. Pollard had examined pulsed 600 MHz (50-cm) signals for detection of ships. Although they prepared a memorandum on this subject and performed preliminary experiments, for undefined reasons the War Office did not give it consideration. As the Air Ministry's work on RDF progressed, Colonel Peter Worlledge of the Royal Engineer and Signals Board met with Watson Watt and was briefed on the RDF equipment and techniques being developed at Orfordness. His report, "The Proposed Method of Aeroplane Detection and Its Prospects," led the SEE to set up an "Army Cell" at Bawdsey in October 1936. This was under E. Talbot Paris and the staff included Butement and Pollard. The Cell's work emphasized two general types of RDF equipment: gun-laying (GL) systems for assisting anti-aircraft guns and searchlights, and coastal-defense (CD) systems for directing coastal artillery and defense of Army bases overseas. Pollard led the first project, a gun-laying RDF code-named Mobile Radio Unit (MRU). This truck-mounted system was designed as a smaller version of a CH station. It operated at 23 MHz (13 m) with a power of 300 kW. A single 105-foot (32 m) tower supported a transmitting antenna, as well as two receiving antennas set orthogonally for estimating the signal bearing. In February 1937, a developmental unit detected an aircraft at 60 mi (96 km) range. The Air Ministry also adopted this system as a mobile auxiliary to the CH system. In early 1938, Butement started the development of a CD system based on Bowen's evolving 200-MHz (1.5-m) airborne sets. The transmitter had a 400 Hz pulse rate, a 2-μs pulse width, and 50 kW power (later increased to 150 kW). Although many of Bowen's transmitter and receiver components were used, the system would not be airborne, so there were no limitations on antenna size.
Primary credit for introducing beamed RDF systems in Great Britain must be given to Butement. For the CD, he developed a large dipole array, 10 feet (3.0 m) high and 24 feet (7.3 m) wide, giving much narrower beams and higher gain. This could be rotated at a speed up to 1.5 revolutions per minute. For greater directional accuracy, lobe switching on the receiving antennas was adopted. As a part of this development, he formulated the first – at least in Great Britain – mathematical relationship that would later become well known as the “radar range equation.” By May 1939, the CD RDF could detect aircraft flying as low as 500 feet (150 m) and at a range of 25 mi (40 km). With an antenna 60 feet (18 m) above sea level, it could determine the range of a 2,000-ton ship at 24 mi (39 km) and with an angular accuracy of as little as a quarter of a degree. Although the Royal Navy maintained close contact with the Air Ministry work at Bawdsey, they chose to establish their own RDF development at the Experimental Department of His Majesty’s Signal School (HMSS) in Portsmouth, Hampshire, on the south coast. The HMSS started RDF work in September 1935. Initial efforts, under R. F. Yeo, were in wavelengths ranging between 75 MHz (4 m) and 1.2 GHz (25 cm). All of the work was under the utmost secrecy; it could not even be discussed with other scientists and engineers at Portsmouth. A 75 MHz range-only set was eventually developed and designated Type 79X. Basic tests were done using a training ship, but the operation was unsatisfactory. In August 1937, the RDF development at the HMSS changed, with many of their best researchers brought into the activity. John D. S. Rawlinson was made responsible for improving the Type 79X. To increase the efficiency, he decreased the frequency to 43 MHz (7 m). Designated Type 79Y, it had separate, stationary transmitting and receiving antennas. Prototypes of the Type 79Y air-warning system were successfully tested at sea in early 1938. The detection range on aircraft was between 30 and 50 mi (48 and 80 km), depending on height. The systems were then placed into service in August on the cruiser HMS Sheffield and in October on the battleship HMS Rodney. These were the first vessels in the Royal Navy with RDF systems. A radio-based device for remotely indicating the presence of ships was built in Germany by Christian Hülsmeyer in 1904. Often referred to as the first radar system, this did not directly measure the range (distance) to the target, and thus did not meet the criteria to be given this name. Over the following three decades in Germany, a number of radio-based detection systems were developed but none were true radars. This situation changed before World War II. Developments in three leading industries are described. In the early 1930s, physicist Rudolf Kühnhold, Scientific Director at the Kriegsmarine (German Navy) Nachrichtenmittel-Versuchsanstalt (NVA—Experimental Institute of Communication Systems) in Kiel, was attempting to improve the acoustical methods of underwater detection of ships. He concluded that the desired accuracy in measuring distance to targets could be attained only by using pulsed electromagnetic waves. During 1933, Kühnhold first attempted to test this concept with a transmitting and receiving set that operated in the microwave region at 13.5 cm (2.22 GHz). The transmitter used a Barkhausen-Kurz tube (the first microwave generator) that produced only 0.1 watt. 
Unsuccessful with this, he asked for assistance from Paul-Günther Erbslöh and Hans-Karl Freiherr von Willisen, amateur radio operators who were developing a VHF system for communications. They enthusiastically agreed, and in January 1934, formed a company, Gesellschaft für Elektroakustische und Mechanische Apparate (GEMA), for the effort. From the start, the firm was always called simply GEMA. Work on a Funkmessgerät für Untersuchung (radio measuring device for reconnaissance) began in earnest at GEMA. Hans Hollmann and Theodor Schultes, both affiliated with the prestigious Heinrich Hertz Institute in Berlin, were added as consultants. The first apparatus used a split-anode magnetron purchased from Philips in the Netherlands. This provided about 70 W at 50 cm (600 MHz), but suffered from frequency instability. Hollmann built a regenerative receiver and Schultes developed Yagi antennas for transmitting and receiving. In June 1934, large vessels passing through the Kiel Harbor were detected by Doppler-beat interference at a distance of about 2 km (1.2 mi). In October, strong reflections were observed from an aircraft that happened to fly through the beam; this opened consideration of targets other than ships. Kühnhold then shifted the GEMA work to a pulse-modulated system. A new 50 cm (600 MHz) Philips magnetron with better frequency stability was used. It was modulated with 2-μs pulses at a PRF of 2000 Hz. The transmitting antenna was an array of 10 pairs of dipoles with a reflecting mesh. The wide-band regenerative receiver used Acorn tubes from RCA, and the receiving antenna had three pairs of dipoles and incorporated lobe switching. A blocking device (a duplexer) shut off the receiver input while the transmitter pulsed. A Braun tube (a CRT) was used for displaying the range. The equipment was first tested at an NVA site at the Lübecker Bay near Pelzerhaken. During May 1935, it detected returns from woods across the bay at a range of 15 km (9.3 mi). It had limited success, however, in detecting a research ship, Welle, only a short distance away. The receiver was then rebuilt, becoming a super-regenerative set with two intermediate-frequency stages. With this improved receiver, the system readily tracked vessels at up to 8 km (5.0 mi) range. In September 1935, a demonstration was given to the Commander-in-Chief of the Kriegsmarine. The system performance was excellent; the range was read off the Braun tube with a tolerance of 50 meters (less than 1 percent variance), and the lobe switching allowed a directional accuracy of 0.1 degree. Historically, this marked the first naval vessel equipped with radar. Although this apparatus was not put into production, GEMA was funded to develop similar systems operating around 50 cm (600 MHz). These became the Seetakt for the Kriegsmarine and the Freya for the Luftwaffe (German Air Force). Kühnhold remained with the NVA but also consulted with GEMA. He is considered by many in Germany to be the Father of Radar. During 1933–36, Hollmann wrote the first comprehensive treatise on microwaves, Physik und Technik der ultrakurzen Wellen (Physics and Technique of Ultrashort Waves), published by Springer in 1938. In 1933, when Kühnhold at the NVA was first experimenting with microwaves, he had sought information from Telefunken on microwave tubes. (Telefunken was the largest supplier of radio products in Germany.) There, Wilhelm Tolmé Runge had told him that no vacuum tubes were available for these frequencies.
In fact, Runge was already experimenting with high-frequency transmitters and had Telefunken’s tube department working on cm-wavelength devices. In the summer of 1935, Runge, now Director of Telefunken’s Radio Research Laboratory, initiated an internally funded project in radio-based detection. Using Barkhausen-Kurz tubes, a 50 cm (600 MHz) receiver and 0.5-W transmitter were built. With the antennas placed flat on the ground some distance apart, Runge arranged for an aircraft to fly overhead and found that the receiver gave a strong Doppler-beat interference signal. Runge, now with Hans Hollmann as a consultant, continued in developing a 1.8 m (170 MHz) system using pulse-modulation. Wilhelm Stepp developed a transmit-receive device (a duplexer) for allowing a common antenna. Stepp also code-named the system Darmstadt after his home town, starting the practice in Telefunken of giving the systems names of cities. The system, with only a few watts transmitter power, was first tested in February 1936, detecting an aircraft at about 5 km (3.1 mi) distance. This led the Luftwaffe to fund the development of a 50 cm (600 MHz) gun-laying system, the Würzburg. Since before the First World War, Standard Elektrik Lorenz had been the main supplier of communication equipment for the German military and was the main rival of Telefunken. In late 1935, when Lorenz found that Runge at Telefunken was doing research in radio-based detection equipment, they started a similar activity under Gottfried Müller. A pulse-modulated set called Einheit für Abfragung (DFA - Device for Detection) was built. It used a type DS-310 tube (similar to the Acorn) operating at 70 cm (430 MHz) and about 1 kW power, it had identical transmitting and receiving antennas made with rows of half-wavelength dipoles backed by a reflecting screen. In early 1936, initial experiments gave reflections from large buildings at up to about 7 km (4.3 mi). The power was doubled by using two tubes, and in mid-1936, the equipment was set up on cliffs near Kiel, and good detections of ships at 7 km (4.3 mi) and aircraft at 4 km (2.5 mi) were attained. The success of this experimental set was reported to the Kriegsmarine, but they showed no interest; they were already fully engaged with GEMA for similar equipment. Also, because of extensive agreements between Lorenz and many foreign countries, the naval authorities had reservations concerning the company handling classified work. The DFA was then demonstrated to the Heer (German Army), and they contracted with Lorenz for developing Kurfürst (Elector), a system for supporting Flugzeugabwehrkanone (Flak, anti-aircraft guns). In 1895, Alexander Stepanovich Popov, a physics instructor at the Imperial Russian Navy school in Kronstadt, developed an apparatus using a coherer tube for detecting distant lightning strikes. The next year, he added a spark-gap transmitter and demonstrated the first radio communication set in Russia. During 1897, while testing this in communicating between two ships in the Baltic Sea, he took note of an interference beat caused by the passage of a third vessel. In his report, Popov wrote that this phenomenon might be used for detecting objects, but he did nothing more with this observation. In a few years following the 1917 Russian Revolution and the establishment the Union of Soviet Socialist Republics (USSR or Soviet Union) in 1924, Germany’s Luftwaffe had aircraft capable of penetrating deep into Soviet territory. 
Thus, the detection of aircraft at night or above clouds was of great interest to the Voiska Protivo-vozdushnoi oborony (PVO, Air Defense Forces) of the Raboche-Krest'yanskaya Krasnaya Armiya (RKKA, Workers'–Peasants' Red Army). The PVO depended on optical devices for locating targets, and had physicist Pavel K. Oshchepkov conducting research into possible improvements of these devices. In June 1933, Oshchepkov changed his research from optics to radio techniques and started the development of a razvedyvatel'naya elektromagnitnaya stantsiya (reconnaissance electromagnetic station). In a short time, Oshchepkov was made responsible for a PVO ekspertno-tekhnicheskii sektor (technical expertise sector) devoted to radiolokatory (radio-location) techniques, as well as heading a Special Construction Bureau (SCB) in Leningrad (formerly St. Petersburg). Radio-location beginnings The Glavnoe artilleriiskoe upravlenie (GAU, Main Artillery Administration) was considered the "brains" of the Red Army. It not only had competent engineers and physicists on its central staff, but also had a number of scientific research institutes. Thus, the GAU was also assigned the aircraft detection problem, and Lt. Gen. M. M. Lobanov was placed in charge. After examining existing optical and acoustical equipment, Lobanov also turned to radio-location techniques. For this he approached the Tsentral'naya radiolaboratoriya (TsRL, Central Radio Laboratory) in Leningrad. Here, Yu. K. Korovin was conducting research on VHF communications, and had built a 50 cm (600 MHz), 0.2 W transmitter using a Barkhausen-Kurz tube. For testing the concept, Korovin arranged the transmitting and receiving antennas along the flight path of an aircraft. On January 3, 1934, a Doppler signal was received from reflections off the aircraft at some 600 m range and 100–150 m altitude. For further research in detection methods, a major conference on this subject was arranged for the PVO by the Rossiiskaya Akademiya Nauk (RAN, Russian Academy of Sciences). The conference was held in Leningrad in mid-January 1934, and chaired by Abram Fedorovich Ioffe, Director of the Leningrad Physical-Technical Institute (LPTI). Ioffe was generally considered the top Russian physicist of his time. All types of detection techniques were discussed, but radio-location received the greatest attention. To distribute the conference findings to a wider audience, the proceedings were published the following month in a journal. This included all of the then-existing information on radio-location in the USSR, available (in the Russian language) to researchers in this field throughout the world. Recognizing the potential value of radio-location to the military, the GAU made a separate agreement with the Leningrad Electro-Physics Institute (LEPI) for a radio-location system. This technical effort was led by B. K. Shembel. The LEPI had built a transmitter and receiver to study the radio-reflection characteristics of various materials and targets. Shembel readily made this into an experimental bi-static radio-location system called Bistro (Rapid). The Bistro transmitter, operating at 4.7 m (64 MHz), produced nearly 200 W and was frequency-modulated by a 1 kHz tone. A fixed transmitting antenna gave broad coverage of what was called a radioekran (radio screen). A regenerative receiver, located some distance from the transmitter, had a dipole antenna mounted on a hand-driven reciprocating mechanism.
An aircraft passing into the screened zone would reflect the radiation, and the receiver would detect the Doppler-interference beat between the transmitted and reflected signals. Bistro was first tested during the summer of 1934. With the receiver up to 11 km away from the transmitter, the set could only detect an aircraft entering the screen at about 3 km (1.9 mi) range and under 1,000 m. With improvements, it was believed to have a potential range of 75 km, and five sets were ordered in October for field trials. Bistro is often cited as the USSR's first radar system; however, it was incapable of directly measuring range and thus could not be so classified. LEPI and TsRL were both made a part of Nauchno-issledovatel'skii institut-9 (NII-9, Scientific Research Institute #9), a new GAU organization opened in Leningrad in 1935. Mikhail A. Bonch-Bruyevich, a renowned radio physicist previously with TsRL and the University of Leningrad, was named the NII-9 Scientific Director. Research on magnetrons began at Kharkov University in Ukraine during the mid-1920s. Before the end of the decade this had resulted in publications with worldwide distribution, such as the German journal Annalen der Physik (Annals of Physics). Based on this work, Ioffe recommended that a portion of the LEPI be transferred to the city of Kharkov, resulting in the Ukrainian Institute of Physics and Technology (UIPT) being formed in 1930. Within the UIPT, the Laboratory of Electromagnetic Oscillations (LEMO), headed by Abram A. Slutskin, continued with magnetron development. Led by Aleksandr S. Usikov, a number of advanced segmented-anode magnetrons evolved. (It is noted that these and other early magnetrons developed in the USSR suffered from frequency instability, a problem in their use in Soviet radar systems.) In 1936, one of Usikov's magnetrons, producing about 7 W at 18 cm (1.7 GHz), was used by Shembel at the NII-9 as a transmitter in a radioiskatel (radio-seeker) called Burya (Storm). Operating similarly to Bistro, it had a detection range of about 10 km and provided azimuth and elevation coordinates estimated to within 4 degrees. No attempts were made to make this into a pulsed system; thus, it could not provide range and was not qualified to be classified as a radar. It was, however, the first microwave radio-detection system. While work by Shembel and Bonch-Bruyevich on continuous-wave systems was taking place at NII-9, Oshchepkov at the SCB and V. V. Tsimbalin of Ioffe's LPTI were pursuing a pulsed system. In 1936, they built a radio-location set operating at 4 m (75 MHz) with a peak power of about 500 W and a 10-μs pulse duration. Before the end of the year, tests using separated transmitting and receiving sites resulted in an aircraft being detected at 7 km. In April 1937, with the peak-pulse power increased to 1 kW and the antenna separation also increased, tests showed a detection range of nearly 17 km at a height of 1.5 km. Although a pulsed system, it was not capable of directly providing range – the technique of using pulses for determining range had not yet been developed. Pre-war radio location systems In June 1937, all of the work in Leningrad on radio-location suddenly stopped. The infamous Great Purge of dictator Joseph Stalin swept over the military high commands and its supporting scientific community. The PVO chief was executed. Oshchepkov, charged with "high crime," was sentenced to 10 years at a Gulag penal labor camp.
NII-9 as an organization was saved, but Shembel was dismissed and Bonch-Bruyevich was named the new director. The Nauchno-issledovatel'skii ispytatel'nyi institut svyazi RKKA (NIIIS-KA, Scientific Research Institute of Signals of the Red Army) had initially opposed research in radio-location, favoring instead acoustical techniques. However, this portion of the Red Army gained power as a result of the Great Purge, and did an about-face, pressing hard for speedy development of radio-location systems. They took over Oshchepkov's laboratory and were made responsible for all existing and future agreements for research and factory production. Writing later about the Purge and its subsequent effects, General Lobanov commented that it led to the development being placed under a single organization, and to the rapid reorganization of the work. At Oshchepkov's former laboratory, work with the 4 m (75 MHz) pulsed-transmission system was continued by A. I. Shestakov. Through pulsing, the transmitter produced a peak power of 1 kW, the highest level thus far generated. In July 1938, a fixed-position, bi-static experimental system detected an aircraft at about 30 km range at a height of 500 m, and at 95 km range for high-flying targets at 7.5 km altitude. The system was still incapable of directly determining the range. The project was then taken up by Ioffe's LPTI, resulting in the development of a mobile system designated Redut (Redoubt). An arrangement of new transmitter tubes was used, giving nearly 50 kW peak power with a 10-μs pulse duration. Yagi antennas were adopted for both transmitting and receiving. The Redut was first field-tested in October 1939, at a site near Sevastopol, a port on the Black Sea coast of Crimea. This testing was in part to show the NKKF (Soviet Navy) the value of early-warning radio-location for protecting strategic ports. With the equipment on a cliff about 160 meters above sea level, a flying boat was detected at ranges up to 150 km. The Yagi antennas were spaced about 1,000 meters apart; thus, close coordination was required to aim them in synchronization. An improved version of the Redut, the Redut-K, was developed by Aksel Berg in 1940 and placed aboard the light cruiser Molotov in April 1941. Molotov became the first Soviet warship equipped with radar. At the NII-9 under Bonch-Bruyevich, scientists developed two types of very advanced microwave generators. In 1938, a linear-beam, velocity-modulated vacuum tube (a klystron) was developed by Nikolay Devyatkov, based on designs from Kharkov. This device produced about 25 W at 15–18 cm (2.0–1.7 GHz) and was later used in experimental systems. Devyatkov followed this with a simpler, single-resonator device (a reflex klystron). At this same time, D. E. Malyarov and N. F. Alekseyev were building a series of magnetrons, also based on designs from Kharkov; the best of these produced 300 W at 9 cm (3 GHz). Also at NII-9, D. S. Stogov was placed in charge of improvements to the Bistro system. Redesignated as Reven (Rhubarb), it was tested in August 1938, but was only marginally better than its predecessor. With additional minor operational improvements, it was made into a mobile system called Radio Ulavlivatel Samoletov (RUS, Radio Catcher of Aircraft), soon designated as RUS-1. This continuous-wave, bi-static system had a truck-mounted transmitter operating at 4.7 m (64 MHz) and two truck-mounted receivers.
Although the RUS-1 transmitter was in a cabin on the rear of a truck, the antenna had to be strung between external poles anchored to the ground. A second truck carrying the electrical generator and other equipment was backed against the transmitter truck. Two receivers were used, each in a truck-mounted cabin with a dipole antenna on a rotatable pole extended overhead. In use, the receiver trucks were placed about 40 km apart; thus, with two positions, it would be possible to make a rough estimate of the range by triangulation on a map. The RUS-1 system was tested and put into production in 1939, then entered service in 1940, becoming the first deployed radio-location system in the Red Army. A total of about 45 RUS-1 systems were built at the Svetlana Factory in Leningrad before the end of 1941, and deployed along the western USSR borders and in the Far East. Without direct ranging capability, however, the military found the RUS-1 to be of little value. Even before the demise of efforts in Leningrad, the NIIIS-KA had contracted with the UIPT in Kharkov to investigate a pulsed radio-location system for anti-aircraft applications. This led the LEMO, in March 1937, to start an internally funded project with the code name Zenit (a popular football team at the time). The transmitter development was led by Usikov, supplier of the magnetron used earlier in the Burya. For the Zenit, Usikov used a 60 cm (500 MHz) magnetron pulsed at 10–20 μs duration and providing 3 kW pulsed power, later increased to near 10 kW. Semion Braude led the development of a superheterodyne receiver using a tunable magnetron as the local oscillator. The system had separate transmitting and receiving antennas set about 65 m apart, built with dipoles backed by 3-meter parabolic reflectors. Zenit was first tested in October 1938. In this, a medium-sized bomber was detected at a range of 3 km. The testing was observed by the NIIIS-KA and found to be sufficient for starting a contracted effort. An agreement was made in May 1939, specifying the required performance and calling for the system to be ready for production by 1941. The transmitter was increased in power, the antennas had selsens added to allow them to track, and the receiver sensitivity was improved by using an RCA 955 acorn triode as the local oscillator. A demonstration of the improved Zenit was given in September 1940. In this, it was shown that the range, altitude, and azimuth of an aircraft flying at heights between 4,000 and 7,000 meters could be determined at up to 25 km distance. The time required for these measurements, however, was about 38 seconds, far too long for use by anti-aircraft batteries. Also, with the antennas aimed at a low angle, there was a dead zone of some distance caused by interference from ground-level reflections. While this performance was not satisfactory for immediate gun-laying applications, it was the first full three-coordinate radio-location system in the Soviet Union and showed the way for future systems. Work at the LEMO continued on Zenit, particularly in converting it into a single-antenna system designated Rubin. This effort, however, was disrupted by the invasion of the USSR by Germany in June 1941. In a short while, the development activities at Kharkov were ordered to be evacuated into the Far East. The research efforts in Leningrad were similarly dispersed. After eight years of effort by highly qualified physicists and engineers, the USSR entered World War II without a fully developed and fielded radar system. 
As a seafaring nation, Japan had an early interest in wireless (radio) communications. The first known use of wireless telegraphy in warfare at sea was by the Imperial Japanese Navy, in defeating the Russian Imperial Fleet in 1904. There was an early interest in equipment for radio direction-finding, for use in both navigation and military surveillance. The Imperial Navy developed an excellent receiver for this purpose in 1921, and soon most of the Japanese warships had this equipment. In the two decades between the two World Wars, radio technology in Japan made advancements on a par with that in the western nations. There were often impediments, however, in transferring these advancements into the military. For a long time, the Japanese had believed they had the best fighting capability of any military force in the world. The military leaders, who were then also in control of the government, sincerely felt that the weapons, aircraft, and ships that they had built were fully sufficient and, with these as they were, the Japanese Army and Navy were invincible. In 1940, Japan joined Nazi Germany and Fascist Italy in the Tripartite Pact. Technology background Radio engineering was strong in Japan's higher-education institutions, especially the Imperial (government-financed) universities. This included undergraduate and graduate study, as well as academic research in this field. Special relationships were established with foreign universities and institutes, particularly in Germany, with Japanese teachers and researchers often going overseas for advanced study. The academic research tended toward the improvement of basic technologies, rather than their specific applications. There was considerable research in high-frequency and high-power oscillators, such as the magnetron, but the application of these devices was generally left to industrial and military researchers. One of Japan's best-known radio researchers in the 1920s-30s era was Professor Hidetsugu Yagi. After graduate study in Germany, England, and America, Yagi joined Tohoku University, where his research centered on antennas and oscillators for high-frequency communications. A summary of the radio research work at Tohoku University was contained in a seminal 1928 paper by Yagi. Jointly with Shintaro Uda, one of Yagi's first doctoral students, a radically new antenna emerged. It had a number of parasitic elements (directors and reflectors) and would come to be known as the Yagi-Uda or Yagi antenna. A U.S. patent, issued in May 1932, was assigned to RCA. To this day, this is the most widely used directional antenna worldwide. The magnetron was also of interest to Yagi. This HF (~10-MHz) device had been invented in 1921 by Albert W. Hull at General Electric, and Yagi was convinced that it could function in the VHF or even the UHF region. In 1927, Kinjiro Okabe, another of Yagi's early doctoral students, developed a split-anode device that ultimately generated oscillations at wavelengths down to about 12 cm (2.5 GHz). Researchers at other Japanese universities and institutions also started projects in magnetron development, leading to improvements in the split-anode device. These included Kiyoshi Morita at the Tokyo Institute of Technology, and Tsuneo Ito at Tohoku University. Shigeru Nakajima at Japan Radio Company (JRC) saw the commercial potential of these devices and began the further development and subsequent very profitable production of magnetrons for the medical dielectric heating (diathermy) market.
The only military interest in magnetrons was shown by Yoji Ito at the Naval Technical Research Institute (NTRI). The NTRI was formed in 1922, and became fully operational in 1930. Located at Meguro, Tokyo, near the Tokyo Institute of Technology, first-rate scientists, engineers, and technicians were engaged in activities ranging from designing giant submarines to building new radio tubes. Included were all of the precursors of radar, but this did not mean that the heads of the Imperial Navy accepted these accomplishments. In 1936, Tsuneo Ito (no relationship to Yoji Ito) developed an 8-split-anode magnetron that produced about 10 W at 10 cm (3 GHz). Based on its appearance, it was named Tachibana (or Mandarin, an orange citrus fruit). Tsuneo Ito also joined the NTRI and continued his research on magnetrons in association with Yoji Ito. In 1937, they developed the technique of coupling adjacent segments (called push-pull), resulting in frequency stability, an extremely important magnetron breakthrough. By early 1939, NTRI/JRC had jointly developed a 10-cm (3-GHz), stable-frequency Mandarin-type magnetron (No. M3) that, with water cooling, could produce 500-W power. In the same time period, magnetrons were built with 10 and 12 cavities operating as low as 0.7 cm (40 GHz). The configuration of the M3 magnetron was essentially the same as that used later in the magnetron developed by Boot and Randall at Birmingham University in early 1940, including the improvement of strapped cavities. Unlike the high-power magnetron in Great Britain, however, the initial device from the NTRI generated only a few hundred watts. In general, there was no lack of scientific and engineering capabilities in Japan; their warships and aircraft clearly showed high levels of technical competency. They were ahead of Great Britain in the development of magnetrons, and their Yagi antenna was the world standard for VHF systems. It was simply that the top military leaders failed to recognize how the application of radio in detection and ranging – what was often called the Radio Range Finder (RRF) – could be of value, particularly in any defensive role; offense not defense, totally dominated their thinking. Imperial Army In 1938, engineers from the Research Office of Nippon Electric Company (NEC) were making coverage tests on high-frequency transmitters when rapid fading of the signal was observed. This occurred whenever an aircraft passed over the line between the transmitter and receiving meter. Masatsugu Kobayashi, the Manager of NEC’s Tube Department, recognized that this was due to the beat-frequency interference of the direct signal and the Doppler-shifted signal reflected from the aircraft. Kobayashi suggested to the Army Science Research Institute that this phenomenon might be used as an aircraft warning method. Although the Army had rejected earlier proposals for using radio-detection techniques, this one had appeal because it was based on an easily understandable method and would require little developmental cost and risk to prove its military value. NEC assigned Kinji Satake of their Research Institute to develop a system called the Bi-static Doppler Interference Detector (BDID). For testing the prototype system, it was set up on an area recently occupied by Japan along the coast of China. The system operated between 4.0-7.5 MHz (75–40 m) and involved a number of widely spaced stations; this formed a radio screen that could detect the presence (but nothing more) of an aircraft at distances up to 500 km (310 mi). 
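The beat-frequency effect that Kobayashi identified, and that underlies all of the CW "radio screen" detectors described in this article, can be stated compactly: the direct signal and the signal reflected from the moving aircraft interfere at the receiver, and the beat rate is set by how quickly the total transmitter–aircraft–receiver path length changes, measured in wavelengths. As an illustration only (the numbers below are not documented BDID parameters):

\[
f_{\text{beat}} = \frac{1}{\lambda}\left|\frac{d}{dt}\bigl(R_{t} + R_{r}\bigr)\right|
\]

where \(R_t\) and \(R_r\) are the distances from the aircraft to the transmitter and to the receiver. At 4.0 MHz (\(\lambda = 75\) m), a path length changing at 150 m/s produces a beat of about 2 Hz, appearing as the rapid fading of the received signal that the operators watched for.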
The BDID was the Imperial Army's first deployed radio-based detection system, placed into operation in early 1941. A similar system was developed by Satake for the Japanese homeland. Information centers received oral warnings from the operators at BDID stations, usually spaced between 65 and 240 km (40 and 150 mi). To reduce homing vulnerability – a great fear of the military – the transmitters operated with only a few watts of power. Although originally intended to be temporary until better systems were available, they remained in operation throughout the war. It was not until after the start of the war that the Imperial Army had equipment that could be called radar. In the mid-1930s, some of the technical specialists in the Imperial Navy became interested in the possibility of using radio to detect aircraft. For consultation, they turned to Professor Yagi, who was then the Director of the Radio Research Laboratory at Osaka Imperial University. Yagi suggested that this might be done by examining the Doppler frequency-shift in a reflected signal. Funding was provided to the Osaka Laboratory for experimental investigation of this technique. Kinjiro Okabe, the inventor of the split-anode magnetron, who had followed Yagi to Osaka, led the effort. Theoretical analyses indicated that the reflections would be greater if the wavelength was approximately the same as the size of aircraft structures. Thus, a VHF transmitter and receiver with Yagi antennas separated some distance were used for the experiment. In 1936, Okabe successfully detected a passing aircraft by the Doppler-interference method; this was the first recorded demonstration in Japan of aircraft detection by radio. With this success, Okabe's research interest switched from magnetrons to VHF equipment for target detection. This, however, did not lead to any significant funding. The top levels of the Imperial Navy believed that any advantage of using radio for this purpose was greatly outweighed by enemy intercept and disclosure of the sender's presence. Historically, warships in formation used lights and horns to avoid collision at night or when in fog. Newer techniques of VHF radio communications and direction-finding might also be used, but all of these methods were highly vulnerable to enemy interception. At the NTRI, Yoji Ito proposed that the UHF signal from a magnetron might be used to generate a very narrow beam that would have a greatly reduced chance of enemy detection. Development of a microwave system for collision avoidance started in 1939, when funding was provided by the Imperial Navy to JRC for preliminary experiments. In a cooperative effort involving Yoji Ito of the NTRI and Shigeru Nakajima of JRC, an apparatus using a 3-cm (10-GHz) magnetron with frequency modulation was designed and built. The equipment was used in an attempt to detect reflections from tall structures a few kilometers away. This experiment gave poor results, attributed to the very low power from the magnetron. The initial magnetron was replaced by one operating at 16 cm (1.9 GHz) and with considerably higher power. The results were then much better, and in October 1940, the equipment obtained clear echoes from a ship in Tokyo Bay at a distance of about 10 km (6.2 mi). There was still no commitment by top Japanese naval officials for using this technology aboard warships. Nothing more was done at this time, but late in 1941, the system was adopted for limited use.
In late 1940, Japan arranged for two technical missions to visit Germany and exchange information about their developments in military technology. Commander Yoji Ito represented the Navy's interest in radio applications, and Lieutenant Colonel Kinji Satake did the same for the Army. During a visit of several months, they exchanged significant general information, as well as limited secret materials in some technologies, but little directly concerning radio-detection techniques. Neither side even mentioned magnetrons, but the Germans did apparently disclose their use of pulsed techniques. After receiving the reports from the technical exchange in Germany, as well as intelligence reports concerning Great Britain's success in directing gunfire using RDF, the Naval General Staff reversed itself and tentatively accepted pulse-transmission technology. On August 2, 1941, even before Yoji Ito returned to Japan, funds were allocated for the initial development of pulse-modulated radars. Commander Chuji Hashimoto of the NTRI was responsible for initiating this activity. A prototype set operating at 4.2 m (71 MHz) and producing about 5 kW was completed on a crash basis. With the NTRI in the lead, the firm NEC and the Research Laboratory of the Japan Broadcasting Corporation (NHK) made major contributions to the effort. Kenjiro Takayanagi, Chief Engineer of NHK's experimental television station and called "the father of Japanese television," was especially helpful in rapidly developing the pulse-forming and timing circuits, as well as the receiver display. In early September 1941, the prototype set was first tested; it detected a single bomber at 97 km (60 mi) and a flight of aircraft at 145 km (90 mi). The system, Japan's first full Radio Range Finder (RRF – radar), was designated Mark 1 Model 1. Contracts were given to three firms for serial production; NEC built the transmitters and pulse modulators, Japan Victor the receivers and associated displays, and Fuji Electrical the antennas and their servo drives. The system operated at 3.0 m (100 MHz) with a peak power of 40 kW. Dipole arrays with mat-type reflectors were used in separate antennas for transmitting and receiving. In November 1941, the first manufactured RRF was placed into service as a land-based early-warning system at Katsuura, Chiba, a town on the Pacific coast about 100 km (62 mi) from Tokyo. A large system, it weighed close to 8,700 kg (19,000 lb). The detection range was about 130 km (81 mi) for single aircraft and 250 km (160 mi) for groups. The Philips Company in Eindhoven, Netherlands, operated the Natuurkundig Laboratorium (NatLab) for fundamental research related to its products. NatLab researcher Klaas Posthumus developed a magnetron split into four elements. In developing a communication system using this magnetron, C.H.J.A. Stall was testing the transmission by using parabolic transmitting and receiving antennas set side-by-side, both aimed at a large plate some distance away. To overcome frequency instability of the magnetron, pulse modulation was used. It was found that the plate reflected a strong signal. Recognizing the potential importance of this as a detection device, NatLab arranged a demonstration for the Koninklijke Marine (Royal Netherlands Navy). This was conducted in 1937 across the entrance to the main naval port at Marsdiep. Reflections from sea waves obscured the return from the target ship, but the Navy was sufficiently impressed to initiate sponsorship of the research.
In 1939, an improved set was demonstrated at Wijk aan Zee, detecting a vessel at a distance of 3.2 km (2.0 mi). A prototype system was built by Philips, and plans were started by the firm Nederlandse Seintoestellen Fabriek (a Philips subsidiary) for building a chain of warning stations to protect the primary ports. Some field testing of the prototype was conducted, but the project was discontinued when Germany invaded the Netherlands on May 10, 1940. Within the NatLab, however, the work was continued in great secrecy until 1942. During the early 1930s, there were widespread rumours of a "death ray" being developed. The Dutch Parliament set up a Committee for the Applications of Physics in Weaponry under G.J. Elias to examine this potential, but the Committee quickly discounted death rays. The Committee did, however, establish the Laboratorium voor Fysieke Ontwikkeling (LFO, Laboratory for Physical Development), dedicated to supporting the Netherlands Armed Forces. Operating in great secrecy, the LFO opened a facility called the Meetgebouw (Measurements Building) located on the Plain of Waalsdorp. In 1934, J.L.W.C. von Weiler joined the LFO and, with S.G. Gratama, began research on a 1.25-m (240-MHz) communication system to be used in artillery spotting. In 1937, while tests were being conducted on this system, a passing flock of birds disturbed the signal. Realizing that this might be a potential method for detecting aircraft, the Minister of War ordered continuation of the experiments. Weiler and Gratama set about developing a system for directing searchlights and aiming anti-aircraft guns. The experimental "electrical listening device" operated at 70 cm (430 MHz) and used pulsed transmission at a PRF of 10 kHz. A transmit-receive blocking circuit was developed to allow a common antenna. The received signal was displayed on a CR tube with a circular time base. This set was demonstrated to the Army in April 1938 and detected an aircraft at a range of 18 km (11 mi). The set was rejected, however, because it could not withstand the harsh environment of Army combat conditions. The Navy was more receptive. Funding was provided for final development, and Max Staal was added to the team. To maintain secrecy, they divided the development into parts. The transmitter was built at the Delft Technical College and the receiver at the University of Leiden. Ten sets would be assembled under the personal supervision of J.J.A. Schagen van Leeuwen, head of the firm Hazemeijer Fabriek van Signaalapparaten. The prototype had a peak power of 1 kW, and used a pulse length of 2 to 3 μs with a 10- to 20-kHz PRF. The receiver was a super-heterodyne type using Acorn tubes and a 6 MHz IF stage. The antenna consisted of 4 rows of 16 half-wave dipoles backed by a 3- by 3-meter mesh screen. The operator used a bicycle-type drive to rotate the antenna, and the elevation could be changed using a hand crank. Several sets were completed, and one was put into operation on the Malieveld in The Hague just before the Netherlands fell to Germany in May 1940. The set worked well, spotting enemy aircraft during the first days of fighting. To prevent capture, operating units and plans for the system were destroyed. Von Weiler and Max Staal fled to England aboard one of the last ships able to leave, carrying two disassembled sets with them. Later, Gratama and van Leeuwen also escaped to England.
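The figures quoted for the Dutch prototype also show how modest the average power of an early pulsed set was compared with its peak power: the average equals the peak power multiplied by the duty cycle, that is, the pulse length times the pulse repetition frequency. Taking mid-range values from the figures above purely as an illustration:

\[
P_{\text{avg}} = P_{\text{peak}}\,\tau\, f_{\text{PRF}} \approx 1\ \text{kW} \times 2.5\ \mu\text{s} \times 15\ \text{kHz} \approx 38\ \text{W}
\]

so while the set briefly radiated a kilowatt during each pulse, its average radiated power was only a few tens of watts.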
In 1927, French physicists Camille Gutton and Emile Pierret experimented with magnetrons and other devices generating wavelengths going down to 16 cm. Camille's son, Henri Gutton, was with the Compagnie Générale de Télégraphie Sans Fil (CSF), where he and Robert Warneck improved his father's magnetrons. Emile Girardeau, the head of the CSF, recalled in testimony that they were at the time intending to build radio-detection systems "conceived according to the principles stated by Tesla." In 1934, following systematic studies on the magnetron, the research branch of the CSF, headed by Maurice Ponte, submitted a patent application for a device designed to detect obstacles using continuous radiation of ultra-short wavelengths produced by a magnetron. These were still CW systems and depended on Doppler interference for detection. However, as in most modern radars, the antennas were collocated. The device measured distance and azimuth, but not directly on a screen as in the later "radar" (1939). Still, this was the first patent for an operational radio-detection apparatus using centimetric wavelengths. The system was tested in late 1934 aboard the cargo ship Oregon, with two transmitters working at 80 cm and 16 cm wavelengths. Coastlines and boats were detected from a range of 10–12 nautical miles. The shortest wavelength was chosen for the final design, which equipped the liner SS Normandie as early as mid-1935 for operational use. In late 1937, Maurice Elie at SFR developed a means of pulse-modulating transmitter tubes. This led to a new 16-cm system with a peak power near 500 W and a pulse width of 6 μs. French and U.S. patents were filed in December 1939. The system was planned to be sea-tested aboard the Normandie, but this was cancelled at the outbreak of war. At the same time, Pierre David at the Laboratoire National de Radioélectricité (National Laboratory of Radioelectricity, LNR) experimented with reflected radio signals at about a meter wavelength. Starting in 1931, he observed that aircraft caused interference to the signals. The LNR then initiated research on a detection technique called barrage électromagnétique (electromagnetic curtain). While this could indicate the general location of penetration, precise determination of direction and speed was not possible. In 1936, the Défense Aérienne du Territoire (Defence of Air Territory) ran tests on David's electromagnetic curtain. In the tests, the system detected most of the entering aircraft, but too many were missed. As the war grew closer, the need for aircraft detection was critical. David realized the advantages of a pulsed system, and in October 1938 he designed a 50 MHz, pulse-modulated system with a peak-pulse power of 12 kW. This was built by the firm SADIR. France declared war on Germany on September 3, 1939, and there was a great need for an early-warning detection system. The SADIR system was taken to a site near Toulon, where it detected and measured the range of invading aircraft as far as 55 km (34 mi). The SFR pulsed system was set up near Paris, where it detected aircraft at ranges up to 130 km (81 mi). However, the German advance was overwhelming and emergency measures had to be taken; it was too late for France to develop radars alone, and it was decided that her breakthroughs would be shared with her allies. In mid-1940, Maurice Ponte, from the laboratories of CSF in Paris, presented a cavity magnetron designed by Henri Gutton at SFR (see above) to the GEC laboratories at Wembley, Great Britain.
This magnetron was designed for pulsed operation at a wavelength of 16 cm. Unlike other magnetron designs to that day, such as the Boot and Randall magnetron (see British contributions above), this tube used an oxide-coated cathode with a peak power output of 1 kW, demonstrating that oxide cathodes were the solution for producing high-power pulses at short wavelengths, a problem which had eluded British and American researchers for years. The significance of this event was underlined by Eric Megaw, in a 1946 review of early radar developments: "This was the starting point of the use of the oxide cathode in practically all our subsequent pulsed transmitting valves and as such was a significant contribution to British radar. The date was the 8th May, 1940". An improved version of this magnetron reached a peak output of 10 kW by August 1940. It was that model which, in turn, was handed to the Americans as a token of good faith during the negotiations made by the Tizard delegation in 1940 to obtain from the U.S. the resources necessary for Great Britain to exploit the full military potential of her research and development work. Guglielmo Marconi initiated the research in Italy on radio-based detection technology. In 1933, while participating with his Italian firm in experiments with a 600 MHz communications link across Rome, he noted transmission disturbances caused by moving objects adjacent to its path. This led to the development at his laboratory at Cornegliano of a 330-MHz (0.91-m) CW Doppler detection system that he called radioecometro. Barkhausen-Kurz tubes were used in both the transmitter and receiver. In May 1935, Marconi demonstrated his system to the Fascist dictator Benito Mussolini and members of the military General Staff; however, the output power was insufficient for military use. While Marconi’s demonstration raised considerable interest, little more was done with his apparatus. Mussolini directed that radio-based detection technology be further developed, and it was assigned to the Regio Istituto Elettrotecnico e delle Comunicazioni (RIEC, Royal Institute for Electro-technics and Communications). The RIEC had been established in 1916 on the campus of the Italian Naval Academy in Livorno. Lieutenant Ugo Tiberio, a physics and radio-technology instructor at the Academy, was assigned to head the project on a part-time basis. Tiberio prepared a report on developing an experimental apparatus that he called telemetro radiofonico del rivelatore (RDT, Radio-Detector Telemetry). The report, submitted in mid-1936, included what was later known as the radar range equation. When the work got underway, Nello Carrara, a civilian physics instructor who had been doing research at the RIEC in microwaves, was added to be responsible for developing the RDT transmitter. Before the end of 1936, Tiberio and Carrara had demonstrated the EC-1, the first Italian RDT system. This had an FM transmitter operating at 200 MHz (1.5 m) with a single parabolic cylinder antenna. It detected targets by mixing the transmitted and the Doppler-shifted reflected signals, resulting in an audible tone. The EC-1 did not provide a range measurement; to add this capability, development of a pulsed system was initiated in 1937. Captain Alfeo Brandimarte joined the group and primarily designed the first pulsed system, the EC-2. This operated at 175 MHz (1.7 m) and used a single antenna made with a number of equi-phased dipoles. The detected signal was intended to be displayed on an oscilloscope.
There were many problems, and the system never reached the testing stage. Work then turned to developing higher power and operating frequencies. Carrara, in cooperation with the firm FIVRE, developed a magnetron-like device. This was composed of a pair of triodes connected to a resonant cavity and produced 10 kW at 425 MHz (70 cm). It was used in designing two versions of the EC-3, one for shipboard and the other for coastal defense. Italy, joining Germany, entered WWII in June 1940 without an operational RDT. A breadboard of the EC-3 was built and tested from atop a building at the Academy, but most RDT work was stopped as direct support of the war took priority. In early 1939, the British Government invited representatives from the most technically advanced Commonwealth Nations to visit England for briefings and demonstrations on the highly secret RDF (radar) technology. Based on this, RDF developments were started in Australia, Canada, New Zealand, and South Africa by September 1939. In addition, this technology was independently developed in Hungary early in the war period. In Australia, the Radiophysics Laboratory was established at Sydney University under the Council for Scientific and Industrial Research; John H. Piddington was responsible for RDF development. The first project was a 200-MHz (1.5-m) shore-defense system for the Australian Army. Designated ShD, this was first tested in September 1941, and eventually installed at 17 ports. Following the Japanese attack on Pearl Harbor, the Royal Australian Air Force urgently needed an air-warning system, and Piddington’s team, using the ShD as a basis, put the AW Mark I together in five days. It was being installed in Darwin, Northern Territory, when Australia received the first Japanese attack on February 19, 1942. A short time later, it was converted to a light-weight transportable version, the LW-AW Mark II; this was used by the Australian forces, as well as the U.S. Army, in early island landings in the South Pacific. The early RDF developments in Canada were at the Radio Section of the National Research Council of Canada. Using commercial components and with essentially no further assistance from Great Britain, John Tasker Henderson led a team in developing the Night Watchman, a surface-warning system for the Royal Canadian Navy to protect the entrance to Halifax Harbour. Successfully tested in July 1940, this set operated at 200 MHz (1.5 m), had a 1 kW output with a pulse length of 0.5 μs, and used a relatively small, fixed antenna. This was followed by a ship-borne set designated Surface Warning 1st Canadian (SW1C) with the antenna hand-rotated; this was first tested at sea in mid-May 1941. For coastal defense by the Canadian Army, a 200-MHz set with a transmitter similar to the Night Watchman was developed. Designated CD, it used a large, rotating antenna atop a 70-foot (21 m) wooden tower. The CD was put into operation in January 1942. Ernest Marsden represented New Zealand at the briefings in England, and then established two facilities for RDF development – one in Wellington at the Radio Section of the Central NZ Post Office, and another at Canterbury University College in Christchurch. Charles N. Watson-Munro led the development of land-based and airborne sets at Wellington, while Frederick W. G. White led the development of shipboard sets at Christchurch.
Before the end of 1939, the Wellington group had converted an existing 180-MHz (1.6-m), 1 kW transmitter to produce 2-μs pulses and tested it to detect large vessels at up to 30 km; this was designated CW (Coastal Watching). A similar set, designated CD (Coast Defense), used a CRT for display and had lobe-switching on the receiving antenna; this was deployed in Wellington in late 1940. A partially completed ASV 200 MHz set was brought from Great Britain by Marsden, and another group at Wellington built this into an aircraft set for the Royal New Zealand Air Force; this was first flown in early 1940. At Christchurch, there was a smaller staff and work went slower, but by July 1940, a 430-MHz (70-cm), 5 kW set was tested. Two types, designated SW (Ship Warning) and SWG (Ship Warning, Gunnery), were placed into service by the Royal New Zealand Navy starting in August 1941. In all, some 44 types were developed in New Zealand during WWII. South Africa did not have a representative at the 1939 meetings in England, but in mid-September, as Ernest Marsden was returning by ship to New Zealand, Basil F. J. Schonland came aboard and received three days of briefings. Schonland, a world authority on lightning and Director of the Bernard Price Institute of Geophysics at Witwatersrand University, immediately started an RDF development using amateur radio components and the Institute’s lightning-monitoring equipment. Designated JB (for Johannesburg), the 90-MHz (3.3-m), 500-W mobile system was tested in November 1939, just two months after its start. The prototype was operated in Durban before the end of 1939, detecting ships and aircraft at distances up to 80 km, and by the next March a system was fielded by anti-aircraft brigades of the South African Defence Force. In Hungary, Zoltán Lajos Bay was a Professor of Physics at the Technical University of Budapest as well as the Research Director of Egyesült Izzolampa (IZZO), a radio and electrical manufacturing firm. In late 1942, IZZO was directed by the Minister of Defense to develop a radio-location (rádiólokáció, radar) system. Using journal papers on ionospheric measurements for information on pulsed transmission, Bay developed a system called Sas (Eagle) around existing communications hardware. The Sas operated at 120 MHz (2.5 m) and was in a cabin with separate transmitting and receiving dipole arrays attached; the assembly was all on a rotatable platform. According to published records, the system was tested in 1944 atop Mount János and had a range of “better than 500 km.” A second Sas was installed at another location. There is no indication that either Sas installation was ever in regular service. After the war, Bay used a modified Sas to successfully bounce a signal off the Moon. World War II radar At the start of World War II in September 1939, both the United Kingdom and Germany knew of each other's ongoing efforts in radio navigation and its countermeasures – the "Battle of the beams". Also, both nations were generally aware of, and intensely interested in, the other's developments in radio-based detection and tracking, and engaged in an active campaign of espionage and false leaks about their respective equipment. By the time of the Battle of Britain, both sides were deploying range and direction-finding units (radars) and control stations as part of an integrated air defense capability. However, the German Funkmessgerät (radio measuring device) systems could not assist in an offensive role and were thus not supported by Adolf Hitler.
Also, the Luftwaffe did not sufficiently appreciate the importance of British Range and Direction Finding (RDF) stations as part of RAF's air defense capability, contributing to their failure. While the United Kingdom and Germany led in pre-war advances in the use of radio for detection and tracking of aircraft, there were also developments in the United States, the Soviet Union, and Japan. Wartime systems in all of these nations will be summarized. The acronym RADAR (for RAdio Detection And Ranging) was coined by the U.S. Navy in 1940, and the subsequent name "radar" was soon widely used. Post-war radar World War II, which gave impetus to the great surge in radar development, ended between the Allies and Germany in May 1945, followed by Japan in August. With this, radar activities in Germany and Japan ceased for a number of years. In other countries, particularly the United States, Great Britain, and the USSR, the politically unstable post-war years saw continued radar improvements for military applications. In fact, these three nations all made significant efforts in bringing scientists and engineers from Germany to work in their weapon programs; in the U.S., this was under Operation Paperclip. Even before the end of the war, various projects directed toward non-military applications of radar and closely related technologies were initiated. The US Army Air Forces and the British RAF had made wartime advances in using radar for handling aircraft landing, and this was rapidly expanded into the civil sector. The field of radio astronomy was one of the related technologies; although discovered before the war, it immediately flourished in the late 1940s with many scientists around the world establishing new careers based on their radar experience. Four techniques, highly important in post-war radars, were matured in the late 1940s and early 1950s: pulse Doppler, monopulse, phased array, and synthetic aperture; the first three were known and even used during wartime developments, but were matured later. - Pulse-Doppler radar (often known as moving target indication, or MTI) uses the Doppler-shifted signals from targets to better detect moving targets in the presence of clutter. - Monopulse radar (also called simultaneous lobing) was conceived by Robert Page at the NRL in 1943. With this, the system derives error-angle information from a single pulse, greatly improving the tracking accuracy. - Phased-array radar has the many segments of a large antenna separately controlled, allowing the beam to be quickly directed (a sketch of the underlying phase calculation follows this section). This greatly reduces the time necessary to change the beam direction from one point to another, allowing almost simultaneous tracking of multiple targets while maintaining overall surveillance. - Synthetic-aperture radar (SAR) was invented in the early 1950s at Goodyear Aircraft Corporation. Using a single, relatively small antenna carried on an aircraft, a SAR combines the returns from each pulse to produce a high-resolution image of the terrain comparable to that obtained by a much larger antenna. SAR has wide applications, particularly in mapping and remote sensing. One of the early applications of digital computers was in switching the signal phase in elements of large phased-array antennas. As smaller computers came into being, these were quickly applied to digital signal processing using algorithms for improving radar performance. Other advances in radar systems and applications in the decades following WWII are far too many to be included herein.
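As an illustration of the phased-array principle noted above, the following minimal sketch computes the per-element phase shifts that steer the beam of a uniform linear array and verifies where the resulting beam peaks. The element count, spacing, wavelength, and steering angle are illustrative assumptions, not parameters of any historical system.

```python
import numpy as np

# Illustrative parameters (not from any specific radar system)
wavelength = 0.10        # 10 cm (S-band), in meters
spacing = wavelength / 2 # half-wavelength element spacing avoids grating lobes
num_elements = 16
steer_angle_deg = 20.0   # desired beam direction off boresight

# Phase shift for element n so that all contributions add in phase
# toward the steering angle: phi_n = -2*pi*d*n*sin(theta0)/lambda
n = np.arange(num_elements)
phase_shifts = -2 * np.pi * spacing * n * np.sin(np.radians(steer_angle_deg)) / wavelength

# Array factor over look angles, to verify the beam peaks at the steering angle
angles = np.radians(np.linspace(-90, 90, 1801))
steering = np.exp(1j * phase_shifts)
geometry = np.exp(1j * 2 * np.pi * spacing * np.outer(np.sin(angles), n) / wavelength)
array_factor = np.abs(geometry @ steering)

print("Beam peak at %.1f degrees" % np.degrees(angles[np.argmax(array_factor)]))
```

Because only the phase vector changes from one beam position to the next, a digital controller can redirect the beam almost instantly, which is what makes near-simultaneous tracking of multiple targets possible.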
The following sections are intended to provide representative samples. Military radars In the United States, the Rad Lab at MIT officially closed at the end of 1945. The Naval Research Laboratory (NRL) and the Army’s Evans Signal Laboratory continued with new activities in centimeter radar development. The United States Air Force (USAF) – separated from the Army in 1946 – concentrated radar research at their Cambridge Research Center (CRC) at Hanscom Field, Massachusetts. In 1951, MIT opened the Lincoln Laboratory for joint developments with the CRC. While the Bell Telephone Laboratories embarked on major communications upgrades, they continued with the Army in radar for their ongoing Nike air-defense program. In Great Britain, the RAF’s Telecommunications Research Establishment (TRE) and the Army’s Radar Research and Development Establishment (RRDE) both continued at reduced levels at Malvern, Worcestershire, then in 1953 were combined to form the Radar Research Establishment. In 1948, all of the Royal Navy’s radio and radar R&D activities were combined to form the Admiralty Signal and Radar Establishment, located near Portsmouth, Hampshire. The USSR, although devastated by the war, immediately embarked on the development of new weapons, including radars. During the Cold War period following WWII, the primary "axis" of combat shifted to lie between the United States and the Soviet Union. By 1949, both sides had nuclear weapons carried by bombers. To provide early warning of an attack, both deployed huge radar networks of increasing sophistication at ever-more remote locations. In the West, the first such system was the Pinetree Line, deployed across Canada in the early 1950s, backed up with radar pickets on ships and oil platforms off the east and west coasts. The Pinetree Line initially used vintage pulsed radars and was soon supplemented with the Mid-Canada Line (MCL). Soviet technology improvements made these Lines inadequate and, in a construction project involving 25,000 persons, the Distant Early Warning Line (DEW Line) was completed in 1957. Stretching from Alaska to Baffin Island and covering over 6,000 miles (9,700 km), the DEW Line consisted of 63 stations with AN/FPS-19 high-power, pulsed, L-Band radars, most augmented by AN/FPS-23 pulse-Doppler systems. The Soviet Union tested its first Intercontinental Ballistic Missile (ICBM) in August 1957, and in a few years the missile early-warning role was passed almost entirely to the Ballistic Missile Early Warning System (BMEWS). Both the U.S. and the Soviet Union then had ICBMs with nuclear warheads, and each began the development of a major anti-ballistic missile (ABM) system. In the USSR, this was the Fakel V-1000, and for this they developed powerful radar systems. This was eventually deployed around Moscow as the A-35 anti-ballistic missile system, supported by radars designated by NATO as the Cat House, Dog House, and Hen House. In 1957, the U.S. Army initiated an ABM system first called Nike Zeus and later Nike-X; this passed through several further names, eventually becoming the Safeguard Program. For this, there was a long-range Perimeter Acquisition Radar (PAR) and a shorter-range, more precise Missile Site Radar (MSR). The PAR was housed in a 128-foot (39 m)-high nuclear-hardened building with one face sloping 25 degrees and facing north. This contained 6,888 antenna elements divided between transmitting and receiving phased arrays.
The L-Band transmitter used 128 long-life traveling-wave tubes (TWTs), having a combined power in the megawatt range. The PAR could detect incoming missiles outside the atmosphere at distances up to 1,800 miles (2,900 km). The MSR had an 80-foot (24 m), truncated pyramid structure, with each face holding a phased-array antenna 13 feet (4.0 m) in diameter and containing 5,001 array elements used for both transmitting and receiving. Operating in the S-Band, the transmitter used two klystrons functioning in parallel, each with megawatt-level power. The MSR could search for targets from all directions, acquiring them at up to 300 miles (480 km) range. One Safeguard site, intended to defend Minuteman ICBM missile silos near the Grand Forks AFB in North Dakota, was finally completed in October 1975, but the U.S. Congress withdrew all funding after it had been operational for only a single day. During the following decades, the U.S. Army and the U.S. Air Force developed a variety of large radar systems, but the long-serving BTL gave up military development work in the 1970s. A modern radar developed by the U.S. Navy that should be noted is the AN/SPY-1. First fielded in 1973, this S-Band, 6 MW system has gone through a number of variants and is a major component of the Aegis Combat System. An automatic detect-and-track system, it is computer controlled using four complementary three-dimensional passive electronically scanned array antennas to provide hemispherical coverage. Radar signals, traveling with line-of-sight propagation, normally have a range to ground targets limited by the visible horizon, or less than about 10 miles (16 km). Airborne targets can be detected by ground-level radars at greater ranges, but, at best, several hundred miles. Since the beginning of radio, it had been known that signals of appropriate frequencies (3 to 30 MHz) could be “bounced” from the ionosphere and received at considerable distances. As long-range bombers and missiles came into being, there was a need to have radars give early warnings at great ranges. In the early 1950s, a team at the Naval Research Laboratory came up with the Over-the-Horizon (OTH) radar for this purpose. To distinguish targets from other reflections, it was necessary to use a phase-Doppler system. Very sensitive receivers with low-noise amplifiers had to be developed. Since the signal going to the target and returning had a propagation loss proportional to the range raised to the fourth power, a powerful transmitter and large antennas were required. A digital computer with considerable capability (new at that time) was necessary for analyzing the data. In 1950, their first experimental system was able to detect rocket launches 600 miles (970 km) away at Cape Canaveral, and the cloud from a nuclear explosion in Nevada 1,700 miles (2,700 km) distant. In the early 1970s, a joint American-British project, code named Cobra Mist, used a 10-MW OTH radar at Orfordness (the birthplace of British radar), England, in an attempt to detect aircraft and missile launchings over the Western USSR. Because of US-USSR ABM agreements, this was abandoned within two years. In the same time period, the Soviets were developing a similar system; this successfully detected a missile launch at 2,500 km (1,600 mi).
By 1976, this had matured into an operational system named Duga (“Arc” in English), but known to western intelligence as Steel Yard and called Woodpecker by radio amateurs and others who suffered from its interference – the transmitter was estimated to have a power of 10 MW. Australia, Canada, and France also developed OTH radar systems. With the advent of satellites with early-warning capabilities, the military lost most of its interest in OTH radars. However, in recent years, this technology has been reactivated for detecting and tracking ocean shipping in applications such as maritime reconnaissance and drug enforcement. Systems using an alternate technology have also been developed for over-the-horizon detection. Due to diffraction, electromagnetic surface waves are scattered to the rear of objects, and these signals can be detected in a direction opposite from high-powered transmissions. Called OTH-SW (SW for Surface Wave), such systems are used by Russia to monitor the Sea of Japan, and by Canada for coastal surveillance. Civil Aviation Radars The post-war years saw the beginning of a revolutionary development in Air Traffic Control (ATC) – the introduction of radar. In 1946, the Civil Aeronautics Administration (CAA) unveiled an experimental radar-equipped tower for control of civil flights. By 1952, the CAA had begun its first routine use of radar for approach and departure control. Four years later, it placed a large order for long-range radars for use in en route ATC; these had the capability, at higher altitudes, to see aircraft within 200 nautical miles (370 km). In 1960, aircraft flying in certain areas were required to carry a radar transponder that identified the aircraft and helped improve radar performance. Since 1966, the responsible agency has been called the Federal Aviation Administration (FAA). A Terminal Radar Approach Control (TRACON) is an ATC facility usually located within the vicinity of a large airport. In the US Air Force it is known as RAPCON (Radar Approach Control), and in the US Navy as a RATCF (Radar Air Traffic Control Facility). Typically, the TRACON controls aircraft within a 30 to 50 nautical mile (56 to 93 km) radius of the airport at altitudes between 10,000 and 15,000 feet (3,000 and 4,600 m). This uses one or more Airport Surveillance Radars (ASR-7, 8, & 9), sweeping the sky once every few seconds. The Digital Airport Surveillance Radar (DASR) is a newer TRACON radar system, replacing the old analog systems with digital technology. The civilian nomenclature for this radar is the ASR-11, and AN/GPN-30 is used by the military. Two radar systems are included. The primary is an S-Band (~2.8 GHz) system with 25 kW pulse power. It provides 3-D tracking of target aircraft and also measures rainfall intensity. The secondary is an L-Band (~1.05 GHz) system with a peak power of about 25 kW. It uses a transponder set to interrogate aircraft and receive operational data. The antennas for both systems rotate atop a tall tower. Weather radar During World War II, military radar operators noticed noise in returned echoes due to weather elements like rain, snow, and sleet. Just after the war, military scientists returned to civilian life or continued in the Armed Forces and pursued their work in developing a use for those echoes. In the United States, David Atlas, for the Air Force group at first, and later for MIT, developed the first operational weather radars. In Canada, J.S. Marshall and R.H.
Douglas formed the "Stormy Weather Group" in Montreal. Marshall and his doctoral student Walter Palmer are well known for their work on the drop size distribution in mid-latitude rain that led to an understanding of the Z-R relation, which correlates a given radar reflectivity with the rate at which water is falling on the ground. In the United Kingdom, research continued to study the radar echo patterns and weather elements such as stratiform rain and convective clouds, and experiments were done to evaluate the potential of different wavelengths from 1 to 10 centimetres. Between 1950 and 1980, reflectivity radars, which measure position and intensity of precipitation, were built by weather services around the world. In the United States, the U.S. Weather Bureau, established in 1870 with the specific mission of providing meteorological observations and giving notice of approaching storms, developed the WSR-1 (Weather Surveillance Radar-1), one of the first weather radars. This was a modified version of the AN/APS-2F radar, which the Weather Bureau acquired from the Navy. The WSR-1A, WSR-3, and WSR-4 were also variants of this radar. This was followed by the WSR-57 (Weather Surveillance Radar – 1957), the first weather radar designed specifically for a national warning network. Using WWII technology based on vacuum tubes, it gave only coarse reflectivity data and no velocity information. Operating at 2.89 GHz (S-Band), it had a peak power of 410 kW and a maximum range of about 580 mi (930 km). AN/FPS-41 was the military designation for the WSR-57. The early meteorologists had to watch a cathode-ray tube. During the 1970s, radars began to be standardized and organized into larger networks. The next significant change in the United States was the WSR-74 series, beginning operations in 1974. There were two types: the WSR-74S, for replacements and filling gaps in the WSR-57 national network, and the WSR-74C, primarily for local use. Both were transistor-based, and their primary technical difference was indicated by the letter: S band (better suited for long range) and C band, respectively. Until the 1990s, 128 radars of the WSR-57 and WSR-74 models were spread across that country. The first devices to capture radar images were developed during the same period. The number of scanned angles was increased to get a three-dimensional view of the precipitation, so that horizontal cross-sections (CAPPI) and vertical ones could be produced. Studies of the organization of thunderstorms were then possible for the Alberta Hail Project in Canada and the National Severe Storms Laboratory (NSSL) in the US in particular. The NSSL, created in 1964, began experimentation on dual polarization signals and on uses of the Doppler effect. In May 1973, a tornado devastated Union City, Oklahoma, just west of Oklahoma City. For the first time, a Dopplerized 10-cm wavelength radar from NSSL documented the entire life cycle of the tornado. The researchers discovered a mesoscale rotation in the cloud aloft before the tornado touched the ground: the tornadic vortex signature. NSSL's research helped convince the National Weather Service that Doppler radar was a crucial forecasting tool. Between 1980 and 2000, weather radar networks became the norm in North America, Europe, Japan and other developed countries. Conventional radars were replaced by Doppler radars, which, in addition to the position and intensity of precipitation, could track the relative velocity of the particles in the air.
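The Z-R relation mentioned above is what allows such reflectivity measurements to be reported as rainfall rates. The snippet below inverts the classic Marshall-Palmer power law Z = 200 R^1.6 to convert reflectivity in dBZ to an approximate rain rate; the coefficients are the commonly quoted textbook values, and operational networks tune them to local climatology, so treat this purely as an illustration.

```python
def rain_rate_mm_per_h(dbz, a=200.0, b=1.6):
    """Invert a Z-R power law Z = a * R**b.

    dbz  : reflectivity in dBZ
    a, b : Z-R coefficients (defaults are the often-quoted Marshall-Palmer values)
    """
    z_linear = 10.0 ** (dbz / 10.0)      # reflectivity factor in mm^6 / m^3
    return (z_linear / a) ** (1.0 / b)   # rain rate in mm / h

# Example: light rain around 25 dBZ, heavy rain around 50 dBZ
for dbz in (25, 40, 50):
    print(f"{dbz} dBZ -> {rain_rate_mm_per_h(dbz):.1f} mm/h")
```

With these coefficients, 25 dBZ works out to roughly 1 mm/h of light rain, while 50 dBZ corresponds to roughly 50 mm/h, a heavy downpour.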
In the United States, the construction of a network consisting of 10 cm (4 in) wavelength radars, called NEXRAD or WSR-88D (Weather Service Radar 1988 Doppler), was started in 1988 following NSSL's research. In Canada, Environment Canada constructed the King City station, with a five centimeter research Doppler radar, by 1985; McGill University dopplerized its radar (J. S. Marshall Radar Observatory) in 1993. This led to a complete Canadian Doppler network between 1998 and 2004. France and other European countries switched to Doppler networks by the end of the 1990s to early 2000s. Meanwhile, rapid advances in computer technology led to algorithms to detect signs of severe weather and a plethora of "products" for media outlets and researchers. After 2000, research on dual polarization technology moved into operational use, increasing the amount of information available on precipitation type (e.g. rain vs. snow). "Dual polarization" means that microwave radiation which is polarized both horizontally and vertically (with respect to the ground) is emitted. Wide-scale deployment is expected by the end of the decade in some countries such as the United States, France, and Canada. Since 2003, the U.S. National Oceanic and Atmospheric Administration has been experimenting with phased-array radar as a replacement for conventional parabolic antennas to provide more time resolution in atmospheric sounding. This would be very important in severe thunderstorms, as their evolution can be better evaluated with more timely data. Also in 2003, the National Science Foundation established the Engineering Research Center for Collaborative Adaptive Sensing of the Atmosphere, "CASA", a multidisciplinary, multi-university collaboration of engineers, computer scientists, meteorologists, and sociologists to conduct fundamental research, develop enabling technology, and deploy prototype engineering systems designed to augment existing radar systems by sampling the generally undersampled lower troposphere with inexpensive, fast-scanning, dual polarization, mechanically scanned and phased array radars. Mapping Radar The plan position indicator, dating from the early days of radar and still the most common type of display, provides a map of the targets surrounding the radar location. If the radar antenna on an aircraft is aimed downward, a map of the terrain is generated, and the larger the antenna, the greater the image resolution. After centimeter radar came into being, downward-looking radars – the H2S (S-Band) and H2X (X-Band) – provided real-time maps used by the U.S. and Great Britain in bombing runs over Europe at night and through dense clouds. In 1951, Carl Wiley led a team at Goodyear Aircraft Corporation (later Goodyear Aerospace) in developing a technique for greatly expanding and improving the resolution of radar-generated images. In this technique, called synthetic aperture radar (SAR), an ordinary-sized antenna fixed to the side of an aircraft is used with highly complex signal processing to give an image that would otherwise require a much larger, scanning antenna; hence the name synthetic aperture. As each pulse is emitted, it is radiated over a lateral band onto the terrain. The return is spread in time, due to reflections from features at different distances. Motion of the vehicle along the flight path gives the horizontal increments. The amplitude and phase of returns are combined by the signal processor using Fourier transform techniques in forming the image.
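A minimal one-dimensional sketch of this processing, under illustrative assumptions, follows: the azimuth phase history of a single point target is simulated and then compressed by correlating it, via FFTs, with the phase history expected at that range. A real SAR processor also performs range compression and motion compensation, and repeats this operation at every range gate; none of the numbers here correspond to an actual system.

```python
import numpy as np

# Illustrative geometry (not taken from any particular SAR system)
wavelength = 0.03     # 3 cm carrier, in meters
velocity = 150.0      # platform speed, m/s
r0 = 5000.0           # closest-approach range of the point target, m
prf = 1500.0          # pulses per second
x_target = 30.0       # target's along-track position, m

t = (np.arange(2048) - 1024) / prf       # slow time, s
x = velocity * t                          # antenna along-track position, m

# Azimuth phase history of the point target: the round-trip range changes
# as the platform flies past, producing a chirp-like signal.
r = np.sqrt(r0**2 + (x - x_target)**2)
signal = np.exp(-1j * 4 * np.pi * r / wavelength)

# Reference (matched filter): the phase history expected for a target at
# along-track position zero and the same range r0.
r_ref = np.sqrt(r0**2 + x**2)
reference = np.exp(-1j * 4 * np.pi * r_ref / wavelength)

# Azimuth compression by circular cross-correlation via FFTs.
compressed = np.fft.ifft(np.fft.fft(signal) * np.conj(np.fft.fft(reference)))
lag = np.argmax(np.abs(compressed))
if lag > len(compressed) // 2:           # interpret wrap-around as a negative lag
    lag -= len(compressed)
print("Estimated along-track position: %.1f m" % (lag * velocity / prf))
```

The sharp correlation peak locates the target along the flight path far more finely than the physical beamwidth would allow, which is the essence of the synthetic aperture.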
The overall technique is closely akin to optical holography. Through the years, many variations of the SAR have been made with diversified applications resulting. In initial systems, the signal processing was too complex for on-board operation; the signals were recorded and processed later. Processors using optical techniques were then tried for generating real-time images, but advances in high-speed electronics now allow on-board processing for most applications. Early systems gave a resolution in tens of meters, but more recent airborne systems provide resolutions to about 10 cm. Current ultra-wideband systems have resolutions of a few millimeters. Other radars and applications There are many other post-war radar systems and applications. Only a few will be noted. Radar gun The most widespread radar device today is undoubtedly the radar gun. This is a small, usually hand-held, Doppler radar that is used to detect the speed of objects, especially trucks and automobiles in regulating traffic, as well as pitched baseballs, runners, or other moving objects in sports. This device can also be used to measure the surface speed of water and continuously manufactured materials. A radar gun does not return information regarding the object's position; it uses the Doppler effect to measure the speed of a target. First developed in 1954, most radar guns operate with very low power in the X or Ku Bands. Some use infrared radiation or laser light; these are usually called LIDAR. A related technology for velocity measurements in flowing liquids or gases is called laser Doppler velocimetry; this technology dates from the mid-1960s. Impulse Radar As pulsed radars were initially being developed, the use of very narrow pulses was examined. The pulse length governs the accuracy of distance measurement by radar – the shorter the pulse, the greater the precision. Also, for a given average power and pulse repetition frequency (PRF), a shorter pulse must have a higher peak power. Harmonic analysis shows that the narrower the pulse, the wider the band of frequencies that contain the energy, leading to such systems also being called wide-band radars. In the early days, the electronics for generating and receiving these pulses were not available; thus, essentially no applications of this were initially made. By the 1970s, advances in electronics led to renewed interest in what was often called short-pulse radar. With further advances, it became practical to generate pulses having a width on the same order as the period of the RF carrier (T = 1/f). This is now generally called impulse radar. The first significant application of this technology was in ground-penetrating radar (GPR). Developed in the 1970s, GPR is now used for structural foundation analysis, archeological mapping, treasure hunting, unexploded ordnance identification, and other shallow investigations. This is possible because impulse radar can precisely locate the boundaries between the general media (the soil) and the desired target. The results, however, are non-unique and are highly dependent upon the skill of the operator and the subsequent interpretation of the data. In dry or otherwise favorable soil and rock, penetration up to 300 feet (91 m) is often possible. For distance measurements at these short ranges, the transmitted pulse is usually only one radio-frequency cycle in duration; with a 100 MHz carrier and a PRF of 10 kHz (typical parameters), the pulse duration is only 10 ns (nanoseconds), leading to the "impulse" designation.
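To make the pulse-length argument concrete, the short calculation below relates pulse duration to the best-case range resolution, c·τ/2 (the factor of two accounting for the round trip). In soil the wave slows by roughly the square root of the relative permittivity; the permittivity value used here is only a rough illustrative assumption.

```python
SPEED_OF_LIGHT = 3.0e8  # m/s in air/vacuum (approximate)

def range_resolution(pulse_width_s, relative_permittivity=1.0):
    """Best-case range resolution c*tau/2, slowed by sqrt(eps_r) in a medium."""
    velocity = SPEED_OF_LIGHT / relative_permittivity ** 0.5
    return velocity * pulse_width_s / 2.0

tau = 10e-9  # one cycle of a 100 MHz carrier
print("In air : %.2f m" % range_resolution(tau))        # about 1.5 m
print("In soil: %.2f m" % range_resolution(tau, 9.0))   # eps_r ~ 9 is a rough guess for moist soil
```

The 10 ns pulse quoted above therefore resolves boundaries to roughly a meter and a half in air, and finer in the slower-propagating ground, which is what makes such short pulses attractive for GPR.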
A variety of GPR systems are commercially available in back-pack and wheeled-cart versions with pulse-power up to a kilowatt. With continued development of electronics, systems with pulse durations measured in picoseconds became possible. Applications are as varied as security and motion sensors, building stud-finders, collision-warning devices, and cardiac-dynamics monitors. Some of these devices are match-box sized, including a long-life power source. Radar astronomy As radar was being developed, astronomers considered its application in making observations of the Moon and other near-by extraterrestrial objects. In 1944, Zoltán Lajos Bay had this as a major objective as he developed a radar in Hungary. Sadly, his radar telescope was taken away by the conquering Soviet army and had to be rebuilt, thus delaying the experiment. Under Project Diana, conducted by the Army’s Evans Signal Laboratory in New Jersey, a modified SCR-271 radar (the fixed-position version of the SCR-270) operating at 110 MHz with 3 kW peak power was used in receiving echoes from the Moon on January 10, 1946. Zoltán Bay accomplished this on the following February 6. Radio astronomy also had its start following WWII, and many scientists involved in radar development then entered this field. A number of radio observatories were constructed during the following years; however, because of the additional cost and complexity of involving transmitters and associated receiving equipment, very few were dedicated to radar astronomy. In fact, essentially all major radar astronomy activities have been conducted as adjuncts to radio astronomy observatories. The radio telescope at the Arecibo Observatory, opened in 1963, is the largest in the world. Owned by the U.S. National Science Foundation and contractor-operated, it is used primarily for radio astronomy, but equipment is available for radar astronomy. This includes transmitters operating at 47 MHz, 439 MHz, and 2.38 GHz, all with very high pulse power. It has a 305-m (1,000-ft) primary reflector fixed in position; the secondary reflector is on tracks to allow precise pointing to different parts of the sky. Many significant scientific discoveries have been made using the Arecibo radar telescope, including mapping of surface roughness of Mars and observations of Saturn and its largest moon, Titan. In 1989, the observatory radar-imaged an asteroid for the first time in history. Several spacecraft orbiting the Moon, Mercury, Venus, Mars, and Saturn have carried radars for surface mapping; a ground-penetrating radar was carried on the Mars Express mission. Radar systems on a number of aircraft and orbiting spacecraft have mapped the entire Earth for various purposes; on the Shuttle Radar Topography Mission, the entire planet was mapped at a 30-m resolution. The Jodrell Bank Observatory, an operation of the University of Manchester in Great Britain, was originally started by Bernard Lovell to be a radar astronomy facility. It initially used a war-surplus GL-II radar system operating at 71 MHz (4.2 m). The first observations were of ionized trails in the Geminids meteor shower during December 1945. While the facility soon evolved to become the third largest radio observatory in the world, some radar astronomy continued. The largest (250-ft or 76-m in diameter) of their three fully steerable radio telescopes became operational just in time to radar track Sputnik 1, the first artificial satellite, in October 1957.
See also - List of World War II electronic warfare equipment - Secrets of Radar Museum - German inventors and discoverers - "Radar definition in multiple dictionaries". Answers.com. Retrieved 2008-10-09. - Raymond C. Watson, Jr.; Radar Origins Worldwide’’, Trafford Publishing, 2009. - "L'histoire du "radar", les faits". "Le principe fondamental du radar appartient au patrimoine commun des physiciens : ce qui demeure en fin de compte au crédit réel des techniciens se mesure à la réalisation effective de matériels opérationnels" - van Keuren, D.K. (1997). "Science Goes to War: The Radiation Laboratory, Radar, and Their Technological Consequences". Reviews in American History 25: 643–647. doi:10.1353/rah.1997.0150. - Buderi, Robert; The Invention that Changed the World, Simon & Schuster, 1996 - Wald, Matthew L. (June 22, 1997). "Jam Sessions". New York Times. - Marconi, Guglielmo (1922). "Radio Telegraphy". Proc. IRE 10 (4): 215–238. doi:10.1109/JRPROC.1922.219820. - "Development of A Monopulse Radar System", Kirkpatrick, George M., letter to IEEE Transactions on Aerospace and Electronic Systems, vol. 45, no. 2 (April 2009). - Christian Hülsmeyer in Radar World - "The Problem of Increasing Human Energy," by Nikola Tesla, Century Illustrated Magazine, June 1900 - "Tesla's Views on Electricity and the War", Secor, H. Winfield, The Electrical Experimenter, August, 1917 - Brown, Louis; A Radar History of World War II; Inst. of Physics Publishing, 1999, p.43 - Hyland, L.A., A.H. Taylor, and L.C. Young; "System for detecting objects by radio," U.S. Patent No. 1981884, 27 Nov. 1934 - Breit, Gregory, and Merle A. Tuve; "A Radio Method for Estimating the Height of the Conducting Layer," Nature, vol. 116, 1925, p. 116 - Page, Robert Morris; The Origin of Radar, Doubleday, 1962, p. 66. - Coulton, Roger B.; "Radar in the U.S. Army," Proc. IRE, vol. 33, 1947, pp. 740-753 - "Tesla, at 78, Bares New Death Beam: Death-Ray Machine described," New York Sun, July 11, 1934. - Bowen, E. G.; Radar Days, Inst. of Physics Publishing, 1987, p. 16 - Latham, Colin, and Anne Stobbs (2011). The Birth of British Radar: The memoirs of Arnold 'Skip' Wilkins, Second Edition, Radio Society of Great Britain, ISBN 9781-9050-8675-7 - Butement, W. A. S., and P. E. Pollard; “Coastal Defense Apparatus,” Recorded in the Inventions Book of the Royal Engineers Board, Jan. 1931 - Coales, J. F., and J. D. S. Rawlinson; “The Development of Naval Radar 1935-1945,” J. Naval Science, vol. 13, nos. 2-3, 1987. - Kummritz, Herbert; “On the Development of Radar Technologies in Germany up to 1945,” in Tracking the History of Radar, ed. by Oskar Blumtritt et al, IEEE-Rutgers, 1994 - Kroge, Harry von; GEMA: Birthplace of German Radar and Sonar, translated by Louis Brown, Inst. of Physics Publishing, 2000 - “Telefunken firm in Berlin reveals details of a 'mystery ray' system capable of locating position of aircraft through fog, smoke and clouds.” Electronics, September 1935 - Runge. W.; “A personal reminiscence,” in Radar Development to 1945, edited by Russell Burns, Peter Peregrinus Ltd, 1988, p.227 - Erickson, J.; “The air defense problem and the Soviet radar programme 1934/35-1945,” in Radar Development to 1945, ed. by Russell Burns, Peter Peregrinus Ltd., 1988, pp. 227-234 - Ioffe, A. F.; “Contemporary problems of the development of the technology of air defense,” Sbornik PVO, February 1934 (in Russian) - Shembel, B. K.; At the Origin of Radar in USSR, Sovetskoye Radio, 1977 (in Russian) - Slutzkin, A. A., and D. S. 
Shteinberg, "Die Erzeugung von kurzwelligen ungedämpften Schwingungen bei Anwendung des Magnetfeldes" ["The generation of undamped shortwave oscillations by application of a magnetic field"], Annalen der Physik, vol. 393, no. 5, pages 658-670 (May 1929) - Siddiqi, Asif A.; “Rockets Red Glare: Technology, Conflict, and Terror in the Soviet Union”; Technology & Culture, vol. 44, 2003, p.470 - Lobanov, M. M.; The Beginning of Soviet Radar, Sovetskoye Radio, 1975 (in Russian) - Watson, Raymond C. (2009).Radar Origins Worldwide. Trafford Publishing, p. 306. ISBN 1-4269-2110-1 - Kostenko, Alexei A., Alexander I, Nosich, and Irina A. Tishchenko; “Development of the First Soviet Three-Coordinate L-Band Pulsed Radar in Kharkov Before WWII,” IEEE Antennas and Propagation Magazine, vol. 43, June, 2001, pp. 29-48; http://www.ire.kharkov.ua/apm2001-radar.pdf - Chernyak, V. S., I. Ya. Immoreev, and B. M. Vovshin; “Radar in the Soviet Union and Russia: A Brief Historical Outline,” IEEE AES Magazine, vol. 19, December, 2003, p. 8 - Yagi, H., “Beam Transmission of Ultra Short Waves,” Proc. of the IRE, vol. 16, June, 1928 - Nakajima, S., "The history of Japanese radar development to 1945," in Russell Burns, Radar Development to 1945, Peter Peregrinus Ltd, 1988 - Wilkinson, Roger I.; “Short survey of Japanese radar – Part I,” Trans. AIEE, vol. 65, 1946, p. 370 - Nakajima, S.; “Japanese radar development prior to 1945,”IEEE Antennas and Propagation Magazine, vol. 34, Dec., 1992, pp. 17-22 - Le Pair, C. (Kees); “Radar in the Dutch Knowledge Network,” Telecommunication and Radar Conference, EUMW98, Amsterdam, 1998; http://www.clepair.net/radar-web.htm - Posthumus, K; "Oscillations in a Split-Anode Magnetron, Mechanism of Generation," Wireless Engineer, vol. 12, 1935, pp. 126-13 - Staal, M., and J.L.C. Weiller; “Radar Development in the Netherlands before the war,” in Radar Development to 1945, ed. by Russell Burns, Peter Peregrinus, 1988, pp. 235-237 - ”Measurements Building” http://www.museumwaalsdorp.nl/en/meetgben.html - Swords, S. S.; Technical history of the beginnings of radar, Peter Peregrinus Ltd, 1986, pp. 142-144 - French patent (no. 788.795, "New system of location of obstacles and its applications") - Molyneux-Berry, R. B.; “Henri Gutton, French radar pioneer,” in Radar Development to 1945, ed. by Russell Burns, Peter Peregrinus, 1988, pp. 45-52 - "System for Object Detection and Distance Measurement" http://www.freepatentsonline.com/2433838.html - David, Pierre; Le Radar (The Radar), Presses Universitaires de France, 1949 (in French) - Megaw, Eric C. S.; “The High-Power Magnetron: A Review of Early Developments,” J. of the IEE, vol. 93, 1946, p. 928, see copy at http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5299357 - Paul A. Redhead, The invention of the cavity magnetron and its introduction into Canada and the U.S.A., PHYSICS IN CANADA, November/December 2001, http://www.cap.ca/wyp/profiles/Redhead-Nov01.PDF - Calamia, M., and R. Palandri; “The History of the Italian Radio Detector Telemetro,” in Radar Development to 1945, ed. by Russell Burns, Peter Peregrinus, 1988, pp. 97-105 - Carrara, N.; “The detection of microwaves,” Proc. of the IRE, vol. 20, Oct. 1932, pp. 1615-1625 - Tiberio, U.; “Some historical data concerning the first Italian naval radar,” IEEE Trans. AES, vol. 15, Sept., 1979, p. 733 - Sinnott, D. H.; “Radar Development in Australia: 1939 to Present,” Proc. of IEEE 2005 International Radar Conference, 9–12 May, pp. 
5-9 - Moorcroft, Don; “Origins of Radar-based Research in Canada,” Univ. Western Ontario, 2002; http://www.physics.uwo.ca/~drm/history/radar/radar_history.html - Unwin, R. S.; “Development of Radar in New Zealand in World War II,” IEEE Antennas and Propagation Magazine, vol. 34, June, pp.31-33, 1992 - Hewitt, F. J.; “South Africa’s Role in the Development and Use of Radar in World War II,” Military History Journal, vol. 3, no, 3, June, 1975; http://samilitaryhistory.org/vol033fh.html - Renner, Peter; “The Role of the Hungarian Engineers in the Development of Radar Systems,” Periodica Polytechnica Ser. Soc. Man. Sci, Vol. 12, p. 277, 2004; http://www.pp.bme.hu/so/2004_2/pdf/so2004_2_12.pdf - Barlow, E. J.; “Doppler Radar,” Proc. IRE, vol. 37, pp. 340-355, April, 1949 - Page, R. M.; “Monopulse Radar,” op. cet. - Von Aulock, W. H.; “Properties of Phased Arrays,” Proc. IRE, vol. 48, pp. 1715-1727, Oct., 1960 - ”Airborne Synthetic Aperture Radar”; http://airsar.jpl.nasa.gov/ - ”ABM Research and Development at Bell Laboratories,” http://srmsc.org/pdf/004438p0.pdf - ”Cobra Mist”; http://www.users.zetnet.co.uk/Ray.Flint/cobra/cm.htm - ”Mystery Signals Of The Short Wave,” Wireless World, Feb. 1977; http://www.brogers.dsl.pipex.com/Wpecker2.htm - ”Airport Surveillance Radars”; http://www.faa.gov/air_traffic/technology/asr-11/ - David Atlas, "Radar in Meteorology", published by American Meteorological Society - "Stormy Weather Group". McGill University. (2000). Retrieved 2006-05-21. - Whiton, Roger C., et al. "History of Operational Use of Weather Radar by U.S. Weather Services. Part I: The Pre-NEXRAD Era"; Weather and Forecasting, vol. 13, no. 2, pp. 219–243, 19 Feb. 1998; http://ams.allenpress.com/amsonline/?request=get-document&doi=10.1175%2F1520-0434(1998)013%3C0219:HOOUOW%3E2.0.CO%3B2 - Susan Cobb (October 29, 2004). "Weather radar development highlight of the National Severe Storms Laboratory first 40 years". NOAA Magazine. NOAA. Retrieved 2009-03-07. - Crozier, C.L.; P.I. Joe, J.W. Scott, H.N. Herscovitch and T.R. Nichols (1990). "The King City Operational Doppler Radar: Development, All-Season Applications and Forecasting (PDF)" (PDF). Canadian Meteorological and Oceanographic Society. Archived from the original on 2006-10-02. Retrieved 2006-05-24. - "Information about Canadian radar network". The National Radar Program. Environment Canada. 2002. Retrieved 2006-06-14. - Parent du Châtelet, Jacques; et al. (2005). "The PANTHERE project and the evolution of the French operational radar network and products: Rain estimation, Doppler winds, and dual polarization" (PDF). Météo-France. 32nd Radar Conference of the AMS, Albuquerque, NM. Retrieved 2006-06-24. - Daniels, Jeffrey J.; “Ground Penetrating Radar Fundamentals”; http://www.clu-in.org/download/char/GPR_ohio_stateBASICS.pdf - ”Micropower Impulse Radar”; https://ipo.llnl.gov/?q=technologies-micropower_impulse_radar - Mofenson, Jack; “Radio Echoes From the Moon,” Electronics, April 1946; http://www.campevans.org/_CE/html/diamof.html - Bay, Z.; "Reflection of microwaves from the moon," Hung. Acta Phys., vol. 1, pp. 1-22, April, 1946. - Lovell, Bernard; Story of Jodrell Bank, Oxford U. Press, 1968 Further reading - Blanchard, Yves, Le radar. 1904-2004 : Histoire d'un siècle d'innovations techniques et opérationnelles, éditions Ellipses,(in French) - Bowen, E. G.; “The development of airborne radar in Great Britain 1935-1945,” in Radar Development to 1945, ed. by Russell Burns; Peter Peregrinus, 1988, ISBM 0-86341-139-8 - Bowen, E. 
G., Radar Days, Institute of Physics Publishing, Bristol, 1987, ISBN 0-7503-0586-X - Bragg, Michael, RDF1 The Location of Aircraft by Radio Methods 1935-1945, Hawkhead Publishing, 1988, ISBN 0-9531544-0-8 - Brown, Jim, Radar - how it all began, Janus Pub., 1996, ISBN 1-85756-212-7 - Brown, Louis, A Radar History of World War 2 - Technical and Military Imperatives, Institute of Physics Publishing, 1999, ISBN 0-7503-0659-9 - Buderi, Robert: The invention that changed the world: the story of radar from war to peace, Simon & Schuster, 1996, ISBN 0-349-11068-9 - Burns, Peter (editor): Radar Development to 1945, Peter Peregrinus Ltd., 1988, ISBN 0-86341-139-8 - Clark, Ronald W., Tizard, MIT Press, 1965, ISBN 0-262-03010-1 (An authorized biography of radar's champion in the 1930s.) - Dummer, G. W. A., Electronic Inventions and Discoveries, Elsevier, 1976, Pergamon, 1977, ISBN 0-802-0982-3 - Erickson, John; “Radio-location and the air defense problem: The design and development of Soviet Radar 1934-40,” Social Studies of Science, vol. 2, p. 241, 1972 - Frank, Sir Charles, Operation Epsilon: The Farm Hall Transcripts, U. Cal. Press, 1993 (How German scientists dealt with Nazism.) - Guerlac, Henry E., Radar in World War II, (in two volumes), Tomash Publishers / Am Inst. of Physics, 1987, ISBN 0-88318-486-9 - Hanbury Brown, Robert, Boffin: A Personal Story of the early Days of Radar and Radio Astronomy and Quantum Optics, Taylor and Francis, 1991, ISBN 0-750-030130-9 - Howse, Derek, Radar At Sea The Royal Navy in World War 2, Naval Institute Press, Annapolis, Maryland, USA, 1993, ISBN 1-55750-704-X - Jones, R. V., Most Secret War, Hamish Hamilton, 1978, ISBN 0-340-24169-1 (Account of British Scientific Intelligence between 1939 and 1945, working to anticipate Germany's radar and other developments.) - Kroge, Harry von, GEMA: Birthplace of German Radar and Sonar, translated by Louis Brown, Inst. of Physics Publishing, 2000, ISBN 0-471-24698-0 - Latham, Colin, and Anne Stobbs, Radar A Wartime Miracle, Sutton Publishing Ltd, 1996, ISBN 0-7509-1643-5 (A history of radar in the UK during WWII told by the men and women who worked on it.) - Latham, Colin, and Anne Stobbs, The Birth of British Radar: The Memoirs of Arnold 'Skip' Wilkins, 2nd Ed., Radio Society of Great Britain, 2006, ISBN 9781-9050-8675-7 - Lovell, Sir Bernard, Echoes of War - The History of H2S, Adam Hilger, 1991, ISBN 0-85274-317-3 - Nakagawa, Yasudo; Japanese Radar and Related Weapons of World War II, translated and edited by Louis Brown, John Bryant, and Naohiko Koizumi, Aegean Park Press, 1997, ISBN 0-89412-271-1 - Pritchard, David, The Radar War Germany's Pioneering Achievement 1904-1945, Patrick Stephens Ltd, Wellingborough, 1989, ISBN 1-85260-246-5 - Rawnsley, C. F., and Robert Wright, Night Fighter, Mass Market Paperback, 1998 - Sayer, A. P., Army Radar - historical monograph, War Office, 1950 - Swords, Seán S., Technical History of the Beginnings of Radar, IEE/Peter Peregrinus, 1986, ISBN 0-86341-043-X - Watson, Raymond C., Jr., Radar Origins Worldwide: History of Its Evolution in 13 Nations Through World War II, Trafford Pub., 2009, ISBN 978-1-4269-2111-7 - Watson-Watt, Sir Robert, The Pulse of Radar, Dial Press, 1959, (no ISBN) (An autobiography of Sir Robert Watson-Watt) - Zimmerman, David, Britain's Shield Radar and the Defeat of the Luftwaffe, Sutton Publishing, 2001, ISBN 0-7509-1799-7
- Barrett, Dick, "All you ever wanted to know about British air defence radar". The Radar Pages. (History and details of various British radar systems) - Bauer, Arthur O.; “Christian Hülsmeyer and about the early days of radar inventions,“ Foundation Centre for German Communications and Related Technologies; http://aobauer.home.xs4all.nl/Huelspart1def.pdf - Clark, Maj. Gregory C., Deflating British Radar Myths of World War II, 1997, http://www.radarpages.co.uk/download/AUACSC0609F97-3.pdf - Hollmann, Martin, "Radar Family Tree". Radar World. - ES310 "Introduction to Naval Weapons Engineering." (Radar fundamentals section) - The first operational radar in France 1934 - Penley, Bill, and Jonathan Penley, "Early Radar History - an Introduction". 2002. - Buderi, Robert, "Telephone History: Radar History". Privateline.com. (Anecdotal account of the carriage of the world's first high power cavity magnetron from Britain to the US during WW2.) - Romano, Salvatore; http://www.regiamarina.net/others/radar/radar_one_us.htm “History of the Development of Radar in Italy,” Regia Marina Italiana [Royal Italian Navy], 2004; - Sinnott, D.H., "The Development of Over-the-Horizon Radar in Australia" - The Secrets of Radar Museum (Canada's involvement in WWII Radar) - The Radar Pages - The Wizard War: WW2 & The Origins Of Radar (From Greg Goebel's In The Public Domain) - German Radar Equipment of World War II - Early radar development in the UK - A History of Radio Location in the USSR (In Russian) - RADAR Milestones: Famous Radar Pioneers and Notable Contributions
In statistics, a contingency table (also referred to as cross tabulation or cross tab) is a type of table in a matrix format that displays the (multivariate) frequency distribution of the variables. The term contingency table was first used by Karl Pearson in "On the Theory of Contingency and Its Relation to Association and Normal Correlation", part of the Drapers' Company Research Memoirs Biometric Series I published in 1904. A crucial problem of multivariate statistics is finding the (direct-)dependence structure underlying the variables contained in high-dimensional contingency tables. If some of the conditional independences are revealed, then even the storage of the data can be done in a smarter way (see Lauritzen (2002)). To do this, one can use information-theoretic concepts, which require only the probability distribution; this is easily obtained from the contingency table via the relative frequencies. Suppose that we have two variables, sex (male or female) and handedness (right- or left-handed). Further suppose that 100 individuals are randomly sampled from a very large population as part of a study of sex differences in handedness. A contingency table can be created to display the numbers of individuals who are male and right-handed, male and left-handed, female and right-handed, and female and left-handed. Such a contingency table is shown below. The numbers of the males, females, and right- and left-handed individuals are called marginal totals. The grand total, i.e., the total number of individuals represented in the contingency table, is the number in the bottom right corner. The table allows us to see at a glance that the proportion of men who are right-handed is about the same as the proportion of women who are right-handed, although the proportions are not identical. The significance of the difference between the two proportions can be assessed with a variety of statistical tests including Pearson's chi-squared test, the G-test, Fisher's exact test, and Barnard's test, provided the entries in the table represent individuals randomly sampled from the population about which we want to draw a conclusion. If the proportions of individuals in the different columns vary significantly between rows (or vice versa), we say that there is a contingency between the two variables. In other words, the two variables are not independent. If there is no contingency, we say that the two variables are independent. The example above is the simplest kind of contingency table, a table in which each variable has only two levels; this is called a 2 × 2 contingency table. In principle, any number of rows and columns may be used. There may also be more than two variables, but higher-order contingency tables are difficult to represent on paper. The relation between ordinal variables, or between ordinal and categorical variables, may also be represented in contingency tables, although such a practice is rare. Measures of association The degree of association between the two variables can be assessed by a number of coefficients: the simplest is the phi coefficient, defined by $\phi = \sqrt{\chi^2/N}$, where $\chi^2$ is derived from Pearson's chi-squared test, and N is the grand total of observations. φ varies from 0 (corresponding to no association between the variables) to 1 or -1 (complete association or complete inverse association). This coefficient can only be calculated for frequency data represented in 2 × 2 tables.
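As a concrete illustration of the quantities just described, the snippet below builds a 2 × 2 table of hypothetical counts (the cell values are made up for illustration, not taken from the study described above), computes the expected counts and Pearson's chi-squared statistic, and then the phi coefficient.

```python
import numpy as np

# Hypothetical 2 x 2 table of counts: rows = sex, columns = handedness.
# The numbers are illustrative only.
observed = np.array([[43, 9],    # male:   right-handed, left-handed
                     [44, 4]])   # female: right-handed, left-handed

row_totals = observed.sum(axis=1, keepdims=True)   # marginal totals by sex
col_totals = observed.sum(axis=0, keepdims=True)   # marginal totals by handedness
grand_total = observed.sum()                        # N = 100 here

# Expected counts under independence, then Pearson's chi-squared statistic
expected = row_totals @ col_totals / grand_total
chi2 = ((observed - expected) ** 2 / expected).sum()

# Phi coefficient for a 2 x 2 table
phi = np.sqrt(chi2 / grand_total)

print(f"chi-squared = {chi2:.3f}, phi = {phi:.3f}")
```

The same observed array can also be passed to scipy.stats.chi2_contingency, which returns the statistic together with a p-value, the degrees of freedom, and the expected counts (note that it applies Yates' continuity correction by default for 2 × 2 tables).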
φ can reach a minimum value of −1.00 and a maximum value of 1.00 only when every marginal proportion is equal to .50 (and two diagonal cells are empty); otherwise, the phi coefficient cannot reach those minimal and maximal values.

Two alternatives are the contingency coefficient C and Cramér's V. C suffers from the disadvantage that it does not reach a maximum of 1 or a minimum of −1; the highest it can reach in a 2 × 2 table is .707, and the maximum it can reach in a 4 × 4 table is 0.870. It can reach values closer to 1 in contingency tables with more categories; it should, therefore, not be used to compare associations among tables with different numbers of categories. Moreover, it does not apply to asymmetrical tables (those where the numbers of rows and columns are not equal). The formulae for the C and V coefficients are

C = √(χ² / (N + χ²)) and V = √(χ² / (N(k − 1))),

k being the number of rows or the number of columns, whichever is less. C can be adjusted so it reaches a maximum of 1 when there is complete association in a table of any number of rows and columns by dividing C by √((k − 1)/k) (recall that C only applies to tables in which the number of rows is equal to the number of columns and is therefore equal to k).

The tetrachoric correlation coefficient assumes that the variable underlying each dichotomous measure is normally distributed. The tetrachoric correlation coefficient provides "a convenient measure of [the Pearson product-moment] correlation when graduated measurements have been reduced to two categories." The tetrachoric correlation should not be confused with the Pearson product-moment correlation coefficient computed by assigning, say, values 0 and 1 to represent the two levels of each variable (which is mathematically equivalent to the phi coefficient). An extension of the tetrachoric correlation to tables involving variables with more than two levels is the polychoric correlation coefficient.

The lambda coefficient is a measure of the strength of association of the cross tabulations when the variables are measured at the nominal level. Values range from 0 (no association) to 1 (the theoretical maximum possible association). Asymmetric lambda measures the percentage improvement in predicting the dependent variable; symmetric lambda measures the percentage improvement when prediction is done in both directions.

The uncertainty coefficient is another measure for variables at the nominal level. Its values range from −1 (100% negative association, or perfect inversion) to +1 (100% positive association, or perfect agreement). A value of zero indicates the absence of association.

- Gamma test: no adjustment for either table size or ties.
- Kendall tau: adjustment for ties.

See also
- The pivot operation in spreadsheet software can be used to generate a contingency table from sampling data.
- TPL Tables is a tool for generating and printing cross tabs.
- The iterative proportional fitting procedure essentially manipulates contingency tables to match altered joint distributions or marginal sums.
- The multivariate statistics in special multivariate discrete probability distributions. Some procedures used in this context can be used in dealing with contingency tables.

- Ferguson, G. A. (1966). Statistical Analysis in Psychology and Education. New York: McGraw-Hill.
- Smith, S. C., & Albaum, G. S. (2004). Fundamentals of Marketing Research. Thousand Oaks, CA: Sage. p. 631.
- Ferguson, p. 244.
- Andersen, Erling B. (1980). Discrete Statistical Models with Social Science Applications. North Holland.
- Bishop, Y. M. M.; Fienberg, S. E.; Holland, P. W. (1975). Discrete Multivariate Analysis: Theory and Practice. MIT Press. ISBN 978-0-262-02113-5. MR 381130.
- Christensen, Ronald (1997). Log-linear Models and Logistic Regression. Springer Texts in Statistics (2nd ed.). New York: Springer-Verlag. pp. xvi+483. ISBN 0-387-98247-7. MR 1633357.
- Lauritzen, Steffen L. (2002). Lectures on Contingency Tables (updated electronic version of the 3rd (1989) University of Aalborg ed.; earlier editions 1979, 1982, 1989).
- Gokhale, D. V.; Kullback, Solomon (1978). The Information in Contingency Tables. Marcel Dekker. ISBN 0-824-76698-9.
http://en.wikipedia.org/wiki/Contingency_table
13
87
- ACCELERATION AND GRAVITY
- FRICTION AND RESISTANCE
- SIMPLE MACHINES
- PHYSICS CSI
- AMUSEMENT PARK PHYSICS

Graph Matching (Physics) (40 min.) Students will analyze the motion of a student walking across the room using the Motion Detector. They will also predict, sketch and test position and velocity vs. time kinematics graphs.

Back and Forth Motion (Physics) (40 min.) The Motion Detector is used in this lab to qualitatively analyze the motion of objects that move back and forth. Comparisons are made to catalog objects that exhibit similar motion. Objects analyzed include pendulums, dynamics carts, students jumping, springs, and bouncing balls.

Projectile Motion (Physics) (40 min.) Measure the velocity of a rolling ball using photogates and computer software and then apply concepts from physics to predict the impact point of the ball in projectile motion.

Newton's Second Law (Physics) (40 min.) A Force Sensor and Accelerometer will let students measure the force on a cart simultaneously with the cart's acceleration. Students will compare graphs of force vs. time and acceleration vs. time. They will analyze a graph of force vs. acceleration to determine the relationship between force, mass and acceleration.

Newton's Second Law (Physical Science) (40 min.) In this experiment, students will use a computer-interfaced Motion Detector to determine acceleration and draw conclusions about the relationship between mass and acceleration.

Newton's Third Law (Physics) (40 min.) Observe the directional relationship and time variation between force pairs using two force sensors. After collecting data and analyzing the results, students will explain Newton's third law in simple language.

Simple Harmonic Motion (Physics) (40 min.) Using the Motion Detector, students will measure the position and velocity of an oscillating mass and spring system as a function of time. They will then compare the observed motion to a mathematical model of simple harmonic motion.

Pendulum Periods (Physics) (40 min.) This simple experiment measures the period of a pendulum as a function of amplitude, length, and bob mass using the Photogate.

Modern Galileo Experiment (Physics) (40 min.) Determine if Galileo's assumption of uniform acceleration is valid based on the use of a Motion Detector to measure the speed of a ball down an incline.

Determining g on an Incline (Physics) (40 min.) Students will use the Motion Detector to measure acceleration and determine the mathematical relationship between the angle of an inclined plane and the acceleration of a ball rolling down the ramp. They will also use extrapolation to determine the value of free-fall acceleration and decide whether this extrapolation is valid. Students can also compare the results for a ball with the results for a low-friction dynamics cart. (A brief analysis sketch appears after these lab listings.)

Picket Fence Free Fall (Physics) (40 min.) This is a very straightforward lab in which students will measure the acceleration of a freely falling body (g) to better than 0.5% precision using a Picket Fence and a Photogate with the Vernier software.

Ball Toss (Physics) (40 min.) Predictions for the graphs of position, acceleration and velocity vs. time for a tossed ball will be made, and then students will collect data and analyze the graphs of position, acceleration, and velocity vs. time. The best-fit line will be determined for the position and velocity vs. time graphs, while mean acceleration will be calculated from the acceleration vs. time graph.

Ball Toss with Video Analysis (Physics) (40-60 min.)
Same as above with a video component: Logger Pro software can insert videos of your experiment into files. You can then synchronize the video of the ball's motion to the graphs produced. When you replay the experiment, the data and video can all be viewed together. You will use a digital camera to capture the motion as the ball is tossed. How will the velocity vs. time and acceleration vs. time graphs look as the motion of the ball changes?

Bungee Jump Accelerations (Physics) (40 min.) In this experiment, students will investigate the accelerations that occur during a bungee jump. An Accelerometer will be used to analyze the motion of a toy bungee jumper and determine where acceleration is at a maximum and a minimum.

Atwood's Machine (Physics) (40 min.) This lab takes a look at a classic experiment in physics. Using a Photogate, students will measure acceleration and determine the relationships between the masses on an Atwood's machine and the acceleration.

Static and Kinetic Friction (Physics) (40 min.) Students will be able to measure the force of static friction using a Dual-Range Force Sensor and will determine the relationship between the force of static friction and the weight of an object. They will also use a Motion Detector to determine whether the coefficient of kinetic friction depends on weight.

Air Resistance (Physics) (40 min.) Using the Motion Detector, students will observe the effect of air resistance on falling coffee filters and determine how the terminal velocity of a falling object is affected by air resistance and mass. They will then choose between two competing force models.

Energy of a Tossed Ball (Physics) (40 min.) Using a ball and a Motion Detector, students will see how the total energy of the ball changes during free fall by measuring the changes in its kinetic and potential energies.

Energy in Simple Harmonic Motion (Physics) (40 min.) In this lab activity, slotted masses and springs are used in coordination with a Motion Detector to examine the energies involved in simple harmonic motion and to test the principle of conservation of energy.

Energy in Simple Harmonic Motion with Video Analysis (Physics) (40 min.) Same as above with a video component: Logger Pro software can insert videos of your experiment into files. You can then synchronize the video of the motion to the graphs produced. When you replay the experiment, the data and video can all be viewed together. You will use a digital camera to capture the motion of the oscillating mass.

Work and Energy (Physics) (40 min.) A Motion Detector and Force Sensor will be used to measure position and force and to determine the work done on an object. Students will also measure velocity and calculate kinetic energy. Lastly, they will be able to compare the work done on a cart to its change in mechanical energy.

Momentum, Energy and Collisions (Physics) (40 min.) Students will use the dynamics cart track to observe collisions between two carts, testing for conservation of momentum. They will also measure energy changes during different types of collisions and classify collisions as elastic, inelastic, or completely inelastic.

Underfoot Pressure (40 min.) Students will use various forms of technology to obtain data on foot pressure, foot area, and force. These include a Vernier force plate, forensic developing paper and ink, and a Novel pressure platform.

Sound Waves and Beats (Physics) (40 min.)
Measure the frequency, period and amplitude of sound waves from tuning forks and observe beats between the sounds of two tuning forks.

Tones, Vowels and Telephones (Physics) (40 min.) Use our microphones to analyze the frequency components of tuning forks and of the human voice. You can also record the overtones produced by the tuning forks and examine how a touch-tone phone works with regard to predominant frequencies.

Speed of Sound (Physics) (40 min.) Students will measure how long it takes sound waves to travel down a long tube in order to determine the speed of sound and compare the speed in air to the accepted value.

Polarization of Light (Physics) (40 min.) Measure the transmission of light through two polarizing filters as a function of the angle between their axes and compare it to Malus's Law.

Light, Brightness and Distance (Physics) (40 min.) Determine the mathematical relationship between the intensity of a light source and the distance from the light source.

Reflectivity of Light (Physical Science) (40 min.) Use a computer-interfaced light sensor to measure reflected light and calculate the percent reflectivity of various colors.

Polaroid Filters (Physical Science) (40 min.) Use a computer-interfaced light sensor to measure the intensity of transmitted light and study the transmission of light by Polaroid filters.

Emission Spectra (40 min.) In this experiment, students use a Vernier Spectrometer (SpectroVis) to measure the emission spectrum of helium, hydrogen, krypton and neon spectral tubes.

Transmittance of Theatrical Lighting Filters (40 min.) In this experiment, students use a Vernier Spectrometer (SpectroVis) to measure and analyze the visible light transmittance spectrum of various samples of theatrical lighting filters. Students will compare and contrast the spectra of the lighting filters with the published information.

Ohm's Law (Physics) (40 min.) Students will determine the mathematical relationship between current, potential difference and resistance in a simple circuit. They will also compare the potential vs. current behavior of a resistor to that of a light bulb.

First Class Levers, Pulleys, Inclined Planes (Physical Science) (40 min.) Use a computer to measure resistance force and effort force, and use this information to calculate the mechanical advantage of each lever. Use a computer-interfaced Force Sensor to measure the force of single and double pulley systems, calculate the mechanical advantage, and determine the efficiency. Measure the force needed to lift an object and the force needed to pull the same object up an inclined plane using a computer-interfaced Force Sensor; calculate and compare the work done and the efficiency.

Tension and the Isosceles Triangle (40 min.) Students will collect force data for a hanging mass on a string using force sensors to analyze the concept of tension and to study vector forces in a static situation.

Starry Night High School (40 min. lessons included) Starry Night High School makes it easy to teach astronomy with a comprehensive space science curriculum solution written for teachers by teachers. It offers innovative lesson plans correlated to 9th through 12th grade standards, hands-on activities, software-guided explorations, DVD movie content and assessment tests. Starry Night computer exercises, hands-on activities and thought-provoking discussion questions encourage students to explore advanced topics such as the life cycles of stars.

The Case of the Clumsy Construction Worker (40 min.)
Solve the case involving a toolbox accident by using a motion detector to obtain velocity vs. time graphs for the simulated scene. Use graphical analysis to determine acceleration from the graphs. Examine how a lab model simulates a real-life situation and apply the principles of projectile motion to solve the case.

Labs developed specifically for use with Vernier equipment at Knoebels Amusement Park:
- Topics include acceleration, potential and kinetic energy, and conservation of energy.
- Topics include centripetal acceleration and barometric pressure vs. elevation.
- Topics include centripetal acceleration, vertical acceleration, and graphical analysis.
- The Bumper Cars: topics include electrical work, efficiency, and elastic and inelastic collisions.
- The Log Flume: topics include potential and kinetic energy and deceleration.
- The Italian Trapeze: topics include angular speed, period of rotation, tension, and centripetal force.
- Topics include potential and kinetic energy and deceleration.
- Topics include pulse, respiration, blood pressure, EKG, and symptoms.

Or…customize your own lab using our equipment and expertise!
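As referenced in the "Determining g on an Incline" entry above, here is a brief, hypothetical analysis sketch (ours, not Vernier's). The angle and acceleration values are invented for illustration; for a rolling ball the slope would be reduced by rolling inertia (about 5/7 of g for a solid sphere), which is exactly the comparison the lab invites.

```python
import numpy as np

# Hypothetical incline data: ramp angles (degrees) and measured accelerations (m/s^2).
# These values are invented for illustration, not real lab results.
angles_deg = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
accel = np.array([0.34, 0.69, 1.02, 1.37, 1.70])

# For a low-friction cart, a = g * sin(theta); the slope of a vs. sin(theta)
# therefore estimates g, i.e., the extrapolation to a 90-degree "incline".
x = np.sin(np.radians(angles_deg))
slope, intercept = np.polyfit(x, accel, 1)
print(f"extrapolated free-fall acceleration: {slope:.2f} m/s^2")
```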
http://www.susqu.edu/about/10755.asp
13
106
Like the Bermuda Triangle, GRE triangle questions often bring with them a sense of mystery and uncertainty. However, while the Bermuda Triangle will likely remain a mystery, triangles themselves can be understood. Let's look at a GRE triangle question:

The first thing you want to do on a problem like this is ask yourself: What do I know? We know the lengths of sides AB, BD and BC. We also know that angle BDC is a right (90°) angle. The next thing you should ask yourself is: What does the question ask me to find? The question above asks us to find the perimeter of triangle ABC. The perimeter is the distance all the way around. Finally, you should ask yourself: What do I need to know in order to answer the question? Before we can find the perimeter of triangle ABC, we need to find the missing length, AC. If we can't find AC independently, we can determine the lengths of AD and DC and add those values together to find AC (this is the method we'll use for this problem).

Here is where it gets interesting—and easy. Trust me! Since triangle BDC is a right triangle, we know that BDA is also a right triangle. (Remember the rule: two angles that form a straight line must add up to 180°.) When we have a right triangle and we know the length of any two sides, we can use the Pythagorean Theorem (a² + b² = c², where a and b are the two perpendicular sides and c is the hypotenuse, or longest side) to solve for the length of the missing side. However, just because we can use the Pythagorean Theorem doesn't mean we should. Really, who wants to deal with exponents and square roots if you don't have to?

If you know two magical ratios, you will seldom need the Pythagorean Theorem. Often on GRE triangles, the lengths of the sides of a right triangle will occur in the ratio of 3:4:5 or 5:12:13. It's important to remember that these ratios do not necessarily give the actual lengths of the sides—because the values are ratios, they represent the side lengths pared down to their simplest form. The actual lengths could be 6:8:10 or 10:24:26 or any other multiple of the basic ratios.

Now, when we look at triangle BDA, we see that we have a multiple of 5 for AB, the hypotenuse. That means we could have a 3:4:5 triangle. Dividing 30 by 5 gives a value of 6, which is going to be the number by which the entire ratio has been multiplied; 6 times 3:4:5 yields 18:24:30. Since 24 and 30 are accounted for, the length of AD must be 18. (If multiplying the ratio by 6 did not give 2 of the 3 lengths the test-maker provided, then we would have to use the Pythagorean Theorem after all.)

Let's do the same thing for triangle BDC. The hypotenuse, BC, measures 26, which is a multiple of 13 (13 × 2 = 26). If we multiply the entire 5:12:13 ratio by 2, we get 10:24:26. Since 24 and 26 are accounted for, the length of DC must be 10. (Again, if multiplying the ratio by 2 did not give 2 of the 3 lengths the test-maker provided, then we would have to use the Pythagorean Theorem.)

If we add 18 and 10, we find that the length of AC is 28. Now we have enough information to answer the question: What is the perimeter of triangle ABC? The perimeter will be the length of AB + the length of BC + the length of AC. Adding up the actual numbers, we find that 30 + 26 + 28 = 84. Be sure that the Pythagorean Theorem and these two classic ratios are in your GRE toolbox for Test Day!

Recently a reader asked me to post about strategies for long Reading Comprehension passages and Bolded Statement questions.
(Mohamed also asked about vocab strategies, which I will discuss soon. Be sure to see previous vocabulary-related posts from my Kaplan colleagues.) The Kaplan New GRE Verbal Workbook includes a chapter devoted to Reading Comprehension, as well as sets of practice questions and additional resources. One of these resources is a list of additional tips for tackling the Reading Comprehension section, including Bolded Statement questions. These tips are found on pages 78-80, and I'm going to borrow from them here.

There are differences between real-world reading and reading GRE passages. On the GRE:
- On Test Day, you don't care about the facts in the passage — you only care about ideas. A passage might tell you that the character Superman first appeared in 1938. You don't care what year Superman was introduced, but you care about WHY the author told you that. The passage may then go on to describe how the powers attributed to Superman have changed over time. In that case, knowing that Superman has been around for 70+ years might be important.
- Prior knowledge is not welcome on Test Day. Forget everything you might know about Superman — everything you need to know will be contained within the passage. Wrong answer choices play on things that test-takers understand to be logically true, but if those facts aren't mentioned in the passage, you don't care.
- If a passage tells you Superman has a twin sister, then as far as you are concerned, he has a twin sister. The passage text is TRUE. Period. You may question texts as much as you like in real-world reading, but on the GRE, accept that whatever the passage is telling you is correct.

Bolded Statement questions should be tackled the same way as other Reading Comprehension question types. In these questions, you REALLY don't care about the facts or details. You ONLY care about the purpose of the statements, and you consider each statement separately. Is it an opinion? An example? An argument? If it is an argument, is it the passage's primary or secondary argument, or perhaps a counterargument? Is it evidence, and if so, of what? You care about the purpose of each statement in relation to the other sentences in the passage. Let me repeat that. Just as with other question types, you must consider Bolded Statements in the context of the passage as a whole. Do not skip the un-bolded statements; they are your context clues for figuring out the role the Bolded Statements play.

Have a question about grammar, punctuation, usage, or style? Email me at [email protected] and put "blog question" in your subject line. Then look for a response here!

My student "Becky" took the GRE last Thursday and reeled in a 640-740 on the verbal section. Dipping well into the 90th percentile, this performance puts her in good standing for the elite English lit programs she has her eyes on. Needless to say, Becky was very excited, and her email to me overflowed with capital letters and long strings of exclamation marks. But I'm not writing this to pat myself on the back or share yet another Kaplan success story. The most interesting feature of Becky's email is that she didn't even bother to mention her math score. This isn't because she did poorly, or because we didn't work on the math section. As a matter of fact, Becky told me at our first tutoring session that she wanted to spend all 15 of her tutoring hours on math. She was an English major, so her confidence with the verbal section — and complementary fear of the math section — was hardly surprising.
Well, we did spend the first session doing math, since that was what she wanted. I was skeptical, however, that English literature programs were all that interested in her math score. "Do you know where you're applying?" I asked her. She rattled off a list. "And have you contacted them to see what they want on the GRE?" Becky, it turned out, had no idea. I smiled. "Great! That's your first homework assignment," I said. "Contact the programs you're interested in and find out what they want on the math and verbal sections." Becky did her homework that week, and that was how she discovered that none of her programs cared a rat's butt about her math score. She also learned that what they did want was an extremely high verbal score — much higher than what she had scored on the diagnostic, even as an English major. We proceeded to spend the entire remainder of her tutoring package working on verbal. Had we beaten down the math section as Becky initially wanted, the results would have been hilarious, but also tragic.

Since everyone takes the GRE, from French historians to theoretical physicists, there is no universal concept of a "good" performance — "good" varies drastically from program to program. So now I ask you: have you contacted the schools you're interested in? Do you know what they actually want you to get on the GRE? If not, that's your first homework assignment.

I always thought of myself as more of a verbal person than a math person. As my tenure with Kaplan enters its fourth year, however, I find myself falling harder for math every time I teach a Quantitative class. Kaplan's strategies, combined with the innate tricks and shortcuts of mathematics, make answering many GRE Quantitative questions a breeze. Really…I promise! Don't believe me? Ah, but you will.

Let's consider a Quantitative Comparison problem that calls on our knowledge of circles. Many test-takers see circle problems and begin to hyperventilate, but you should not be one of those test-takers. Circles are often fantastically easy to work with once you learn a few tricks. Of course, you will need to know the basic circle formulas such as area and circumference. However, another incredibly useful tool to add to your toolbox is the proportional relationship between the measures of a circle. Let me share an example:

In this Quantitative Comparison problem, we are given the measure of the central angle O (45°) and the length of arc XYZ (3). We are then asked to compare 6π to the circumference of the circle. At first glance, it may seem that we don't have enough information to answer this question. After all, many of us have been taught that the radius is everything to a circle, and without it we can do nothing. If the proportional relationship of circle measurements — the beautiful, and appropriately circular, relationship that is true of all circles, everywhere — is in your toolbox, however, you can do this problem in under a minute. Here is that relationship:

arc length / circumference = central angle / 360° = area of sector / area of circle

Notice how the three relationships are "anchored" by the relationship between the central angle and the full degree measure of the circle. If we know the fraction of the circle that the central angle represents, then we also know the fraction that the resulting arc length is of the circumference, and the fraction that the area of the sector (the "pie piece" of the circle determined by the central angle) is of the entire area of the circle.
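As a quick, optional illustration (mine, not the post's), the proportion can be dropped into a few lines of code; the helper names and the sample numbers are made up and deliberately different from the problem above, so they do not give away its solution:

```python
import math

def circumference_from_arc(arc_length, central_angle_deg):
    # arc / circumference = central angle / 360  =>  circumference = arc * 360 / angle
    return arc_length * 360.0 / central_angle_deg

def sector_area_from_circle_area(circle_area, central_angle_deg):
    # sector area / circle area = central angle / 360
    return circle_area * central_angle_deg / 360.0

# Made-up example: a 60 degree central angle cutting off an arc of length 5.
C = circumference_from_arc(5, 60)          # 30.0
r = C / (2 * math.pi)                      # radius recovered from the circumference
print(C, sector_area_from_circle_area(math.pi * r**2, 60))
```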
Based on the information that we're given for a particular circle question, we can use any two of the three proportions above to solve for a missing measurement. For example, to solve this particular problem, we can use these two proportions:

arc length / circumference = central angle / 360°

When we plug in the values that we're given for the central angle and arc length, we can solve for the circle's circumference:

3 / circumference = 45° / 360°

Simplifying the second proportion, we get:

3 / circumference = 1/8

Now we know that the arc length (3) is 1/8 of the circle's circumference (because the central angle is 1/8 of the full degree measure of the circle). Continuing forward, we can cross-multiply to solve for the circumference of the circle:

3 × 8 = 1 × circumference
24 = circumference

Let's look back at the quantities we were asked to compare: if we remember that π is slightly more than 3 (3.14159… to be more precise), then we can estimate that 6π is slightly more than 18, which is clearly less than 24. Thus, Quantity B is greater than Quantity A. If the proportional relationship of circle measurements is not in your GRE toolbox, be sure to learn it (and practice using it often) before Test Day!

ETS, the GRE test maker, just released several tables for score conversion from the old GRE to the new GRE. We know you have been eager for this information, and we're happy to share this with you, along with some analysis. Since the new GRE launched in August, only score ranges have been available to test takers – and those ranges are based on the old 200-800 scoring scale. Here's how the new GRE scores will work:

- Starting November 8th, new GRE test takers who took the exam in August and September will begin receiving their official scores on the new 130-170 scoring scale. Official scores will continue to roll out to test takers through November.
- The full score reporting schedule from ETS is available here, and breaks down as follows:

Computer-based revised General Test dates, with approximate score report mailing and view-scores-online dates:
- August 1, 2011 – September 8, 2011: November 8, 2011
- September 9, 2011 – October 2, 2011: November 10, 2011
- October 3, 2011 – October 15, 2011: November 17, 2011
- October 16, 2011 – November 18, 2011: December 1, 2011
- November 19, 2011 – November 28, 2011: December 8, 2011
- November 29, 2011 or later: 10–15 days after the test date

Also – check out ETS' new Excel tool where you can put in old or new GRE scores and calculate predicted GMAT scores. There's also a Flash version. ETS continues to pursue business school admissions committees aggressively: 600+ business schools, including a majority of top programs, now accept the GRE as an alternative to the GMAT.

Some observations on the new scores:
- The new scoring scales follow a normal distribution with 150 as the mean for both math and verbal. The old 200-800 GRE scores were really skewed, as the mean drifted over time.
- On the old test, low verbal scaled scores matched with high percentile scores, while high math scaled scores matched with low percentile scores. Before, ~620 on the math side and ~455 on the verbal side of the test were both 50th percentile. ETS has realigned the scaled-score-to-percentile mapping for the new GRE so that a 150 Quant and a 150 Verbal are the new 50th percentile.
- An 800 on the quantitative section of the old GRE corresponds with a score of only 166 on the new test. So, getting a perfect math score on the old test only puts you in the 94th percentile on the new test.
ETS has made the math content harder on the new GRE to allow for differentiation of high-scoring candidates for quant-intensive programs like business school, engineering and the physical sciences.
- On the verbal side of the old GRE, you were already in the 99th percentile with a 730. With the new test and the new scores, 99th percentile on the quant side is a 170, and on the verbal side a 169 or 170 puts you in the 99th percentile.
- Getting just a couple more questions correct will lead to a big percentile increase on this test. A 155 is 69th percentile on both the math and verbal sides of the new GRE; a 157 (getting another question or two correct) is 77th percentile on both sides of the new GRE.

Our team will be attending a follow-up score interpretation session with ETS on November 15th. More information coming soon. Please reach out to us on Facebook or Twitter if you have questions about scoring on the new GRE.
http://blog.kaplangradprep.com/tag/test-prep/
13
54
When the black holes are within a few radii of each other, their event horizons become distorted, and they enter the merger phase. They fuse together in conditions of extreme gravity to form a single, highly perturbed black hole. Elucidating the dynamics of this process and calculating the accompanying waveforms cannot be accomplished analytically. Rather, we must use numerical relativity, in which the Einstein equations are solved on a computer, in three spatial dimensions plus time.

Finally, the distorted remnant black hole evolves toward a symmetrically rotating state—known as a Kerr black hole—by emitting its perturbations in the form of gravitational waves. We say that the black hole "rings down," in analogy to the way a ringing bell emits its distortions as sound waves. We can solve the equations of general relativity during this ringdown phase analytically by considering the distortions to be perturbations of a Kerr black hole; the ringdown waveforms are sinusoids whose amplitudes drop off exponentially.

Detecting the gravitational waves of such mergers would be the scientific equivalent of striking gold. Simulating a black hole merger on a computer and calculating the resulting gravitational waveforms was long considered the holy grail of numerical relativity, both for the difficulty of this endeavor and the importance of the results. From the perspective of fundamental physics, black hole mergers are the ultimate "two-body" problem in general relativity. The solution to the equations can teach us a great deal about the effects of extreme gravity on the surrounding spacetime. Black hole mergers also provide superb laboratories for testing general relativity in very strong and dynamical fields.

Seeking the Holy Grail

In 1964 Susan Hahn and Richard Lindquist made the first attempt to solve Einstein's equations on a computer. They attempted to simulate the head-on collision of two equal-mass, non-spinning black holes. Since the term "black hole" had not yet been coined, they actually tried to model the head-on collision of two "wormholes." While this valiant attempt was well ahead of its time, the computer code crashed before the wormholes could collide.

In the 1970s, Larry Smarr and Kenneth Eppley first successfully simulated a head-on collision of two black holes using supercomputers at Lawrence Livermore National Laboratory (LLNL). In addition to LLNL's computer power, Smarr and Eppley benefited from advances in understanding how to write Einstein's equations in a form suitable for integration on a computer. They solved the Einstein equations in two spatial dimensions (2D) plus time (that is, symmetric about an axis), and obtained information for the first time about the gravitational waves. They also produced the first numerical relativity scientific visualizations.

The next major step would be to simulate orbiting black holes, which requires solving Einstein's equations in three spatial dimensions (3D) plus time. But in the 1970s and 1980s, there were too many technical difficulties with the simulation codes and insufficient computer power available. While some work continued on evolving binary neutron stars, no further progress was made.

The 1990s brought renewed interest in black hole mergers.
As plans for LIGO developed, physicists recognized these mergers as major sources of strong gravitational waves. In response to the growing experimental effort to detect gravitational waves, the NSF funded a multi-year, multi-institution Grand Challenge grant in the mid-1990s aimed at simulating black hole mergers. The several groups involved in the Grand Challenge built large-scale 3D codes, but found their evolutionary models beset by a host of instabilities. While they could simulate grazing black hole collisions, their codes crashed due to instabilities before any significant fraction of a binary orbit could be computed.

After the Grand Challenge grant ended, the participating research groups continued working individually. A major focus was the formalisms, that is, the way the Einstein equations are expressed mathematically, including the choice of variables. They learned that instabilities from unphysical solutions to the equations were causing the codes to crash. Numerical simulations excite these modes, which grow exponentially and destroy the computation. Researchers devised strategies to rewrite Einstein's equations in different ways, to eliminate these unphysical solutions. Overall, progress was made, but incrementally.

By the early 2000s, many in the community felt a sense of despair, with some saying that "numerical relativity is impossible." This turned out to be the darkest hour before the dawn, however, because a stunning series of breakthroughs was about to begin. In late 2003, a group led by Bernd Brügmann (now at Friedrich Schiller University in Jena, Germany) achieved the first complete orbit of a black hole binary. The team's code expressed the Einstein equations in a form known to be conducive to numerical stability. They represented the black holes as "punctures," a clever method for handling black hole singularities (where the spacetime curvature becomes infinite) on a numerical grid.

Their specific approach, which was the basis for most work in numerical relativity at the time, required that the punctures remain fixed in the grid so that their singularities could be easily factored out. But since orbiting black holes actually move, Brügmann's group used a coordinate system in which the black holes are fixed in place, and the spacetime revolves around them. The team succeeded in simulating the binary for a little more than one orbit...before its code also crashed.

In early 2005, Frans Pretorius (now at Princeton University) succeeded in simulating the first orbit, merger, and ringdown of equal-mass black holes. Rather than using punctures, he handled the singularities by excising the regions inside the event horizons from the computational domain. These black holes were not fixed in place, but rather moved on their orbits through the numerical grid. His code was based on a very different way of writing the Einstein equations, called the generalized harmonic form. Pretorius' code did not crash, and he extracted the first gravitational waveform from a black hole merger.

At this time, most of the numerical relativity community had codes based on the traditional techniques used by Brügmann's group, including puncture black holes.
Pretorius' success led to the natural question of whether stable computations of black hole mergers required his very different techniques instead of the traditional puncture approach.

The answer came later in 2005 with the moving puncture method, which was discovered simultaneously and independently by the numerical relativity group led by Manuela Campanelli (now at the Rochester Institute of Technology) and our group at NASA's Goddard Space Flight Center. This method is based on the traditional approach to evolving puncture black holes with Einstein's equations, but with a simple and crucial difference.

When solving the Einstein equations, we are free to choose our coordinate systems; in fact, the coordinates we use can themselves change as the simulation proceeds, in order to keep the equations in a nice form for numerical computation. The moving puncture method is based on a specific choice of coordinates that enables the punctures to move across the grid. Using moving punctures, both the Campanelli and Goddard groups achieved accurate simulations of the orbit, merger, and ringdown of equal-mass, non-spinning black holes and extracted the gravitational waveforms—with no code crashes.

A series of stunning advances resulted from this new moving puncture method. Our Goddard group in particular was quickly able to accomplish multiple binary orbits and uncovered the universal waveform signatures of non-spinning, equal-mass black holes. Since the moving puncture method required only simple changes to existing numerical relativity codes based on traditional puncture techniques, other groups worldwide quickly adopted this method and got into the game. Within a few months, these teams simulated mergers of unequal mass and of spinning black holes, and their results were applied to astrophysics. A true golden age of black hole simulations had opened, and the fun was only just beginning.

Anatomy of a Merger Simulation

Now let's take an in-depth look at how our group calculates black hole mergers on a computer. We start with the black holes on a computational grid at some initial time. The Einstein equations of general relativity are then used to evolve the binary forward in time. What could be simpler?

A key issue in the simulation is the extreme spatial curvature inside the black holes, which approaches infinity at the centers and could thus cause computers to crash. Fortunately, things inside the black hole horizon cannot escape and influence the spacetime outside, so we do not have to include these physical infinities in our simulation. Instead, we represent each black hole as a puncture, with all the nastiness of the infinite curvature written in terms of factors taking the form (1/r)^p, where p > 0. Here r is the distance from the center of the black hole, and these terms become infinite as the center is approached, r → 0. In the traditional puncture method, these (1/r)^p quantities—the infinities—are factored out and dealt with separately, by hand. The computers then work only on finite quantities, and the infinite quantities are factored back in only when needed.

Isolating the infinite curvatures in this manner requires an extra condition on the coordinate system to keep each puncture at a fixed location in the grid. But the black holes are orbiting each other and spiraling inward.
If we evolve the binary with the black holes fixed in the coordinate system, the computational grid twists itself up and becomes highly stretched and warped. This causes various problems that can crash the computer codes, motivating researchers to try different strategies.

In the summer of 2005, our group was experimenting with the puncture method and ran a simulation in which we did not isolate and factor out the infinities. Rather, we let the computer deal with the resulting large numbers. To our surprise, the code did not crash! Instead, the discrete nature of the computation cut off the infinities at the punctures while preserving the mathematical integrity of the black holes around the punctures. Campanelli's group independently came to a similar realization at about the same time.

This discovery led to the development of a different approach. We found that if we did not isolate and factor out the infinities, we could allow the punctures to move across the grid. To ensure that the punctures move smoothly, and to produce a stable and accurate simulation, we needed a very careful choice of coordinate system. It took us several months to develop this system, but it allowed us to simulate multiple binary orbits and the final merger—our long-sought breakthrough.

Black hole merger simulations present other technical challenges. For example, the gravitational radiation produced by the merger has wavelengths typically 10 to 100 times larger than the radius of the horizons. For accurate models, we need to resolve both the strong gravitational fields near the black holes and the radiation in the wave zone. We accomplish this by using variable grid resolution in our runs, with finer grids near the black holes and coarser ones in the outer regions where the waves propagate (figure 6).

Because the black holes move freely through the grid, this mesh spacing must adapt to the strength of the fields. In the region where the black holes are orbiting, we base our refinement criteria on a curvature-related quantity that takes large values near the black hole and then decreases away from the source. When the value of this quantity exceeds a chosen threshold, the code doubles the number of grid zones in each direction to increase the resolution in that region. In the wave zone and beyond we use progressively coarser, fixed mesh refinement; this also makes it computationally inexpensive to push the outer boundary of the simulated space far enough away that we need not worry about reflections of the waves from the outer edges of the grid.

Despite these advances, the numerous orbits we wish to simulate typically demand thousands of CPU hours. On the Columbia system at NASA's Ames Research Center, as well as the Discover cluster here at Goddard, we often run on as many as 500 processors for up to 70 hours. As we push to binaries with higher mass ratios and spins, which require greater accuracy, we expect the larger Jaguar system at Oak Ridge National Laboratory to provide additional computational power.

Once we had this moving puncture method working well, we first aimed to compute the definitive gravitational wave signal for the simplest case: the merger of equal-mass nonspinning black holes. For simulations of astrophysically realistic binaries, the black holes should ideally start out on nearly circular orbits around their common center of mass. However, the types of initial conditions being used at that time were imperfect because they had some orbital eccentricity.
To test the robustness of our models, we evolved equal-mass, nonspinning binaries starting from several different separations, from roughly two to four orbits before merger. Despite the differing amounts of initial orbital eccentricity, we found that in every run the black holes locked on to the same trajectories roughly one orbit before merger.

Figure 7 shows the gravitational waves emanating from the merger of an equal-mass binary. We can calculate the waveform by averaging this radiation over an imaginary sphere centered on the source, and graphing the result as a function of time. When we compared the waveforms from our runs, we found that the early stages of the waveforms differ by about 10%, corresponding to the different amounts of initial eccentricity. But for the final part of the waveform that corresponds to the last orbit and merger, the waveforms were nearly identical! We thus obtained a clear and universal waveform, independent of the initial starting point. This signal is shown as panel (a) in figure 8.

So, despite our original expectations, the simulations showed that the uncertainty in the initial conditions did not significantly affect our ability to predict the gravitational waveforms expected from real mergers. Likewise, when we compared our waveforms with those produced by different groups using different techniques, we found good agreement. The field of gravitational waveform simulation was suddenly open for business.

The waveforms were surprising in another way as well. There is an important difference in how gravitational radiation is generated in the early part of the waveform compared to its end. At first the waveform represents an inspiraling orbital system, while later it represents the "ringing" of the distorted single black hole produced in the merger. Are there dramatic features present in the waveforms that mark this seemingly important change in the underlying physics? For years, scientists had assumed that the complex and nonlinear interactions of strong-field sources near merger would leave strong imprints on the waveforms.

Our simulations revealed quite a different result. The wave frequency initially "chirps," or increases smoothly and monotonically, up to an expected peak, and after merger it matches the expected final frequency for the ringing of the final distorted black hole. Likewise, the wave amplitude increases smoothly to a peak before decaying as a damped sinusoid in the ringdown. As can be seen in figure 8 (a, p21), the waveforms are remarkable only in their simplicity, with the merger producing a smooth transition between the inspiral and ringdown. The nonlinearities of general relativity have conspired, it seems, to produce the simplest possible transition.

Nevertheless, the gravitational waveforms from all mergers do not look alike. Rather, the detailed shapes of the waveforms depend on the mass of each black hole, their spins, and the orientations of their rotational axes. Black hole binaries with unequal masses produce more complicated distortions of the surrounding spacetime, resulting in more complex waveforms from most viewing positions (figure 8, b, p21). And mergers of spinning black holes bring additional interesting physics. For example, compared to the nonspinning case, mergers of black holes with their spins aligned with the orbital angular momentum tend to "hang up," taking longer to plunge inward and merge. This leads to more densely packed, higher-frequency wavefronts (figure 8, c, p21).
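The inspiral-to-ringdown shape described here can be caricatured with a toy waveform: a sinusoid whose frequency and amplitude rise smoothly, handing off at "merger" to an exponentially damped sinusoid. This is purely illustrative; every number below is an arbitrary choice and has nothing to do with the simulations discussed in the article.

```python
import numpy as np

# Toy inspiral-merger-ringdown waveform. All parameter values are arbitrary
# illustrative choices, not physical results from the simulations in the text.
dt = 1e-4
t = np.arange(-0.20, 0.05, dt)                      # "merger" at t = 0

# Frequency rises smoothly during the inspiral ("chirp"), then stays at the
# ringdown frequency of the final distorted black hole.
freq = np.where(t < 0, 50.0 + 200.0 * (t + 0.20) / 0.20, 250.0)   # Hz

# Amplitude grows toward merger, then decays exponentially during ringdown.
amp = np.where(t < 0, 0.2 + 0.8 * (t + 0.20) / 0.20, np.exp(-t / 0.01))

phase = 2.0 * np.pi * np.cumsum(freq) * dt          # integrate frequency to get phase
h = amp * np.sin(phase)                             # smooth, featureless transition at t = 0
```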
In the real Universe, black holes in binaries will almost always have unequal masses. Such a binary will emit gravitational radiation, and the momentum it carries, preferentially in the direction of the smaller hole. Early on, the black holes are widely separated and this asymmetry does not amount to much, when averaged over the course of an orbit. But in the final stages of inspiral, the radiation amplitude—and hence the amount of momentum loss—increases. When the binary enters its final plunge to merger, the system has run out of time to balance itself.

The result is something we can predict from Newton's Third Law of Motion: since the binary emits more radiation—and hence momentum—in one direction, it must compensate by moving in the opposite direction. The more momentum it loses, the larger the "recoil kick" of the post-merger black hole. Because this effect depends so strongly on the end-stage merger, it was not until the recent breakthroughs that the various research groups could provide a definitive answer to the size of such a recoil: the maximum kick is about 175 km s⁻¹, for a binary mass ratio of about 3 to 1.

Now let's add spins to the mix. When the spins are perpendicular to the orbital plane and spinning in the same direction as the orbit, which is what astronomers expect when supermassive black holes are surrounded by gaseous accretion disks, the total kick can climb to about 500 km s⁻¹ for maximally spinning holes. In this case, the largest kick occurs for a binary with a mass ratio closer to 2 to 1.

When the inspiraling holes spin in arbitrary directions, life becomes much more complicated: the spins precess, the orbits rise and fall, and the final recoil kick can be much larger. In fact, several research groups have found that some configurations may produce recoils up to 4,000 km s⁻¹—easily enough to eject the merged hole from the center of even a large galaxy.

Figure 9 shows the distribution of gravitational-wave energy emitted from the merger of two equal-mass, spinning holes. The visible north-south asymmetry gives rise to a "recoil kick" upwards.

As you might imagine, this result has sparked the interest of astronomers. If such kicks are commonplace, then many large galaxies should no longer contain massive central holes. Given the apparent robustness of the numerical results, and the ubiquity of massive central black holes in galaxies, the focus has now shifted to investigating the pre-merger configurations of the binary systems. Physics has given way to astrophysics.

The recent progress in numerical relativity simulations of black hole mergers is both exciting and impressive by any measure. And these advances are occurring across a broad front, with many research groups carrying out calculations with independently written codes.

For the simplest case of equal-mass, nonspinning black holes, there is general agreement on the basic results. The waveform takes the simple shape shown in figure 8 (a, p21). The final black hole has a spin roughly 70% of the maximum value allowed by general relativity. And the total energy carried away by the gravitational waves during the merger is roughly 4% of the total energy of the binary.
This yields an energy output of 10²³ solar luminosities, with more massive black holes radiating this energy over longer timescales.

Long runs of black hole mergers are now being carried out, starting roughly 10 orbits before the merger. This is a key development, because such waveforms are needed as templates to search for signals in the data from gravitational wave detectors. The ground-based detectors have already taken some early data, and efforts are underway to incorporate the numerical relativity results into their analysis. The availability of merger waveforms will increase LISA's sensitivity to black hole mergers reaching back very early in cosmic history.

The mergers seen by LISA will be very energetic, and are expected to be detected with a very large signal-to-noise ratio. Figure 10 shows how a "pure" waveform of the sort shown in figure 8 (a, p21) would appear in the LISA detector data stream.

Finally, there is an explosion of work on mergers of unequal-mass and spinning black holes and the resulting recoil kicks. Studies of the waveforms and final spins of the black holes are underway. The kicks especially have important astrophysical implications that are now under study. We live in interesting and exciting times. Stay tuned!

This article was written by Joan Centrella, Bernard Kelly, Jim Van Meter, John Baker, and Robin Stebbins of the Gravitational Wave Astrophysics Laboratory at NASA's Goddard Space Flight Center, and by Goddard science writer Robert Naeye. The group acknowledges Chris Henze of NASA's Ames Research Center for supplying images, and the use of the supercomputers Columbia at NASA Ames and Discover at NASA Goddard.
http://www.scidacreview.org/0802/html/astro.html
13
62
Caption: A scanning electron micrograph, taken with an electron microscope, shows the comb-like structure of a metal plate at the center of newly published University of Florida research on quantum physics. UF physicists found that corrugating the plate reduced the Casimir force, a quantum force that draws together very close objects. The discovery could prove useful as tiny "microelectromechanical" systems -- so-called MEMS devices that are already used in a wide array of consumer products -- become so small they are affected by quantum forces. Credit: Yiliang Bao and Jie Zoue/University of Florida

The physicists radically altered the shape of the metal plates, corrugating them into evenly spaced trenches so that they resembled a kind of three-dimensional comb. They then compared the Casimir forces generated by these corrugated objects with those generated by standard plates, all also against a metal sphere. The finding could one day help reduce what MEMS engineers call "stiction" — when two very small, very close objects tend to stick together.

Note, to clarify the huge potential of what is happening here: the geometry of physical objects is affecting the rate of virtual particle creation in the vacuum. Vacuum characteristics are being engineered and manipulated. Manipulating the Casimir force using the shape of microscale and nanoscale structures seems to indicate that active molecular nanotechnology structures would have far more control over the Casimir force. Towards the bottom of this article there are descriptions of, and links to, the theoretical and experimental efforts to utilize Casimir force manipulation for breakthrough space propulsion and energy generation. Being able to fully control the Casimir force to access zero-point energy could potentially be more powerful than nuclear fusion.

The Casimir force is the result of virtual particles. Seemingly empty space is not actually empty but contains virtual particles associated with fluctuating electromagnetic fields. These particles push the plates from both the inside and the outside. However, only virtual particles of shorter wavelengths — in the quantum world, particles exist simultaneously as waves — can fit into the space between the plates, so that the outward pressure is slightly smaller than the inward pressure. The result is that the plates are forced together.

The result? "The force is smaller for the corrugated object but not as small as we anticipated," Chan said, adding that if corrugating the metal reduced its total area by half, the Casimir force was reduced by only 30 to 40 percent. Chan said the experiment shows that it is not possible to simply add the force on the constituent solid parts of the plate — in this case, the tines — to arrive at the total force. Rather, he said, "the force actually depends on the geometry of the object."

"Until now, no significant or nontrivial corrections to the Casimir force due to boundary conditions have been observed experimentally," wrote Lamoreaux, now at Yale University, in a commentary accompanying publication of the paper.

The article, "Measurement of the Casimir Force between a Gold Sphere and a Silicon Surface with Nanoscale Trench Arrays," appeared in the journal Physical Review Letters. From its abstract: We report measurements of the Casimir force between a gold sphere and a silicon surface with an array of nanoscale, rectangular corrugations using a micromechanical torsional oscillator.
At distances between 150 and 500 nm, the measured force shows significant deviations from the pairwise additive formalism, demonstrating the strong dependence of the Casimir force on the shape of the interacting bodies. The observed deviation, however, is smaller than the calculated values for perfectly conducting surfaces, possibly due to the interplay between finite conductivity and geometry effects.

Harvard and University of California mainstream Casimir researchers
Umar Mohideen, professor of physics at the University of California at Riverside, has been researching the Casimir force. The Capasso group at Harvard has also been working on manipulating the Casimir force. The advance by the University of Florida gives hope and credibility to the extreme technology that becomes possible with the ability to reduce, amplify, or reverse the Casimir force at will.

Interstellar Tech Corp: trying to use the Casimir force to extract power
Fabrizio Pinto published in the Journal of Physics A: Mathematical and Theoretical on membrane actuation by Casimir force manipulation. Fabrizio Pinto, part of Interstellar Tech Corp, has been looking into creating an engine by making use of the Casimir force. No Casimir force-based engine cycle could be devised if one assumed a constant Casimir force. Areas of emphasis are:
1. Casimir force modulation [now demonstrated by the University of Florida];
2. Repulsive Casimir force [Prof Ulf Leonhardt and Dr Thomas Philbin 2007 report];
3. Lateral Casimir force;
4. Casimir force amplification;
5. Energy issues in relation to the quantum vacuum.
One can implement a Casimir system engine cycle to transform thermal or optical energy into mechanical or electrical energy. The Interstellar Tech Corp proposal for the Transvacer device describes a Casimir force-based engine where zero-point energy is transformed into mechanical energy: "The critical concept at the core of our idealized Casimir engine is the well-established fact that, in the realistic case of a material that is not a perfect conductor, the magnitude of the Casimir force at any distance between the plates depends on the detailed optical properties of the boundaries. That is, any process that can alter the reflectivity of the material also affects the value of the Casimir force at any distance."

NASA study from 2004 on Casimir force space propulsion
A 57 page study, "Study of Vacuum Energy Physics for Breakthrough Propulsion," by G. Jordan Maclay (Quantum Fields LLC, Wisconsin), Jay Hammer and Rod Clark (MEMS Optical, Inc., Alabama), and Michael George, Yeong Kim, and Asit Kir (University of Alabama). Section 4, Gedanken Vacuum Powered Spacecraft (on page 30): A Gedanken spacecraft is described that is propelled by means of the dynamic Casimir effect, which describes the emission of real photons when a conducting surface is moved in the vacuum with a high acceleration. The maintenance of the required boundary conditions at the moving surface requires the emission of real photons, sometimes described as the excitation of the vacuum. The recoil momentum from the photon exerts a force on the surface, causing an acceleration. If one imagines the moving surface is attached to a spacecraft, then the spacecraft will experience a net acceleration. Thus we have a propellantless spacecraft. However, we do have to provide the energy to operate the vibrating mirror. In principle, it is possible to obtain this power from the quantum vacuum, and this possibility is explored.
Unfortunately, with the current understanding and materials, the acceleration due to the dynamic Casimir effect is very small, on the edge of measurability. One of the objectives in this paper is to demonstrate that some of the unique properties of the quantum vacuum may be utilized in a gedanken spacecraft. We have demonstrated that it is possible, in principle, to cause a spacecraft to accelerate due to the dissipative force an accelerated mirror experiences when photons are generated from the quantum vacuum. Further, we have shown that one could in principle utilize energy from the vacuum fluctuations to operate such a vibrating mirror assembly. The application of the dynamic Casimir effect and the static Casimir effect may be regarded as a proof of principle, with the hope that the proven feasibility will stimulate more practical approaches exploiting known or as yet unknown features of the quantum vacuum. A model gedanken spacecraft with a single vibrating mirror was proposed which showed a very unimpressive acceleration due to the dynamic Casimir effect of about 3x10^-20 m/s^2, with a very inefficient conversion of total energy expended into spacecraft kinetic energy. Employing a set of vibrating mirrors to form a parallel plate cavity increases the output by a factor of the finesse of the cavity, 10^10, yielding an acceleration per square meter of plate area of about 3x10^-10 m/s^2 and a conversion efficiency of about 10^-16. After 10 years at this acceleration, a one square meter spacecraft would be traveling at 0.1 m/s. Although these results are rather unimpressive, it is important to remember this is a proof of principle, and not to take our conclusions regarding the final velocity in our simplified models too seriously. The choice of numerical parameters is a best guess based on current knowledge and can easily affect the final result by 5 orders of magnitude.

2006 paper cited by Calphysics reviewing quantum vacuum energy extraction
A 2006 review of carefully selected proposals for extracting energy from the quantum vacuum field: We review concepts that provide an experimental framework for exploring the possibility and limitations of accessing energy from the space vacuum environment. Quantum electrodynamics (QED) and stochastic electrodynamics (SED) are the theoretical approaches guiding this experimental investigation. This investigation explores the question of whether the quantum vacuum field contains useful energy that can be exploited for applications under the action of a catalyst, or cavity structure, so that energy conservation is not violated. This is a technical problem similar to, and at about the same level of technology as, that faced by early nuclear energy pioneers who searched for, and successfully discovered, the unique material structure that caused the release of nuclear energy via the neutron chain reaction. Credentialed scientists interested in seriously pursuing a laboratory investigation of the vacuum ZPF should be forewarned that many of the claims being made in the non-peer-reviewed literature are fraught with pathological science, fraud, misinformation, disinformation, and spurious physics. This is the reason why the present authors were very selective about which ZPE extraction approaches to consider for our research program. We identified six experiments that have the potential to extract useful energy from the vacuum.
One of these, Forward's Vacuum-Fluctuation Battery, was shown to be unsuitable for completing an engine cycle for pumping energy from the vacuum. The efficacy of the Mead and Nachamkin patent device has not yet been evaluated in the lab. However, four additional experimental concepts are potentially exploitable and we have selected those to pursue in a carefully guided theoretical and laboratory research program. The estimated power output from three of these concepts could, under optimum conditions, range from watts to kilowatts.

2007 work: reversing the Casimir force with metamaterials
Prof Ulf Leonhardt and Dr Thomas Philbin report in the New Journal of Physics that they can engineer the Casimir force to repel, rather than attract. Prof Ulf Leonhardt has a page on levitation using Casimir forces: Quantum levitation by left-handed metamaterials. Left-handed metamaterials make perfect lenses that image classical electromagnetic fields with significantly higher resolution than the diffraction limit. Here, we consider the quantum physics of such devices. We show that the Casimir force of two conducting plates may turn from attraction to repulsion if a perfect lens is sandwiched between them. For optical left-handed metamaterials, this repulsive force of the quantum vacuum may levitate ultra-thin mirrors.

Random other Casimir and zero point energy related links:
- A page on the Casimir force and zero point energy
- Over 1000 papers related to Casimir effects and forces at arxiv
- A 2007 paper suggesting that the Casimir force is the result of surface plasmons and that manipulating surface plasmons would manipulate the Casimir force. Metamaterials manipulate plasmons.
- Zero point energy at Wikipedia
- Many fictional references, including: the Zero-Point Energy Field Manipulator, or "gravity gun," a fictional weapon from the video game Half-Life 2; Stargate SG1 and Stargate Atlantis refer to zero point modules.
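To give a rough sense of scale for the forces being discussed, the sketch below evaluates the textbook expression for the Casimir pressure between two ideal, perfectly conducting, flat parallel plates, P = pi^2*hbar*c/(240*d^4), and then applies the roughly 30 to 40 percent reduction quoted above for the corrugated plate. The 35% reduction factor is purely illustrative, taken from the quote, and this is not a calculation of the actual gold-sphere/silicon-trench geometry used in the experiment.

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s

def casimir_pressure(gap_m):
    """Ideal Casimir pressure (attractive, in N/m^2) between two perfectly
    conducting, perfectly flat parallel plates separated by gap_m metres."""
    return math.pi ** 2 * HBAR * C / (240.0 * gap_m ** 4)

for gap_nm in (150, 300, 500):
    d = gap_nm * 1e-9
    p_flat = casimir_pressure(d)
    # Illustrative 35% reduction, the middle of the 30-40% range quoted
    # for the corrugated plate; real materials and the sphere-plate
    # geometry would change these numbers substantially.
    p_corrugated = 0.65 * p_flat
    print(f"{gap_nm:4d} nm gap: flat plate {p_flat:8.3f} Pa, corrugated ~{p_corrugated:8.3f} Pa")
```

Even at these sub-micron separations the pressures are only a few pascals, which is why micromechanical torsional oscillators are needed to detect the force at all.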
http://nextbigfuture.com/2008/07/casimir-force-was-reduced-by-30-to-40.html
Perimeter and Area: Circumference and Area of a Circle

Terms Associated with Circles
The circumference of a circle is its "perimeter," or the distance around its edge. If we broke the circle and bent it into one flat line, the length of this line would be its circumference. The diameter of a circle is a line segment from one point on the edge of the circle to another point on the edge, passing through the center of the circle. It is the longest line segment that cuts across the circle from one point to another. There are many different diameters, but they all have the same length. The radius of a circle is a line segment from the center of the circle to a point on the edge of the circle. It is half of a diameter, and thus its length is half the length of the diameter. Again, there are many radii, but they all have the same length. The area of a circle is the total number of square units that fill the circle. In the lesson's example figure, the area of the circle is about 13 square units; note that we count fractional units inside the circle as well as whole units.

Formula for the Circumference of a Circle
Mathematicians have discovered a special number, called pi (represented by π), which is the ratio of the circumference of any circle to the length of its diameter. π is roughly equal to 3.14; most scientific calculators have a "π" button that will produce more digits. π is a non-terminating, non-repeating decimal; thus, π is an irrational number. Since π is the ratio of the circumference to the diameter, π = c/d; c = π×d; and d = c/π, where c and d are the circumference and the diameter, respectively. The most important equations to remember are the last two. Thus, to find the circumference of a circle, multiply the diameter by π. If you know only the radius (a more likely scenario), multiply the radius by 2 to find the diameter: c = 2×π×r. To find the diameter of a circle, divide the circumference by π. Use 3.14 for π. Try it! Find a pan, trash can, or other large circular object. Measure around the edge, and then measure the diameter. The circumference divided by the diameter should be roughly equal to π.

Formula for the Area of a Circle
Interestingly enough, π is also the ratio between the area of a circle and the square of its radius. Thus, π = A/r²; A = π×r²; and r = √(A/π). The most important equation to remember is the middle equation, A = π×r². Thus, to find the area of a circle, square the radius and multiply by π. If the radius is unknown but the diameter is known, divide the diameter by 2 to find the radius.

What is the circumference of a circle with diameter 5? c = d×π = 5×3.14 = 15.7
What is the circumference of a circle with radius 3? d = 3×2 = 6; c = d×π = 6×3.14 = 18.8
What is the area of a circle with radius 3? A = π×r² = 3.14×3² = 28.3
What is the area of a circle with diameter 5? r = 5/2 = 2.5; A = π×r² = 3.14×2.5² = 19.6
What is the diameter of a circle with circumference 11? d = c/π = 11/3.14 = 3.50
What is the radius of a circle with circumference 11? r = d/2 = 3.50/2 = 1.75
What is the area of a circle with circumference 11? A = π×r² = 3.14×1.75² = 9.62
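The formulas above translate directly into code. A minimal sketch in Python, using math.pi rather than the rounded value 3.14 used in the worked examples, so the results differ slightly in the last digit:

```python
import math

def circumference(radius):
    """Distance around a circle: c = 2 * pi * r."""
    return 2 * math.pi * radius

def area(radius):
    """Area enclosed by a circle: A = pi * r^2."""
    return math.pi * radius ** 2

# Reproduce the worked examples (a diameter of 5 means a radius of 2.5, etc.)
print(circumference(2.5))            # ~15.71  circumference of a circle with diameter 5
print(circumference(3))              # ~18.85  circumference of a circle with radius 3
print(area(3))                       # ~28.27  area of a circle with radius 3
print(area(2.5))                     # ~19.63  area of a circle with diameter 5
print(11 / math.pi)                  # ~3.50   diameter of a circle with circumference 11
print(area(11 / (2 * math.pi)))      # ~9.63   area of a circle with circumference 11
```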
http://www.sparknotes.com/math/prealgebra/perimeterarea/section3.rhtml
Gravity Constant Factors by Ron Kurtus (revised 21 February 2011)

The value of g in the gravity force equation F = mg is the acceleration due to gravity. It is considered a constant for objects relatively near the Earth's surface. The gravity constant comes from the Universal Gravitation Equation at the Earth's surface. By substituting in values for the mass and radius of the Earth, you can calculate the value of the gravity constant at the Earth's surface. The fact that the acceleration due to gravity is a constant facilitates the derivations of the gravity equations for falling objects, as well as those projected downward or upward. However, the value of g starts to vary at high altitudes.

Questions you may have include:
- What is the derivation of the gravity constant?
- What is the value of the constant at the Earth's surface?
- How does the acceleration due to gravity vary with altitude?
This lesson will answer those questions. Useful tool: Metric-English Conversion

Derivation of gravity constant
The acceleration due to gravity constant comes from Newton's Universal Gravitation Equation, which shows the force of attraction between any two objects, typically astronomical objects:
F = GMm/R²
- F is the force of attraction, as measured in newtons (N) or kg·m/s²
- G is the Universal Gravitational Constant: 6.674×10^-11 m³/(s²·kg)
- M and m are the masses of the objects in kilograms (kg)
- R is the separation of the centers of the objects in meters (m)
(See Universal Gravitation Equation for more information.)
One assumption made is that the mass of each object is concentrated at its center. Thus, if you considered a hypothetical point object of mass m that was at the surface of the Earth, the force between them would be:
F = GM_E·m/R_E²
- F is the force of attraction at the surface of the Earth
- G is the Universal Gravitational Constant
- M_E is the mass of the Earth
- m is the mass of the object
- R_E is the separation between the center of the Earth and an object on its surface; it is also the radius of the Earth
Since GM_E/R_E² is a constant, set:
g = GM_E/R_E²
This is the gravity constant or acceleration due to gravity. Thus, the gravity equation is: F = mg

Value of g
You can find the value of g by substituting the following items into the equation:
G = 6.674×10^-11 m³/(s²·kg)
M_E = 5.974×10^24 kg
R_E = 6.371×10^6 m
Note: Since the Earth is not a perfect sphere, the radius varies in different locations, including being greater at the equator and less at the poles. The accepted average or mean radius is 6371 km.
The result is:
g = (6.674×10^-11 m³/(s²·kg))(5.974×10^24 kg)/(6.371×10^6 m)²
g = (6.674×10^-11)(5.974×10^24)/(40.590×10^12) m/s²
g = 0.9823×10^1 m/s²
g = 9.823 m/s²
This value is close to the official value of g = 9.807 m/s² or 32.174 ft/s², defined by the international General Conference on Weights and Measures in 1901. Factors such as the rotation of the Earth and the effect of large masses of matter, such as mountains, were taken into account in that definition. Although the value of g varies from place to place around the world, we use the common values of: g = 9.8 m/s² or 32 ft/s²

On other planets
The same principles of gravity on Earth can apply to other astronomical bodies, when objects are relatively close to the planet or moon.
We typically consider "gravity" as concerning Earth. If you are talking about the force of gravity on another planet, you should say "gravity on Mars" or such. Acceleration due to gravity on the:
- Earth: 9.8 m/s²
- Moon: 1.6 m/s²
- Mars: 3.7 m/s²
- Sun: 275 m/s²

Variation with altitude
Although g is considered a constant, its value does vary with altitude or height from the ground. You can show the variation with height from the equation:
g_h = GM_E/(R_E + h)²
- g_h is the acceleration due to gravity at height h
- h is the height above the Earth's surface or the altitude of the object

Height or altitude above Earth's surface
To facilitate calculations, it is easier to state h as a percentage or decimal fraction of R_E. For example, if h = 10% of R_E, or 0.1R_E, then:
g_h = GM_E/(1.1R_E)²
g_h = 0.826·GM_E/R_E² = 0.826g
Charting h and g_h:

| Height h above the surface | h as a fraction of R_E | g_h |
| 63.71 m (209 ft) | 0.001% | 0.99998g = 9.8 m/s² |
| 637.1 m (2090 ft) | 0.01% | 0.9998g = 9.8 m/s² |
| 6.371 km (3.95 mi) | 0.1% | 0.998g = 9.78 m/s² |
| 63.71 km (39.5 mi) | 1% | 0.980g = 9.6 m/s² |
| 637.1 km (395 mi) | 10% | 0.826g = 8.09 m/s² |

As you can see, the value of g starts to deviate from 9.8 m/s² at about 6.4 km or 4 miles in altitude. At about 64 km or 40 mi, the change in g is sufficient to noticeably affect the results of gravity equations.

Effect on gravity derivations
The derivations of the equations for velocity, time and displacement for objects dropped, projected downward, or projected upward depend on g being a constant. Even a 1% or 2% variation in the value of g can affect the derivations. (See Overview of Gravity Equation Derivations for more information.)

In summary, the acceleration due to gravity, g, is considered a constant and comes from the Universal Gravitation Equation, calculated at the Earth's surface. By substituting in values for the mass and radius of the Earth, you can find the value of g. A constant acceleration due to gravity facilitates the derivations of the gravity equations. However, the value of g starts to vary at high altitudes.
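As a quick numerical check of the values above, the short sketch below computes g from G, M_E and R_E and then evaluates g_h = GM_E/(R_E + h)² for the altitudes in the table:

```python
G = 6.674e-11      # universal gravitational constant, m^3/(kg*s^2)
M_E = 5.974e24     # mass of the Earth, kg
R_E = 6.371e6      # mean radius of the Earth, m

# Acceleration due to gravity at the surface: g = G*M_E/R_E^2
g = G * M_E / R_E ** 2
print(f"g at the surface: {g:.3f} m/s^2")   # ~9.823 m/s^2

# Acceleration at a height h above the surface, with h as a fraction of R_E
for fraction in (1e-5, 1e-4, 1e-3, 1e-2, 1e-1):
    h = fraction * R_E
    g_h = G * M_E / (R_E + h) ** 2
    print(f"h = {fraction:.3%} of R_E ({h / 1000:8.2f} km): "
          f"g_h = {g_h:.3f} m/s^2 = {g_h / g:.5f} g")
```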
http://school-for-champions.com/science/gravity_constant.htm
In mathematics, a square number, sometimes also called a perfect square, is an integer that can be written as the square of some other integer; in other words, it is the product of some integer with itself. So, for example, 9 is a square number, since it can be written as 3 × 3. Square numbers are non-negative. Another way of saying that a (non-negative) number is a square number is that its square root is again an integer. For example, √9 = 3, so 9 is a square number. A positive integer that has no perfect square divisors except 1 is called square-free.

The usual notation for the square of a number n is not the product n × n, but the equivalent exponentiation n², usually pronounced as "n squared". For a non-negative integer n, the nth square number is n², with 0² = 0 being the zeroth square. The concept of square can be extended to some other number systems. If rational numbers are included, then a square is the ratio of two square integers, and, conversely, the ratio of two square integers is a square (e.g., 4/9 = (2/3)²).

Starting with 1, there are ⌊√m⌋ square numbers up to and including m. The number m is a square number if and only if one can arrange m points in a square: 1² = 1, 2² = 4, 3² = 9, 4² = 16, 5² = 25, and so on. The formula for the nth square number is n²; the first square numbers are 1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144, 169, 196, ... The nth square is also equal to the sum of the first n odd numbers, since each square can be built from the previous one by adding an odd number of points in an L-shaped band along two of its sides.
So, for example, 5² = 25 = 1 + 3 + 5 + 7 + 9. The nth square number can be calculated from the previous two by doubling the (n − 1)th square, subtracting the (n − 2)th square number, and adding 2, because n² = 2(n − 1)² − (n − 2)² + 2. For example, 2×5² − 4² + 2 = 2×25 − 16 + 2 = 50 − 16 + 2 = 36 = 6².

A square number is also the sum of two consecutive triangular numbers (a triangular number is the sum of the natural numbers from 1 to n). The sum of two consecutive square numbers is a centered square number. Every odd square is also a centered octagonal number. Lagrange's four-square theorem, also known as Bachet's conjecture and proven in 1770 by Joseph Louis Lagrange, states that any positive integer can be written as the sum of 4 or fewer perfect squares. Three squares are not sufficient for numbers of the form 4^k(8m + 7). A positive integer can be represented as a sum of two squares precisely if its prime factorization contains no odd powers of primes of the form 4k + 3. This is generalized by Waring's problem.

A square number can only end in the digits 0, 1, 4, 5, 6, or 9 in base 10; more specifically, a square ending in 0 ends in 00, and a square ending in 5 ends in 25.

An easy way to compute a square such as 21² is to take the two numbers on either side of it, 20 and 22 (whose mean is 21), multiply them together, and add the square of the distance from the mean: 20×22 = 440, and 440 + 1² = 441. This works because of the identity known as the difference of two squares: working backwards, (21 − 1)(21 + 1) = 21² − 1² = 440.

A square number cannot be a perfect number (a number that equals the sum of its proper positive divisors). Squares of even numbers are even, since (2n)² = 4n². Squares of odd numbers are odd, since (2n + 1)² = 4(n² + n) + 1. It follows that square roots of even square numbers are even, and square roots of odd square numbers are odd. Chen Jingrun showed in 1975 that there always exists a number P which is either a prime or a product of two primes between n² and (n + 1)². See also Legendre's conjecture, proposed by Adrien-Marie Legendre, which states that there is a prime number between n² and (n + 1)².
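A short sketch that checks a few of the identities above for small n: the sum-of-odd-numbers identity, the recurrence n² = 2(n − 1)² − (n − 2)² + 2, the allowed final digits in base 10, and the "mean trick" for 21²:

```python
# First few square numbers
squares = [n * n for n in range(1, 15)]
print(squares)   # [1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144, 169, 196]

for n in range(2, 15):
    # n^2 is the sum of the first n odd numbers: 1 + 3 + ... + (2n - 1)
    assert n * n == sum(range(1, 2 * n, 2))
    # recurrence from the text: n^2 = 2(n-1)^2 - (n-2)^2 + 2
    assert n * n == 2 * (n - 1) ** 2 - (n - 2) ** 2 + 2

# Possible last digits of a square number in base 10
print(sorted({(n * n) % 10 for n in range(10)}))   # [0, 1, 4, 5, 6, 9]

# The "mean trick" for 21^2: multiply 20 by 22, then add 1^2
print(20 * 22 + 1 ** 2)   # 441
```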
http://www.citizendia.org/Square_number
CENTRIFUGAL FORCE: AN INTRODUCTION
Definition: centrifugal force means "center fleeing" and is not a real force, but the perceived effect of inertia. As an object travels in a circle, it appears to have a force pulling it outward. This is called a centrifugal force. This force is not to be confused with a centripetal force, which is the force pulling an object in toward the center as it moves in a circle.

CENTRIPETAL V. CENTRIFUGAL
For every action there is an equal and opposite reaction. Centripetal force, an action, has the reaction of centrifugal force. The two forces are equal in magnitude and opposite in direction. In fact, Newton's third law states that every action has an equal and opposite reaction. However, this cannot be restated as "every force has an equal and opposite force." There is a difference between force and action; the third law is true only for action and not for force alone. Although the two forces have the same magnitude, they are significantly different:
- The centripetal force acts on the body in motion. For example, if I were swinging a yo-yo in a circle, the centripetal force would be pulling in on the plastic part of the yo-yo.
- The centrifugal force acts on the source of the centripetal force. For example, the centrifugal force would be pulling out on the person holding the string of the yo-yo.

THE NONEXISTENT FORCE
Imagine you are in a car moving on a straight road. The road (and car) suddenly make a curve to the left and your body slams to the right. It feels as if there is a force pushing you to the right; however, there is none. Your body moves to the right because, according to Newton's first law, an object moving in a straight line should not change direction unless a force acts upon it. Your body naturally keeps moving straight; the real force is the one turning the car to the left. It is the same situation with centrifugal force, which is often referred to as the nonexistent force. For example, if you were to attach a ball to a string, hold the other end of the string and spin in a circle, it would feel as if the ball were pulling out on your arm. However, it is natural for the ball to go in a straight line (again, according to Newton's first law), so the centrifugal force that you feel is in fact just the nature of the ball working against the centripetal force that is pulling the ball into the circle. Because we feel this "force" so strongly and regularly, we will treat it as if it were an actual force.

A (VERY) BRIEF HISTORY
There is only one name publicly known to be connected to the history of centrifugal force, and that is Newton. Newton's three laws of motion have both shaped and initiated our understanding of centrifugal force; their concepts are mentioned throughout this site.

THE MATHEMATICS
We can find the equation for centrifugal force by finding the equation for centripetal force, because we know that the two forces are equal in magnitude. We have already derived the formula for centripetal acceleration and know that a_c = v²/r, or centripetal acceleration = velocity squared divided by the radius (in meters). We also know that F = ma, or force = mass times acceleration. When we combine these two equations we end up with F_c = mv²/r. Now that we have the equation for centripetal force, we almost have the equation for centrifugal force as well. The acceleration for centripetal force is aimed into the center of the circle, but the acceleration for centrifugal force is aimed away from the circle.
Therefore, because centripetal acceleration is positive and centrifugal acceleration points in the opposite direction, the centrifugal acceleration is negative.

FINAL EQUATION FOR CENTRIFUGAL FORCE
The final equation for the magnitude of the centrifugal force is f = mv²/r, where:
f = force (in newtons)
m = mass (in kilograms)
v = velocity (in meters per second)
r = radius (in meters)

TEST YOUR UNDERSTANDING
[The practice problem was originally shown as an image; from the solution, the given values are a mass of 2.5×10^-2 kg moving at 3 m/s under a centripetal force of 4.5×10^-1 N, with the radius to be found.]
The solution to this problem is: the radius equals 5×10^-1, or 0.5, meters. We can find this solution by first solving the equation for r, which leaves us with r = mv²/F_c. We then plug in the values that we have, which leaves us with r = (2.5×10^-2 kg)(3 m/s)²/(4.5×10^-1 N). When we complete the calculations we get 0.5 meters.

USEFUL TERMS
In order for an object to execute circular motion, even at a constant speed, the object must be accelerating towards the center of rotation. This acceleration is called the centripetal or radial acceleration and has a magnitude of a_c = v_T²/r = ω²r, where:
a_c = centripetal acceleration (SI: m/s²)
v_T = tangential velocity or speed (SI: m/s)
r = radius of the object's path (SI: m)
ω = angular velocity (SI: rad/s)

Centripetal force is the force that compels a body to move in a circular path. According to the law of inertia, in the absence of forces, an object moves in a straight line at a constant speed. An outside force must act on an object to make it move in a curved path. When you whirl a stone around on a string, you must pull on the string to keep the stone from flying off in a straight line. The force the string applies to the object is the centripetal force. The word centripetal is from two Latin words meaning to seek the center.

Newton's first law of motion states that "An object at rest tends to stay at rest and an object in motion tends to stay in motion with the same speed and in the same direction unless acted upon by an unbalanced force." Objects "tend to keep on doing what they're doing." In fact, it is the natural tendency of objects to resist changes in their state of motion. This tendency to resist changes in their state of motion is described as inertia.

Zitzewitz, Paul W., Ph.D. Physics: Principles and Problems. Columbus, OH: Glencoe/McGraw-Hill, 1999. This textbook has only half a page on centrifugal force; however, it is worth reading and creates a clear picture of the subject.
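A minimal sketch of the formula and the practice problem above: the magnitude of the centripetal (and perceived centrifugal) force is F = mv²/r, and solving for r gives r = mv²/F.

```python
def circular_force(mass_kg, speed_m_s, radius_m):
    """Magnitude of the centripetal force (and of the perceived
    centrifugal force) for uniform circular motion: F = m * v^2 / r."""
    return mass_kg * speed_m_s ** 2 / radius_m

def radius_from_force(mass_kg, speed_m_s, force_n):
    """Solve F = m * v^2 / r for the radius: r = m * v^2 / F."""
    return mass_kg * speed_m_s ** 2 / force_n

# The practice problem: m = 2.5e-2 kg, v = 3 m/s, Fc = 4.5e-1 N
print(radius_from_force(2.5e-2, 3.0, 4.5e-1))   # 0.5 (meters)
print(circular_force(2.5e-2, 3.0, 0.5))         # 0.45 (newtons), consistency check
```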
http://schools.wikia.com/wiki/Centrifugal_Force
In this lesson, the students will explore the relationship between mass, volume and density. They will use model boats to discover how changing mass and volume affect density. Previous knowledge should include an understanding of mass and how to measure mass on a triple beam balance. Some previous experience calculating volume of regular solids and irregular solids by displacement would be helpful for the extension activities. The students should have the math skills to multiply and divide using decimals. This investigation will take 2-3 days to complete. Pre-viewing activities, the video and making the boats can take one class period. Testing the boats and discussing results can take another class period or two, depending on how many trials the students perform with their boats.

Eureka #25: Volume and Density

Students will be able to:
1. identify two methods for changing density.
2. demonstrate how the density of an object can be changed.
3. predict and demonstrate how changes in mass and volume affect density and flotation.

Texas Assessment of Academic Skills (TAAS) Objectives
Science Objectives, Grade 8:
#1: The student will demonstrate the ability to acquire scientific data and/or information.
#3: The student will demonstrate the ability to communicate scientific data and/or information.
#5: The student will demonstrate the ability to make inferences, form generalized statements, and/or make predictions, using scientific data and/or information.
Math Objectives, Exit Level:
#1: The student will demonstrate an understanding of number concepts.
#2: The student will demonstrate an understanding of mathematical relations, functions, and other algebraic concepts.
#4: The student will demonstrate an understanding of measurement concepts using metric and customary units.
#6-9: The student will use the operations of addition, subtraction, multiplication, and division to solve problems.
#11: The student will determine solution strategies and will analyze or solve problems.
NCTM Standards for Grades 9-12:
Standard 1: Mathematics as Problem Solving
Standard 2: Mathematics as Communications
Standard 3: Mathematics as Reasoning

Per group of 4 students:
- 2 pieces of EXTRA HEAVY aluminum foil, 22 cm x 22 cm
- 2 metric rulers
- 1 triple beam balance
- approx. 50 washers, marbles, pennies or any other equal weights
- 1 small tub filled with water for floating boats
- 1 calculator
For teacher demonstration:
- data sheet
- 1 aquarium filled with water
- 1 ball of clay
- 1 Coke
- 1 Diet Coke
- 1 rectangular container smaller than the aquarium

- mass - the amount of matter in an object; measured in this experiment with grams (g)
- volume - the amount of space an object takes up; measured in this experiment with cubic centimeters (cm³)
- density - the ratio between mass and volume; measured in this experiment with grams per cubic centimeter (g/cm³)

Before the students walk into the classroom, have the aquarium set up in front with a Coke and Diet Coke floating in it. (Actually, the Diet Coke will float but the Coke will sink.) Initiate a short discussion about this discrepant event, encouraging the students to ask questions and make hypotheses. Do not acknowledge whether their hypotheses are right or wrong or actually conduct any experiments yet. After about five minutes, ask the students to write down their hypotheses as to why the Diet Coke floats but the regular Coke sinks. Next, take a ball of clay and ask the students to predict whether the clay will float or sink. Put the clay in the water. Students will note that it sank.
Remove the clay from the water and shape it into a small boat. Ask the students to predict whether this will float or sink. Place the clay boat in the water. Students will note that it now floats. Ask the students to write an explanation as to why the ball of clay sank but the clay boat floated. Allow the students to share some of their explanations. The wording of their explanations will influence how you proceed from this point, but it is very likely that at least some students will use phrases such as "it weighed more" or "it was bigger," while some may actually use terms such as mass, volume or density. It is not necessary here to acknowledge whether each explanation is right or wrong. Synthesize from these explanations something like the following statement: "So we've determined that the reason an object floats has something to do with the amount of space it takes up and possibly how much mass (or weight, if the term mass was not mentioned by the students) it has. We will watch a video called Eureka to learn more about how space and mass affect a property of matter called density. During the video, you will need to take several notes. First, there will be a space in the video notes section of your data sheet to define the terms volume and density. In addition, you will need to describe in the video notes section two ways the density of an object can be increased."

Pass out the data sheets to each student. Ask the students how they would begin a complete sentence to answer this question. Write on the board and have students begin their sentence in the video notes section like this: "The density of an object can be increased by:" Tell the students that they will also need to write the mathematical formula for calculating density during the video. Ask the students how that formula would start out and have them write it on their paper. (density = )

Say, "After the video we will be conducting some experiments with boats to learn more about density. We will also test your theories about the Coke cans. This video will last only about five minutes and I will be pausing periodically to discuss certain segments, so pay close attention."

BEGIN the Eureka video at the beginning. PAUSE the video when the narrator asks, "How small should you crush them so that they'll fit exactly into the container?" Ask the students what it is they have to know before figuring out how small these eight cars have to be crushed to fit exactly into the container. (How big the container is.) RESUME the video until the narrator says, "That's all that the word volume means - how much space something envelops." PAUSE the video. Ask a volunteer to repeat this definition and write it on the board. Ask, "How do you figure out how much space that cube envelops?" This question will help you assess how familiar the students are with the concept of volume. At this age level, most of the students should know that volume can be calculated by multiplying length, height, and width. The video answers this question in the very next sentence, so the students' answers are validated. Inquiries like this also help to keep the students focused when the video is resumed.

RESUME the video. PAUSE when the screen shows "Volume = 64 m³" and the narrator says that the container has a volume of 64 cubic meters. Ask the students where the little 3 comes from next to the "m" in 64 m³. Demonstrate that the superscript 3 comes from multiplying 3 different quantities measured in meters together.
"If you have eight cars but only 64 cubic meters to stuff these cars into, how do you figure out how big each car can be? (Divide 64 by 8.) RESUME the video. PAUSE when the narrator says, "In other words, you'll have to crush each old wreck into a cube measuring two meters by two meters by two meters." Ask the students if there is another shape the cars could be crushed into besides 2 x 2 x 2 and still all fit into the box. (4 x 2 x 1) RESUME the video. PAUSE when the narrator says, "A density machine takes a car with a mass of say 2000 kilograms and squeezes the kilograms into a much smaller volume." Let the picture of the car with the formula Density = Mass/Volume appear but pause before anything is said on this screen. Tell the students the following: "They have not actually given us a verbal definition of density, but they have given us a mathematical one. [Point to the mass/volume part of the picture on the screen.] What is this part the equation called? [You might have to give other examples like m/v or _ to get the students to understand that this is a fraction.] A fraction is a way of comparing two numbers. In density we are comparing an object's mass to its volume, or how much matter there is to how much space this matter takes up. Another way of stating this is to say we are making a ratio of mass compared to volume. So we can define density mathematically using the formula, and verbally as the ratio of mass to volume in an object." [Write this definition on the board.] REWIND back to the picture of the density machine and RESUME the video. PAUSE after the narrator says, "In this case you increase density by keeping the volume the same but increasing the mass." Ask, "So which was more dense, the box with two cars or the same box with eight cars? (8) What was changed in the system? (mass) How was mass changed? (increased) Which would be more dense, 2000 kg in this container (point to aquarium) or 2000 kg in this container? (hold up a smaller box) What is the difference between these two systems? (different volumes) Now you should be able to write two ways to increase density if you haven't already." RESUME the video as students complete their statements. STOP at the end. Discuss the students' statements on how to increase density. Show examples on the board to demonstrate how increasing the numerator (mass) will always increase the density if volume remains the same. Show how increasing the denominator (volume) will always decrease density if mass stays the same. Demonstrate how doubling the volume will halve the density but doubling the mass will double the density. Ask the student to now make a hypothesis about the Coke cans using the following three words: mass, volume, and density. Have a volunteer measure the mass of each can and calculate density. The density of the Diet Coke should be less than one because it floats while the density of the Coke should be greater than one. [Note: Make sure you test your soda cans first. Not all brands of diet and regular sodas follow this rule. In general, the diet drinks float because they contain aspartame (nutrasweet) rather than sugar and much less aspartame is needed to sweeten the soda compared to sugar.] Emphasize that when the mass to volume ratio is greater than one (the density of water at room temperature), objects will sink. If an object has a mass to volume ratio that is less than one, however, it is less dense than water and will float on the water. 
Tell the students that they will be constructing and experimenting with two boats to explore the concept of density further. They will need to begin with two identical pieces of aluminum foil which they will fold into the shape of a boat according to the diagram. Because these two pieces of foil are the same size, they should also be the same mass. Before students begin folding, they should mass each piece of foil to make sure the two pieces are within 0.5 grams of each other. The students should then follow the instructions on the student activity sheet and complete the questions at the end of the activity with their group. [Note: Due to the precision necessary for this activity, the teacher should allow at least 50 minutes for the boat construction, testing and discussion. It might be most efficient to construct each boat and find the average mass of one washer directly after the video, but wait to do the data collection and analysis until the next class period.] To find the average mass of one washer, have each team mass one washer and then use these masses to find an average the entire class can use in its calculations.

After the activity is complete, discuss the results with the class, putting special emphasis on the idea of mass to volume ratio and how this was changed with each boat. The students should note that the boat with the smaller volume had a higher mass to volume ratio and, therefore, needed less mass to make it sink. The sinking boats should all have had mass to volume ratios greater than one. There will likely be teams whose results did not seem to follow this general rule. Use this opportunity to initiate a discussion on experimental error and control of variables. Ask the students to identify any aspect of the experiment that might have led to inaccurate data collection. It is important to emphasize that this is the type of analysis that scientists must conduct any time they perform an experiment. After students identify any inconsistencies in experimental technique, have them discuss how this could be improved upon in a future experiment. Emphasize that this is why scientists often use repeated trials in an experiment to control error or bias. It would be helpful to put together the data from all of the teams to calculate class averages upon which they can base their conclusions.

Students can choose one of the following fruits to test at home: banana, orange, apple. Will the fruit float in water? How does peeling the fruit change its mass to volume ratio (or does it)? Invite a scuba diver to class to discuss how divers control their mass to volume ratio. Have students conduct research on how submarines dive and return to the surface.

Science: Allow students to conduct tests of various regular and diet sodas to determine if they float or sink. What about other types of canned drinks? Have students create their own data chart comparing various brands of soda.
Science: Challenge students to explain (using the terms density, mass and volume) why helium balloons float.
Math: Use this activity to explore other common ratios such as speed, acceleration, blood alcohol content, and, in baseball, earned run average.
Creative Writing: The density of ice is less than the density of liquid water. Imagine and describe how different the world would be if ice behaved like most substances and became more dense than liquid water when it solidified. Would lakes freeze from the bottom up? How would this affect the sea level?

1995-1996 National Teacher Training Institute / Austin. Master Teacher: Carol Fletcher
http://www.thirteen.org/edonline/nttidb/lessons/as/derbyas.html
Science CBSE Physics Flotation Term-II Class IX
Force : Pressure : Thrust : Atmospheric pressure : Buoyant force

Pressure in fluids – All liquids and gases are fluids.
- A solid exerts pressure on a surface due to its weight.
- Similarly, fluids have weight, and they also exert pressure on the base and walls of the container in which they are enclosed.
- Pressure exerted in any confined mass of fluid is transmitted undiminished in all directions.
- The pressure in a liquid is the same at all points at the same horizontal level. As we go deeper in the liquid, the pressure increases.
- When an object is placed in a liquid, the liquid exerts an upward force on it, e.g. when a piece of cork is held below the surface of water and then released, the cork immediately rises to the surface.
- It is a common experience that a mug filled with water appears to be heavier when it is lifted above the surface of the water in a bucket.
- In general, whenever an object is immersed in water, it appears and feels lighter. The weight of the object in water is called apparent weight. It is less than its true weight.
- Objects appear to be less heavy when submerged in water because the water exerts an upward force on them.
- The upward force acting on an object immersed in a liquid is called buoyant force. The buoyant force is also known as upthrust. It is due to the buoyant force exerted by the liquid that the weight of an object appears to be less in the liquid than its actual weight in air.
- It is due to the buoyant force exerted by water that we are able to swim in water and ships float on water.
- The tendency of a liquid to exert an upward force on an object placed in it is called buoyancy.
- As more and more volume of the object is immersed in a liquid, the upward buoyant force acting on it increases. But once the object is completely immersed in a liquid, lowering it further in the liquid does not increase the buoyant force. This means that the maximum upward buoyant force acts on an object when it is completely immersed in the liquid.

The buoyant force exerted by a liquid depends on two factors:
1. The volume of the solid object immersed in the liquid.
- As the volume of the solid object immersed inside the liquid increases, the upward buoyant force also increases. And when the object is completely immersed in the liquid, the buoyant force becomes maximum and remains constant.
- The magnitude of the buoyant force acting on a solid object does not depend on the nature of the solid object, e.g. if two balls made of different metals having different weights but equal volumes are fully immersed in a liquid, they will experience an equal loss in weight and thus an equal upward buoyant force. This is because both balls displace an equal weight of the liquid due to their equal volumes.
2. The density of the liquid.
- A liquid having higher density exerts more upward buoyant force on an object than another liquid having lower density. Thus, as the density of the liquid increases, the buoyant force exerted by it also increases, e.g. sea water has a higher density than fresh water; therefore, sea water will exert more buoyant force on an object immersed in it than fresh water. It is easier to swim in sea water because it exerts a greater buoyant force on the swimmer.
- Similarly, mercury is a liquid having very high density. So, mercury will exert a very great buoyant force on an object immersed in it. Even a very heavy material like an iron block floats in mercury because mercury exerts a very high buoyant force on the iron block due to its very high density.
When an object is immersed in a liquid, two forces act on it: 1. the weight (W) of the object acting downwards, and 2. the buoyant force (B) acting upwards. Whether an object will float or sink in a liquid depends on the relative magnitude of these two forces acting on the object in opposite directions. Three cases arise:
1. If B exerted by the liquid < W of the object, the object will sink in the liquid.
2. If B = W, the object will float in the liquid.
3. If B > W, the object will rise in the liquid and then float.
Thus an object will float in a liquid if the upward buoyant force it receives from the liquid is great enough to overcome the downward force of its weight.
For an object to float: Weight of object = Buoyant force. But, Buoyant force = Weight of liquid displaced by the object. Therefore, Weight of object = Weight of liquid displaced by the object.
Thus an object will float in a liquid if the weight of the object is equal to the weight of liquid displaced by it. The above relation holds true if the object has a lower density than the liquid.
- If the object has a higher density than the liquid, then the weight of liquid displaced will be less than the weight of the object, and the object will sink.
- An object will also float in a liquid if its density is equal to that of the liquid.
- When we put a piece of iron in water, it sinks immediately because iron is denser than water. But a ship made from iron and steel floats on water. This is because a ship is a hollow object having a lot of air in it. Air has low density, due to which the average density of the ship becomes less than the density of water, and the ship floats in water.
- This can be explained in another way. A heavy ship floats in water as it displaces a large weight of water, which provides a great buoyant force to keep it afloat.

Archimedes' principle – Buoyant force acting on an object = Weight of liquid displaced by that object. Archimedes' principle is applicable to objects in fluids, i.e. liquids as well as gases. Also, Buoyant force = Loss in weight of the body in water. Therefore, Loss in weight of the body in water = Weight of water displaced by the body.

Applications of Archimedes' principle –
1. It is used in designing ships and submarines.
2. It is used in determining the relative density of a substance.
3. The lactometers used for determining the purity of milk are based on Archimedes' principle.
4. The hydrometers used for determining the density of liquids are based on Archimedes' principle.

Density – The density of a substance is defined as the mass of the substance per unit volume.
- Density = Mass of the substance / Volume of the substance
- The SI unit of density is kilograms per cubic metre (kg/m³).
- The density of a substance, under specified conditions, is always the same. So, the density of a substance is one of its characteristic properties.
- The density of a given substance can help us to determine its purity.
- Different substances have different densities, e.g. the density of water is 1000 kg/m³, which means that the mass of 1 cubic metre of water is 1000 kg.
- Relative density of a substance = Density of the substance / Density of water
- Since relative density is a ratio, it has no units. It is a pure number.
- The relative density of a substance expresses the heaviness (or density) of the substance in comparison to water, e.g. the relative density of iron is 7.8, which means iron is 7.8 times as heavy as an equal volume of water.
- The relative density of water is 1. If the relative density of a substance is more than 1, then it will be heavier than water and hence it will sink in water.
- On the other hand, if the relative density of a substance is less than 1, then it will be lighter than water and hence float in water, e.g. ice has a density of about 900 kg/m³ and water has a density of 1000 kg/m³.
- Thus an ice cube has a relative density of 0.9, so it floats in water. The relative density of iron is 7.8, so an iron nail sinks in water.
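A brief sketch of the relative-density rule above: an object floats in water if its relative density (density divided by the density of water) is less than 1. The densities used are the approximate values quoted in the notes.

```python
WATER_DENSITY = 1000.0   # kg/m^3

def relative_density(density_kg_m3):
    """Relative density (specific gravity): density of the substance
    divided by the density of water. It is a pure number with no units."""
    return density_kg_m3 / WATER_DENSITY

def floats_in_water(density_kg_m3):
    """A substance floats in water if its relative density is less than 1."""
    return relative_density(density_kg_m3) < 1.0

for name, rho in [("ice", 900.0), ("iron", 7800.0)]:
    rd = relative_density(rho)
    verdict = "floats" if floats_in_water(rho) else "sinks"
    print(f"{name}: relative density {rd:.1f} -> {verdict} in water")
```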
http://cbseadda.blogspot.com/2012/08/science-cbse-physics-flotation-term-ii.html
If fundamental particles have even a minimal radius, many orders of magnitude less than the Planck length, then they can't form black holes, and the relativistic effects on their behavior at the Planck length or greater are trivial, although fundamental particles would be more dense than atomic nuclei if they had volumes on the order of magnitude of the Planck length. Indeed, there is no mechanism in nature, apart from the Big Bang, that can produce black holes smaller than about three solar masses, which coincidentally (or perhaps not so coincidentally) have mass densities (assuming that their total mass is distributed evenly throughout the unobservable volume behind the event horizon) no greater than the same order of magnitude as neutron stars and atomic nuclei. The larger a black hole, the lower its mass density. And it isn't obvious that any relics of smaller black holes that haven't evaporated due to Hawking radiation still exist in the universe, nor is it clear that any stars more dense than neutron stars actually exist, although they have been hypothesized.

Black Holes and Neutron Stars

Everybody knows that general relativity predicts the existence of black holes, from which even massless photons moving at the speed of light cannot escape, because the gravity of a black hole within its "event horizon" is overwhelming. Less well known is the fact that in all black holes created in the usual way, through the gravitational collapse of a large star (the minimum threshold for this to happen is about 3 solar masses), the total mass of the black hole divided by the volume of the region within the event horizon is always less than or equal to the mass density of a neutron star, which is predicted to be up to about twice the mass density of a typical atomic nucleus. The density of a black hole (measured by its mass divided by the volume included in the event horizon) increases as black hole mass decreases. At about three solar masses (6*10^30 kg), the mass of a black hole divided by its event horizon volume is comparable to the density of an atomic nucleus, and larger black holes are less dense than an atomic nucleus. Smaller black holes are theoretically possible in the equations of general relativity, but they aren't formed by gravitational collapse and have never been observed, even though they could conceivably have been created by non-gravitational forces in the Big Bang or under some highly energetic controlled conditions like those in a particle accelerator.

Another thing that many people don't realize about black holes is that they do radiate and lose mass over time through what is called Hawking radiation, after celebrity physicist Stephen Hawking. The smaller the black hole, the more rapidly it loses mass. All black holes are believed by many theorists to emit Hawking radiation at a rate inversely proportional to their mass. Since this emission further decreases their mass, black holes with very small mass would experience runaway evaporation, creating a massive burst of radiation at the final phase, equivalent to millions of one-megaton hydrogen bombs exploding. A regular black hole (of about 3 solar masses) cannot lose all of its mass within the lifetime of the universe (it would take about 10^69 years to do so, even without any matter falling in). However, since primordial black holes [if they exist] are not formed by stellar core collapse, [theoretically] they may be of any size.
A black hole with a mass [at the time it is formed] of about 10^11 kg would have a lifetime about equal to the age of the universe. Thus, if black holes formed in the Big Bang that initially had masses of more than 10^11 kg and less than 10^30 kg, if they exist at all, and they were formed in great enough numbers, we should be able to observe some of those that are relatively nearby in our own Milky Way galaxy exploding today. If a black hole too small to have been created by the collapse of a star was created around the time of the Big Bang, it could be no bigger than 10^23 kg now, because it would have been losing mass to Hawking radiation over time. This is about one 10,000,000th of the mass of the Sun, and about one-third of the mass of the planet Mercury. Such a black hole could be considerably lighter now if it started out smaller. No such black holes, called "primordial black holes," have ever been observed, although astronomers are actively engaged in looking for them. The most dense stars we are pretty sure exist (about 2,000 of them that have been observed) are neutron stars, which are estimated to be on the order of 12-15 km or less in radius: A typical neutron star has a mass between 1.35 and about 2.0 solar masses, with a corresponding radius of about 12 km . . . . In contrast, the Sun's radius is about 60,000 times that. Neutron stars have overall densities . . . of 3.7×10^17 to 5.9×10^17 kg/m3 (2.6×10^14 to 4.1×10^14 times the density of the Sun), which compares with the approximate density of an atomic nucleus of 3×10^17 kg/m3. The neutron star's density varies from below 1×10^9 kg/m3 in the crust, increasing with depth to above 6×10^17 or 8×10^17 kg/m3 deeper inside (denser than an atomic nucleus). . . . In general, compact stars of less than 1.44 solar masses – the Chandrasekhar limit – are white dwarfs, and above 2 to 3 solar masses (the Tolman–Oppenheimer–Volkoff limit), a quark star might be created; however, this is uncertain. Gravitational collapse will usually occur on any compact star between 10 and 25 solar masses and produce a black hole. No quark stars have ever been definitively observed, although three quark star candidates (a minuscule number relative to the 2,000 observed neutron stars) have been identified for further investigation. And, somewhat surprisingly, models of quark stars suggest that a quark star with 2.5 solar masses would have more volume than a neutron star with 2.0 solar masses, so the difference in mass density between the two varieties of post-supernova stars would be modest. The equations of general relativity imply that deep within a black hole is a singularity, a finite mass in an infinitesimal volume, but the laws of nature, in an act of cosmic censorship, make it not just practically but theoretically impossible to ever observe that singularity. If no primordial black holes, which have no mechanism for forming after the Big Bang, exist, this would imply that there is nothing in the macroscopic universe with an observable density much greater than that of a neutron star, or greater than twice the density of an atomic nucleus. How Big Would Fundamental Particles Be At Atomic Densities? The fundamental particles of quantum mechanics (quarks, leptons, photons, gluons, and the W and Z bosons) are treated as point-like.
But, given their known masses, they would not form gravitational singularities, which happens only when a mass is confined within its Schwarzschild gravitational radius r_s = 2Gm/c^2 (where m is the gravitational mass as measured by a distant observer), unless they each had a radius many orders of magnitude smaller than the Planck length, ~10^-33 cm. Suppose that fundamental particles had mass densities comparable to that of atomic nuclei. How big would they be? The density of atomic nuclei provides an estimate of the size of the neutron and proton: "The diameter of the nucleus is in the range of 1.75 fm (femtometer) (1.75×10^-15 m) for hydrogen (the diameter of a single proton) to about 15 fm for the heaviest atoms, such as uranium. These dimensions are much smaller than the diameter of the atom itself (nucleus + electronic cloud), by a factor of about 23,000 (uranium) to about 145,000 (hydrogen)." A nuclear radius is roughly the cube root of the number of protons and neutrons combined in the atom times 1.25 × 10^-15 m +/- 0.2 fm (this +/- factor varies from atom to atom), and the shape is approximately spherical. Protons and neutrons, of course, are made up of three quarks each, bound by gluons. Bare quarks make up about 1% of the mass of a proton or neutron, so a first generation quark is about 0.3% of the mass of a proton, with the rest of the mass coming from the "glue," and thus quarks would be about 14% of the size of a proton or neutron if they had an equivalent mass density, i.e. about 0.2 fm (2*10^-16 meters). Interestingly, this coincides rather neatly with the apparent charge distribution within a neutron (which has one up quark of charge +2/3 that appears to be centrally located and two down quarks of charge -1/3 that appear to orbit the up quark): "The neutron has a positively charged core of radius ≈ 0.3 fm surrounded by a compensating negative charge of radius between 0.3 fm and 2 fm. The proton has an approximately exponentially decaying positive charge distribution with a mean square radius of about 0.8 fm." But, a 10^-16 meter order of magnitude for up and down quarks would be much larger than the Planck-length scale that conventional wisdom would ordinarily expect. Of course, the cosmic censorship imposed by quark confinement (implied by QCD) in the case of quarks other than the top quark, and by the exceedingly rapid weak force decay, almost exclusively to bottom quarks, in the case of the top quark, greatly constrains our capacity to directly observe the radius (if any) of a quark. At a mass density equivalent to atomic nuclei, electrons would have about half the radius of an up or down quark, but this would seem to contradict the limits set by experiment, which suggest that an electron cannot have a radius of more than 10^-20 meters, a factor of 10,000 smaller. However, some of these experiments seem to set limits on electron compositeness, rather than on the actual radius of a genuinely fundamental, non-composite electron with a homogeneous, spherically distributed charge. Such a simple space-filling electron lacks the traits of a composite electron; it would behave far more like a point particle, but it would address inconsistencies between general relativity and quantum mechanics. An electron neutrino would have a radius of about 10^-18 or 10^-19 meters if its radius were set based on it having a mass density comparable to an atomic nucleus, a radius that also seems to exceed the experimental limits on that radius by a factor of at least 10 to 100.
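The scaling behind these size estimates is just r = (3m / (4*pi*rho))^(1/3) for a sphere of mass m at density rho. The Java sketch below (not from the post; the rounded nuclear density, the particle masses, and especially the assumed ~0.1 eV neutrino mass are illustrative inputs) compares that radius with the Schwarzschild radius for an up quark, an electron, and a neutrino.

public class ParticleRadii {
    static final double G = 6.674e-11;          // m^3 kg^-1 s^-2
    static final double C = 2.998e8;            // m/s
    static final double NUCLEAR_DENSITY = 3e17; // kg/m^3, rounded value used above
    static final double EV_TO_KG = 1.783e-36;   // 1 eV/c^2 expressed in kilograms

    // Radius of a uniform sphere of the given mass at nuclear density.
    static double radiusAtNuclearDensity(double massKg) {
        return Math.cbrt(3.0 * massKg / (4.0 * Math.PI * NUCLEAR_DENSITY));
    }

    // Schwarzschild radius 2Gm/c^2, for comparison.
    static double schwarzschildRadius(double massKg) {
        return 2.0 * G * massKg / (C * C);
    }

    public static void main(String[] args) {
        String[] names = {"up quark", "electron", "neutrino"};
        double[] masses = {
            2.2e6 * EV_TO_KG,   // ~2.2 MeV/c^2 current-quark mass
            9.109e-31,          // electron mass in kg
            0.1 * EV_TO_KG      // assumed ~0.1 eV/c^2; the actual value is unknown
        };
        for (int i = 0; i < masses.length; i++) {
            System.out.printf("%-9s m=%.2e kg  r(nuclear density)=%.1e m  r_s=%.1e m%n",
                    names[i], masses[i], radiusAtNuclearDensity(masses[i]),
                    schwarzschildRadius(masses[i]));
        }
    }
}

With these inputs the nuclear-density radii come out near 10^-16 m for the up quark and the electron, and a few times 10^-19 m for the neutrino, in line with the figures above, while the corresponding Schwarzschild radii are some twenty or more orders of magnitude smaller than even the Planck length.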
Thus, first generation electrons and neutrinos, at least, would seem to have a greater mass density than atomic nuclei, but would not be singularities in general relativity unless their size in space was many orders of magnitude smaller than the Planck length.
http://dispatchesfromturtleisland.blogspot.com/2011/10/black-hole-density-coincidence.html
13
74
In elementary algebra letters are used to stand for numbers. For example, in the equation ax^2+bx+c=0, the letters a, b, and c stand for various known constant numbers called coefficients and the letter x is an unknown variable number whose value depends on the values of a, b, and c and may be determined by solving the equation. Much of classical algebra is concerned with finding solutions to equations or systems of equations, i.e., finding the roots, or values of the unknowns, that upon substitution into the original equation will make it a numerical identity. For example, x=-2 is a root of x^2-2x-8=0 because (-2)^2-2(-2)-8=4+4-8=0; substitution will verify that x=4 is also a root of this equation. The equations of elementary algebra usually involve polynomial functions of one or more variables (see function). The equation in the preceding example involves a polynomial of second degree in the single variable x (see quadratic). One method of finding the zeros of the polynomial function f(x), i.e., the roots of the equation f(x)=0, is to factor the polynomial, if possible. The polynomial x^2-2x-8 has factors (x+2) and (x-4), since (x+2)(x-4)=x^2-2x-8, so that setting either of these factors equal to zero will make the polynomial zero. In general, if (x-r) is a factor of a polynomial f(x), then r is a zero of the polynomial and a root of the equation f(x)=0. To determine if (x-r) is a factor, divide it into f(x); according to the Factor Theorem, if the remainder f(r)—found by substituting r for x in the original polynomial—is zero, then (x-r) is a factor of f(x). Although a polynomial has real coefficients, its roots may not be real numbers; e.g., x^2-9 separates into (x+3)(x-3), which yields two zeros, x=-3 and x=+3, but the zeros of x^2+9 are imaginary numbers. The Fundamental Theorem of Algebra states that every polynomial f(x) = a_n*x^n + a_(n-1)*x^(n-1) + … + a_1*x + a_0, with a_n ≠ 0 and n ≥ 1, has at least one complex root, from which it follows that the equation f(x)=0 has exactly n roots, which may be real or complex and may not all be distinct. For example, the equation x^4+4x^3+5x^2+4x+4=0 has four roots, but two are identical and the other two are complex; the factors of the polynomial are (x+2)(x+2)(x+i)(x-i), as can be verified by multiplication. Modern algebra is yet a further generalization of arithmetic than is classical algebra. It deals with operations that are not necessarily those of arithmetic and that apply to elements that are not necessarily numbers. The elements are members of a set and are classed as a group, a ring, or a field according to the axioms that are satisfied under the particular operations defined for the elements. Among the important concepts of modern algebra are those of a matrix and of a vector space. See M. Artin, Algebra (1991). Branch of algebra concerned with methods of solving systems of linear equations; more generally, the mathematics of linear transformations and vector spaces. “Linear” refers to the form of the equations involved—in two dimensions, ax + by = c. Geometrically, this represents a line. If the variables are replaced by vectors, functions, or derivatives, the equation becomes a linear transformation. A system of equations of this type is a system of linear transformations. Because it shows when such a system has a solution and how to find it, linear algebra is essential to the theory of mathematical analysis and differential equations. Its applications extend beyond the physical sciences into, for example, biology and economics.
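The Factor Theorem mentioned above translates directly into code: to test whether (x-r) is a factor of f(x), evaluate f at r and check for a zero remainder. The Java sketch below (class and variable names are illustrative; Horner's method is used simply as a compact way to evaluate the polynomial) applies the test to the roots of x^2-2x-8 worked out in the passage.

public class FactorTheorem {
    // Evaluate a polynomial whose coefficients are listed from the highest
    // degree down (Horner's method); {1, -2, -8} represents x^2 - 2x - 8.
    static double eval(double[] coeffs, double x) {
        double result = 0.0;
        for (double c : coeffs) {
            result = result * x + c;
        }
        return result;
    }

    public static void main(String[] args) {
        double[] f = {1, -2, -8};            // x^2 - 2x - 8 = (x + 2)(x - 4)
        for (double r : new double[] {-2, 4, 3}) {
            boolean isRoot = eval(f, r) == 0.0;
            System.out.printf("f(%.0f) = %.0f -> (x - %.0f) %s a factor%n",
                    r, eval(f, r), r, isRoot ? "is" : "is not");
        }
    }
}

For these integer inputs the evaluation is exact, so comparing with 0.0 is safe; with general floating-point coefficients a small tolerance would be used instead.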
Algebra is a generalized version of arithmetic that uses variables to stand for unspecified numbers. Its purpose is to solve algebraic equations or systems of equations. Examples of such solutions are the quadratic formula (for solving a quadratic equation) and Gauss-Jordan elimination (for solving a system of equations in matrix form). In higher mathematics, an “algebra” is a structure consisting of a class of objects and a set of rules (analogous to addition and multiplication) for combining them. Basic and higher algebraic structures share two essential characteristics: (1) calculations involve a finite number of steps and (2) calculations involve abstract symbols (usually letters) representing more general objects (usually numbers). Higher algebra (also known as modern or abstract algebra) includes all of elementary algebra, as well as group theory, theory of rings, field theory, manifolds, and vector spaces. Boolean algebra is a symbolic system used for designing logic circuits and networks for digital computers. Its chief utility is in representing the truth value of statements, rather than the numeric quantities handled by ordinary algebra. It lends itself to use in the binary system employed by digital computers, since the only possible truth values, true and false, can be represented by the binary digits 1 and 0. A circuit in computer memory can be open or closed, depending on the value assigned to it, and it is the integrated work of such circuits that gives computers their computing ability. The fundamental operations of Boolean logic, often called Boolean operators, are “and,” “or,” and “not”; combinations of these make up 13 other Boolean operators. In most natural examples, one also has that the involution is isometric, i.e. ||x*|| = ||x|| for every element x. By a theorem of Gelfand and Naimark, given a B* algebra A there exists a Hilbert space H and an isometric *-homomorphism from A into the algebra B(H) of all bounded linear operators on H. Thus every B* algebra is isometrically *-isomorphic to a C*-algebra. Because of this, the term B* algebra is rarely used in current terminology, and has been replaced by (an overloading of) the term 'C* algebra'.
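As a small illustration of the Boolean operators described above, this Java sketch (the class name and table layout are illustrative) prints the truth table for AND, OR, and NOT, with true and false shown as the binary digits 1 and 0.

public class BooleanOps {
    public static void main(String[] args) {
        boolean[] values = {false, true};
        System.out.println(" a b | a AND b | a OR b | NOT a");
        for (boolean a : values) {
            for (boolean b : values) {
                System.out.printf(" %d %d |    %d    |   %d    |   %d%n",
                        a ? 1 : 0, b ? 1 : 0,
                        (a && b) ? 1 : 0, (a || b) ? 1 : 0, (!a) ? 1 : 0);
            }
        }
    }
}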
http://www.reference.com/browse/algebra
13
69
This third volume of the book series Lessons In Electric Circuits makes a departure from the former two in that the transition between electric circuits and electronic circuits is formally crossed. Electric circuits are connections of conductive wires and other devices whereby the uniform flow of electrons occurs. Electronic circuits add a new dimension to electric circuits in that some means of control is exerted over the flow of electrons by another electrical signal, either a voltage or a current. In and of itself, the control of electron flow is nothing new to the student of electric circuits. Switches control the flow of electrons, as do potentiometers, especially when connected as variable resistors (rheostats). Neither the switch nor the potentiometer should be new to your experience by this point in your study. The threshold marking the transition from electric to electronic, then, is defined by how the flow of electrons is controlled rather than whether or not any form of control exists in a circuit. Switches and rheostats control the flow of electrons according to the positioning of a mechanical device, which is actuated by some physical force external to the circuit. In electronics, however, we are dealing with special devices able to control the flow of electrons according to another flow of electrons, or by the application of a static voltage. In other words, in an electronic circuit, electricity is able to control electricity. Historically, the era of electronics began with the invention of the Audion tube, a device controlling the flow of an electron stream through a vacuum by the application of a small voltage between two metal structures within the tube. A more detailed summary of so-called electron tube or vacuum tube technology is available in the last chapter of this volume for those who are interested. Electronics technology experienced a revolution in 1948 with the invention of the transistor. This tiny device achieved approximately the same effect as the Audion tube, but in a vastly smaller amount of space and with less material. Transistors control the flow of electrons through solid semiconductor substances rather than through a vacuum, and so transistor technology is often referred to as solid-state electronics. An active device is any type of circuit component with the ability to electrically control electron flow (electricity controlling electricity). In order for a circuit to be properly called electronic, it must contain at least one active device. Components incapable of controlling current by means of another electrical signal are called passive devices. Resistors, capacitors, inductors, transformers, and even diodes are all considered passive devices. Active devices include, but are not limited to, vacuum tubes, transistors, silicon-controlled rectifiers (SCRs), and TRIACs. A case might be made for the saturable reactor to be defined as an active device, since it is able to control an AC current with a DC current, but I've never heard it referred to as such. The operation of each of these active devices will be explored in later chapters of this volume. All active devices control the flow of electrons through them. Some active devices allow a voltage to control this current while other active devices allow another current to do the job. Devices utilizing a static voltage as the controlling signal are, not surprisingly, called voltage-controlled devices. 
Devices working on the principle of one current controlling another current are known as current-controlled devices. For the record, vacuum tubes are voltage-controlled devices while transistors are made as either voltage-controlled or current controlled types. The first type of transistor successfully demonstrated was a current-controlled device. The practical benefit of active devices is their amplifying ability. Whether the device in question be voltage-controlled or current-controlled, the amount of power required of the controlling signal is typically far less than the amount of power available in the controlled current. In other words, an active device doesn't just allow electricity to control electricity; it allows a small amount of electricity to control a large amount of electricity. Because of this disparity between controlling and controlled powers, active devices may be employed to govern a large amount of power (controlled) by the application of a small amount of power (controlling). This behavior is known as amplification. It is a fundamental rule of physics that energy can neither be created nor destroyed. Stated formally, this rule is known as the Law of Conservation of Energy, and no exceptions to it have been discovered to date. If this Law is true -- and an overwhelming mass of experimental data suggests that it is -- then it is impossible to build a device capable of taking a small amount of energy and magically transforming it into a large amount of energy. All machines, electric and electronic circuits included, have an upper efficiency limit of 100 percent. At best, power out equals power in: Usually, machines fail even to meet this limit, losing some of their input energy in the form of heat which is radiated into surrounding space and therefore not part of the output energy stream. Many people have attempted, without success, to design and build machines that output more power than they take in. Not only would such a perpetual motion machine prove that the Law of Energy Conservation was not a Law after all, but it would usher in a technological revolution such as the world has never seen, for it could power itself in a circular loop and generate excess power for "free:" Despite much effort and many unscrupulous claims of "free energy" or over-unity machines, not one has ever passed the simple test of powering itself with its own energy output and generating energy to spare. There does exist, however, a class of machines known as amplifiers, which are able to take in small-power signals and output signals of much greater power. The key to understanding how amplifiers can exist without violating the Law of Energy Conservation lies in the behavior of active devices. Because active devices have the ability to control a large amount of electrical power with a small amount of electrical power, they may be arranged in circuit so as to duplicate the form of the input signal power from a larger amount of power supplied by an external power source. The result is a device that appears to magically magnify the power of a small electrical signal (usually an AC voltage waveform) into an identically-shaped waveform of larger magnitude. The Law of Energy Conservation is not violated because the additional power is supplied by an external source, usually a DC battery or equivalent. 
The amplifier neither creates nor destroys energy, but merely reshapes it into the waveform desired: In other words, the current-controlling behavior of active devices is employed to shape DC power from the external power source into the same waveform as the input signal, producing an output signal of like shape but different (greater) power magnitude. The transistor or other active device within an amplifier merely forms a larger copy of the input signal waveform out of the "raw" DC power provided by a battery or other power source. Amplifiers, like all machines, are limited in efficiency to a maximum of 100 percent. Usually, electronic amplifiers are far less efficient than that, dissipating considerable amounts of energy in the form of waste heat. Because the efficiency of an amplifier is always 100 percent or less, one can never be made to function as a "perpetual motion" device. The requirement of an external source of power is common to all types of amplifiers, electrical and non-electrical. A common example of a non-electrical amplification system would be power steering in an automobile, amplifying the power of the driver's arms in turning the steering wheel to move the front wheels of the car. The source of power necessary for the amplification comes from the engine. The active device controlling the driver's "input signal" is a hydraulic valve shuttling fluid power from a pump attached to the engine to a hydraulic piston assisting wheel motion. If the engine stops running, the amplification system fails to amplify the driver's arm power and the car becomes very difficult to turn. Because amplifiers have the ability to increase the magnitude of an input signal, it is useful to be able to rate an amplifier's amplifying ability in terms of an output/input ratio. The technical term for an amplifier's output/input magnitude ratio is gain. As a ratio of equal units (power out / power in, voltage out / voltage in, or current out / current in), gain is naturally a unitless measurement. Mathematically, gain is symbolized by the capital letter "A". For example, if an amplifier takes in an AC voltage signal measuring 2 volts RMS and outputs an AC voltage of 30 volts RMS, it has an AC voltage gain of 30 divided by 2, or 15: Correspondingly, if we know the gain of an amplifier and the magnitude of the input signal, we can calculate the magnitude of the output. For example, if an amplifier with an AC current gain of 3.5 is given an AC input signal of 28 mA RMS, the output will be 3.5 times 28 mA, or 98 mA: In the last two examples I specifically identified the gains and signal magnitudes in terms of "AC." This was intentional, and illustrates an important concept: electronic amplifiers often respond differently to AC and DC input signals, and may amplify them to different extents. Another way of saying this is that amplifiers often amplify changes or variations in input signal magnitude (AC) at a different ratio than steady input signal magnitudes (DC). The specific reasons for this are too complex to explain at this time, but the fact of the matter is worth mentioning. If gain calculations are to be carried out, it must first be understood what type of signals and gains are being dealt with, AC or DC. Electrical amplifier gains may be expressed in terms of voltage, current, and/or power, in both AC and DC. A summary of gain definitions is as follows. 
The triangle-shaped "delta" symbol (Δ) represents change in mathematics, so "ΔVoutput / ΔVinput" means "change in output voltage divided by change in input voltage," or more simply, "AC output voltage divided by AC input voltage": If multiple amplifiers are staged, their respective gains form an overall gain equal to the product (multiplication) of the individual gains: In its simplest form, an amplifier's gain is a ratio of output over input. Like all ratios, this form of gain is unitless. However, there is an actual unit intended to represent gain, and it is called the bel. As a unit, the bel was actually devised as a convenient way to represent power loss in telephone system wiring rather than gain in amplifiers. The unit's name is derived from Alexander Graham Bell, the famous American inventor whose work was instrumental in developing telephone systems. Originally, the bel represented the amount of signal power loss due to resistance over a standard length of electrical cable. Now, it is defined in terms of the common (base 10) logarithm of a power ratio (output power divided by input power): Because the bel is a logarithmic unit, it is nonlinear. To give you an idea of how this works, consider the following table of figures, comparing power losses and gains in bels versus simple ratios: It was later decided that the bel was too large of a unit to be used directly, and so it became customary to apply the metric prefix deci (meaning 1/10) to it, making it decibels, or dB. Now, the expression "dB" is so common that many people do not realize it is a combination of "deci-" and "-bel," or that there even is such a unit as the "bel." To put this into perspective, here is another table contrasting power gain/loss ratios against decibels: As a logarithmic unit, this mode of power gain expression covers a wide range of ratios with a minimal span in figures. It is reasonable to ask, "why did anyone feel the need to invent a logarithmic unit for electrical signal power loss in a telephone system?" The answer is related to the dynamics of human hearing, the perceptive intensity of which is logarithmic in nature. Human hearing is highly nonlinear: in order to double the perceived intensity of a sound, the actual sound power must be multiplied by a factor of ten. Relating telephone signal power loss in terms of the logarithmic "bel" scale makes perfect sense in this context: a power loss of 1 bel translates to a perceived sound loss of 50 percent, or 1/2. A power gain of 1 bel translates to a doubling in the perceived intensity of the sound. An almost perfect analogy to the bel scale is the Richter scale used to describe earthquake intensity: a 6.0 Richter earthquake is 10 times more powerful than a 5.0 Richter earthquake; a 7.0 Richter earthquake is 100 times more powerful than a 5.0 Richter earthquake; a 4.0 Richter earthquake is 1/10 as powerful as a 5.0 Richter earthquake, and so on. The measurement scale for chemical pH is likewise logarithmic: a difference of 1 on the scale is equivalent to a tenfold difference in hydrogen ion concentration of a chemical solution. An advantage of using a logarithmic measurement scale is the tremendous range of expression afforded by a relatively small span of numerical values, and it is this advantage which secures the use of Richter numbers for earthquakes and pH for hydrogen ion activity. Another reason for the adoption of the bel as a unit for gain is for simple expression of system gains and losses.
Consider the last system example where two amplifiers were connected in tandem to amplify a signal. The respective gain for each amplifier was expressed as a ratio, and the overall gain for the system was the product (multiplication) of those two ratios: If these figures represented power gains, we could directly apply the unit of bels to the task of representing the gain of each amplifier, and of the system altogether: Close inspection of these gain figures in the unit of "bel" yields a discovery: they're additive. Ratio gain figures are multiplicative for staged amplifiers, but gains expressed in bels add rather than multiply to equal the overall system gain. The first amplifier with its power gain of 0.477 B adds to the second amplifier's power gain of 0.699 B to make a system with an overall power gain of 1.176 B. Recalculating for decibels rather than bels, we notice the same phenomenon: To those already familiar with the arithmetic properties of logarithms, this is no surprise. It is an elementary rule of algebra that the antilogarithm of the sum of two numbers' logarithm values equals the product of the two original numbers. In other words, if we take two numbers and determine the logarithm of each, then add those two logarithm figures together, then determine the "antilogarithm" of that sum (elevate the base number of the logarithm -- in this case, 10 -- to the power of that sum), the result will be the same as if we had simply multiplied the two original numbers together. This algebraic rule forms the heart of a device called a slide rule, an analog computer which could, among other things, determine the products and quotients of numbers by addition (adding together physical lengths marked on sliding wood, metal, or plastic scales). Given a table of logarithm figures, the same mathematical trick could be used to perform otherwise complex multiplications and divisions by only having to do additions and subtractions, respectively. With the advent of high-speed, handheld, digital calculator devices, this elegant calculation technique virtually disappeared from popular use. However, it is still important to understand when working with measurement scales that are logarithmic in nature, such as the bel (decibel) and Richter scales. When converting a power gain from units of bels or decibels to a unitless ratio, the mathematical inverse function of common logarithms is used: powers of 10, or the antilog. Converting decibels into unitless ratios for power gain is much the same, only a division factor of 10 is included in the exponent term: Because the bel is fundamentally a unit of power gain or loss in a system, voltage or current gains and losses don't convert to bels or dB in quite the same way. When using bels or decibels to express a gain other than power, be it voltage or current, we must perform the calculation in terms of how much power gain there would be for that amount of voltage or current gain. For a constant load impedance, a voltage or current gain of 2 equates to a power gain of 4 (2^2); a voltage or current gain of 3 equates to a power gain of 9 (3^2). If we multiply either voltage or current by a given factor, then the power gain incurred by that multiplication will be the square of that factor.
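Before turning to voltage and current gains, here is a minimal Java sketch (not part of the original text; the class name and the sample power gains of 3 and 5 are illustrative) that makes the power-ratio/decibel conversions and the additive property described above concrete.

public class DecibelGain {
    // Power ratio expressed in decibels.
    static double ratioToDb(double powerRatio) {
        return 10.0 * Math.log10(powerRatio);
    }

    // Decibels converted back into a unitless power ratio.
    static double dbToRatio(double db) {
        return Math.pow(10.0, db / 10.0);
    }

    public static void main(String[] args) {
        double gain1 = 3.0, gain2 = 5.0;              // power gains as plain ratios
        double db1 = ratioToDb(gain1);                // about 4.77 dB
        double db2 = ratioToDb(gain2);                // about 6.99 dB
        System.out.printf("stage 1: %.2f dB, stage 2: %.2f dB%n", db1, db2);
        System.out.printf("sum: %.2f dB -> ratio %.1f%n",
                db1 + db2, dbToRatio(db1 + db2));     // 11.76 dB -> 15.0
        System.out.printf("direct product of ratios: %.1f%n", gain1 * gain2);
    }
}

Run as-is, it reproduces the figures above: 4.77 dB plus 6.99 dB is 11.76 dB, which converts back to an overall power ratio of 15, the same result as multiplying 3 by 5 directly.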
This relates back to the forms of Joule's Law where power was calculated from either voltage or current, and resistance: Thus, when translating a voltage or current gain ratio into a respective gain in terms of the bel unit, we must include this exponent in the equation(s): The same exponent requirement holds true when expressing voltage or current gains in terms of decibels: However, thanks to another interesting property of logarithms, we can simplify these equations to eliminate the exponent by including the "2" as a multiplying factor for the logarithm function. In other words, instead of taking the logarithm of the square of the voltage or current gain, we just multiply the voltage or current gain's logarithm figure by 2 and the final result in bels or decibels will be the same: The process of converting voltage or current gains from bels or decibels into unitless ratios is much the same as it is for power gains: Here are the equations used for converting voltage or current gains in decibels into unitless ratios: While the bel is a unit naturally scaled for power, another logarithmic unit has been invented to directly express voltage or current gains/losses, and it is based on the natural logarithm rather than the common logarithm as bels and decibels are. Called the neper, its unit symbol is a lower-case "n." For better or for worse, neither the neper nor its attenuated cousin, the decineper, is popularly used as a unit in American engineering applications. It is also possible to use the decibel as a unit of absolute power, in addition to using it as an expression of power gain or loss. A common example of this is the use of decibels as a measurement of sound pressure intensity. In cases like these, the measurement is made in reference to some standardized power level defined as 0 dB. For measurements of sound pressure, 0 dB is loosely defined as the lower threshold of human hearing, objectively quantified as 1 picowatt of sound power per square meter of area. A sound measuring 40 dB on the decibel sound scale would be 10^4 times greater than the threshold of hearing. A 100 dB sound would be 10^10 (ten billion) times greater than the threshold of hearing. Because the human ear is not equally sensitive to all frequencies of sound, variations of the decibel sound-power scale have been developed to represent physiologically equivalent sound intensities at different frequencies. Some sound intensity instruments were equipped with filter networks to give disproportionate indications across the frequency scale, the intent of which is to better represent the effects of sound on the human body. Three filtered scales became commonly known as the "A," "B," and "C" weighted scales. Decibel sound intensity indications measured through these respective filtering networks were given in units of dBA, dBB, and dBC. Today, the "A-weighted scale" is most commonly used for expressing the equivalent physiological impact on the human body, and is especially useful for rating dangerously loud noise sources. Another standard-referenced system of power measurement in the unit of decibels has been established for use in telecommunications systems. This is called the dBm scale. The reference point, 0 dBm, is defined as 1 milliwatt of electrical power dissipated by a 600 Ω load. According to this scale, 10 dBm is equal to 10 times the reference power, or 10 milliwatts; 20 dBm is equal to 100 times the reference power, or 100 milliwatts.
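The 20*log10 rule for voltage or current gain and the dBm reference just described can be checked with another short Java sketch (again illustrative, not from the book); it expresses a voltage gain in decibels and converts an RMS voltage across a 600 Ω load into dBm using P = V^2/R relative to 1 mW.

public class DbmExample {
    // Voltage (or current) gain in dB: 20*log10(ratio), i.e. 10*log10(ratio^2).
    static double voltageGainToDb(double voltageRatio) {
        return 20.0 * Math.log10(voltageRatio);
    }

    // dBm for an RMS voltage across a given load, referenced to 1 milliwatt.
    static double voltsToDbm(double voltsRms, double loadOhms) {
        double watts = voltsRms * voltsRms / loadOhms;  // P = V^2 / R
        return 10.0 * Math.log10(watts / 0.001);
    }

    public static void main(String[] args) {
        // A voltage gain of 15 (2 V in, 30 V out) expressed in decibels:
        System.out.printf("voltage gain 15 = %.2f dB%n", voltageGainToDb(15.0));
        // 0.7746 V RMS across 600 ohms dissipates 1 mW, i.e. 0 dBm:
        System.out.printf("0.7746 V across 600 ohms = %.2f dBm%n",
                voltsToDbm(0.7746, 600.0));
    }
}

The second line confirms the figure given in the next paragraph: about 0.775 V RMS across 600 Ω dissipates 1 mW and therefore reads 0 dBm.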
Some AC voltmeters come equipped with a dBm range or scale (sometimes labeled "DB") intended for use in measuring AC signal power across a 600 Ω load. 0 dBm on this scale is, of course, elevated above zero because it represents something greater than 0 (actually, it represents 0.7746 volts across a 600 Ω load, voltage being equal to the square root of power times resistance; the square root of 0.001 multiplied by 600). When viewed on the face of an analog meter movement, this dBm scale appears compressed on the left side and expanded on the right in a manner not unlike a resistance scale, owing to its logarithmic nature. An adaptation of the dBm scale for audio signal strength is used in studio recording and broadcast engineering for standardizing volume levels, and is called the VU scale. VU meters are frequently seen on electronic recording instruments to indicate whether or not the recorded signal exceeds the maximum signal level limit of the device, where significant distortion will occur. This "volume indicator" scale is calibrated according to the dBm scale, but does not directly indicate dBm for any signal other than steady sine-wave tones. The proper unit of measurement for a VU meter is volume units. When relatively large signals are dealt with, and an absolute dB scale would be useful for representing signal level, specialized decibel scales are sometimes used with reference points greater than the 1 mW used in dBm. Such is the case for the dBW scale, with a reference point of 0 dBW established at 1 watt. Another absolute measure of power called the dBk scale references 0 dBk at 1 kW, or 1000 watts. Lessons In Electric Circuits copyright (C) 2000-2003 Tony R. Kuphaldt, under the terms and conditions of the Design Science License.
http://www.faqs.org/docs/electric/Semi/SEMI_1.html
13
51
The purpose of this week’s programming assignment is to learn how to deal with events from multiple sources. In addition, we will learn how to use some of Java’s graphics capabilities for drawing shapes in a display area. The programming assignment for this week is to implement a simple paddle ball game. Paddle Ball Game Overview The paddle ball game is a simplification of the Pong game. In the Pong game, a ball is moving around the display, bouncing off walls. The player moves a paddle in one dimension, trying to hit the ball whenever possible. If the ball hits the paddle, it bounces off and continues on its way. If the paddle misses the ball, the ball goes off the end of the display area and the player loses a point. In our paddle ball game, the ball moves around the display, bouncing off walls just as in the Pong game. The player controls a paddle trying to hit the ball just like in the Pong game. If the ball hits the paddle, it bounces off and continues on its way. The main difference in our game is that if the player misses the ball, the ball simply bounces off the wall behind the paddle and continues to travel. There is no scoring in this game. Object Oriented Design If we analyze the game description we can see there are several different objects which are interacting. There is a ball object which is traveling around the display. There is a paddle object which is being moved by the player. There is a display which is presenting the graphical representation of the game to the user. These objects are readily apparent from the description of the game. The ball object has a size and a position within the field of play. It also has a direction of travel in both the X and Y dimensions. The ball must be able to update its position when needed. Since the ball normally bounces off walls, it needs to know the length and width of the field of play when updating its position. The ball must also be able to draw itself. The paddle has a size and a position within the field of play. The paddle must be able to update its position within the field of play. The paddle must also be able to draw itself. The display has a display area which has a length and width. It has a means for drawing graphical shapes within the display area. The display must be able to update the display area when the position of the ball or paddle changes. The display itself has no concept of the game or its objects, so it will need to interact with some other object to draw all the game objects. At this point, we have a ball, a paddle, and a display object. What is missing is some object which provides the game context. The ball has no awareness of the paddle. The paddle has no awareness of the ball. The display has no awareness of either the ball or paddle. We need an object which enforces the rules and concepts of the game and manages the behaviors of the other objects involved in the game. Let’s call this the controller object. The controller has a ball, a paddle, and a display object. The controller manages game events. It gets mouse events and tells the paddle to update its position based on the position of the mouse cursor. It gets timer events and tells the ball to update its position based on the size of the display area which it gets from the display. It tells the display when to redraw the display area. It also provides a method which the display object uses to request the controller to draw the current state of the game. The controller is responsible for telling the ball and paddle to draw themselves.
The controller is also responsible for detecting when game objects are interacting. Specifically, it must be able to determine when the ball has made contact with the paddle. This requires that the controller is able to determine if a specific surface of the ball is in contact with a specific surface of the paddle. When this is detected, the controller tells the ball to reverse the direction of travel in the X or Y direction. This places a requirement on the ball and paddle to provide methods allowing the controller to determine the X or Y position of any given surface of the ball or paddle. The Ball Class Given that the ball has a size, a position, and moves a specific amount on each update, the Ball class will need member variables for all of these attributes. The size of the ball (diameter) and the number of pixels to move per update should be established when a ball object is created (explicit value constructor). To simplify things, the distance to move (increment) will initially be the same in the X and Y directions. The X and Y position members need to be set to some starting position by the constructor. The X, Y position of the ball is the upper left corner of the minimal box which contains the ball. When the X increment is positive, the ball is traveling to the right. When it is negative it is traveling to the left. When the Y increment is positive the ball is traveling down. When it is negative it is traveling up. According to the description in the previous section, the ball must provide the position of its edges to the controller. To do that, the Ball class provides the following methods: getTop, getBottom, getLeft, getRight The first two methods return the Y coordinate of the top/bottom edge of the ball. The other two methods return the X coordinate of the left/right edge of the ball. These values can easily be calculated from the X, Y position and the diameter of the ball. The Ball class must provide a method which takes a Graphics object parameter and uses it to draw the ball at its current position. When the controller detects that the ball has hit the paddle, the controller must tell the ball to reverse its direction of travel in either the X or Y direction. To be flexible, the Ball class should provide two methods, one to reverse the X direction and one to reverse the Y direction. To the ball, reversing direction simply means negating the value of the X or Y increment. The controller also must know whether the center of the ball is within the boundaries of the paddle in order to detect contact. This means the Ball class must provide methods to report its horizontal or vertical center. This can be easily computed from the current position and the diameter. Finally, the Ball class provides a method which the controller calls any time the ball needs to change position. This method adds the X increment to the current X position, and the Y increment to the current Y position. The Ball class is responsible for detecting when it has encountered a wall. So, this method needs to know where the edges of the game area are. This method must be given the length and height of the game area as parameters. The following conditions must be checked for detecting contact with a wall: 1. Top edge of ball <= 0 then reverse Y travel direction 2. Bottom edge of ball >= height then reverse Y travel direction 3. Left edge of ball <= 0 then reverse X travel direction 4. Right edge of ball >= length then reverse X travel direction The Paddle Class The Paddle class has a length and a position. 
Both of these should be initialized by an explicit value constructor. That allows the controller to determine the paddle length and the position of the paddle relative to the wall. We will simplify things by stating that the paddle will have a horizontal orientation and will move left to right only. The paddle cannot move in the vertical direction once it has been created. The X and Y position of the paddle should represent the center of the top surface of the paddle. This will support the correlation of the mouse cursor position to the center of the paddle. The draw method of the Paddle class takes a Graphics object parameter and draws a filled rectangle based on the current position of the paddle. Drawing a rectangle is based on the upper left corner of the rectangle as its reference position. Calculation of the reference position is easily done using the X and Y position of the paddle and the length of the paddle. The Paddle class controls what the width of the paddle will be. To support the controller, the Paddle class must have methods which return the position of the top, bottom, left, and right edges of the paddle. The controller needs this information for detecting when the ball and paddle come into contact. The getTop and getBottom methods return a Y position. The getLeft and getRight methods return an X position. These values are easily calculated from the X and Y position, and the length and width of the paddle. Finally, a method must be provided to allow the controller to change the position of the paddle based on the X and Y coordinates of the mouse cursor. This method must restrict the movement of the paddle to changes to the X coordinate only. The Display Class The Display class provides a window to present the visual rendering of the state of the game. It also must interact with the controller. The easiest way to create a Display class which can draw various graphics is to have the class extend the JPanel class and override the paintComponent method. This method should simply fill a rectangle with some background color. The size of the rectangle is simply the size of the panel which is acquired from the getWidth and getHeight methods of the JPanel class. After drawing the background, the display object must pass its Graphics object to the controller, enabling the controller to arrange for the other game components to draw themselves. The only other method needed in this class is an explicit value constructor which takes a controller object and stores it into a member variable. The constructor also creates a JFrame object, storing it into a member variable. The display object adds itself to the JFrame. The frame initialization should be completed, including setting the initial size of the window which will display the game. The Controller Class The controller manages all aspects of the game including various game parameters. It determines the diameter of the ball and the distance the ball travels with each move. The distance travelled should be less than the diameter. The speed of ball movement is controlled by a combination of distance travelled and frequency of timer events. Timer events should be set up to occur around every 25 milliseconds. The length of the paddle should be at least twice the diameter of the ball. The game is to be set up so the paddle moves horizontally near the top of the display area. There should be at least 5% of the display height between the paddle and the top of the display area.
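Before moving on to the controller, here is one possible sketch of the Ball class described earlier. The accessor names getTop, getBottom, getLeft, and getRight come from the assignment itself; the constructor signature and the names reverseX, reverseY, getCenterX, and getYIncrement are illustrative choices, not requirements.

import java.awt.Color;
import java.awt.Graphics;

// One possible shape for the Ball class described in the assignment.
public class Ball {
    private int x, y;            // upper-left corner of the ball's bounding box
    private int xInc, yInc;      // pixels moved per update in each direction
    private final int diameter;

    public Ball(int diameter, int increment, int startX, int startY) {
        this.diameter = diameter;
        this.xInc = increment;
        this.yInc = increment;
        this.x = startX;
        this.y = startY;
    }

    public int getTop()     { return y; }
    public int getBottom()  { return y + diameter; }
    public int getLeft()    { return x; }
    public int getRight()   { return x + diameter; }
    public int getCenterX() { return x + diameter / 2; }
    public int getYIncrement() { return yInc; }

    public void reverseX() { xInc = -xInc; }
    public void reverseY() { yInc = -yInc; }

    // Move the ball one step and bounce off the walls of a width-by-height area.
    public void update(int width, int height) {
        x += xInc;
        y += yInc;
        if (getTop() <= 0 || getBottom() >= height) reverseY();
        if (getLeft() <= 0 || getRight() >= width)  reverseX();
    }

    // Draw the ball at its current position using the supplied Graphics object.
    public void draw(Graphics g) {
        g.setColor(Color.RED);
        g.fillOval(x, y, diameter, diameter);
    }
}

A Paddle class would follow the same pattern, with its X and Y position representing the center of its top surface.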
The Controller class contains a ball object, a paddle object, and a display object. In addition, the controller is responsible for managing the events that drive the game. Therefore, this class must implement the ActionListener interface to deal with Timer events which drive the ball. It must also implement the MouseMotionListener interface to cause the paddle to move based on mouse movement. The constructor of this class creates the three objects mentioned above. It also must create a Timer object which will fire every 25 milliseconds and send ActionEvents to the controller object’s actionPerformed method. The last thing the constructor needs to do is to register the controller object as a mouseMotionListener on the display object. So, when the mouse cursor moves across the display area, MouseEvents will be delivered to the mouseDragged or mouseMoved methods of the controller object. The controller provides a draw method which takes a Graphics object parameter and is called by the display object. This method should check to see if the ball and paddle objects exist, and if they do, ask each object to draw itself, passing the Graphics object into the draw methods of those objects. The mouseDragged and mouseMoved methods are responsible for managing the paddle position. When a MouseEvent occurs and one of these methods executes, the method uses its MouseEvent parameter to get the current X and Y coordinates of the mouse cursor. These coordinates are given to the update method of the paddle which adjusts the paddle position based on these new coordinates. Once the paddle has been updated, the controller calls the repaint method on the display object to cause the display to reflect the new position of the paddle. The controller’s actionPerformed method is responsible for causing the ball to move and detecting whether the ball is in contact with the paddle. It calls the update method on the ball, passing in the width and height of the display which it gets from the display object. To detect whether the ball has just contacted the paddle, the following steps should be executed: 1. Get the Y coordinates of the top of the ball and the bottom of the paddle. 2. For contact to have just occurred, the top of the ball must be less than or equal to the bottom of the paddle, and it must also be greater than or equal to the bottom of the paddle minus the amount the ball just moved. 3. If step 2 is true, the horizontal center of the ball must be between the left and right ends of the paddle. 4. If step 3 is true, tell the ball to reverse its vertical direction. Finally, the actionPerformed method invokes the repaint method to draw the display with the new position of the ball. Play the Game Create a test application that creates a controller object and play the game! Experiment with changing the game parameters and observe how the game reacts. When the game works, take a screen shot of the game. Turn in ALL your code along with the screen shot. Make sure that your code is fully commented!
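The four-step contact test above can be isolated into a small, self-contained helper so it is easy to reason about and unit-test before wiring it into actionPerformed. The Java sketch below is illustrative (the class, method, and parameter names are not specified by the assignment); it takes plain coordinates rather than Ball and Paddle objects.

public class ContactTest {
    // Returns true if the top of the ball crossed the bottom edge of the paddle
    // on this move (steps 1-2) and the ball's horizontal center lies between the
    // paddle's left and right ends (step 3).
    static boolean ballHitsPaddle(int ballTop, int ballCenterX, int lastYMove,
                                  int paddleBottom, int paddleLeft, int paddleRight) {
        boolean crossed = ballTop <= paddleBottom
                       && ballTop >= paddleBottom - lastYMove;
        boolean aligned = ballCenterX >= paddleLeft && ballCenterX <= paddleRight;
        return crossed && aligned;
    }

    public static void main(String[] args) {
        // Paddle bottom at y=60, spanning x=100..180; the ball moved 5 px this step.
        System.out.println(ballHitsPaddle(58, 140, 5, 60, 100, 180)); // true: contact
        System.out.println(ballHitsPaddle(58, 300, 5, 60, 100, 180)); // false: misses the paddle horizontally
        System.out.println(ballHitsPaddle(40, 140, 5, 60, 100, 180)); // false: crossed on an earlier move
    }
}

Inside the controller, the same logic would be fed from the ball's getTop, center, and Y increment together with the paddle's getBottom, getLeft, and getRight values, followed by a call to reverse the ball's vertical direction (step 4) and a repaint of the display.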
http://www.chegg.com/homework-help/questions-and-answers/introduction-the-purpose-of-this-weeks-programming-assignment-is-to-learn-how-to-deal-with-q3656762
13
137
Retreat of glaciers since 1850 The retreat of glaciers since 1850 affects the availability of fresh water for irrigation and domestic use, mountain recreation, animals and plants that depend on glacier-melt, and in the longer term, the level of the oceans. Studied by glaciologists, the temporal coincidence of glacier retreat with the measured increase of atmospheric greenhouse gases is often cited as an evidentiary underpinning of global warming. Mid-latitude mountain ranges such as the Himalayas, Alps, Rocky Mountains, Cascade Range, and the southern Andes, as well as isolated tropical summits such as Mount Kilimanjaro in Africa, are showing some of the largest proportionate glacial losses. In general glaciers are continuing to melt and retreat. The Little Ice Age was a period from about 1550 to 1850 when the world experienced relatively cooler temperatures compared to the present. Subsequently, until about 1940, glaciers around the world retreated as the climate warmed substantially. Glacial retreat slowed and even reversed temporarily, in many cases, between 1950 and 1980 as a slight global cooling occurred. Since 1980, a significant global warming has led to glacier retreat becoming increasingly rapid and ubiquitous, so much so that some glaciers have disappeared altogether, and the existence of a great number of the remaining glaciers of the world is threatened. In locations such as the Andes of South America and Himalayas in Asia, the demise of glaciers in these regions will have potential impact on water supplies. The retreat of mountain glaciers, notably in western North America, Asia, the Alps, Indonesia and Africa, and tropical and subtropical regions of South America, has been used to provide qualitative evidence for the rise in global temperatures since the late 19th century. The recent substantial retreat and an acceleration of the rate of retreat since 1995 of a number of key outlet glaciers of the Greenland and West Antarctic ice sheets, may foreshadow a rise in sea level, having a potentially dramatic effect on coastal regions worldwide. Glacier mass balance Crucial to the survival of a glacier is its mass balance, the difference between accumulation and ablation (melting and sublimation). Climate change may cause variations in both temperature and snowfall, causing changes in mass balance. A glacier with a sustained negative balance is out of equilibrium and will retreat. A glacier with sustained positive balance is also out of equilibrium, and will advance to reestablish equilibrium. Currently, there are a few advancing glaciers, although their modest growth rates suggest that they are not far from equilibrium. Glacier retreat results in the loss of the low-elevation region of the glacier. Since higher elevations are cooler, the disappearance of the lowest portion of the glacier reduces overall ablation, thereby increasing mass balance and potentially reestablishing equilibrium. If the mass balance of a significant portion of the accumulation zone of the glacier is negative, it is in disequilibrium with the climate and will melt away without a colder climate and or an increase in frozen precipitation. The key symptom of a glacier in disequilibrium is thinning along the entire length of the glacier. This indicates thinning in the accumulation zone. The result is marginal recession of the accumulation zone margin, not just of the terminus. In effect, the glacier no longer has a consistent accumulation zone and without an accumulation zone cannot survive. 
For example, Easton Glacier will likely shrink to half its size, but at a slowing rate of reduction, and stabilize at that size, despite the warmer temperature, over a few decades. However, the Grinnell Glacier will shrink at an increasing rate until it disappears. The difference is that the upper section of Easton Glacier remains healthy and snow covered, while even the upper section of the Grinnell Glacier is bare, is melting and has thinned. Small glaciers with minimal altitude range are most likely to fall into disequilibrium with the climate. Mid-latitude glaciers Mid-latitude glaciers are located either between the Tropic of Cancer and the Arctic Circle, or between the Tropic of Capricorn and the Antarctic Circle. These two regions support glacier ice from mountain glaciers, valley glaciers and even smaller icecaps, which are usually located in higher mountainous regions. All of these glaciers are located in mountain ranges, notably the Himalayas; the Alps; the Pyrenees; Rocky Mountains and Pacific Coast Ranges of North America; the Patagonian Andes in South America; and mountain ranges in New Zealand. Glaciers in these latitudes are more widespread and tend to be greater in mass the closer they are located to the polar regions. These glaciers are the most widely studied over the past 150 years. As is true with the glaciers located in the tropical zone, virtually all the glaciers in the mid-latitudes are in a state of negative mass balance and are retreating. Eastern hemisphere Since 1870 the Argentière Glacier and Mont Blanc Glacier have receded by 1,150 m (3,770 ft) and 1,400 m (4,600 ft), respectively. The largest glacier in France, the Mer de Glace, which is 11 km (6.8 mi) long and 400 m (1,300 ft) thick, has lost 8.3% of its length, or 1 km (0.62 mi), in 130 years, and thinned by 27%, or 150 m (490 ft), in the midsection of the glacier since 1907. The Bossons Glacier in Chamonix, France, has retreated 1,200 m (3,900 ft) from extents observed in the early 20th century. In 2010, out of 95 Swiss glaciers studied, 86 retreated from where their terminal points had been in 2009, 6 showed no change and 3 had advanced. Other researchers have found that glaciers across the Alps appear to be retreating at a faster rate than a few decades ago. In 2008, the Swiss Glacier survey of 85 glaciers found 78 retreating, 2 stationary and 5 advancing. The Trift Glacier had retreated over 500 m (1,600 ft) just in the three years of 2003 to 2005, which is 10% of its total length. The Grosser Aletsch Glacier, the largest glacier in Switzerland, has retreated 2,600 m (8,500 ft) since 1880. This rate of retreat has also increased since 1980, with 30%, or 800 m (2,600 ft), of the total retreat occurring in the last 20% of the time period. Similarly, of the glaciers in the Italian Alps, only about a third were in retreat in 1980, while by 1999, 89% of these glaciers were retreating. In 2005, the Italian Glacier Commission found that 123 glaciers were retreating, 1 advancing and 6 stationary. Repeat photography of glaciers in the Alps provides clear evidence that glaciers in this region have retreated significantly since 1980. The Morteratsch Glacier, Switzerland is one key example and yearly measurements of the length changes commenced in 1878. The overall retreat from 1878 to 1998 has been 2 km (1.2 mi) with a mean annual retreat rate of approximately 17 m (56 ft) per year. 
This long-term average was markedly surpassed in recent years with the glacier receding 30 m (98 ft) per year during the period between 1999–2005. One major concern which has in the past had great impact on lives and property is the death and destruction from a Glacial Lake Outburst Flood (GLOF). Glaciers stockpile rock and soil that has been carved from mountainsides at their terminal end. These debris piles often form dams that impound water behind them and form glacial lakes as the glaciers melt and retreat from their maximum extents. These terminal moraines are frequently unstable and have been known to burst if overfilled or displaced by earthquakes, landslides or avalanches. If a glacier has a rapid melting cycle during warmer months, the terminal moraine may not be strong enough to hold the rising water behind it, leading to a massive localized flood. This is an increasing risk due to the creation and expansion of glacial lakes resulting from glacier retreat. Past floods have been deadly and have resulted in enormous property damage. Towns and villages in steep, narrow valleys that are downstream from glacial lakes are at the greatest risk. In 1892 a GLOF released some 200,000 m3 (260,000 cu yd) of water from the lake of the Glacier de Tête Rousse, resulting in the deaths of 200 people in the French town of Saint Gervais. GLOFs have been known to occur in every region of the world where glaciers are located. Continued glacier retreat is expected to create and expand glacial lakes, increasing the danger of future GLOFs. Though the glaciers of the Alps have received more attention from glaciologists than in other areas of Europe, research indicates that throughout most of Europe, glaciers are rapidly retreating. In the Kebnekaise Mountains of northern Sweden, a study of 16 glaciers between 1990 and 2001 found that 14 glaciers were retreating, one was advancing and one was stable. During the 20th century, glaciers in Norway retreated overall with brief periods of advance around 1910, 1925 and in the 1990s. In the 1990s, 11 of 25 Norwegian glaciers observed had advanced due to several consecutive winters with above normal precipitation. However, following several consecutive years of little winter precipitation since 2000, and record warmth during the summers of 2002 and 2003, Norwegian glaciers have decreased significantly since 2000s. By 2005 only 1 of the 25 glaciers monitored in Norway was advancing, two were stationary and 22 were retreating. In 2010 27 glaciers retreated, one was stationary (less than 2 meters of change) and three advanced. The Norwegian Engabreen Glacier has retreated 185 m (607 ft) since 1999, while the Brenndalsbreen and Rembesdalsskåka glaciers have retreated 276 m (906 ft) and 250 m (820 ft), respectively, since 2000. The Briksdalsbreen glacier retreated 96 m (315 ft) in 2004 alone—the largest annual retreat recorded for this glacier since monitoring began in 1900. This figure was exceeded in 2006 with five glaciers retreating over 100 m (330 ft) from the fall of 2005 to the fall of 2006. Four outlets from the Jostedalsbreen ice cap, Kjenndalsbreen, Brenndalsbreen, Briksdalsbreen and Bergsetbreen had a frontal retreat of more than 100 m (330 ft). Gråfjellsbrea, an outlet from Folgefonna, had a retreat of almost 100 m (330 ft). Overall, from 1999 to 2005, Briksdalsbreen retreated 336 metres (1,102 ft). In the Spanish Pyrenees, recent studies have shown important losses in extent and volume of the glaciers of the Maladeta massif during the period 1981–2005. 
These include a reduction in area of 35.7%, from 2.41 km2 (600 acres) to 0.627 km2 (155 acres), a loss in total ice volume of 0.0137 km3 (0.0033 cu mi) and an increase in the mean altitude of the glacial termini of 43.5 m (143 ft). For the Pyrenees as a whole, 50–60% of the glaciated area has been lost since 1991. The Balaitus, Perdigurero and La Munia glaciers have disappeared in this period, and the Monte Perdido Glacier has shrunk from 90 hectares to 40 hectares.

Siberia and the Russian Far East

Siberia is typically classified as a polar region, owing to the dryness of its winter climate, and has glaciers only in the high Altai Mountains, Verkhoyansk Range, Cherskiy Range and Suntar-Khayata Range, plus possibly a few very small glaciers in the ranges near Lake Baikal, which have never been monitored and may have completely disappeared since 1989. In the more maritime and generally wetter Russian Far East, Kamchatka, exposed during winter to moisture from the Aleutian Low, has much more extensive glaciation, totaling around 2,500 square kilometres (970 sq mi). Despite generally heavy winter snowfall and cool summer temperatures, the high summer rainfall of the more southerly Kuril Islands and Sakhalin has meant that, in historic times, melt rates have been too high for a positive mass balance even on the highest peaks. On the Chukotskiy Peninsula small alpine glaciers are numerous, but the extent of glaciation, though larger than further west, is much smaller than in Kamchatka, totalling around 300 square kilometres (120 sq mi).

Details on the retreat of Siberian and Russian Far East glaciers are less adequate than in most other glaciated areas of the world. There are several reasons for this, the principal one being the large reduction in the number of monitoring stations since the collapse of Communism. Another factor is that in the Verkhoyansk and Cherskiy Ranges glaciers were thought to be absent before they were discovered during the 1940s, whilst in ultra-remote Kamchatka and Chukotka, although the existence of glaciers was known earlier, monitoring of their size dates back no earlier than the end of World War II. Nonetheless, available records do indicate a general retreat of all glaciers in the Altai Mountains, with the exception of volcanic glaciers in Kamchatka. Sakha's glaciers, totaling seventy square kilometres, have shrunk by around 28 percent since 1945, with the rate of loss reaching several percent annually in some places, whilst in the Altai and Chukotkan mountains and the non-volcanic areas of Kamchatka, the shrinkage is considerably larger.

The Himalayas and other mountain chains of central Asia support large glaciated regions. These glaciers provide critical water supplies to arid countries such as Mongolia, western China, Pakistan, Afghanistan and India. As is true of other glaciers worldwide, the glaciers of Asia are experiencing a rapid decline in mass. The loss of these glaciers would have a tremendous impact on the region's ecosystems. In the Wakhan Corridor of Afghanistan, 28 of 30 glaciers examined retreated significantly during the 1976–2003 period, at an average of 11 m (36 ft) per year. One of these glaciers, the Zemestan Glacier, retreated 460 m (1,510 ft) during this period, not quite 10% of its 5.2 km (3.2 mi) length. In examining 612 glaciers in China between 1950 and 1970, 53% of the glaciers studied were retreating. After 1990, 95% of these glaciers were measured to be retreating, indicating that retreat was becoming more widespread.
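Many of the figures in this survey are simple fractional changes between two survey dates. A minimal sketch of that arithmetic follows, using the Monte Perdido figures quoted above; the helper name is my own and the percentage is only as precise as the rounded source areas.

```python
# Minimal sketch: fractional area loss between two survey dates.
# The helper name is illustrative; the inputs are the Monte Perdido
# figures quoted above (roughly 90 ha earlier, about 40 ha now).

def fractional_loss(initial: float, final: float) -> float:
    """Fraction of the initial extent that has been lost."""
    return (initial - final) / initial

print(f"{fractional_loss(90, 40):.0%}")  # ~56% of the glacier's area lost
```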
Glaciers in the Mount Everest region of the Himalayas are all in a state of retreat. The Rongbuk Glacier, which drains the north side of Mount Everest into Tibet, has been retreating 20 m (66 ft) per year. In the Khumbu region of Nepal, along the front of the main Himalaya, all 15 glaciers examined from 1976 to 2007 retreated significantly, at an average of 28 m (92 ft) per year. The most famous of these, the Khumbu Glacier, retreated at a rate of 18 m (59 ft) per year from 1976 to 2007. However, in the second half of the last century the glacier melt in High Asia also showed interruptions; in the Inner Himalayas slight advances took place from 1970 to 1980. In India the Gangotri Glacier retreated 34 m (112 ft) per year between 1970 and 1996, and has averaged a loss of 30 m (98 ft) per year since 2000. However, the glacier is still over 30 km (19 mi) long. In 2005 the Tehri Dam, a 2,400 MW facility, was completed on the Bhagirathi River and began producing hydropower in 2006. The headwaters of the Bhagirathi River lie at the Gangotri and Khatling Glaciers in the Garhwal Himalaya. The Gangotri Glacier has retreated 1 km in the last 30 years and, with an area of 286 square kilometres (110 sq mi), provides up to 190 m3 per second of water (Singh et al., 2006). For the Indian Himalaya, retreat averaged 19 m (62 ft) per year for 17 glaciers. In Sikkim, 26 glaciers examined were retreating at an average rate of 13.02 m per year from 1976 to 2005. All 51 glaciers examined in the main Himalayan Range of India, Nepal and Sikkim are retreating, at an average rate of 23 metres (75 ft) per year. In the Karakoram Range of the Himalaya there is a mix of advancing and retreating glaciers, with 18 advancing and 22 retreating during the 1980–2003 period.

With the retreat of glaciers in the Himalayas, a number of glacial lakes have been created. A growing concern is the potential for glacial lake outburst floods: researchers estimate that 20 glacial lakes in Nepal and 24 in Bhutan pose hazards to human populations should their terminal moraines fail. One glacial lake identified as potentially hazardous is Bhutan's Raphstreng Tsho, which in 1986 measured 1.6 km (0.99 mi) long, 0.96 km (0.60 mi) wide and 80 m (260 ft) deep. By 1995 the lake had swollen to a length of 1.94 km (1.21 mi), a width of 1.13 km (0.70 mi) and a depth of 107 m (351 ft). In 1994 a GLOF from Luggye Tsho, a glacial lake adjacent to Raphstreng Tsho, killed 23 people downstream.

Glaciers in the Ak-shirak Range in Kyrgyzstan experienced a slight loss between 1943 and 1977 and an accelerated loss of 20% of their remaining mass between 1977 and 2001. In the Tien Shan mountains, which Kyrgyzstan shares with China and Kazakhstan, studies in the northern areas of the range show that the glaciers that help supply water to this arid region lost nearly 2 km3 (0.48 cu mi) of ice per year between 1955 and 2000. A University of Oxford study also reported that an average of 1.28% of the volume of these glaciers had been lost per year between 1974 and 1990. The Pamir mountain range, located primarily in Tajikistan, has many thousands of glaciers, all of which are in a general state of retreat. During the 20th century, the glaciers of Tajikistan lost 20 km3 (4.8 cu mi) of ice; a rough conversion of such volumes into average water yield is sketched below.
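To put a volume like the 20 km3 of ice lost from Tajikistan's glaciers into more familiar hydrological terms, the loss can be spread over the period and expressed as a mean water-equivalent discharge. This is only a back-of-the-envelope sketch: the ice-to-water ratio and the assumption of a uniform loss rate are my own simplifications, not a published runoff estimate.

```python
# Back-of-the-envelope sketch: convert a century of ice loss into a mean
# water-equivalent discharge. Assumes a uniform loss rate and an ice density
# of roughly 917 kg/m^3 relative to water; both are simplifying assumptions.

ICE_TO_WATER = 0.917                 # volume ratio, ice -> water equivalent
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def mean_discharge_m3s(ice_volume_km3: float, years: float) -> float:
    """Average water-equivalent discharge (m^3/s) from a total ice loss."""
    water_m3 = ice_volume_km3 * 1e9 * ICE_TO_WATER
    return water_m3 / (years * SECONDS_PER_YEAR)

# Tajikistan: roughly 20 km^3 of ice lost over the 20th century.
print(round(mean_discharge_m3s(20, 100), 1))  # ~5.8 m^3/s averaged over the century
```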
The 70 km (43 mi) long Fedchenko Glacier, the largest in Tajikistan and the largest non-polar glacier on Earth, lost 1.4% of its length, or 1 km (0.62 mi), and 2 km3 (0.48 cu mi) of its mass during the 20th century, and its glaciated area was reduced by 11 km2 (4.2 sq mi). Similarly, the neighboring Skogatch Glacier lost 8% of its total mass between 1969 and 1986. Tajikistan and the neighboring countries of the Pamir Range are highly dependent upon glacial runoff to ensure river flow during the droughts and dry seasons experienced every year. The continued demise of glacier ice will result in a short-term increase, followed by a long-term decrease, in the glacial melt water flowing into rivers and streams.

The Tibetan Plateau contains the world's third-largest store of ice. Qin Dahe, the former head of the China Meteorological Administration, said that the recent fast pace of melting and warmer temperatures will be good for agriculture and tourism in the short term, but issued a strong warning:

Temperatures are rising four times faster than elsewhere in China, and the Tibetan glaciers are retreating at a higher speed than in any other part of the world ... In the short term, this will cause lakes to expand and bring floods and mudflows ... In the long run, the glaciers are vital lifelines for Asian rivers, including the Indus and the Ganges. Once they vanish, water supplies in those regions will be in peril.

In New Zealand the mountain glaciers have been in general retreat since 1890, with an acceleration of this retreat since 1920. Most of the glaciers have thinned measurably and have reduced in size, and the snow accumulation zones have risen in elevation as the 20th century progressed. During the period 1971–75, Ivory Glacier receded 30 m (98 ft) at its terminus, and about 26% of its surface area was lost over the same period. Since 1980 numerous small glacial lakes have formed behind the new terminal moraines of several of these glaciers. Glaciers such as Classen, Godley and Douglas now all have new glacial lakes below their terminal locations owing to the glacial retreat over the past 20 years, and satellite imagery indicates that these lakes are continuing to expand. There have been significant and ongoing ice volume losses on the largest New Zealand glaciers, including the Tasman, Ivory, Classen, Mueller, Maud, Hooker, Grey, Godley, Ramsay, Murchison, Therma, Volta and Douglas Glaciers. The retreat of these glaciers has been marked by expanding proglacial lakes and thinning of their terminus regions. The loss in volume from 1975 to 2005 was 11 percent of the total.

Several glaciers, notably the much-visited Fox and Franz Josef Glaciers on New Zealand's West Coast, have periodically advanced, especially during the 1990s, but the scale of these advances is small when compared with 20th-century retreat. Both glaciers are currently more than 2.5 km (1.6 mi) shorter than they were a century ago. These large, rapidly flowing glaciers situated on steep slopes have been very reactive to small mass-balance changes. A few years of conditions favorable to glacier advance, such as more westerly winds and a resulting increase in snowfall, are rapidly echoed in a corresponding advance, followed by an equally rapid retreat when those favorable conditions end. The glaciers that have been advancing in a few locations in New Zealand have been doing so due to transient local weather conditions, which have brought more precipitation and cloudier, cooler summers since 2002.
Western hemisphere

North American glaciers are primarily located along the spine of the Rocky Mountains in the United States and Canada, and in the Pacific Coast Ranges extending from northern California to Alaska. While Greenland is geologically associated with North America, it is also a part of the Arctic region. Apart from the few tidewater glaciers, such as Taku Glacier, that are in the advance stage of their tidewater glacier cycle along the coast of Alaska, virtually all the glaciers of North America are in a state of retreat. The observed retreat rate has increased rapidly since approximately 1980, and overall each decade since has seen greater rates of retreat than the preceding one. There are also small remnant glaciers scattered throughout the Sierra Nevada mountains of California and Nevada.

The Cascade Range of western North America extends from southern British Columbia in Canada to northern California. Excepting Alaska, about half of the glacial area in the U.S. is contained in the more than 700 glaciers of the North Cascades, a portion of the range between the Canadian border and I-90 in central Washington. These glaciers store as much water as is contained in all the lakes and reservoirs in the rest of the state, and provide much of the stream and river flow in the dry summer months, approximating some 870,000 m3 (1,140,000 cu yd).

As recently as 1975, many North Cascade glaciers were advancing due to cooler weather and increased precipitation between 1944 and 1976. By 1987, however, all the North Cascade glaciers were retreating, and the pace of the retreat has increased each decade since the mid-1970s. Between 1984 and 2005, the North Cascade glaciers lost an average of more than 12.5 metres (41 ft) in thickness and 20–40 percent of their volume. Glaciologists researching the North Cascades glaciers have found that all 47 monitored glaciers are receding and that four of them (Spider Glacier, Lewis Glacier, Milk Lake Glacier and David Glacier) have disappeared completely since 1985. The White Chuck Glacier, near Glacier Peak, is a particularly dramatic example: its area shrank from 3.1 km2 (1.2 sq mi) in 1958 to 0.9 km2 (0.35 sq mi) by 2002. Between 1850 and 1950, the Boulder Glacier on the southeast flank of Mount Baker retreated 8,700 feet (2,700 m). William Long of the United States Forest Service observed the glacier beginning to advance due to cooler and wetter weather in 1953; this was followed by a 2,438 feet (743 m) advance by 1979. The glacier then retreated 450 m (1,480 ft) from 1987 to 2005, leaving barren terrain behind. This retreat has occurred during a period of reduced winter snowfall and higher summer temperatures. In this region of the Cascades, winter snowpack has declined 25% since 1946, and summer temperatures have risen 0.7 °C (1.2 °F) over the same period. The reduced snowpack has occurred despite a small increase in winter precipitation; it therefore reflects warmer winter temperatures leading to rainfall and melting on glaciers even during the winter. As of 2005, 67% of the North Cascade glaciers observed were in disequilibrium and will not survive a continuation of the present climate. These glaciers will eventually disappear unless temperatures fall and frozen precipitation increases. The remaining glaciers are expected to stabilize, unless the climate continues to warm, but will be much reduced in size. The scale of such volume losses can be illustrated with a simple thickness-times-area estimate, sketched below.
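The volume figures quoted above follow from thickness change in a straightforward way: a mean thinning multiplied by glacier area gives an ice volume, which can then be expressed as a water equivalent. The sketch below applies this to the 12.5 m mean thinning reported for 1984–2005; the combined glacier area used is a placeholder for illustration only, not a figure from this survey.

```python
# Sketch: turn a mean thinning into a water-equivalent volume loss.
# The 12.5 m thinning is the 1984-2005 North Cascades figure quoted above;
# the combined glacier area below is a PLACEHOLDER for illustration only.

ICE_TO_WATER = 0.917  # approximate density ratio of glacier ice to water

def water_equivalent_km3(mean_thinning_m: float, glacier_area_km2: float) -> float:
    """Water-equivalent volume loss (km^3) from mean thinning over an area."""
    ice_volume_km3 = (mean_thinning_m / 1000.0) * glacier_area_km2
    return ice_volume_km3 * ICE_TO_WATER

# Example with a hypothetical combined glacier area of 100 km^2:
print(round(water_equivalent_km3(12.5, 100.0), 2))  # ~1.15 km^3 of water
```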
U.S. Rocky Mountains

On the sheltered slopes of the highest peaks of Glacier National Park in Montana, the park's eponymous glaciers are diminishing rapidly. The area of each glacier has been mapped for decades by the National Park Service and the U.S. Geological Survey. Comparing photographs taken in the mid-19th century with contemporary images provides ample evidence that the glaciers in the park have retreated notably since 1850, and repeat photography over the decades since clearly shows that glaciers throughout the park, such as Grinnell Glacier, are all retreating. The larger glaciers are now approximately a third of the size they were when first studied in 1850, and numerous smaller glaciers have disappeared completely. Only 27% of the 99 km2 (38 sq mi) of Glacier National Park covered by glaciers in 1850 remained covered by 1993. Researchers believe that by the year 2030 the vast majority of glacial ice in Glacier National Park will be gone unless current climate patterns reverse course. Grinnell Glacier is just one of many glaciers in the park that have been well documented by photographs for many decades; photographs taken since 1938 clearly demonstrate its retreat.

The semiarid climate of Wyoming still manages to support about a dozen small glaciers within Grand Teton National Park, all of which show evidence of retreat over the past 50 years. Schoolroom Glacier, located slightly southwest of Grand Teton, is one of the more easily reached glaciers in the park and is expected to disappear by 2025. Research between 1950 and 1999 demonstrated that the glaciers in Bridger-Teton National Forest and Shoshone National Forest in the Wind River Range shrank by over a third during that period. Photographs indicate that the glaciers today are only half the size they were when first photographed in the late 1890s. Research also indicates that the glacial retreat was proportionately greater in the 1990s than in any other decade of the last 100 years. Gannett Glacier, on the northeast slope of Gannett Peak, is the largest single glacier in the Rocky Mountains south of Canada. It has reportedly lost over 50% of its volume since 1920, with almost half of that loss occurring since 1980. Glaciologists believe the remaining glaciers in Wyoming will disappear by the middle of the 21st century if current climate patterns continue.

Canadian Rockies and Coast and Columbia Mountains

In the Canadian Rockies, the glaciers are generally larger and more widespread than they are to the south in the Rocky Mountains of the United States. One of the more accessible glaciers in the Canadian Rockies is the Athabasca Glacier, an outlet glacier of the 325 km2 (125 sq mi) Columbia Icefield. The Athabasca Glacier has retreated 1,500 m (4,900 ft) since the late 19th century, and its rate of retreat has increased since 1980, following a period of slow retreat from 1950 to 1980. The Peyto Glacier in Alberta covers an area of about 12 km2 (4.6 sq mi); it retreated rapidly during the first half of the 20th century, stabilized by 1966, and resumed shrinking in 1976. In Garibaldi Provincial Park in southwestern British Columbia, over 505 km2 (195 sq mi), or 26% of the park, was covered by glacier ice at the beginning of the 18th century. Ice cover decreased to 297 km2 (115 sq mi) by 1987–1988 and to 245 km2 (95 sq mi) by 2005, 50% of the 1850 area. The 50 km2 (19 sq mi) loss in the last 20 years coincides with negative mass balance in the region.
During this period all nine glaciers examined retreated significantly.

There are thousands of glaciers in Alaska, though only a relative few of them have been named. The Columbia Glacier near Valdez in Prince William Sound has retreated 15 km (9.3 mi) in the last 25 years. Icebergs calved off this glacier were a partial cause of the Exxon Valdez oil spill, as the oil tanker had changed course to avoid the icebergs. The Valdez Glacier is in the same area, and though it does not calve, it has also retreated significantly. A 2005 aerial survey of Alaskan coastal glaciers identified more than a dozen glaciers, many of them former tidewater and calving glaciers, including the Grand Plateau, Alsek, Bear and Excelsior Glaciers, that are rapidly retreating; of 2,000 glaciers observed, 99% are retreating. Icy Bay in Alaska is fed by three large glaciers (Guyot, Yahtse and Tyndall Glaciers), all of which have experienced a loss in length and thickness and, consequently, a loss in area. Tyndall Glacier became separated from the retreating Guyot Glacier in the 1960s and has retreated 24 km (15 mi) since, averaging more than 500 m (1,600 ft) per year.

The Juneau Icefield Research Program has monitored the outlet glaciers of the Juneau Icefield since 1946. On the west side of the ice field, the terminus of the Mendenhall Glacier, which flows into suburban Juneau, Alaska, has retreated 580 m (1,900 ft). Of the nineteen glaciers of the Juneau Icefield, eighteen are retreating and one, the Taku Glacier, is advancing. Eleven of the glaciers have retreated more than 1 km (0.62 mi) since 1948, among them Antler Glacier, 5.4 km (3.4 mi); Gilkey Glacier, 3.5 km (2.2 mi); Norris Glacier, 1.1 km (0.68 mi); and Lemon Creek Glacier, 1.5 km (0.93 mi). Taku Glacier has been advancing since at least 1890, when the naturalist John Muir observed a large iceberg-calving front. By 1948 the adjacent fjord had filled in, and the glacier no longer calved and was able to continue its advance. By 2005 the glacier was only 1.5 km (0.93 mi) from reaching Taku Point and blocking Taku Inlet. The advance of Taku Glacier averaged 17 m (56 ft) per year between 1988 and 2005. The mass balance was very positive for the 1946–88 period, fueling the advance; however, since 1988 the mass balance has been slightly negative, which should slow the glacier's advance in the future.

Long-term mass balance records from Lemon Creek Glacier in Alaska show a slightly declining mass balance with time. The mean annual balance for this glacier was −0.23 m (0.75 ft) per year during the period 1957 to 1976. The mean annual balance has since become increasingly negative, averaging −1.04 m (3.4 ft) per year from 1990 to 2005. Repeat glacier altimetry, or altitude measuring, for 67 Alaska glaciers finds that rates of thinning have more than doubled between the periods 1950–1995 (0.7 m (2.3 ft) per year) and 1995–2001 (1.8 m (5.9 ft) per year). This is a systemic trend, with loss in mass equating to loss in thickness, which leads to increasing retreat: the glaciers are not only retreating but are also becoming much thinner. In Denali National Park, all glaciers monitored are retreating, with an average retreat of 20 m (66 ft) per year. The terminus of the Toklat Glacier has been retreating 26 m (85 ft) per year and the Muldrow Glacier has thinned 20 m (66 ft) since 1979. Also well documented in Alaska are surging glaciers, which have been known to advance rapidly, even as much as 100 m (330 ft) per day.
Variegated, Black Rapids, Muldrow, Susitna and Yanert are examples of surging glaciers in Alaska that have made rapid advances in the past. These glaciers are all retreating overall, punctuated by short periods of advance.

Andes and Tierra del Fuego

A large population in the regions surrounding the central and southern Andes of Argentina and Chile resides in arid areas that depend on water supplies from melting glaciers. The water from the glaciers also supplies rivers that have in some cases been dammed for hydroelectric power. Some researchers believe that by 2030 many of the large ice caps on the highest Andes will be gone if current climate trends continue. In Patagonia, on the southern tip of the continent, the large ice caps have retreated 1 km (0.62 mi) since the early 1990s and 10 km (6.2 mi) since the late 19th century. It has also been observed that Patagonian glaciers are receding at a faster rate than those in any other world region. The Northern Patagonian Ice Field lost 93 km2 (36 sq mi) of glacier area between 1945 and 1975, and 174 km2 (67 sq mi) from 1975 to 1996, indicating that the rate of retreat is increasing. This represents a loss of 8% of the ice field, with all glaciers experiencing significant retreat. The Southern Patagonian Ice Field exhibited a general trend of retreat on 42 glaciers between 1944 and 1986, while four glaciers were in equilibrium and two advanced. The largest retreat was on O'Higgins Glacier, which retreated 14.6 km (9.1 mi) during the period 1896–1995. The Perito Moreno Glacier, 30 km (19 mi) long, is a major outflow glacier of the Patagonian ice sheet and the most visited glacier in Patagonia. It is presently in equilibrium, but underwent frequent oscillations in the period 1947–96, with a net gain of 4.1 km (2.5 mi); it has advanced since 1947 and has been essentially stable since 1992. Perito Moreno Glacier is one of three glaciers in Patagonia known to have advanced, compared with several hundred others in retreat. The two major glaciers of the Southern Patagonian Ice Field to the north of Moreno, the Upsala and Viedma Glaciers, have retreated 4.6 km (2.9 mi) in 21 years and 1 km (0.62 mi) in 13 years, respectively. In the Aconcagua River Basin, glacier retreat has resulted in a 20% loss of glacier area, declining from 151 km2 (58 sq mi) to 121 km2 (47 sq mi). The Marinelli Glacier in Tierra del Fuego has been in retreat since at least 1960, and its retreat continued through 2008.

Tropical glaciers

Tropical glaciers are located between the Tropic of Cancer and the Tropic of Capricorn, in the region that lies 23° 26′ 22″ north or south of the equator. Tropical glaciers are the most uncommon of all glaciers for a variety of reasons. Firstly, the tropics are the warmest part of the planet. Secondly, the seasonal change is minimal, with temperatures warm year round, so there is no colder winter season in which snow and ice can accumulate. Thirdly, few mountains in these regions are tall enough for air cold enough for the establishment of glaciers. All of the glaciers located in the tropics are on isolated high mountain peaks. Overall, tropical glaciers are smaller than those found elsewhere and are the most likely glaciers to show a rapid response to changing climate patterns. A small temperature increase of only a few degrees can have an almost immediate and adverse impact on tropical glaciers.
Near the Equator, ice is still found in East Africa, the Ecuadorian Andes and New Guinea. The retreat of equatorial glaciers has been documented via maps and photographs covering the period from the late 1800s to nearly the present. Of the tropical glaciers, 99.64% lie in the Andes of South America, 0.25% on the African peaks of Rwenzori, Mount Kenya and Kilimanjaro, and 0.11% in the Irian Jaya region of New Guinea.

With almost the entire continent of Africa located in the tropical and subtropical climate zones, its glaciers are restricted to two isolated peaks and the Ruwenzori Range. Kilimanjaro, at 5,895 m (19,341 ft), is the highest peak on the continent. Since 1912 the glacier cover on the summit of Kilimanjaro has apparently retreated 75%, and the volume of glacial ice is now 80% less than it was a century ago due to both retreat and thinning. In the 14-year period from 1984 to 1998, one section of the glacier atop the mountain receded 300 m (980 ft). A 2002 study determined that if current conditions continue, the glaciers atop Kilimanjaro will disappear sometime between 2015 and 2020. A March 2005 report indicated that there was almost no remaining glacial ice on the mountain, and that barren ground was exposed on portions of the summit for the first time in 11,000 years. Researchers reported that Kilimanjaro's glacier retreat was due to a combination of increased sublimation and decreased snowfall. The Furtwängler Glacier is located near the summit of Kilimanjaro. Between 1976 and 2000, the area of Furtwängler Glacier was cut almost in half, from 113,000 m2 (1,220,000 sq ft) to 60,000 m2 (650,000 sq ft). During fieldwork conducted early in 2006, scientists discovered a large hole near the center of the glacier. This hole, extending through the 6 m (20 ft) remaining thickness of the glacier to the underlying rock, was expected to grow and split the glacier in two by 2007.

To the north of Kilimanjaro lies Mount Kenya, which at 5,199 m (17,057 ft) is the second-tallest mountain on the continent. Mount Kenya has a number of small glaciers that have lost at least 45% of their mass since the middle of the 20th century. According to research compiled by the U.S. Geological Survey (USGS), there were eighteen glaciers atop Mount Kenya in 1900, and by 1986 only eleven remained. The total area covered by glaciers was 1.6 km2 (0.62 sq mi) in 1900; by the year 2000 only about 25%, or 0.4 km2 (0.15 sq mi), remained. To the west of Mounts Kilimanjaro and Kenya, the Ruwenzori Range rises to 5,109 m (16,762 ft). Photographic evidence indicates a marked reduction in glacially covered areas over the past century; in the 35-year period between 1955 and 1990, glaciers on the Rwenzori Mountains receded about 40%. It is expected that, due to their proximity to the heavy moisture of the Congo region, the glaciers in the Ruwenzori Range may recede at a slower rate than those on Kilimanjaro or in Kenya.

South America

A study by glaciologists of two small glaciers in South America reveals their retreat. More than 80% of all glacial ice in the northern Andes is concentrated on the highest peaks in small glaciers of approximately 1 km2 (0.39 sq mi) in size. Observations of the Chacaltaya Glacier in Bolivia and the Antizana Glacier in Ecuador from 1992 to 1998 indicated that between 0.6 m (2.0 ft) and 1.9 m (6.2 ft) of ice was lost per year on each glacier. Figures for Chacaltaya Glacier show a loss of 67% of its volume and 40% of its thickness over the same period.
Chacaltaya Glacier has lost 90% of its mass since 1940 and is expected to disappear altogether sometime between 2010 and 2015. Antizana is also reported to have lost 40% of its surface area between 1979 and 2007. Research further indicates that the rate of retreat of both of these glaciers has been increasing since the mid-1980s. In Colombia, the glaciers atop Nevado del Ruiz have lost more than half their area in the last 40 years. Further south in Peru, the Andes are at a higher altitude overall, and there are approximately 722 glaciers covering an area of 723 km2 (279 sq mi). Research in this region of the Andes is less extensive but indicates an overall glacial retreat of 7% between 1977 and 1983. The Quelccaya Ice Cap is the largest tropical ice cap in the world, and all of its outlet glaciers are retreating. In the case of Qori Kalis Glacier, Quelccaya's main outlet glacier, the rate of retreat had reached 155 m (509 ft) per year during the three-year period from 1995 to 1998. The melting ice has formed a large lake at the front of the glacier since 1983, and bare ground has been exposed there for the first time in thousands of years.

Jan Carstensz's 1623 report of glaciers covering the equatorial mountains of New Guinea was originally met with ridicule, but in the early 20th century at least five subranges of the Maoke Mountains (meaning "Snowy Mountains") were indeed still found to be covered with large ice caps. Due to the location of the island within the tropical zone, there is little to no seasonal variation in temperature. The tropical location brings a predictably steady level of rain and snowfall, as well as cloud cover year round, and there was no noticeable change in the amount of moisture that fell during the 20th century. In 1913, the 4,550 m (14,930 ft) high Prins Hendrik peak (now Puncak Yamin) was named and reported to have "eternal" snow, but this observation was never repeated. The ice cap of the 4,720 m (15,490 ft) Wilhelmina Peaks, which reached below 4,400 m (14,400 ft) in 1909, vanished between 1939 and 1963. The Mandala / Juliana ice cap disappeared in the 1990s, and the Idenburg glacier on Ngga Pilimsit dried up in 2003. This leaves only the remnants of the once-continuous icecap on New Guinea's highest mountain, Mount Carstensz, with the 4,884 m (16,024 ft) high Puncak Jaya summit; the icecap is estimated to have had an area of 20 km2 (7.7 sq mi) in 1850. For this mountain there is photographic evidence of massive glacial retreat since the region was first extensively explored by airplane in 1936 in preparation for the peak's first ascent. Between then and 2010, the mountain lost 80 percent of its ice, two-thirds of which was lost since another scientific expedition in the 1970s. That research, conducted between 1973 and 1976, showed a retreat of 200 m (660 ft) for the Meren Glacier, while the Carstensz Glacier lost 50 m (160 ft). The Northwall Firn, the largest remnant of the icecap that was once atop Puncak Jaya, has itself split into two separate glaciers since 1942. IKONOS satellite imagery of the New Guinean glaciers indicated that by 2002 only 2.1 km2 (0.81 sq mi) of glacial area remained; that in the two years from 2000 to 2002 the East Northwall Firn had lost 4.5%, the West Northwall Firn 19.4% and the Carstensz 6.8% of their glacial mass; and that sometime between 1994 and 2000 the Meren Glacier had disappeared altogether.
An expedition to the remaining glaciers on Puncak Jaya in 2010 found that the ice there is about 32 metres (105 ft) thick and thinning at a rate of 7 metres (23 ft) annually. At that rate, the remaining glaciers were expected to last only until about 2015.

Polar regions

Despite their proximity and importance to human populations, the mountain and valley glaciers of the tropics and mid-latitudes amount to only a small fraction of the glacial ice on Earth. About 99 percent of all freshwater ice is in the great ice sheets of polar and subpolar Antarctica and Greenland. These continuous, continental-scale ice sheets, 3 km (1.9 mi) or more in thickness, cap much of the polar and subpolar land masses. Like rivers flowing from an enormous lake, numerous outlet glaciers transport ice from the margins of the ice sheets to the ocean.

The northern Atlantic island nation of Iceland is home to Vatnajökull, the largest ice cap in Europe. The Breiðamerkurjökull Glacier, one of Vatnajökull's outlet glaciers, receded by as much as 2 km (1.2 mi) between 1973 and 2004. In the early 20th century, Breiðamerkurjökull extended to within 250 m (820 ft) of the ocean, but by 2004 its terminus had retreated 3 km (1.9 mi) further inland. This glacier retreat exposed a rapidly expanding lagoon filled with icebergs calved from its front. The lagoon is 110 m (360 ft) deep and nearly doubled in size between 1994 and 2004. Mass-balance measurements of Iceland's glaciers show alternating positive and negative mass balance during the period 1987–95, but the mass balance has been predominantly negative since; on the Hofsjökull ice cap, mass balance was negative each year from 1995 to 2005. Most Icelandic glaciers retreated rapidly during the warm decades from 1930 to 1960, slowed down as the climate cooled during the following decade, and started to advance after 1970. The rate of advance peaked in the 1980s, after which it slowed until about 1990. As a consequence of the rapid warming of the climate since the mid-1980s, most glaciers in Iceland began to retreat after 1990, and by 2000 all monitored non-surge-type glaciers in Iceland were retreating. An average of 45 non-surging termini were monitored each year by the Icelandic Glaciological Society from 2000 to 2005.

The Canadian Arctic islands contain a number of substantial ice caps, including the Penny and Barnes ice caps on Baffin Island, the Bylot Ice Cap on Bylot Island, and the Devon Ice Cap on Devon Island. All of these ice caps have been thinning and receding slowly. The Barnes and Penny ice caps on Baffin Island thinned at over 1 m (3.3 ft) per year at lower elevations from 1995 to 2000. Overall, between 1995 and 2000, ice caps in the Canadian Arctic lost 25 km2 (9.7 sq mi) of ice per year. Between 1960 and 1999, the Devon Ice Cap lost 67 km3 (16 cu mi) of ice, mainly through thinning. All major outlet glaciers along the eastern margin of the Devon Ice Cap have retreated from 1 km (0.62 mi) to 3 km (1.9 mi) since 1960. On the Hazen Plateau of Ellesmere Island, the Simmon Ice Cap has lost 47% of its area since 1959. If current climatic conditions continue, the remaining glacial ice on the Hazen Plateau will be gone around 2050. On August 13, 2005, the Ayles Ice Shelf broke free from the north coast of Ellesmere Island, and the 66 km2 (25 sq mi) ice shelf drifted into the Arctic Ocean. This followed the splitting of the Ward Hunt Ice Shelf in 2002.
The Ward Hunt Ice Shelf has lost 90% of its area in the last century.

Northern Europe

The Arctic islands north of Norway, Finland and Russia have all shown evidence of glacier retreat. In the Svalbard archipelago, the island of Spitsbergen has numerous glaciers. Research indicates that Hansbreen (Hans Glacier) on Spitsbergen retreated 1.4 km (0.87 mi) from 1936 to 1982 and another 400 m (1,300 ft) during the 16-year period from 1982 to 1998. Blomstrandbreen, a glacier in the King's Bay area of Spitsbergen, has retreated approximately 2 km (1.2 mi) in the past 80 years. Since 1960 the average retreat of Blomstrandbreen has been about 35 m (115 ft) a year, and this average has been boosted by an accelerated rate of retreat since 1995. Similarly, Midre Lovenbreen retreated 200 m (660 ft) between 1977 and 1995. In the Novaya Zemlya archipelago north of Russia, research indicates that in 1952 there were 208 km (129 mi) of glacier ice along the coast; by 1993 this had been reduced by 8% to 198 km (123 mi) of glacier coastline.

In Greenland, retreat has been observed in outlet glaciers, resulting in an increase in ice flow rate and destabilization of the mass balance of the ice sheet that is their source. The net loss in volume, and hence the sea level contribution, of the Greenland Ice Sheet (GIS) has doubled in recent years, from 90 km3 (22 cu mi) to 220 km3 (53 cu mi) per year. Researchers also noted that the acceleration was widespread, affecting almost all glaciers south of 70° N by 2005. The period since 2000 has brought retreat to several very large glaciers that had long been stable. Three glaciers that have been researched, the Helheim Glacier, Kangerdlugssuaq Glacier and Jakobshavn Isbræ, jointly drain more than 16% of the Greenland Ice Sheet. In the case of Helheim Glacier, researchers used satellite images to determine its movement and retreat. Satellite images and aerial photographs from the 1950s and 1970s show that the front of the glacier had remained in the same place for decades. In 2001 the glacier began retreating rapidly, and by 2005 it had retreated a total of 7.2 km (4.5 mi), accelerating from 20 m (66 ft) per day to 35 m (115 ft) per day during that period.

Jakobshavn Isbræ in west Greenland, a major outlet glacier of the Greenland Ice Sheet, has been the fastest-moving glacier in the world over the past half century. It had been moving continuously at speeds of over 24 m (79 ft) per day with a stable terminus since at least 1950. In 2002 the 12 km (7.5 mi) long floating terminus of the glacier entered a phase of rapid retreat, with the ice front breaking up and the floating terminus disintegrating and accelerating to a retreat rate of over 30 m (98 ft) per day. On a shorter timescale, portions of the main trunk of Kangerdlugssuaq Glacier that were flowing at 15 m (49 ft) per day from 1988 to 2001 were measured to be flowing at 40 m (130 ft) per day in the summer of 2005. Not only has Kangerdlugssuaq retreated, it has also thinned by more than 100 m (330 ft).

The rapid thinning, acceleration and retreat of the Helheim, Jakobshavn and Kangerdlugssuaq glaciers, all in close association with one another, suggest a common triggering mechanism, such as enhanced surface melting due to regional climate warming or a change in forces at the glacier front.
Enhanced melting that lubricates the glacier base has been observed to cause only a small seasonal velocity increase, and the release of meltwater lakes has likewise led to only small short-term accelerations. The significant accelerations noted on the three largest glaciers began at the calving front and propagated inland, and are not seasonal in nature. Thus, the primary source of the outlet glacier acceleration widely observed on small and large calving glaciers in Greenland is a change in the dynamic forces at the glacier front, not enhanced meltwater lubrication. This was termed the Jakobshavns Effect by Terence Hughes at the University of Maine in 1986.

The climate of Antarctica is one of intense cold and great aridity. Most of the world's freshwater ice is contained in the great ice sheets that cover the continent. The most dramatic example of glacier retreat on the continent is the loss of large sections of the Larsen Ice Shelf on the Antarctic Peninsula. Ice shelves are not stable when surface melting occurs, and the collapse of the Larsen Ice Shelf was caused by warmer melt-season temperatures that led to surface melting and the formation of shallow ponds of water on the ice shelf. The Larsen Ice Shelf lost 2,500 km2 (970 sq mi) of its area from 1995 to 2001. In a 35-day period beginning on January 31, 2002, about 3,250 km2 (1,250 sq mi) of shelf area disintegrated. The ice shelf is now 40% of its previous minimum stable extent.

The recent collapse of the Wordie, Prince Gustav, Mueller, Jones, Larsen-A and Larsen-B ice shelves on the Antarctic Peninsula has raised awareness of how dynamic ice shelf systems are. Jones Ice Shelf had an area of 35 km2 (14 sq mi) in the 1970s, but by 2008 it had disappeared. Wordie Ice Shelf has gone from an area of 1,500 km2 (580 sq mi) in 1950 to 1,400 km2 (540 sq mi) in 2000. Prince Gustav Ice Shelf has gone from an area of 1,600 km2 (620 sq mi) to 1,100 km2 (420 sq mi) in 2008. After these losses, the reduced buttressing of feeder glaciers has allowed the expected speed-up of the inland ice masses behind them. The Wilkins Ice Shelf is another ice shelf that has suffered substantial retreat. The ice shelf had an area of 16,000 km2 (6,200 sq mi) in 1998, when 1,000 km2 (390 sq mi) was lost that year. In 2007 and 2008 significant rifting developed and led to the loss of another 1,400 km2 (540 sq mi) of area, with some of the calving occurring during the austral winter. The calving seemed to have resulted from preconditioning such as thinning, possibly due to basal melt, as surface melt was not as evident; this reduced the strength of the connections to the pinning points, and the thinner ice then experienced spreading rifts and breakup. This period culminated in the collapse of an ice bridge connecting the main ice shelf to Charcot Island, leading to the loss of an additional 700 km2 (270 sq mi) between February and June 2009.

Pine Island Glacier, an Antarctic outflow glacier that flows into the Amundsen Sea, thinned 3.5 ± 0.9 m (11 ± 3.0 ft) per year and retreated a total of 5 km (3.1 mi) in 3.8 years. The terminus of the Pine Island Glacier is a floating ice shelf, and the point at which it starts to float retreated 1.2 km (0.75 mi) per year from 1992 to 1996.
This glacier drains a substantial portion of the West Antarctic Ice Sheet and, along with the neighboring Thwaites Glacier, which has also shown evidence of thinning, has been referred to as the weak underbelly of this ice sheet. Additionally, the Dakshin Gangotri Glacier, a small outlet glacier of the Antarctic ice sheet, receded at an average rate of 0.7 m (2.3 ft) per year from 1983 to 2002. On the Antarctic Peninsula, the only section of Antarctica that extends well north of the Antarctic Circle, there are hundreds of retreating glaciers. In one study of 244 glaciers on the peninsula, 212 had retreated an average of 600 m (2,000 ft) from where they were when first measured in 1953. The greatest retreat was seen in Sjogren Glacier, which is now 13 km (8.1 mi) further inland than it was in 1953. There were 32 glaciers that were measured to have advanced; however, these showed only a modest advance averaging 300 m (980 ft) per glacier, significantly smaller than the massive retreat observed elsewhere.

Impacts of glacier retreat

The continued retreat of glaciers will have a number of different quantitative impacts. In areas that are heavily dependent on water runoff from glaciers that melt during the warmer summer months, a continuation of the current retreat will eventually deplete the glacial ice and substantially reduce or eliminate runoff. A reduction in runoff will affect the ability to irrigate crops and will reduce the summer stream flows necessary to keep dams and reservoirs replenished. This situation is particularly acute for irrigation in South America, where numerous artificial lakes are filled almost exclusively by glacial melt. Central Asian countries have also been historically dependent on seasonal glacier melt water for irrigation and drinking supplies. In Norway, the Alps and the Pacific Northwest of North America, glacier runoff is important for hydropower.

This retreat has prompted efforts to slow the loss of glaciers in the Alps. To retard melting of the glaciers used by certain Austrian ski resorts, portions of the Stubai and Pitztal Glaciers were partially covered with plastic, and in Switzerland plastic sheeting is also used to reduce the melt of glacial ice used as ski slopes. While covering glaciers with plastic sheeting may prove advantageous to ski resorts on a small scale, this practice is not expected to be economically practical on a much larger scale.

Many species of freshwater and saltwater plants and animals depend on glacier-fed waters to ensure the cold-water habitat to which they have adapted. Some species of freshwater fish need cold water to survive and to reproduce; this is especially true of salmon and cutthroat trout. Reduced glacial runoff can lead to insufficient stream flow for these species to thrive. Alterations to ocean currents, due to increased freshwater inputs from glacier melt, and the potential alteration of the thermohaline circulation of the World Ocean, may also affect the fisheries upon which humans depend.

The potential for major sea level rise depends mostly on a significant melting of the polar ice caps of Greenland and Antarctica, as this is where the vast majority of glacial ice is located. If all the ice on the polar ice caps were to melt away, the oceans of the world would rise an estimated 70 m (230 ft).
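How an ice-volume loss translates into sea level can be approximated by spreading the water-equivalent volume over the area of the world ocean. The sketch below applies this to the roughly 220 km3 per year recently reported for the Greenland Ice Sheet; the ocean area and ice-to-water ratio are standard round figures that I have assumed, so the result is an order-of-magnitude check, not a published estimate.

```python
# Rough sketch: sea-level equivalent of an annual ice-volume loss.
# Assumed round numbers: world ocean area ~3.6e8 km^2, ice-to-water ratio ~0.9.
# This is an order-of-magnitude check, not a published estimate.

OCEAN_AREA_KM2 = 3.6e8  # approximate area of the world ocean
ICE_TO_WATER = 0.9      # approximate density ratio of glacier ice to water

def sea_level_mm_per_year(ice_loss_km3_per_year: float) -> float:
    """Millimetres of sea-level rise per year from an ice-volume loss rate."""
    water_km3 = ice_loss_km3_per_year * ICE_TO_WATER
    rise_km = water_km3 / OCEAN_AREA_KM2
    return rise_km * 1e6  # km -> mm

# Greenland Ice Sheet: ~220 km^3 of ice lost per year (figure quoted earlier).
print(round(sea_level_mm_per_year(220), 2))  # ~0.55 mm/yr
```

The result is broadly consistent with the per-ice-sheet contributions of about half a millimetre per year cited below.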
Although it was previously thought that the polar ice caps were not contributing heavily to sea level rise (IPCC 2007), recent studies have confirmed that both Antarctica and Greenland are each contributing about 0.5 millimetres (0.020 in) a year to global sea level rise. Because the IPCC estimates did not include rapid ice sheet decay in their sea level predictions, a plausible upper estimate for sea level rise is difficult to ascertain, but recent studies find that the minimum sea level rise will be around 0.8 metres (2.6 ft) by 2100.
|Wikimedia Commons has media related to: Glaciers| - "United Nations Environment Programme: Global Outlook for Ice and Snow". - Melting Glaciers Liberate Ancient Microbes; The release of life-forms in cold storage for eons raises new concerns about the impacts of climate change April 18, 2012 - As Alaska Glaciers Melt, It’s Land That’s Rising May 17, 2009 New York Times - Meltwater from Greenland glacier wipes out key crossing 25.July.2012 The Guardian
http://en.wikipedia.org/wiki/Retreat_of_glaciers_since_1850
During the time of the dinosaurs, seed plants (spermatophytes) were well developed and were the dominant vegetation on earth, especially the lush seed ferns, conifers and palmlike cycads. These primitive seed plants are called gymnosperms (meaning "naked seeds") because their seeds are not enclosed in a ripened fruit but are protected by cones or by a fleshy seed coat. Most gymnosperms (and flowering plants) have both sexes on the same plant, but the Ginkgo is a dioecious gymnosperm: male and female are separate trees, and its seeds have a fleshy seed coat. The Ginkgo and the cycads are the only living seed-producing plants that have motile or free-swimming sperm. In earlier classification systems the Ginkgo tree was placed in the class Coniferopsida, because it is thought to be more related to conifers than to any other gymnosperm, but the two groups appear to have evolved independently. Recent research has shown that green algae (Coccomyxa) live in symbiosis with Ginkgo tissues. So far this association is not known on any other tree and otherwise occurs only in the animal kingdom. (Photo: old Ginkgo in Japan, © Hiroshi Takahashi.)
You can distinguish a Ginkgo from other gymnosperms by its fan-shaped and bilobed leaves. All Ginkgo trees have a relatively primitive vascular system. The veins continuously divide in twos. This vein pattern (dichotomous venation) is unique to the Ginkgo.
Because of its unique position, botanists found it difficult to classify the Ginkgo. Therefore the Ginkgo has in recent years been placed in a separate group, the division (phylum) Ginkgophyta. This division consists of the single order Ginkgoales (Engler 1898), a single family Ginkgoaceae (Engler 1897) and a single extant genus, Ginkgo. There are two extinct genera: Ginkgoites and Baiera (known from fossilized leaves). The only living representative of the order Ginkgoales is Ginkgo biloba.
A Ginkgo tree can reach a height of about 30, sometimes 40, metres (100 feet) and a spread of 9 metres. The trunk can become about 4 metres (13 feet) in diameter (in open areas much larger; near temples trees of 50 m with a girth of 10 m grow!) and is straight, columnar and sparingly branched. Some trees are very wide spreading, others are narrow. Young trees have a central trunk, pyramidal in shape, with regular, lateral, ascending, asymmetrical branching and open growth. Older trees have an oval to upright spreading growth and sometimes irregular branching and tremendously sized limbs and trunk. When about 100 years old its canopy begins to widen. The male tree usually has a slim column form and is slightly taller; the female tree has a wider crown and a more spread-out form. (Photo: chichi on 'Tenjinsama no ichou', girth 10 m, Aomori Prefecture, N. Honshu, Japan, © Hiroshi Takahashi.)
The Ginkgo has both long and short shoots, set at nearly right angles. A short branch may become a long branch, and the tip of a long branch may change into a short branch. That is why older trees may have a more irregular form. The buds are mounded with distinct form and leaf scars. The leaves grow alternately on the long branches during spring. On the ends of short, lateral shoots they grow very slowly in clusters and produce a long shoot with scattered leaves after a number of years. The short shoots also produce the seeds and pollen. The stems are tan, light brown or gray, relatively smooth and somewhat reflective in the winter sun. Some trees tend to have branches crossing the trunk.
The girth of the trunk of older trees may become large because of secondary growth. The tree usually loses its central leader and gives rise to several vertical trunks ("basal chichi") that keep reaching great heights. These so-called lignotubers can also be observed on the Sumter plantation (see the Usage-page), where the trees are regularly cut down to ground level and produce lignotubers that give new shoots and roots. (Photos: buds and bark; video of an old Ginkgo tree at Maldegem, Belgium, c. 1840, female, with chichi.)
The Ginkgo also produces peg-like structures (chichi = nipples, a sort of "aerial" lignotuber) along the trunk and branches that can grow into the ground and form roots, as well as leafy branches above because of the embedded vegetative buds; this is characteristic only of the Ginkgo. The chichi (Chinese: zhong ru) seem to be connected to traumatic events, environmental stress and individual properties of a tree. They are seen on old, but also on younger, trees. It is thought that the chichi, together with the tree's resistance against diseases, its adaptability and its individual properties, contribute to the long history of the survival of the Ginkgo.
Inside the trunk the wood is yellow. The bark is light brown to brownish-gray; it is browner, deeply furrowed and ridged on older trees and has a corky texture.
As said before, although the Ginkgo is more like a conifer than a deciduous broadleaf tree, it is neither; it has a unique position. This also becomes clear when looking at the microscopic structure of the wood. It lacks vessels in the xylem but has slightly rounded tracheids giving rise to intercellular spaces (conifers have regularly close-fitting rectangular tracheids). It has beautiful crystals (called druses) consisting of calcium oxalate.
The leaves are an easily recognizable feature of the deciduous Ginkgo biloba. They are 5–8 cm wide, sometimes twice as broad, although they vary in size and shape. The leathery leaves have a wax layer on both sides and are slightly thicker than other Northern tree leaves. They consist of a leaf stalk and a fan-shaped, dichotomously veined blade: two parallel veins enter each blade from the point of attachment of the long leafstalk and divide repeatedly in twos; they are not often cross-connected (they seldom fuse). The veins are slightly raised, giving a ribbed appearance. The pores are recessed and limited, thereby reducing water loss from evaporation. The form is bilobed; it has no midrib and is fan-shaped. The leafstalk is also about 8 cm (3 inches) long, causing the foliage to flutter in the slightest breeze.
The leaf resembles the leaf shape of a Maidenhair fern (Adiantum), hence the plant's nickname, the Maidenhair Tree. A deep vertical slit in the top center divides the leaf into two lobes, mostly on the upper part of long branches. The leaf can also have more than two lobes, especially on the lower part of the tree. There is great variation in the degree of lobing on the same tree, and this also seems to vary from tree to tree. The colour is gray-green to yellow-green to dark green in summer, turning yellow, and in good years a beautiful golden yellow colour, in fall. Certain selected cultivars reliably have this golden yellow colour every fall. The leaves remain on the tree until late in the season and then can all fall rapidly in a single day or a few days, and even in 1 or 2 hours!
The form of a butterfly resembles a Ginkgo leaf. The extract of the dried leaves is popular as a diet supplement and/or herbal medicine (prescribed in Europe) for the brain, legs, eyes, heart and ears. Scientific studies show that good extracts may improve blood circulation and memory, help prevent blood clotting and damage by free radicals, give an improved sense of well-being, and can be used for many other disorders. The leaves are also used as tea for a variety of ailments. The fungus Bartheletia paradoxa is a living fossil like Ginkgo biloba and grows on fallen Ginkgo leaves (it is not present on leaves on the tree); it does not grow on other plant species. Read more about this on my Usage-page.
Climate zones 3 to 9 (−30/40 to +20/40 °C), so from Iceland to Australia; read my Where-page. Nearly every arboretum or botanical garden will contain specimens. The supreme specimens are to be found on temple grounds in China, Korea and Japan. In China they also grow in forests and valleys on acidic, well-drained sandy loam (pH 5–5.5) and they are cultivated below 2,000 m (see the Where-page). Ginkgo has survived in some areas of China where the impact of glaciation was minimal. Populations of Ginkgo biloba are found across the country, but are generally associated with human activities. Questions about the extent of Ginkgo biloba's native range in China have been the subject of debate among botanists for well over a hundred years. DNA analyses (see the Literature-page) have demonstrated that isolated Ginkgo populations in southwestern China, especially around the southern slopes of Jinfo Mountain (Jinfo Shan) of Nanchuan County at the boundary of Chongqing Municipality and Guizhou Province (28°53'N; 107°27'E), possess a significantly higher degree of genetic diversity than populations in other parts of the country. Southwestern China was less affected by cold air from Siberia during the glaciations. The area has a mesic, warm-temperate climate with a mean annual temperature of 16.6 °C and a mean annual precipitation of 1185 mm, with Ginkgo trees growing mainly between elevations of 800 and 1300 m. The largest Ginkgo tree there was estimated to be 2500 years old, with a mean diameter of 3.69 m at breast height.
Ecological work in this area, as well as in adjacent parts of Guizhou Province, has identified numerous small populations, for instance in Wuchuan County and Tuole, that can be considered to be either wild or remnants of wild plants, despite their proximity to small villages practising subsistence agriculture. Both the Jinfo Mountain and Wuchuan County populations are situated to the east of Mt Dalou. This region has the greatest biodiversity in China due to the relatively stable environment and diverse topography. Evidence for the persistence of wild Ginkgo biloba (Ginkgoaceae) populations in the valleys and lower mountain slopes of the Dalou mountains in southwestern China was published in 2012 (Tang et al.). The Ginkgos on West Tianmu Mountain, which were previously considered to be wild by many researchers, may instead have been introduced by Buddhist monks; however, more research is needed. For places where the Ginkgo has a special growing spot, see the Where-page on my homepage. The Ginkgo can have a long life span of 1,000 years or older. In China the oldest Ginkgo is about 3,500 years old!
The majority of Ginkgos live as hardy ornamental trees and, being nearly cosmopolitan, specimens are planted around the globe in almost all temperate and subtropical areas. In the USA only 0.2–2% of the total number of roadside trees is a Ginkgo; in Europe this is even less (1992). The tree is farmed extensively (especially for its medicinal use as a herb) in Europe, Japan, Korea and the USA. In China, Ginkgo trees more than 100 years old are listed as second-class protected plants of the state; roads and buildings should give way in order to protect them well. Some people think there's a good opportunity to plant a Ginkgo tree on special occasions like the death of a beloved one, the birth of a child, an anniversary, moving house etc.
The tree prefers full sun to partial sun and moist, deep, well-drained soil (loam), but it is very adaptable, so it also grows in poor soils, compacted soils, various soil pHs, heat, drought, winter salt spray and air pollution. 1–2 times a year. Don't mulch with shredded bark round the trunk; keep it airy. It roots deeply. The roots of most Ginkgos are colonized by vesicular-arbuscular mycorrhizae (VAM) that play an important role in the uptake of the element phosphorus.
The Ginkgo tree is particularly resistant to insect pests and to fungal, viral and bacterial diseases, as well as to ozone and sulfur dioxide pollution, fire and even radioactive radiation (atom bomb, WWII). Therefore it is used as a street tree, especially in cities; it never needs spraying. It can tolerate snow- and ice storms. Research shows that the Ginkgo has no trouble adapting to greenhouse-effect conditions (elevated CO2). It is also planted as a park and landscape tree and in gardens, and it is grown for its shade (though it gives little shade when young). It is particularly easy to establish in the garden. It initially grows somewhat slowly: it takes 10 to 12 years to become 6 metres (20 feet) tall and about 20 years before it has a rounded shape. It can be trained as an espalier, hedge or climber. (Photo: Midousuji Boulevard, Osaka, Japan, with 900 Ginkgo trees of 1937 in a 4.5 km row, © Sando Tomoki.)
Plant in spring or fall. Young trees tend to grow crooked and should at first be staked. Give plenty of water (more during dry/hot periods) until they are about 6 metres (20 feet) tall. The tree is slow to recover from transplanting. In favourable conditions the Ginkgo grows, from about late May to the end of August, over 30 cm per year for the first 30 years of its life. In some years it doesn't grow at all; in others 1 metre of growth can occur, independent of watering or nutrients. The tree needs no pruning.
The tree is dioecious: male and female trees are separate. The sex chromosomes (XX females and XY males, just like humans) are difficult to distinguish, so the tree's gender is not easily classified. The pollen and ovules grow on the short spur shoots, very seldom on the leaves (Ohatsuki). Occasionally both genders are found on the same tree. After a hot summer, or when grown in a warm sunny position, the tree produces them more reliably. (Photos: Ohatsuki ovule and Ohatsuki pollen leaves, © Hiroshi Takahashi.) The ovules look like cherries. It takes about 20–35 years before they appear for the first time in spring. Catkin-like pollen cones (microsporangia) containing the sperm on the male tree also grow on short shoots in spring (also after about 20–35 years), and pollination usually takes place via the wind.
The female tree can carry seeds without pollination (sterile). Variations in the cycle of pollination, fertilization and seed abscission in Ginkgo are mainly due to the latitude and the local climate of the region in which the tree is growing. When the ovules are fertilized they develop into yellowish, plumlike seeds about 2.5 cm (1 inch) long, consisting of a large "nut" (the size of an almond) with a fleshy outer layer. The actual fertilization of the seed by free-swimming sperm occurs mostly on the tree.
Propagation can also be done by cuttings (the best way to ensure the gender) or by grafting a female branch onto a male plant or vice versa. Read further about this on my Propagation-page. The fresh, nutritious seeds (also sold canned with the fleshy outer coat removed) are sold in markets, especially in the Orient. The "nut" has long been used in Chinese medicine for asthma, coughs with thick phlegm, bronchitis, as a digestive aid, for urinary incontinence, etc. Read further about this on my Usage-page. I also made a special page about the Ginkgo as a bonsai tree. (Video: one of the oldest Ginkgo trees outside Asia, Geetbets, Belgium, c. 1750.)
Many garden centers sell Ginkgo biloba. Special cultivars are for sale at nurseries. The Ginkgo can often be found in the conifer catalogue (check out my Links-page and/or try search engines). Male trees are often propagated from cuttings. Always buy from a reputable firm. Of course I would very much like to have male and female trees in my own garden, but I only have space for one big tree.... If you have a suitable spot then please plant a female tree or grow Ginkgos from seed (which can become a male or female tree), because they are so rare, and when only male Ginkgos are planted the species cannot survive without human intervention. The Ginkgo is listed in the IUCN Red List of Endangered Plants.
The Ginkgo tree can grow large, so it is not the tree for every backyard. Selections are made to make it suitable for places with less space and also to meet desired shapes etc. Upright, dwarf, narrow and conical, pendulous and variegated cultivars exist; search for them on the internet, or ask for them at garden centers and nurseries. Cultivars are mentioned below (but no doubt there will be many more).
- 'Anny's Dwarf': dwarf form.
- 'Autumn Gold': better fall colour and/or modified broad spreading growth habit, compact form, male.
- 'Barabits Nana': small bushy form, up to 2 metres.
- 'Beijing Gold': shrub form, 4 m, yellow leaves also in spring and summer (in summer somewhat striped).
- 'Bergen op Zoom': small, straight, up to 4 metres.
- 'Chase Manhattan': small, tiny dark green leaves, compact, ideal for bonsai and the rock garden, 1.5 m.
- 'Chichi (Icho)': smaller leaves and a textured trunk; bark has breast-shaped protuberances.
- 'Chris's Dwarf' (or 'Munchkin'?): see 'Munchkin'.
- 'Chotek': weeping form of 'Witches Broom'; cultivar from the Czech Republic, found by Mr Horak, Bystrice pod Hostinemin. Named in tribute to the house of Chotek, the family of archbishop F. M. Chotek.
- 'Eastern Star': female, bears abundant crops of large nuts.
- 'Elmwood': vertical columnar form.
- 'Epiphylla': female, max. 4 m high, wider than high; seeds form on a rather young plant.
- 'Elsie': upright growing, female.
- 'Fairmount': slender form, big leaves, dense pyramidal crown, male, 15 m.
- 'Fastigiata': architectural vertical accent, nearly columnar form, slightly wider at the base, big leaves, male (also available as female).
- 'Geisha': female, long pendulous branches and dark green foliage which turns lemon-yellow in fall; heavy crops of large nuts.
- 'Globosa': graft on stock, bulb-shaped, compact.
- 'Globus': bullet-form, big leaves.
- 'Golden Globe': full head and spectacular yellow fall color; trees are unusually densely branched for Ginkgos; young trees have full crowns that mature into a broad, rounded head; male (from a seedling of Cleveland Tree Co.).
- 'Gresham': wide-spreading horizontal branch habit (from the Gresham High School Ginkgos at Gresham, Oregon).
- 'Heksenbezem Leiden' (Witches broom): quite compact, rounded, dwarf form, branching closely grouped, up to 3 metres.
- Tall and wide form, many side-branches.
- 'King of Dongting': slow growing, very big leaves.
- 'Liberty Splendor': broad pyramidal form with strong trunk, female.
- 'Long March': upright growing; the female is cultivated for heavy crops of tasty nuts.
- 'Magyar': uniform symmetrical branching, upright narrow pyramid form, up to 19 metres, male.
- 'Munchkin' (or 'Chris's Dwarf'?): upright habit and numerous slender branches; it has a tendency to be more regular in shape.
- 'Ohasuki': up to 4 metres, half-round big leaves, female.
- 'Pendula': branches more or less pendulous ("weeping"), slow growing, decorative.
- 'Prague' or 'Pragense': low spreading and parasol-shaped.
- Well-known cultivar, slow growing, big decorative leaves; upright conical form gives a very formal focal point; male, 30 m.
- 'Rainbow': striped with green/yellow leaves, about 3 m; an improved 'Variegata'. Remove green-leaved branches immediately.
- 'Salem Lady': female, from Oregon.
- 'Santa Cruz': female, low, spreading, umbrella-shaped.
- 'Shangri-La': fast growing, compact pyramidal form, 14 metres, male.
- 'Spring Grove': dwarf, very small and compact, about 3 m.
- 'Tit': = 'Chichi (Icho)'.
- Form with variegated foliage, some leaves 'halved' green and gold, others striped and others half gold/half striped, up to 3 metres, female.
- 'Windover': broad oval outline, shade tree, 17 m; can also be grown as a bonsai.
© Cor Kwant
http://kwanten.home.xs4all.nl/thetree.htm
Triangles are plane figures (or polygons) that have only three sides (edges) and three corners (vertices). Along with circles and quadrilaterals (four-sided polygons), they are the most basic of all plane figures. Also, their properties make them one of the most applicable geometric shapes in science and engineering. There are several basic facts about them you should be aware of.
The first fact is that the sum of the measures of all internal angles inside any triangle is exactly 180 degrees. This means that if you know the measure of two angles inside a triangle, you can easily determine the measure of the third one by subtracting the combined size of the two known angles from 180 degrees.
Secondly, a triangle also has something called exterior angles. Exterior angles are supplementary to their corresponding interior angles, and the sum of their measures is always 360 degrees. Also, if you know the measure of an exterior angle, you can calculate the measure of its corresponding interior angle by subtracting the measure of the known angle from 180 degrees.
The third fact to keep in mind is that the sum of the lengths of any two sides of a triangle is larger than the length of the third side, and that is a rule with no exceptions. This fact is also known as the triangle inequality.
The fourth fact to remember (and it is a very important one) is that two triangles can be similar. They are similar if at least two angles in one triangle are the same as the corresponding angles in the other triangle. Also, they are similar if two or three of their corresponding sides are proportional. If the two triangles are the same in size and shape, they are considered congruent.
Types of triangles
We can classify a triangle according to two criteria: by the relative lengths of its sides and by its internal angles.
By the relative lengths of its sides, a triangle can be:
- An equilateral triangle – all of its sides are the same length and the measure of its interior angles is the same (60 degrees each).
- An isosceles triangle – two of its sides are equal, as are their corresponding angles.
- A scalene triangle – all of its sides are unequal. Its angles are also different in measure.
By internal angles, a triangle can be:
- A right triangle – also called a right-angled triangle, it has one interior angle measuring 90 degrees. Right triangles obey the Pythagorean theorem and have special names for their sides. The side opposite the right angle is called the hypotenuse, and it is the longest side of the triangle. The other two sides are the legs, or catheti, of the triangle.
- An oblique triangle – one in which no angle measures 90 degrees.
- An acute triangle – a triangle in which the measures of all angles are smaller than 90 degrees.
- An obtuse triangle – a triangle in which one of the angles is larger than 90 degrees.
Area of a triangle
There are a couple of ways to calculate the area of a triangle (T). The simplest formula is:
T = (b * h)/2
The letter b is the symbol for the length of the base of the triangle, and the letter h marks the height (altitude) of the triangle. This is the formula we will be using in our examples for now. The base of the triangle is the side perpendicular to the height of the triangle. The height (or altitude) of a triangle is a straight line that passes through a vertex (angle) and is perpendicular to the opposite side (called the base).
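Before moving on to the worked area examples, the angle-sum, triangle-inequality and classification facts above can be expressed in a few lines of code. This is only an illustrative Python sketch (the language and the helper names are my own, not part of the lesson):

```python
import math

def third_angle(a_deg, b_deg):
    """Return the third interior angle, using the 180-degree angle sum."""
    return 180.0 - (a_deg + b_deg)

def satisfies_triangle_inequality(a, b, c):
    """Each pair of sides must sum to more than the remaining side."""
    return a + b > c and b + c > a and a + c > b

def classify_by_sides(a, b, c):
    if math.isclose(a, b) and math.isclose(b, c):
        return "equilateral"
    if math.isclose(a, b) or math.isclose(b, c) or math.isclose(a, c):
        return "isosceles"
    return "scalene"

def classify_by_angles(a_deg, b_deg, c_deg):
    largest = max(a_deg, b_deg, c_deg)
    if math.isclose(largest, 90.0):
        return "right"
    return "obtuse" if largest > 90.0 else "acute"

print(third_angle(60, 60))                      # 60.0 -> an equilateral triangle's angles
print(satisfies_triangle_inequality(3, 4, 8))   # False: 3 + 4 is not larger than 8
print(classify_by_sides(5, 5, 7))               # isosceles
print(classify_by_angles(30, 60, 90))           # right
```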
We will now solve a couple of examples in order to show how you can use this formula to calculate the area of any triangle whose height and length of base are known.
Calculate the area of this scalene triangle. The length of its base is 9.2 cm and its height is 4.3 cm.
This one is very easy. All you have to do is place the appropriate values inside our formula and finish the calculation.
T = (9.2 * 4.3)/2
T = 39.56 / 2
T = 19.78 cm2
We can now see that the size of the area is 19.78 cm2. It is extremely important to keep your units in check when calculating areas. If the size of the sides is in meters, the result will be in meters squared (m2). This assignment was pretty straightforward, so let us try to solve one that is a bit more complicated.
Find the area of this triangle. Its height is 6.7 cm and the length of the corresponding base is 4 cm.
Now, this one is a bit trickier. You can see that the foot of the height does not land on the base itself; instead the height forms a larger right triangle with an extended version of what should be the base. Do not let that confuse you. You can still use the formula we used before, and the same procedure applies.
T = (6.7 * 4) / 2
T = 26.8 / 2
T = 13.4 cm2
You may be asking yourself: "How come?" Well, we will solve it in a more complicated way to show you why this approach still works. First, we will form the equation for calculating the area of the larger right triangle. The height is one leg of the triangle and the extended base is the other. And since two right triangles can form a rectangle, its area is half the area of a rectangle with the legs as its sides.
T1 = (h * (b + x)) / 2
T1 = (6.7 * (4 + x)) / 2
T1 = (26.8 + 6.7 * x) / 2
This is almost as far as we can go with that equation. So now we have to find the area of the smaller right triangle whose legs are the height of the triangle and x (the extension of the base). If we use the same principles as before, the equation for its area (T2) is:
T2 = (6.7 * x) / 2
The only thing left to do now is to subtract the area of the smaller right triangle from the area of the larger right triangle. That should give us the area we are interested in as a result.
T = ((26.8 + 6.7 * x) / 2) – ((6.7 * x) / 2)
T = (26.8 + 6.7 * x – 6.7 * x) / 2
T = 26.8 / 2
T = 13.4 cm2
As you can see, the result is the same as before. So do not let an unusual drawing confuse you: if you know the height of a triangle and the length of its base, you can calculate its area using this formula. This is the basic information you need to have about a triangle. As we progress and put more lessons on this site, we will expand this article with everything you need to know in order to grow your mathematical skills and knowledge. If you wish to practice everything you learned about triangles here, please feel free to use the math worksheets below.
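The same two worked examples can be checked with a few lines of Python (an illustrative sketch; the lesson itself uses no code). The last few lines repeat the subtraction argument with an arbitrary base extension x:

```python
def triangle_area(base, height):
    """T = (b * h) / 2, with base and height in the same length unit."""
    return base * height / 2

# First example: base 9.2 cm, height 4.3 cm
print(triangle_area(9.2, 4.3))   # 19.78 (cm^2)

# Second example: the height meets an extension of the base,
# but the formula still only needs b = 4 and h = 6.7
print(triangle_area(4, 6.7))     # 13.4 (cm^2)

# The subtraction argument from the text, with an arbitrary extension x:
x = 2.5                          # any positive extension gives the same answer
larger  = triangle_area(4 + x, 6.7)
smaller = triangle_area(x, 6.7)
print(larger - smaller)          # 13.4 again
```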
Triangles exams for teachers
| Exam Name | File Size | Downloads | Upload date |
Classification – Sides
| Triangles – Classification – Sides – easy | 624.3 kB | 1271 | October 13, 2012 |
| Triangles – Classification – Sides – medium | 595.8 kB | 895 | October 13, 2012 |
| Triangles – Classification – Sides – hard | 454.1 kB | 788 | October 13, 2012 |
Classification – Angles
| Triangles – Classification – Angles – easy | 621.2 kB | 1012 | October 13, 2012 |
| Triangles – Classification – Angles – medium | 610.9 kB | 726 | October 13, 2012 |
| Triangles – Classification – Angles – hard | 454.6 kB | 584 | October 13, 2012 |
Classification – Sides and angles
| Triangles – Classification – Sides and angles – easy | 613.8 kB | 1060 | October 13, 2012 |
| Triangles – Classification – Sides and angles – medium | 610.2 kB | 806 | October 13, 2012 |
| Triangles – Classification – Sides and angles – hard | 454.2 kB | 662 | October 13, 2012 |
| Triangles – Finding angles | 528.5 kB | 1437 | October 13, 2012 |
| Triangles – Equations – Angles | 541.1 kB | 720 | October 13, 2012 |
Triangles – Area
| Triangles – Area – very easy | 659.3 kB | 474 | October 13, 2012 |
| Triangles – Area – easy | 638.7 kB | 447 | October 13, 2012 |
| Triangles – Area – medium | 653.3 kB | 521 | October 13, 2012 |
| Triangles – Area – hard | 647.2 kB | 423 | October 13, 2012 |
Triangles worksheets for students
| Worksheet Name | File Size | Downloads | Upload date |
| Triangles – Classification | 6.8 MB | 1014 | October 14, 2012 |
| Triangles – Find the missing angle | 6.9 MB | 2127 | October 14, 2012 |
| Triangles – Area | 6 MB | 620 | October 14, 2012 |
http://www.mathx.net/triangles/
During the seventeenth century, European mathematicians were at work on four major problems. These four problems gave birth to the subject of Calculus. The problems were the tangent line problem, the velocity and acceleration problem, the minimum and maximum problem, and the area problem. Each of these four problems involves the idea of limits.
The tangent line problem
There is a given function (f) and a point (P) on its graph. The idea of this problem is to find the equation of the tangent line to the graph at that point. This problem is equivalent to finding the slope of the tangent line at that point. This may be approximated by using a line through the point of tangency and a second point on the curve (Q), which gives us a secant line. As point Q approaches point P, the secant line becomes a better and better approximation of the tangent line. This uses the concept of limits: the limit as Q approaches P will give you the slope of the tangent line. In other words, choosing points closer and closer to the point of tangency gives you more and more accurate approximations. The derivative of a function gives us the slope of the tangent line to the function. Although partial solutions to this problem were given by Pierre de Fermat (1601-1665), Rene Descartes (1596-1650), Christian Huygens (1629-1695), and Isaac Barrow (1630-1677), credit for the first general solution is usually given to Sir Isaac Newton (1642-1727) and Gottfried Leibniz (1646-1716).
The velocity and acceleration problem
The velocity and acceleration of a particle can be found by using Calculus. This was one of the problems faced by mathematicians in the seventeenth century. The derivative of a function can be used not only to determine slopes, but also to determine the rate of change between two variables. This may be used to describe the motion of an object moving in a straight line. The function giving the object's location at each time is the position function, which, if differentiated (that is, if its derivative is found), gives us the velocity function. In other words, the velocity function is the derivative of the position function. You may also find the acceleration function by finding the derivative of the velocity function. So the velocity and acceleration problem helped in the development of Calculus.
The minimum and maximum problem
What if we want to examine a function by finding where it is increasing? Where is it decreasing? What is the behavior of its concavity? Where does it have a maximum point? Where does it have a minimum point? All of these questions were answered with the development of Calculus. The minimum or maximum of the function must occur at a critical point, or critical number. If we find the derivative of a function, its zeros are called critical numbers. Now, we must analyze the behavior of the function. Where the derivative is positive, the function is increasing. Where the derivative is negative, the function is decreasing. If the function is increasing and then changes to decreasing, that point is a relative maximum of the function. Similarly, if the function is decreasing and then changes to increasing, that point is a relative minimum. An easier way to analyze the minimum and maximum problem is to graph the derivative. If the derivative is negative just to the left of the critical number and positive just to the right of it, then the critical number is a minimum of the function.
Similarly, if the derivative is positive just to the left of the critical number and negative just to the right, that point is a maximum of the function. We may also analyze concavity. If the second derivative of the function is positive over a given interval, then the function is concave up over that interval. If the second derivative is negative, then the function is concave down.
The area problem
This classic Calculus problem is used to find the area of a plane region that is bounded by the graphs of functions. Like the tangent line problem, the limit concept is applied here. To approximate the area of the plane region underneath the graph, one may break the region up into several rectangles and sum up the areas of the rectangles. This method is a form of Riemann sums. It gives an approximation of the area under the graph. Now, if the number of rectangles is increased, the approximation becomes more and more precise. The area will therefore be the limit of the sum of the areas of the rectangles as the number of rectangles increases without bound. In other words, the limit as the number of rectangles approaches infinity will give you the area of the region. This eventually leads into the idea of integration.
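To make the rectangle idea concrete, here is a small Python sketch of a left-endpoint Riemann sum. The function x² on [0, 1] is just an illustrative choice (its exact area there is 1/3); nothing in the original text prescribes it:

```python
def riemann_sum_area(f, a, b, n):
    """Approximate the area under f on [a, b] with n left-endpoint rectangles."""
    width = (b - a) / n
    return sum(f(a + i * width) * width for i in range(n))

f = lambda x: x**2          # region under y = x^2 from 0 to 1; exact area is 1/3

for n in (10, 100, 1000, 100000):
    print(n, riemann_sum_area(f, 0.0, 1.0, n))
# 10      0.285
# 100     0.32835
# 1000    0.3328335
# 100000  0.333328...
# As n grows without bound, the sums approach the exact area 1/3.
```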
http://everything2.com/user/Irfan/writeups/Calculus
Define the following forces using a description and formulas. Identify when the forces above are acting on a body AND describe their direction of influence. Draw a free body diagram ("fbd") using the skills and definitions above.
Introduction to Free Body Diagrams
Free body diagrams, often abbreviated "fbd", are a tool for solving problems with multiple forces acting on a single body. The methods developed here can also be used for the summation of force fields. The purpose of a free body diagram is to reduce the complexity of a situation for easy analysis. The diagram is used as a starting point to develop a mathematical model of the forces acting on an object. Below is a picture of a flying jet. Imagine how time consuming and cluttered it would be if you were to draw this picture with the forces acting on the plane. It might look like the one below.
A free body diagram is a picture showing the forces that act on a body. Most importantly, it shows the forces' directions without the clutter of a detailed drawing of the body. The word "body" is used to describe any object. Physicists and engineers like to simplify the drawing of the object by drawing a dot instead of a detailed picture. Occasionally some simple details are added to further clarify the situation. Below is an example of a "body." A free body diagram of the jet might look like the one below.
Weight is a term we use to describe the pull of gravity. If a physicist had defined the terms we use every day in our conversations with each other, then instead of "weighing 120 pounds" at the doctor's office, the doctor would instead "measure the pull of gravity as a 120 pound force." If a body has mass then it has weight when it is near the surface of the Earth. Weight always pulls down towards the center of the Earth. On an fbd, weight always pulls towards the ground. Weight is defined as the mass of a body times the pull of gravity. Using our math model this is written as
W = mg
where "W" is the weight measured in newtons [N]. Always use a capital "N." [The letter you write must look like the accepted capital "N." That means it cannot look like a lower case "n" that is written bigger.] "m" is the mass in kilograms. "g" is the acceleration due to gravity. On the Earth's surface this is 9.80 m/s2.
The normal force is a force that is perpendicular to the surface and a reaction force to the presence of other forces. The normal force keeps two surfaces from sinking into each other. The symbol for the normal force is the Greek letter "eta." It looks like a lower case "n" with a tail. Currently you are sitting down or standing. The ground experiences a force because gravity pulls you down. The normal force is perpendicular to the surface and is a reaction to the force(s) holding you down. If you are standing on an incline, the normal force would be the reaction force keeping you from sinking into the incline AND it would be perpendicular to the incline's surface.
There are many types of frictional forces. No matter what math formula is used to describe the magnitude of the frictional force, they all point in the opposite direction of motion (or the intended motion if there were no friction) and parallel to the surfaces of contact. The symbol for a frictional force is a cursive "F." (The "F" you write does not need to look exactly like the one below, but it should look "cursive.") We are going to focus on friction due to contact between two surfaces. This frictional force's magnitude is a percentage of the normal force pressing the two surfaces together.
The "coefficient of friction" is the percentage of the normal force; in symbols, F = μη, where μ is the coefficient of friction and η is the normal force. It is found through experimentation and depends on the nature of the materials in contact. Just because there is a normal force does not mean that there has to be a frictional force. If you were standing on the slickest ice in the world, a normal force would be keeping you from sinking into the ice, but the frictional force of the "slickest" ice in the world might have no measurable value. Below is a table of example frictional coefficients. Notice that some coefficients are greater than one.
In the table above you can see that two types of frictional forces exist between the surfaces: static and kinetic friction. Static friction means the two surfaces in contact are not sliding across each other. Kinetic friction means the two surfaces are sliding across each other. The girl that is standing up in the middle of the playground slide (right side) is experiencing static friction between her shoes and the slide because these two surfaces are not moving relative to each other. The girl that is moving down the slide (left side) is experiencing kinetic friction between her pants and the slide. This is because the pants are moving relative to the slide's surface. In both cases the friction opposes the direction of motion (or the intended direction without friction) and is parallel to the surface. The freebody for each girl would look like the one below. The other forces have been dimmed to highlight the frictional force.
There are six basic principles of friction due to contact between surfaces:
1. Friction acts parallel to the surfaces that are in contact and in the direction opposite to the motion of the object or to any force tending to produce such motion.
2. Friction depends on the nature of the materials in contact and the smoothness of their surfaces.
3. Sliding friction is less than starting friction. Sliding and starting friction are called kinetic and static friction respectively.
4. Kinetic friction is practically independent of speed.
5. Friction is practically independent of the area of contact.
6. Friction is directly proportional to the force pressing the surfaces together. The reaction force to this pressing is the NORMAL force. (See the formula above.)
Tension exists in any body that is pulled by two opposing forces. Typically we talk about ropes and chains as being in tension, but any body can be put in tension. Tension is a pair of equal and opposite forces. In the drawing above, the two 20 N forces represent the tension in the rope. The forces are equal in magnitude and pull in opposite directions. The tension in the rope is said to be "20 N." In the picture below, the girls' arms are in tension. One of the clever features of tension and ropes has to do with corners and a single pulley. If the rope passes around a single pulley then the direction of the tension is redirected. Assuming the pulley is frictionless (as are all of our pulleys), the tension's magnitude will remain unchanged. The symbol for tension is "T." There is no formula for tension: tension's value has to be either known in the problem or calculated from the other forces.
The "net" of anything is what is left over after everything is added and subtracted. Example: when you get hired for a job, the employer promises $10 per hour. But when you get your paycheck, your NET pay is $7 per hour. This is because the federal government, state government and someone called "FICA" took out $3.
What the employer could have said is that your pay is $10 per hour but your net pay will be $7 per hour. When a body is changing velocity, the net force is not zero. This is because Newton's 2nd law says an unbalanced force shows up as a body changing its velocity. Below is an example showing you what this looks like, as well as the various free body diagrams. The net force is represented by its own symbol, shown in the diagrams below. The vector that represents the net force does not touch the body. This is because the "net force" is not a force that acts on the body; it is the mathematical result of adding up all the other forces. That is why it is called "net." Note in the animation above how, when the opposite force vectors are the same length, the net force is zero and the body moves with a constant velocity. [Examples "C" and "D."] When the forces in one direction are bigger than the forces in the other direction, this is shown on the freebody by drawing one arrow longer than the other. When this happens the net force is not zero and the body accelerates or decelerates. [Examples "A" and "B."] The math used to model the situation is derived directly from the free body diagram. (This is covered in class and in the online notes.) The next page shows how to apply the math to freebody diagrams.
by Tony Wayne ...(If you are a teacher, please feel free to use these resources in your teaching.)
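To tie the weight, friction and net-force formulas in these notes to actual numbers, here is a small Python sketch. The 5.0 kg mass, the 30 N pull and the friction coefficient are made-up illustration values; Python itself is not part of the original page:

```python
g = 9.80  # m/s^2, acceleration due to gravity near Earth's surface

m = 5.0                     # kg, mass of a crate dragged across a level floor
W = m * g                   # weight, W = m*g -> 49.0 N, pointing down
normal = W                  # level surface: the normal force balances the weight

mu_kinetic = 0.40           # hypothetical coefficient of friction for this surface pair
friction = mu_kinetic * normal   # F = mu * normal force -> 19.6 N, opposing the motion

applied = 30.0              # N, hypothetical pull to the right (positive direction)

# The net force is just the sum of every force along the line of motion.
net_force = applied - friction     # 10.4 N to the right
acceleration = net_force / m       # Newton's 2nd law: 2.08 m/s^2

print(W, normal, friction)         # 49.0 49.0 19.6
print(net_force, acceleration)     # 10.4 2.08

# If the pull were exactly 19.6 N, the net force would be zero and the crate
# would keep moving with a constant velocity, as in examples "C" and "D".
```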
http://www.mrwaynesclass.com/freebodies/reading/index.html
In relativity, proper-velocity, also known as celerity, is an alternative to velocity for measuring motion. Whereas velocity relative to an observer is distance per unit time where both distance and time are measured by the observer, proper velocity relative to an observer divides observer-measured distance by the time elapsed on the clocks of the traveling object. Proper velocity equals velocity at low speeds. Proper velocity at high speeds, moreover, retains many of the properties that velocity loses in relativity compared with Newtonian theory. For example, proper-velocity equals momentum per unit mass at any speed, and therefore has no upper limit. At high speeds, as shown in the figure at right, it is proportional to an object's energy as well.
Proper-velocity is one of three related derivatives in special relativity (coordinate velocity v = dx/dt, proper-velocity w = dx/dτ, and Lorentz factor γ = dt/dτ) that describe an object's rate of travel. For unidirectional motion, each of these is also simply related to a traveling object's hyperbolic velocity angle or rapidity η by
v = c tanh η,  w = c sinh η,  γ = cosh η.
In flat spacetime, proper-velocity is the ratio between distance traveled relative to a reference map-frame (used to define simultaneity) and proper time τ elapsed on the clocks of the traveling object. It equals the object's momentum p divided by its rest mass m, and is made up of the space-like components of the object's four-vector velocity. William Shurcliff's monograph mentioned its early use in the Sears and Brehme text. Fraundorf has explored its pedagogical value while Ungar, Baylis and Hestenes have examined its relevance from group theory and geometric algebra perspectives. Proper-velocity is sometimes referred to as celerity.
Unlike the more familiar coordinate velocity v, proper-velocity is useful for describing both super-relativistic and sub-relativistic motion. Like coordinate velocity and unlike four-vector velocity, it resides in the three-dimensional slice of spacetime defined by the map-frame. This makes it more useful for map-based (e.g. engineering) applications, and less useful for gaining coordinate-free insight. Proper-speed divided by lightspeed c is the hyperbolic sine of rapidity η, just as the Lorentz factor γ is rapidity's hyperbolic cosine, and coordinate speed v over lightspeed is rapidity's hyperbolic tangent.
Imagine an object traveling through a region of space-time locally described by Hermann Minkowski's flat-space metric equation (c dτ)² = (c dt)² − (dx)². Here a reference map frame of yardsticks and synchronized clocks defines map position x and map time t respectively, and the d preceding a coordinate means infinitesimal change. A bit of manipulation allows one to show that proper-velocity w = dx/dτ = γv, where as usual coordinate velocity v = dx/dt. Thus finite w ensures that v is less than lightspeed c. By grouping γ with v in the expression for relativistic momentum p, proper velocity also extends the Newtonian form of momentum as mass times velocity to high speeds without a need for relativistic mass.
Proper velocity addition formula
The addition formula for proper velocities involves the beta factor β = v/c of each frame. In the unidirectional case it becomes commutative and simplifies to a Lorentz-factor product times a coordinate-velocity sum, e.g. to w_AC = γ_AB γ_BC (v_AB + v_BC), as discussed in the application section below.
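The relations w = γv = c sinh η and the unidirectional addition rule are easy to verify numerically. Below is a short Python sketch; Python and the chosen test speed of 0.5c are my own illustrative choices, not part of the article:

```python
import math

c = 1.0  # work in units where lightspeed c = 1 (e.g. lightyears per year)

def gamma(v):
    """Lorentz factor from coordinate velocity."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

def proper_velocity(v):
    """w = gamma * v, i.e. momentum per unit mass."""
    return gamma(v) * v

def rapidity(v):
    """Hyperbolic velocity angle: v/c = tanh(eta)."""
    return math.atanh(v / c)

v = 0.5 * c
w, eta = proper_velocity(v), rapidity(v)
print(w, math.sinh(eta) * c)        # both ~0.577c: w = c*sinh(eta)
print(gamma(v), math.cosh(eta))     # both ~1.155:  gamma = cosh(eta)

# Unidirectional addition: w_AC = gamma_AB * gamma_BC * (v_AB + v_BC)
v_ab = v_bc = 0.5 * c
w_ac = gamma(v_ab) * gamma(v_bc) * (v_ab + v_bc)
v_ac = (v_ab + v_bc) / (1.0 + v_ab * v_bc / c**2)   # ordinary velocity addition
print(w_ac, proper_velocity(v_ac))  # both ~1.333c, so the two routes agree
```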
Relation to other velocity parameters
Speed table
The table below illustrates how the proper-velocity of w = c, or "one map-lightyear per traveler-year", is a natural benchmark for the transition from sub-relativistic to super-relativistic motion.
| Condition | Coordinate velocity v (dx/dt, in units of c) | Velocity angle η | Proper velocity w (dx/dτ, in units of c) | Lorentz factor γ (dt/dτ = E/mc²) |
| Traveler stopped in map-frame | 0 | 0 | 0 | 1 |
| Momentum = ½mc | 1/√5 ≅ 0.447 | ln[(1+√5)/2] ≅ 0.481 | ½ | √5/2 ≅ 1.118 |
| Rapidity of 0.5 hyperbolic radian | (e−1)/(e+1) ≅ 0.462 | ½ | ½(√e − 1/√e) ≅ 0.521 | ½(√e + 1/√e) ≅ 1.128 |
| Coordinate velocity = ½c | ½ | ½ ln[3] ≅ 0.549 | 1/√3 ≅ 0.577 | 2/√3 ≅ 1.155 |
| Momentum = mc | 1/√2 ≅ 0.707 | ln[1+√2] ≅ 0.881 | 1 | √2 ≅ 1.414 |
| Rapidity of 1 hyperbolic radian | (e²−1)/(e²+1) ≅ 0.761 | 1 | ½(e − 1/e) ≅ 1.175 | ½(e + 1/e) ≅ 1.543 |
| Kinetic energy = mc² | √3/2 ≅ 0.866 | ln[√3+2] ≅ 1.317 | √3 ≅ 1.732 | 2 |
| Momentum = 2mc | 2/√5 ≅ 0.894 | ln[2+√5] ≅ 1.444 | 2 | √5 ≅ 2.236 |
| Rapidity of 2 hyperbolic radians | (e⁴−1)/(e⁴+1) ≅ 0.964 | 2 | ½(e² − 1/e²) ≅ 3.627 | ½(e² + 1/e²) ≅ 3.762 |
| Coordinate velocity = c | 1 | ∞ | ∞ | ∞ |
Note from above that velocity angle η and proper-velocity w run from 0 to infinity and track coordinate-velocity when w << c. On the other hand when w >> c, proper-velocity tracks Lorentz factor while velocity angle is logarithmic and hence increases much more slowly.
Interconversion equations
The following equations convert between four alternate measures of speed (or unidirectional velocity) that flow from Minkowski's flat-space metric-equation:
Lorentz factor γ (energy over mc², ≥ 1): γ = 1/√(1 − (v/c)²) = √(1 + (w/c)²) = cosh η
Proper-velocity w (momentum per unit mass): w = γv = c sinh η
Coordinate velocity (v ≤ c): v = w/γ = c tanh η
Hyperbolic velocity angle or rapidity: η = sinh⁻¹(w/c) = tanh⁻¹(v/c) = ±cosh⁻¹(γ)
or, in terms of logarithms: η = ln[w/c + γ] = ½ ln[(1 + v/c)/(1 − v/c)]
Comparing velocities at high speed
Proper-velocity is useful for comparing the speed of objects with momentum per unit mass (w) greater than lightspeed c. The coordinate speed of such objects is generally near lightspeed, whereas proper-velocity tells us how rapidly they are covering ground on traveling-object clocks. This is important for example if, like some cosmic ray particles, the traveling objects have a finite lifetime. Proper velocity also clues us in to the object's momentum, which has no upper bound.
For example, a 45 GeV electron accelerated by the Large Electron-Positron Collider (LEP) at Cern in 1989 would have had a Lorentz factor γ of about 88,000 (45 GeV divided by the electron rest mass of 511 keV). Its coordinate speed v would have been about sixty-four trillionths shy of lightspeed c at 1 lightsecond per map second. On the other hand, its proper-speed would have been w = γv ~ 88,000 lightseconds per traveler second. By comparison, the coordinate speed of a 250 GeV electron in the proposed International Linear Collider (ILC) will remain near c, while its proper-speed will significantly increase to ~489,000 lightseconds per traveler second.
Proper-velocity is also useful for comparing relative velocities along a line at high speed. In this case
w_AC = γ_AB γ_BC (v_AB + v_BC),
where A, B and C refer to different objects or frames of reference. For example, w_AC refers to the proper-speed of object A with respect to object C. Thus in calculating the relative proper-speed, Lorentz factors multiply when coordinate speeds add. Hence each of two electrons (A and C) in a head-on collision at 45 GeV in the lab frame (B) would see the other coming toward them at v_AC ~ c and w_AC = 88,000² × (1 + 1) ≅ 1.55 × 10¹⁰ lightseconds per traveler second.
Thus from the target's point of view, colliders can explore collisions with much higher projectile energy and momentum per unit mass.
Proper-velocity-based dispersion relations
Plotting "(γ − 1) versus proper velocity" after multiplying the former by mc² and the latter by mass m, for various values of m, yields a family of kinetic energy versus momentum curves that includes most of the moving objects encountered in everyday life. Such plots can for example be used to show where lightspeed, Planck's constant, and Boltzmann energy kT figure in. To illustrate, the figure at right with log-log axes shows objects with the same kinetic energy (horizontally related) that carry different amounts of momentum, as well as how the speed of a low-mass object compares (by vertical extrapolation) to the speed after a perfectly inelastic collision with a large object at rest. Highly sloped lines (rise/run = 2) mark contours of constant mass, while lines of unit slope mark contours of constant speed. Objects that fit nicely on this plot are humans driving cars, dust particles in Brownian motion, a spaceship in orbit around the sun, molecules at room temperature, a fighter jet at Mach 3, one radio wave photon, a person moving at one lightyear per traveler year, the pulse of a 1.8 megajoule laser, a 250 GeV electron, and our observable universe with the blackbody kinetic energy expected of a single particle at 3 kelvin.
Unidirectional acceleration via proper velocity
Proper acceleration at any speed is the physical acceleration experienced locally by an object. In spacetime it is a three-vector acceleration with respect to the object's instantaneously varying free-float frame. Its magnitude α is the frame-invariant magnitude of that object's four-acceleration. Proper-acceleration is also useful from the vantage point (or spacetime slice) of external observers. Not only may observers in all frames agree on its magnitude, but it also measures the extent to which an accelerating rocket "has its pedal to the metal". In the unidirectional case, i.e. when the object's acceleration is parallel or anti-parallel to its velocity in the spacetime slice of the observer, the change in proper-velocity is the integral of proper acceleration over map-time, i.e. Δw = αΔt for constant α. At low speeds this reduces to the well-known relation between coordinate velocity and coordinate acceleration times map-time, i.e. Δv = aΔt. For constant unidirectional proper-acceleration, similar relationships exist between rapidity η and elapsed proper-time Δτ, as well as between Lorentz factor γ and distance traveled Δx. To be specific,
α = Δw/Δt = c Δη/Δτ = c² Δγ/Δx,
where as noted above the various velocity parameters are related by w = c sinh η = γv and γ = cosh η. These equations describe some consequences of accelerated travel at high speed. For example, imagine a spaceship that can accelerate its passengers at "1 gee" (or 1.03 lightyears/year²) halfway to their destination, and then decelerate them at "1 gee" for the remaining half so as to provide earth-like artificial gravity from point A to point B over the shortest possible time (Brachistochrone curve). For a map-distance of Δx_AB, the Δγ relation above predicts a midpoint Lorentz factor (up from its unit rest value) of γ_mid = 1 + α(Δx_AB/2)/c². Hence the round-trip time on traveler clocks will be Δτ = 4(c/α) cosh⁻¹[γ_mid], during which the time elapsed on map clocks will be Δt = 4(c/α) sinh[cosh⁻¹[γ_mid]].
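The round-trip figures quoted next can be reproduced directly from the γ_mid, Δτ and Δt expressions above. Here is a short Python sketch; the distances used (about 4.24, 27,000 and 2.5 million lightyears for Proxima Centauri, the galactic center and the Andromeda Galaxy) are standard values assumed for illustration rather than taken from the article:

```python
import math

c     = 1.0    # lightyear per year
alpha = 1.03   # "1 gee" expressed in lightyears/year^2, as in the text

def round_trip(map_distance_ly):
    """Traveler time and map time for an accelerate/decelerate round trip."""
    gamma_mid = 1.0 + alpha * (map_distance_ly / 2.0) / c**2
    eta_mid   = math.acosh(gamma_mid)
    tau = 4.0 * (c / alpha) * eta_mid             # years on traveler clocks
    t   = 4.0 * (c / alpha) * math.sinh(eta_mid)  # years on map (earth) clocks
    return tau, t

print(round_trip(4.24))       # Proxima Centauri: roughly (7.1, 12) years
print(round_trip(27_000.0))   # Milky Way's central black hole: ~(40, 54,000) years
print(round_trip(2_500_000))  # Andromeda Galaxy: ~(57, 5 million) years
```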
This imagined spaceship could offer round trips to Proxima Centauri lasting about 7.1 traveler years (~12 years on earth clocks), round trips to the Milky Way's central black hole of about 40 years (~54,000 years elapsed on earth clocks), and round trips to Andromeda Galaxy lasting around 57 years (over 5 million years on earth clocks). Unfortunately, sustaining 1-gee acceleration for years is easier said than done. See also - Kinematics: for studying ways that position changes with time - Lorentz factor: γ=dt/dτ or kinetic energy over mc2 - Rapidity: hyperbolic velocity angle in imaginary radians - Four-velocity: combining travel through time and space - Uniform Acceleration: holding coordinate acceleration fixed - Gullstrand–Painlevé coordinates: free-float frames in curved space-time. Notes and references - W. A. Shurcliff (1996) Special relativity: the central ideas (19 Appleton St, Cambridge MA 02138) - Francis W. Sears & Robert W. Brehme (1968) Introduction to the theory of relativity (Addison-Wesley, NY) LCCN 680019344, section 7-3 - P. Fraundorf (1996) "A one-map two-clock approach to teaching relativity in introductory physics" (arXiv:physics/9611011) - A. A. Ungar (2006) "The relativistic proper-velocity transformation group", Progress in Electromagnetics Research 60, 85-94. - W. E. Baylis (1996) Clifford (geometric) algebras with applications to physics (Springer, NY) ISBN 0-8176-3868-7 - D. Hestenes (2003) "Spacetime physics with geometric algebra", Am. J. Phys. 71, 691-714 - Bernard Jancewicz (1988) Multivectors and Clifford algebra in electrodynamics (World Scientific, NY) ISBN 9971-5-0290-9 - G. Oas (2005) "On the use of relativistic mass in various published works" (arXiv:physics/0504111) - Thomas Precession: Its Underlying Gyrogroup Axioms and Their Use in Hyperbolic Geometry and Relativistic Physics, Abraham A. Ungar, Foundations of Physics, Vol. 27, No. 6, 1997 - Analytic hyperbolic geometry and Albert Einstein's special theory of relativity, Abraham A. Ungar, World Scientific, 2008, ISBN 978-981-277-229-9 - Ungar, A. A. (2006), "The relativistic proper-velocity transformation group", Progress in Electromagnetics Research, PIER 60, pp. 85–94, equation (12) - B. Barish, N. Walker and H. Yamamoto, "Building the next generation collider" Scientific American (Feb 2008) 54-59 - This velocity-addition rule is easily derived from rapidities α and β, since sinh(α + β) = cosh α cosh β (tanh α + tanh β). - Edwin F. Taylor & John Archibald Wheeler (1966 1st ed. only) Spacetime Physics (W.H. Freeman, San Francisco) ISBN 0-7167-0336-X, Chapter 1 Exercise 51 page 97-98: "Clock paradox III" - Excerpts from the first edition of Spacetime Physics, and other resources posted by Edwin F. Taylor - William A. Shurcliff obituaries: BostonGlobe, NYtimes, WashingtonPost.
http://en.wikipedia.org/wiki/Proper_velocity
Rocket science deserves its reputation as a subject that only geniuses dare study. Modern rockets are immensely complex systems that push the limits of aeronautics, chemistry, materials science, and engineering. But at the same time, the basic physics of rockets is simple enough for a college freshman to grasp. It does not take a rocket scientist to understand how a rocket works.

A rocket produces thrust by expelling matter at high velocities. It does not take a rocket scientist to recognize that the momentum of the rocket-fuel system is conserved. Therefore, if the fuel is expelled at a rate −dm/dt with speed vex, a corresponding recoil force F = −vex dm/dt acts back on the rocket, accelerating it in a direction opposite the exhaust flow. Integrating Newton's Second Law over time, one finds a relation between the velocity boost and the mass loss:

Δv = vex ln(m0/m).   (1)

Here m0 is the launch mass of the rocket (payload plus fuel), while m is the mass of the payload. The required boost Δv depends on the specific mission, and usually varies between 8 and 20 km/s. With such high boost velocities, the logarithm in Equation (1) does not bode well. For a fixed exhaust velocity and payload mass, the launch mass increases exponentially with Δv. It took a three-hundred-foot Saturn rocket to send three astronauts to the moon and back; for a mission to Mars, the dimensions grow even more staggering. It does not take a rocket scientist to recognize that simply making bigger rockets will not get us to Mars.

It is possible, of course, to increase the exhaust velocity, but with a conventional (chemical) rocket, this approach only goes so far. The chemical reaction between the fuel and oxidizer heats up the propellant, and the ensuing thermal energy is converted into kinetic energy of the exhaust. In an ideal rocket, the conversion efficiency is 100%, and the exhaust velocity is

vex = √(2E/m),

where E/m is the energy density – the energy per unit mass released in the chemical reaction. Naturally, one wants to pick fuels that maximize the energy density; it turns out that hydrogen and oxygen are as good as you can get. This sets a limit on the exhaust velocity of vex < 4.5 km/s.

This is not good news for deep-space explorers. With an exhaust velocity limited to 4.5 km/s, the launch mass of a moon rocket will be at least 15 times the mass of its payload, and for a Mars rocket the ratio is around 100:1. Missions to the outer planets are staggeringly more expensive. The rocket equations stem from fundamental laws of physics – namely, energy and momentum conservation. No amount of hard work or ingenuity can circumvent them. It does not take a rocket scientist to recognize that deep space travel requires something beyond conventional chemical rockets.

The obvious solution to the energy-density problem is to use nuclear power. Uranium has an energy density a million times greater than hydrogen fuel. This raises the maximum exhaust velocity to around 5,000 km/s. However, it so happens that converting fission products into thrust is not easy, and most practical nuclear rocket concepts have exhaust velocities much smaller than this upper limit. Of these concepts, only one – the Nuclear Thermal Rocket – has ever been built and tested.
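To see what Equation (1) implies in practice, here is a minimal Python sketch of my own (not part of the original article). The mission Δv values are rough, assumed figures chosen to illustrate the point, and the ~10 km/s nuclear-thermal exhaust speed anticipates the value the article arrives at below.

```python
import math

def mass_ratio(delta_v_kms, v_exhaust_kms):
    """Launch-to-payload mass ratio m0/m from Equation (1): dv = vex * ln(m0/m)."""
    return math.exp(delta_v_kms / v_exhaust_kms)

# Illustrative mission boosts in km/s (assumed, back-of-envelope figures).
missions = {"Moon (~12 km/s)": 12.0, "Mars (~21 km/s)": 21.0}

for label, dv in missions.items():
    chem = mass_ratio(dv, 4.5)    # chemical H2/O2 rocket, vex ~ 4.5 km/s
    ntr = mass_ratio(dv, 10.0)    # nuclear thermal rocket, vex ~ 10 km/s (see below)
    print(f"{label}: chemical ~{chem:.0f}:1, nuclear thermal ~{ntr:.0f}:1")
# Output is roughly 14:1 vs 3:1 for the Moon and 106:1 vs 8:1 for Mars,
# in line with the ~15:1 / ~4:1 and ~100:1 / ~10:1 ratios quoted in the text.
```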
|Fig. 1: Schematic of a nuclear thermal rocket. Propellant enters through the turbopump on the right, is sent through passages in the reactor, and is expelled out the nozzle. Credit: CommiM, Wikipedia [c].|

The Nuclear Thermal Rocket consists of a high-temperature nuclear reactor with a series of thin channels for the propellant, as shown in Figure 1. The reactor is run as hot as practically possible, usually around 2500–2800 K, just below the melting point of the fuel. Hydrogen gas is used as a propellant because its low molecular mass enables it to be expelled at very high speeds. The propellant is sent through the channels, heated to the temperature of the reactor, and thereafter expelled through the rocket nozzle. Using the heat capacity of hydrogen, it is easy to compute the energy density of the fuel [a], and consequently the exhaust speed, which is plotted in Figure 2.

|Fig. 2: Plot of the maximum hydrogen exhaust speed as a function of reactor temperature [b].|

For a reactor operating at 3000 K, the propellant will be expelled at a speed of 10 km/s. This is far from the theoretical maximum of 5,000 km/s, but it is nevertheless more than twice the exhaust speed of the best conventional rockets. Thanks to the logarithm in Equation (1), this doubling produces substantial changes in the ratio of liftoff mass to payload mass; for a nuclear-powered lunar mission, the mass ratio is reduced from 15:1 to around 4:1, while for a Mars mission, it drops from 100:1 to 10:1. The impossible becomes possible.

There is another, simpler way to arrive at this factor of two. The velocity of the rocket propellant scales as vex ~ (T/m)^(1/2), where T is the propellant temperature and m is its molecular mass. In a chemical hydrogen-oxygen rocket, T ~ 6000 K and m = 18 amu. For a nuclear rocket, the temperature is halved (T ~ 3000 K), but the mass drops by a factor of nine (m = 2 amu), so the quotient increases by about a factor of four, doubling the exhaust speed.

Rocket reactors differ from power plant reactors in two main respects: their temperature and their power density. In order to beat their chemical competitors, a nuclear rocket must operate at very high temperatures. At the same time, it must be extremely light and compact, so as not to weigh down the spacecraft it is launching. These requirements pose a unique set of challenges that the rocket scientist must address when designing the reactor.

|Fig. 3: Cross-sectional view of fuel elements for the experimental NERVA reactor. Credit: NASA [d].|

Obviously, the reactor should be designed to run at the hottest temperature possible, i.e. just below the melting point of the fuel elements. The fuel of choice is typically highly enriched uranium carbide, a hard ceramic material that melts at 3100 K, and the reactor itself is typically run at 2500–2800 K. The fuel is dispersed within a matrix of graphite, chosen because it is one of the few moderators that can withstand the extreme temperatures. (Certain metallic carbides can also be used as a reactor material, although many are neutron-absorbing and thus necessitate that the rocket be run as a fast reactor.) Graphite, however, has one major limitation: at the extreme temperatures involved, it is quickly corroded by the hydrogen propellant to produce hydrocarbons. This problem can be overcome (or at least mitigated) by lining the propellant channels with a thin layer of metallic carbide, NbC being used in test reactors in the 1960's. The carbide slows, but does not stop, the corrosion, allowing test reactors to run for as long as 100 minutes. Compactness mandates that the reactor run at a power density far above that of a power plant.
During America's nuclear rocket program, the record average power density was set by the Pewee reactor (2.34 GW/m³), and some more advanced reactor concepts work at upwards of 40 GW/m³. To maximize the power density, one uses highly enriched uranium fuel and surrounds the reactor with a neutron reflector.

|Fig. 4: Reactor failure test KIWI-TNT. Credit: NASA [d].|

While the rocket is designed to avoid radioactive leakage products, inevitably a small amount of radioactive waste will make it into the rocket's exhaust. This, in addition to the fear of a Chernobyl-like rocket meltdown, has led many to believe that nuclear rockets should not be operated in the Earth's atmosphere. Even so, a reactor meltdown was simulated in the KIWI-TNT test in 1965, and the radioactive fallout was not significant. Moreover, the control rods can be fixed during launch, rendering the nuclear reactor safely subcritical even in the event of a launch abort.

Protecting the crew from radiation is as important as preventing a nuclear meltdown. Neutron shielding, which can weigh several tons, must be installed between the reactor and the crew, and the distance between the two should be maximized to take greatest advantage of the fact that radiation flux falls off with the square of the distance. Placing the fuel tanks between the crew and the reactor is a very effective way to shield the crew, since hydrogen makes an excellent neutron scatterer. Paradoxically, a well-shielded nuclear rocket will on the whole decrease the crew's radiation exposure for long missions. This is because the nuclear rocket, with its larger exhaust velocity, can complete the journey more quickly. Space, not the reactor, is the primary radiation concern. As Table 2 demonstrates, the key to reducing radiation exposure is to limit mission time, which is one thing nuclear rockets are particularly well-suited to do.

America's nuclear rocket program, Project Rover, was established in 1955 and ran for nearly two decades until its cancellation in 1972. Rover was operated from Los Alamos Scientific Laboratory (today LANL) under the direction of NASA. Rover demonstrated that a nuclear rocket was indeed viable with 1960's technology, but for a variety of reasons it was canceled and has been largely ignored since.

|Fig. 5: List of important reactor tests conducted during the Rover program [d]. See Ref.|

The first reactors built were the KIWI reactors: a series of non-flyable engines designed to test the physics of hydrogen-cooled reactor designs. KIWI-A operated at a modest 100 MW, and its successor KIWI-B ran at ten times this power. However, a faulty design feature caused the reactor to vibrate uncontrollably and the device had to be reconfigured. Later tests, however, conclusively proved that nuclear rocketry could be feasible. The NERVA reactor line built on the knowledge gained from KIWI, but was designed to be a fully functional rocket engine rather than a scientific prototype. The goal was a 1 MN thruster, but due to budget cuts in the late 1960's it was scaled down to a 330 kN version. Tests began in 1964 and culminated in the 244 kN XE' engine tested just before the program's cancellation. In parallel with NERVA, Los Alamos continued to experiment on more advanced rocket prototypes, including PHOEBUS, which was intended to scale up KIWI, and Pewee, which focused on small, high-power-density engines with advanced fuel elements.
In 1961, noting Project Rover's rapid progress, NASA considered replacing the upper J-2 stage of the Saturn rocket with a nuclear engine. Such an engine could reduce the mass of the stage by up to 30%. However, instabilities in the early KIWI engines caused serious concern that the nuclear technology was not yet viable, and in 1963 NASA chose to use a chemical upper stage instead, deferring indefinitely plans to test an in-flight nuclear rocket. By the time NERVA had built a viable, flight-ready nuclear rocket, we had landed a man on the moon and the space race was over. In 1972, facing budget cuts in the wake of the Vietnam War, Project Rover came to an end.

While the NERVA program created a tested, flight-ready rocket engine with 1960's technology, it has several limitations. For one, the exhaust velocity, while much greater than that of a chemical rocket, is far less than theoretically possible. In addition, the thrust-to-weight ratio (4:1) is marginal. More advanced concepts have been considered to overcome NERVA's limitations, but none have been developed into a working model. Two of the most promising solid-core alternatives are the Particle Bed reactor, designed for the Air Force's Space Nuclear Thermal Propulsion Program, and the CERMET reactor, also developed by the Air Force. The particle bed design contains the fuel in a dispersion of 1-mm particles cooled by a flow of hydrogen propellant. Due to the increased surface area, the reactor can operate at over 5 times the power density of NERVA, achieving a thrust-to-weight ratio of 20:1. The CERMET reactor, by contrast, looks much more like NERVA but operates as a fast reactor. Its chief advantage is robustness and longevity; since it does not require a graphite moderator, it does not suffer the corrosion problems that plague NERVA. It is believed that the CERMET engine could burn for as long as 40 hours. Such a reactor would be advantageous for a refuelable earth-to-moon "ferry" or a reusable Mars rocket [4, 8].

Liquid-core and gas-core nuclear reactors have been considered to raise the fuel temperature above 3000 K and increase the propellant velocity beyond the 10 km/s limit for solid-core reactors. Such designs are not yet feasible with current technology and pose many unanswered physics and design questions, but offer the potential of dramatically reduced mission times and liftoff-to-payload mass ratios. One such concept, the Open Cycle Gas Core Reactor, is shown in the figure below.

|Fig. 6: Several advanced nuclear rocket concepts considered in a 1991 space exploration technology review. From left to right: Particle Bed Reactor, CERMET Fast Reactor, Open-Cycle Gas Core Reactor [d]. See Ref.|

On April 15, 2010, President Obama outlined a new vision for manned deep-space exploration. By 2025, man is to set foot on a near-Earth asteroid for the first time in history, paving the way for a much longer Martian expedition in the mid-2030's. But will America put forward the resources needed to accomplish this extraordinary challenge? With conventional technology, such a mission would take at least two years and require tens if not hundreds of billions of dollars. The answer, consequently, is "no". This is not a lack of "vision"; it is an exercise in economy. Those billions could be better spent elsewhere. Will nuclear propulsion change this? No one can be sure. The physics is sound. The technology is tested. Project Rover died with the space race, but if that race is ever rekindled, perhaps we will consider bringing it back.
© Ryan Hamerly. The author grants permission to copy, distribute and display this work in unaltered form, with attribution to the author, for noncommercial purposes only. All other rights, including commercial rights, are reserved to the author. [a] Strictly speaking, the exhaust velocity is limited by the specific enthalpy of the fuel, not the energy density. This comes from Bernoulli's Law, which states that along laminar flows, the quantity h + v2/2 is conserved. However, in a nuclear rocket, the fuel is heated at quasi-constant pressure, so the enthalpy increase equals the energy injected into the fuel by the reactor, and so energy conservation is not violated. S. Borowski, R. Corban, M. McGuire, & E. Beke, "Nuclear Thermal Rocket / Vehicle Design Options for Future NASA Missions to the Moon and Mars", in Space Programs and Technologies Conference and Exhibit (Huntsville, AL, 21-23 Sep. 1993). ADS: 1995STIN...9611955B J. S. Clark & T. J. Miller, "Nuclear Rocket Propulsion: NASA Plans and Progress – FY 1991", in Proceedings of the 26th Intersociety Energy Conversion Engineering Conference (Boston, MA, 4-9 Aug. 1991). ADS: 1991iece....1..391C
http://www.stanford.edu/~rhamerly/cgi-bin/Ph241-1/Ph241-1.php
Science Fair Project Encyclopedia A hard disk uses rigid rotating platters. It stores and retrieves digital data from a planar magnetic surface. Information is written to the disk by transmitting an electromagnetic flux through an antenna or write head that is very close to a magnetic material, which in turn changes its polarization due to the flux. The information can be read back in a reverse manner, as the magnetic fields cause electrical change in the coil or read head that passes over it. A typical hard disk drive design consists of a central axis or spindle upon which the platters spin at a constant speed. Moving along and between the platters on a common armature are the read-write heads, with one head for each platter face. The armature moves the heads radially across the platters as they spin, allowing each head access to the entirety of the platter. The associated electronics control the movement of the read-write armature and the rotation of the disk, and perform reads and writes on demand from the disk controller. Modern drive electronics are capable of scheduling reads and writes efficiently across the disk, and of remapping sectors of the disk which have failed. Also, most major hard drive and motherboard vendors now support S.M.A.R.T. technology, by which impending failures can often be predicted, allowing the user to be alerted in time to prevent data loss. The (mostly) sealed enclosure protects the drive internals from dust, condensation, and other sources of contamination. The hard disk's read-write heads fly on an air bearing (a cushion of air) only nanometres above the disk surface. The disk surface and the drive's internal environment must therefore be kept immaculately clean, as fingerprints, hair, dust, and even smoke particles have mountain-sized dimensions when compared to the submicroscopic gap that the heads maintain. Some people believe a disk drive contains a vacuum — this is incorrect, as the system relies on air pressure inside the drive to support the heads at their proper flying height while the disk is in motion. Another common misconception is that a hard drive is totally sealed. A hard disk drive requires a certain range of air pressures in order to operate properly. If the air pressure is too low, the air will not exert enough force on the flying head, the head will not be at the proper height, and there is a risk of head crashes and data loss. (Specially manufactured sealed and pressurized drives are needed for reliable high-altitude operation, above about 10,000 feet. Please note this does not apply to pressurized enclosures, like an airplane cabin.) Modern drives include temperature sensors and adjust their operation to the operating environment. Hard disk drives are not airtight. They have a permeable filter (a breather filter) between the top cover and inside of the drive, to allow the pressure inside and outside the drive to equalize while keeping out dust and dirt. The filter also allows moisture in the air to enter the drive. Very high humidity year-round will cause accelerated wear of the drive's heads (by increasing stiction, or the tendency for the heads to stick to the disk surface, which causes physical damage to the disk and spindle motor). You can see these breather holes on all drives -- they usually have a warning sticker next to them, informing the user not to cover the holes. The air inside the operating drive is constantly moving too, being swept in motion by friction with the spinning disk platters. 
This air passes through an internal filter to remove any leftover contaminants from manufacture, any particles that may have somehow entered the drive, and any particles generated by head crash. Due to the extremely close spacing of the heads and disk surface, any contamination of the read-write heads or disk platters can lead to a head crash — a failure of the disk in which the head scrapes across the platter surface, often grinding away the thin magnetic film. For GMR heads in particular, a minor head crash from contamination (that does not remove the magnetic surface of the disk) will still result in the head temporarily overheating, due to friction with the disk surface, and renders the disk unreadable until the head temperature stabilizes. Head crashes can be caused by electronic failure, a sudden power failure, physical shock, wear and tear, or poorly manufactured disks. Normally, when powering down, a hard disk moves its heads to a safe area of the disk, where no data is ever kept (the landing zone). However, especially in old models, sudden power interruptions or a power supply failure can result in the drive shutting down with the heads in the data zone, which increases the risk of data loss. Newer drives are designed such that the rotational inertia in the platters is used to safely park the heads in the case of unexpected power loss. In recent years, IBM pioneered drives with "head unloading" technology, where the heads are lifted off the platters onto "ramps" instead of having them rest on the platters. Other manufacturers have begun using this technology as well. Spring tension from the head mounting constantly pushes the heads towards the disk. While the disk is spinning, the heads are supported by an air bearing, and experience no physical contact wear. The sliders (the part of the head that is closest to the disk and contains the pickup coil itself) are designed to reliably survive a number of landings and takeoffs from the disk surface, though wear and tear on these microscopic components eventually takes its toll. Most manufacturers design the sliders to survive 50,000 contact cycles before the chance of damage on startup rises above 50%. However, the decay rate is not linear — when a drive is younger and has fewer start/stop cycles, it has a better chance of surviving the next startup than an older, higher-mileage drive (literally, as the head drags along the drive surface until the air bearing is established). For the Maxtor DiamondMax series of drives, for instance, the drive typically has a 0.02% chance of failing after 4,500 cycles, a 0.05% chance after 7,500 cycles, with the chance of failure rising geometrically to 50% after 50,000 cycles, and increasing ever after. Using rigid platters and sealing the unit allows much tighter tolerances than in a floppy disk. Consequently, hard disks can store much more data than floppy disk, and access and transmit it faster. In 2004, a typical workstation hard disk might store between 80 GiB and 400 GiB of data, rotate at 5,400 to 10,000 rpm, and have an average transfer rate of over 30 MB/s. The fastest workstation hard drives spin at 15,000 rpm. Notebook hard drives, which are physically smaller than their desktop counterparts, tend to be slower and have less capacity. Most spin at only 4,200 rpm or 5,400 rpm, though the newest top models spin at 7,200 rpm. - Seek time is a measure of the speed with which the drive can position its read/write heads over any particular data track. 
Because neither the starting position of the head nor the distance from there to the desired track is fixed, seek time varies greatly, and it is almost always measured as an average seek time, though full-track (the longest possible) and track-to-track (the shortest possible) seeks are also quoted sometimes. The standard way to measure seek time is to time a large number of disk accesses to random locations, subtract the latency (see below) and take the mean. Note, however, that two different drives with identical average seek times can display quite different performance characteristics. Seek time is always measured in milliseconds (ms), and often regarded as the single most important determinant of drive performance, though this claim is debated. (More on seek time.) - All drives have rotational latency: the time that elapses between the moment when the read/write head settles over the desired data track and the moment when the first byte of the required data appears under the head. For any individual read or write operation, latency is random between zero (if the first data sector happens to be directly under the head at the exact moment that the head is ready to begin reading or writing) and the full rotational period of the drive (for a typical 7200 rpm drive, just under 8.4 ms). However, on average, latency is always equal to one half of the rotational period. Thus, all 5400 rpm drives of any make or model have 5.56 ms latency; all 7200 rpm drives, 4.17 ms; all 10,000 rpm drives, 3.0 ms; and all 15,000 rpm drives have 2.0 ms latency. Like seek time, latency is a critical performance factor and is always measured in milliseconds. (More on latency.) - The internal data rate is the speed with which the drive's internal read channel can transfer data from the magnetic media. (Or, less commonly, in the reverse direction.) Previously a very important factor in drive performance, it remains significant but less so than in prior years, as all modern drives have very high internal data rates. Internal data rates are normally measured in Megabits per second (Mbit/s). Subsidiary performance factors include: - Access time is simply the sum of the seek time and the latency. It is important not to mistake seek time figures for access time figures! The access time is by far the most important performance benchmark of a modern HDD. It almost alone defines how fast the disk performs in a typical system. However, people tend to pay much more attention to the data rates, which rarely make any significant difference in typical systems. Of course, in some usage scenarios it may be vise-versa, so you need to know your system before buying a HDD. - The external data rate is the speed with which the drive can transfer data from its buffer to the host computer system. Although in theory this is vital, in practice it is usually a non-issue. It is a relatively trivial matter to design an electronic interface capable of outpacing any possible mechanical read/write mechanism, and it is routine for computer makers to include a hard drive controller interface that is significantly faster than the drive it will be attached to. As a general rule, modern ATA and SCSI interfaces are capable of dealing with at least twice as much data as any single drive can deliver; they are, after all, designed to handle two or more drives per bus even though a desktop computer usually mounts only one. 
For a single-drive computer, the difference between ATA-100 and ATA-133, for example, is largely one of marketing rather than performance. No drive yet manufactured can utilise the full bandwidth of an ATA-100 interface, and few are able to send more data than an ATA-66 interface can accept. The external data rate is usually measured in MB/s or MiB/s. - Command overhead is the time it takes the drive electronics to interpret instructions from the host computer and issue commands to the read/write mechanism. In modern drives it is negligible. Access and interfaces Back in the days of the ST-506 interface, the data encoding scheme was also important. The first ST-506 disks used Modified Frequency Modulation (MFM) encoding (which is still used on the common "1.44 MB" (1.4 MiB) 3.5-inch floppy), and ran at a data rate of 5 megabits per second. Later on, controllers using 2,7 RLL (or just "RLL") encoding increased this by half, to 7.5 megabits per second; it also increased drive capacity by half. Many ST-506 interface drives were only certified by the manufacturer to run at the lower MFM data rate, while other models (usually more expensive versions of the same basic drive) were certified to run at the higher RLL data rate. In some cases, the drive was overengineered just enough to allow the MFM-certified model to run at the faster data rate; however, this was often unreliable and was not recommended. (An RLL-certified drive could run on a MFM controller, but with 1/3 less data capacity and speed.) ESDI also supported multiple data rates (ESDI drives always used 2,7 RLL, but at 10, 15 or 20 megabits per second), but this was usually negotiated automatically by the drive and controller; most of the time, however, 15 or 20 megabit ESDI drives weren't downward compatible (i.e. a 15 or 20 megabit drive wouldn't run on a 10 megabit controller). ESDI drives typically also had jumpers to set the number of sectors per track and (in some cases) sector size. SCSI originally had just one speed, 5 MHz (for a maximum data rate of 5 megabytes per second), but this was increased dramatically later. The SCSI bus speed had no bearing on the drive's internal speed because of buffering between the SCSI bus and the drive's internal data bus; however, many early drives had very small buffers, and thus had to be reformatted to a different interleave (just like ST-506 drives) when used on slow computers, such as early IBM PC compatibles and Apple Macintoshes. ATA drives have typically had no problems with interleave or data rate, due to their controller design, but many early models were incompatible with each other and couldn't run in a master/slave setup (two drives on the same cable). This was mostly remedied by the mid-1990s, when ATA's specfication was standardised and the details begun to be cleaned up, but still causes problems occasionally (especially with CD-ROM and DVD-ROM drives, and when mixing Ultra DMA and non-UDMA devices). Serial ATA does away with master/slave setups entirely, placing each drive on its own channel (with its own set of I/O ports) instead. - Capacity (measured in gigabytes) - MTBF (mean time between failures) - Power consumption (especially important in battery-powered laptops) - audible noise (in dBA) - G-shock rating (surprisingly high in modern drives) There are two modes of addressing the data blocks on more recent hard disks. 
The older one is the CHS addressing (Cylinder-Head-Sector), used on old ST-506 and ATA drives and internally by the PC BIOS, and the more recent one the LBA (Logical Block Addressing), used by SCSI drives and newer ATA drives (ATA drives power up in CHS mode for historical reasons). CHS describes the disk space in terms of its physical dimensions, data-wise; this is the traditional way of accessing a disk on IBM PC compatible hardware, and while it works well for floppies (for which it was originally designed) and small hard disks, it caused problems when disks started to exceed the design limits of the PC's CHS implementation. The traditional CHS limit was 1024 cylinders, 16 heads and 63 sectors; on a drive with 512-byte sectors, this comes to 504 MiB (528 megabytes). The origin of the CHS limit lies in a combination of the limitations of IBM's BIOS interface (which allowed 1024 cylinders, 256 heads and 64 sectors; sectors were counted from 1, reducing that number to 63, giving an addressing limit of 8064 MiB or 7.8 GiB), and a hardware limitation of the AT's hard disk controller (which allowed up to 65536 cylinders and 256 sectors, but only 16 heads, putting its addressing limit at 2^28 bits or 128 GiB). When drives larger than 504 MiB began to appear in the mid-1990s, many system BIOSes had problems communicating with them, requiring LBA BIOS upgrades or special driver software to work correctly. Even after the introduction of LBA, similar limitations reappeared several times over the following years: at 2.1, 4.2, 8.4, 32, and 128 GiB. The 2.1, 4.2 and 32 GiB limits are hard limits: fitting a drive larger than the limit results in a PC that refuses to boot, unless the drive includes special jumpers to make it appear as a smaller capacity. The 8.4 and 128 GiB limits are soft limits: the PC simply ignores the extra capacity and reports a drive of the maximum size it is able to communicate with. SCSI drives, however, have always used LBA addressing, which describes the disk as a linear, sequentially-numbered set of blocks. SCSI mode page commands can be used to get the physical specifications of the disk, but this is not used to read or write data; this is an artifact of the early days of SCSI, circa 1986, when a disk attached to a SCSI bus could just as well be an ST-506 or ESDI drive attached through a bridge (and therefore having a CHS configuration that was subject to change) as it could a native SCSI device. Because PCs use CHS addressing internally, the BIOS code on PC SCSI host adapters does CHS-to-LBA translation, and provides a set of CHS drive parameters that tries to match the total number of LBA blocks as closely as possible. ATA drives can either use their native CHS parameters (only on very early drives; hard drives made since the early 1990s use zone bit recording, and thus don't have a set number of sectors per track), use a "translated" CHS profile (similar to what SCSI host adapters provide), or run in ATA LBA mode, as specified by ATA-2. To maintain some degree of compatibility with older computers, LBA mode generally has to be requested explicitly by the host computer. ATA drives larger than 8 GiB are always accessed by LBA, due to the 8 GiB limit described above. Most of the world's hard disks are now manufactured by just a handful of large firms: Seagate, Maxtor, Western Digital, Samsung, and the former drive manufacturing division of IBM, now sold to Hitachi. Fujitsu continues to make specialist notebook and SCSI drives but exited the mass market in 2001. 
Toshiba is a major manufacturer of 2.5-inch and 1.8-inch notebook drives. Firms that have come and gone Dozens of former hard drive manufacturers have gone out of business, merged, or closed their hard drive divisions; as capacities and demand for products increased, profits became hard to find, and there were shakeouts in the late 1980s and late 1990s. The first notable casualty of the business in the PC era was Computer Memories International or CMI; after the 1985 incident with the faulty 20MB AT drives, CMI's reputation never recovered, and they exited the hard drive business in 1987. Another notable failure was MiniScribe, who went bankrupt in 1990 after it was found that they had "cooked the books" and inflated sales numbers for several years. Many other smaller companies (like Kalok , Microscience , LaPine, Areal, Priam and PrairieTek) also did not survive the shakeout, and had disappeared by 1993; Micropolis was able to hold on until 1997, and JTS, a relative latecomer to the scene, lasted only a few years and was gone by 1999. Rodime was also an important manufacturer during the 1980s, but stopped making drives in the early 1990s amid the shakeout and now concentrates on technology licensing; they hold a number of patents related to 3.5-inch form factor hard drives. There have also been a number of notable mergers in the hard disk industry: - Tandon sold its disk manufacturing division to Western Digital (which was then a controller maker and ASIC house) in 1988; by the early 1990s Western Digital disks were among the top sellers. - Quantum bought DEC's storage division in 1994, and later (2000) sold the hard disk division to Maxtor to concentrate on tape drives. - In 1995, Conner Peripherals announced a merger with Seagate (who had earlier bought Imprimis from CDC), which completed in early 1996. - JTS infamously merged with Atari in 1996, giving it the capital it needed to bring its drive range into production. - In 2003, following the controversy over the mass failures of the Deskstar 75GXP range (which resulted in lost sales of its follow-ons), hard disk pioneer IBM sold the majority of its disk division to Hitachi, who renamed it Hitachi Global Storage Technologies. "Marketing" capacity versus true capacity It is important to note that hard drive manufacturers often use the metric definition of the prefixes "giga" and "mega." However, nearly all operating system utilities report capacities using binary definitions for the prefixes. This is largely historical, since when storage capacities started to exceed thousands of bytes, there were no standard binary prefixes (the IEC only standardized binary prefixes in 1999), so 210 (1024) bytes was called a kilobyte because 1024 is "close enough" to the metric prefix kilo, which is defined as 103 or 1000. This trend became habit and continued to be applied to the prefixes "mega," "giga," and even "tera." Obviously the discrepancy becomes much more noticeable in reported capacities in the multiple gigabyte range, and users will often notice that the volume capacity reported by their OS is significantly less than that advertised by the hard drive manufacturer. For example, a drive advertised as 200 GB can be expected to store close to 200 x 109, or 200 billion, bytes. This uses the proper SI definition of "giga," 109 and cannot be considered as incorrect. 
Since utilities provided by the operating system probably define a Gigabyte as 230, or 1073741824, bytes, the reported capacity of the drive will be closer to 186.26 GB (actually, GiB), a difference of well over ten gigabytes. For this very reason, many utilities that report capacity have begun to use the aforementioned IEC standard binary prefixes (e.g. KiB, MiB, GiB) since their definitions are not ambiguous. Another side point is that many people mistakenly attribute the discrepancy in reported and advertised capacities to reserved space used for file system and partition accounting information. However, this data rarely occupies more than several KiB or a few MiB, and therefore cannot possibly account for the apparent "loss" of tens of Gigabytes. Hard disk usage From the original use of a hard drive in a single computer, techniques for guarding against hard disk failure were developed such as the redundant array of independent disks (RAID). Hard disks are also found in network attached storage devices, but for large volumes of data are most efficiently used in a storage area network. Applications for hard disk drives expanded to include personal video recorder(TiVo), portable music players(Apple iPOD), and digital camera's. In 2005 the first cellular telephone to include a hard disk drive was introduced by Samsung. The first computer with a hard disk drive as standard was the IBM 350 Disk File, introduced in 1955 with the IBM 305 computer. This drive had fifty 24 inch platters, with a total capacity of five million characters. In 1952, an IBM engineer named Reynold Johnson developed a massive hard disk consisting of fifty platters, each two feet wide, that rotated on a spindle at 1200 rpm with read/write heads for the first database running RCAs Bismark computer. In 1973, IBM introduced the 3340 "Winchester" disk system (the 30MB + 30 millisecond access time led the project to be named after the Winchester 30-30 rifle), the first to use a sealed head/disk assembly (HDA). Almost all modern disk drives now use this technology, and the term "Winchester" became a common description for all hard disks, though generally falling out of use during the 1990s. For many years, hard disks were large, cumbersome devices, more suited to use in the protected environment of a data center or large office than in a harsh industrial environment (due to their delicacy), or small office or home (due to their size and power consumption). Before the early 1980s, most hard disks had 8-inch or 14-inch platters, required an equipment rack or a large amount of floor space (especially the large removable-media drives, which were often referred to as "washing machines"), and in many cases needed special power hookups for the large motors they used. Because of this, hard disks were not commonly used with microcomputers until after 1980, when Seagate Technology introduced the ST-506, the first 5.25-inch hard drive, with a capacity of 5 megabytes. In fact, in its factory configuration the original IBM PC (IBM 5150) was not equipped with a hard drive. Most microcomputer hard disk drives in the early 1980s were not sold under their manufacturer's names, but by OEMs as part of larger peripherals (such as the Corvus Disk System and the Apple ProFile). The IBM PC/XT had an internal hard disk, however, and this started a trend toward buying "bare" drives (often by mail order) and installing them directly into a system. 
Hard disk makers started marketing to end users as well as OEMs, and by the mid-1990s, hard disks had become available on retail store shelves. While internal drives became the system of choice on PCs, external hard drives remained popular for much longer on the Apple Macintosh and other platforms. Every Mac made between 1986 and 1998 has a SCSI port on the back, making external expansion easy; also, "toaster" Macs did not have easily accessible hard drive bays (or, in the case of the Mac Plus, any hard drive bay at all), so on those models, external SCSI disks were the only reasonable option. External SCSI drives were also popular with older microcomputers such as the Apple II series and the Commodore 64, and were also used extensively in servers, a usage which is still popular today. The appearance in the late 1990s of high-speed external interfaces such as USB and IEEE 1394 (FireWire) has made external disk systems popular among regular users once again, especially for users that move large amounts of data between two or more locations, and most hard disk makers now make their disks available in external cases. The capacity of hard drives has grown exponentially over time. With early personal computers, a drive with a 20 megabyte capacity was considered large. In the latter half of the 1990's, hard drives with capacities of 1 gigabyte and greater became available. As of early 2005, the "smallest" desktop hard disk in production has a capacity of 40 gigabytes, while the largest-capacity drives approach one half terabyte (500 gigabytes), and are expected to exceed that mark by year's end. As far as PC history is concerned - the major drive families have been MFM, RLL, ESDI, SCSI, IDE, EIDE, and now SATA. MFM drives required that the electronics on the "controller" be compatible with the electronics on the card - disks and controllers had to be compatible. RLL (Run Length Limited) was a way of encoding bits onto the platters that allowed for better density. Most RLL drives also needed to be "compatible" with the controllers that communicated with them. ESDI was an interface developed by Maxtor. It allowed for faster comminication between the PC and the disk. SCSI (originally named SASI for Shuman (sic) Associates) or Small Computer System Interface was an early competitor with ESDI. When the price of electronics dropped (and because of a demand by consumers) the electronics that had been stored on the controller card was moved to the disk drive itself. This advance was known as "Independent Drive Electronics" or IDE. Eventually, IDE manufacturers wanted the speed of IDE to approach the speed of SCSI drives. IDE drives were slower because they did not have as big a cache as the SCSI drives, and they could not write directly to RAM. IDE manufacturers attempted to close this speed gap by introducing Logical Block Addressing (LBA). These drives were known as EIDE. While EIDE was introduced, though, SCSI manufacturers continued to improve SCSI's performance. The increase in SCSI performance came at a price -its interfaces were more expensive. In order for EIDE's performance to increase (while keeping the cost of the associated electronics low), it was realized that the only way to do this was to move from "parallel" interfaces to "serial" interfaces, the result of which is the SATA interface. Fiber channel interfaces are left to discussions of server drives. 
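The CHS addressing limits and the GB/GiB discrepancy discussed above are easy to reproduce with a few lines of arithmetic. Here is a minimal Python sketch of my own (not part of the original encyclopedia entry); the function names are mine, and the geometry figures are the ones quoted earlier.

```python
def chs_capacity(cylinders, heads, sectors, bytes_per_sector=512):
    """Capacity in bytes of a CHS-addressed disk (sector numbers start at 1)."""
    return cylinders * heads * sectors * bytes_per_sector

def chs_to_lba(c, h, s, heads_per_cyl, sectors_per_track):
    """Classic CHS -> LBA translation: LBA = (c*H + h)*S + (s - 1)."""
    return (c * heads_per_cyl + h) * sectors_per_track + (s - 1)

GiB = 2**30
MiB = 2**20

print(f"BIOS limit (1024 x 256 x 63):  {chs_capacity(1024, 256, 63) / GiB:6.2f} GiB")   # ~7.88 GiB
print(f"ATA limit (65536 x 16 x 256):  {chs_capacity(65536, 16, 256) / GiB:6.2f} GiB")  # 128 GiB
print(f"Combined limit (1024 x 16 x 63): {chs_capacity(1024, 16, 63) / MiB:6.2f} MiB")  # the 504 MiB barrier

# The "marketing capacity" effect: a drive sold as 200 GB (decimal) as reported in GiB.
print(f"200 GB drive reported as {200e9 / GiB:.2f} GiB")   # ~186.26 GiB
```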
- The PC Guide: A Brief History of the Hard Disk Drive - Binary versus Decimal - Multi Disk System Tuning HOWTO - Windows NT Server Resource Kit: Disk Management Basics (See section "About Disks and Disk Organization") - Behold the God Box - Less's Law and future implications of massive cheap hard disk storage - Hitachi's plans for perpendicular recording allowing 20 GB microdrives or 1 TB 3.5 inch drives The contents of this article is licensed from www.wikipedia.org under the GNU Free Documentation License. Click here to see the transparent copy and copyright details
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Hard_drive
A galaxy is a massive, gravitationally bound system consisting of stars, stellar remnants, an interstellar medium of gas and dust, and dark matter, an important but poorly understood component. The word galaxy is derived from the Greek galaxias (γαλαξίας), literally "milky", a reference to the Milky Way. Examples of galaxies range from dwarfs with as few as ten million (10⁷) stars to giants with a hundred trillion (10¹⁴) stars, each orbiting its galaxy's own center of mass.

Galaxies contain varying numbers of star systems, star clusters and types of interstellar clouds. In between these objects is a sparse interstellar medium of gas, dust, and cosmic rays. Supermassive black holes reside at the centers of most, if not all, galaxies. They are thought to be the primary driver of the active galactic nuclei found at the core of some galaxies. The Milky Way galaxy is known to harbor at least one such object.

Galaxies have historically been categorized according to their apparent shape, usually referred to as their visual morphology. A common form is the elliptical galaxy, which has an ellipse-shaped light profile. Spiral galaxies are disk-shaped with dusty, curving arms. Galaxies with irregular or unusual shapes are known as irregular galaxies and typically originate from disruption by the gravitational pull of neighboring galaxies. Such interactions between nearby galaxies, which may ultimately result in a merger, sometimes induce significantly increased rates of star formation, leading to starburst galaxies. Smaller galaxies lacking a coherent structure are also referred to as irregular galaxies.

There are probably more than 170 billion (1.7 × 10¹¹) galaxies in the observable universe. Most are 1,000 to 100,000 parsecs in diameter and usually separated by distances on the order of millions of parsecs (or megaparsecs). Intergalactic space (the space between galaxies) is filled with a tenuous gas of average density less than one atom per cubic meter. The majority of galaxies are organized into a hierarchy of associations known as groups and clusters, which in turn usually form larger superclusters. At the largest scale, these associations are generally arranged into sheets and filaments, which are surrounded by immense voids.

On December 12, 2012, astronomers working with the Hubble Space Telescope reported that the most distant known galaxy, UDFj-39546284, is now estimated to be even further away than previously believed. The galaxy, which is estimated to have formed around 380 million years after the Big Bang (about 13.8 billion years ago) and has a redshift z of 11.9, is approximately 13.42 billion light-years from Earth.

The word galaxy derives from the Greek term for our own galaxy, galaxias (γαλαξίας, "milky one"), or kyklos ("circle") galaktikos ("milky") for its appearance as a lighter colored band in the sky. In Greek mythology, Zeus places his son born of a mortal woman, the infant Heracles, on Hera's breast while she is asleep so that the baby will drink her divine milk and will thus become immortal. Hera wakes up while breastfeeding and then realizes she is nursing an unknown baby: she pushes the baby away and a jet of her milk sprays the night sky, producing the faint band of light known as the Milky Way. In the astronomical literature, the capitalized word "Galaxy" is used to refer to our galaxy, the Milky Way, to distinguish it from the billions of other galaxies.
The English term Milky Way can be traced back to a story by Chaucer: "See yonder, lo, the Galaxyë Which men clepeth the Milky Wey, For hit is whyt." When William Herschel constructed his catalog of deep sky objects in 1786, he used the name spiral nebula for certain objects such as M31. These would later be recognized as immense conglomerations of stars, when the true distance to these objects began to be appreciated, and they would be termed island universes. However, the word Universe was understood to mean the entirety of existence, so this expression fell into disuse and the objects instead became known as galaxies. Observation history The realization that we live in a galaxy, and that there were, in fact, many other galaxies, parallels discoveries that were made about the Milky Way and other nebulae in the night sky. Milky Way The Greek philosopher Democritus (450–370 BC) proposed that the bright band on the night sky known as the Milky Way might consist of distant stars. Aristotle (384–322 BC), however, believed the Milky Way to be caused by "the ignition of the fiery exhalation of some stars that were large, numerous and close together" and that the "ignition takes place in the upper part of the atmosphere, in the region of the World that is continuous with the heavenly motions." The Neoplatonist philosopher Olympiodorus the Younger (c. 495–570 AD) was scientifically critical of this view, arguing that if the Milky Way were sublunary (situated between the Earth and the Moon) it should appear different at different times and places on the Earth, and that it should have parallax, which it does not. In his view, the Milky Way was celestial. This idea would be influential later in the Islamic world. According to Mohani Mohamed, the Arabian astronomer Alhazen (965–1037) made the first attempt at observing and measuring the Milky Way's parallax, and he thus "determined that because the Milky Way had no parallax, it was very remote from the Earth and did not belong to the atmosphere." The Persian astronomer al-Bīrūnī (973–1048) proposed the Milky Way galaxy to be "a collection of countless fragments of the nature of nebulous stars." The Andalusian astronomer Ibn Bajjah ("Avempace", d. 1138) proposed that the Milky Way was made up of many stars that almost touch one another and appear to be a continuous image due to the effect of refraction from sublunary material, citing his observation of the conjunction of Jupiter and Mars as evidence of this occurring when two objects are near. In the 14th century, the Syrian-born Ibn Qayyim proposed the Milky Way galaxy to be "a myriad of tiny stars packed together in the sphere of the fixed stars". Actual proof of the Milky Way consisting of many stars came in 1610 when the Italian astronomer Galileo Galilei used a telescope to study the Milky Way and discovered that it is composed of a huge number of faint stars. In 1750 the English astronomer Thomas Wright, in his An original theory or new hypothesis of the Universe, speculated (correctly) that the galaxy might be a rotating body of a huge number of stars held together by gravitational forces, akin to the solar system but on a much larger scale. The resulting disk of stars can be seen as a band on the sky from our perspective inside the disk. In a treatise in 1755, Immanuel Kant elaborated on Wright's idea about the structure of the Milky Way. 
The first attempt to describe the shape of the Milky Way and the position of the Sun in it was carried out by William Herschel in 1785 by carefully counting the number of stars in different regions of the sky. He produced a diagram of the shape of the galaxy with the solar system close to the center. Using a refined approach, Kapteyn in 1920 arrived at the picture of a small (diameter about 15 kiloparsecs) ellipsoid galaxy with the Sun close to the center. A different method by Harlow Shapley based on the cataloguing of globular clusters led to a radically different picture: a flat disk with diameter approximately 70 kiloparsecs and the Sun far from the center. Both analyses failed to take into account the absorption of light by interstellar dust present in the galactic plane, but after Robert Julius Trumpler quantified this effect in 1930 by studying open clusters, the present picture of our host galaxy, the Milky Way, emerged. Distinction from other nebulae In the 10th century, the Persian astronomer Al-Sufi made the earliest recorded observation of the Andromeda Galaxy, describing it as a "small cloud". Al-Sufi, who published his findings in his Book of Fixed Stars in 964, also identified the Large Magellanic Cloud, which is visible from Yemen, though not from Isfahan; it was not seen by Europeans until Magellan's voyage in the 16th century. The Andromeda Galaxy was independently rediscovered by Simon Marius in 1612. These are the only galaxies outside the Milky Way that are easily visible to the unaided eye, so they were the first galaxies to be observed from Earth. In 1750 Thomas Wright, in his An original theory or new hypothesis of the Universe, speculated (correctly) that the Milky Way was a flattened disk of stars, and that some of the nebulae visible in the night sky might be separate Milky Ways. In 1755, Immanuel Kant introduced the term[where?] "island Universe" for these distant nebulae. Toward the end of the 18th century, Charles Messier compiled a catalog containing the 109 brightest nebulae (celestial objects with a nebulous appearance), later followed by a larger catalog of 5,000 nebulae assembled by William Herschel. In 1845, Lord Rosse constructed a new telescope and was able to distinguish between elliptical and spiral nebulae. He also managed to make out individual point sources in some of these nebulae, lending credence to Kant's earlier conjecture. In 1912, Vesto Slipher made spectrographic studies of the brightest spiral nebulae to determine if they were made from chemicals that would be expected in a planetary system. However, Slipher discovered that the spiral nebulae had high red shifts, indicating that they were moving away at a rate higher than the Milky Way's escape velocity. Thus they were not gravitationally bound to the Milky Way, and were unlikely to be a part of the galaxy. In 1917, Heber Curtis had observed a nova S Andromedae within the "Great Andromeda Nebula" (as the Andromeda Galaxy, Messier object M31, was known). Searching the photographic record, he found 11 more novae. Curtis noticed that these novae were, on average, 10 magnitudes fainter than those that occurred within our galaxy. As a result he was able to come up with a distance estimate of 150,000 parsecs. He became a proponent of the so-called "island universes" hypothesis, which holds that spiral nebulae are actually independent galaxies. 
In 1920 the so-called Great Debate took place between Harlow Shapley and Heber Curtis, concerning the nature of the Milky Way, spiral nebulae, and the dimensions of the Universe. To support his claim that the Great Andromeda Nebula was an external galaxy, Curtis noted the appearance of dark lanes resembling the dust clouds in the Milky Way, as well as the significant Doppler shift. The matter was conclusively settled in the early 1920s. In 1922, the Estonian astronomer Ernst Öpik gave a distance determination that supported the theory that the Andromeda Nebula is indeed a distant extra-galactic object. Using the new 100 inch Mt. Wilson telescope, Edwin Hubble was able to resolve the outer parts of some spiral nebulae as collections of individual stars and identified some Cepheid variables, thus allowing him to estimate the distance to the nebulae: they were far too distant to be part of the Milky Way. In 1936 Hubble produced a classification system for galaxies that is used to this day, the Hubble sequence. Modern research In 1944, Hendrik van de Hulst predicted microwave radiation at a wavelength of 21 cm resulting from interstellar atomic hydrogen gas; this radiation was observed in 1951. The radiation allowed for much improved study of the Milky Way Galaxy, since it is not affected by dust absorption and its Doppler shift can be used to map the motion of the gas in the Galaxy. These observations led to the postulation of a rotating bar structure in the center of the Galaxy. With improved radio telescopes, hydrogen gas could also be traced in other galaxies. In the 1970s it was discovered in Vera Rubin's study of the rotation speed of gas in galaxies that the total visible mass (from the stars and gas) does not properly account for the speed of the rotating gas. This galaxy rotation problem is thought to be explained by the presence of large quantities of unseen dark matter. Beginning in the 1990s, the Hubble Space Telescope yielded improved observations. Among other things, it established that the missing dark matter in our galaxy cannot solely consist of inherently faint and small stars. The Hubble Deep Field, an extremely long exposure of a relatively empty part of the sky, provided evidence that there are about 125 billion (1.25×1011) galaxies in the Universe. Improved technology in detecting the spectra invisible to humans (radio telescopes, infrared cameras, and x-ray telescopes) allow detection of other galaxies that are not detected by Hubble. Particularly, galaxy surveys in the Zone of Avoidance (the region of the sky blocked by the Milky Way) have revealed a number of new galaxies. Types and morphology Galaxies come in three main types: ellipticals, spirals, and irregulars. A slightly more extensive description of galaxy types based on their appearance is given by the Hubble sequence. Since the Hubble sequence is entirely based upon visual morphological type, it may miss certain important characteristics of galaxies such as star formation rate (in starburst galaxies) and activity in the core (in active galaxies). The Hubble classification system rates elliptical galaxies on the basis of their ellipticity, ranging from E0, being nearly spherical, up to E7, which is highly elongated. These galaxies have an ellipsoidal profile, giving them an elliptical appearance regardless of the viewing angle. Their appearance shows little structure and they typically have relatively little interstellar matter. 
Consequently, these galaxies also have a low proportion of open clusters and a reduced rate of new star formation. Instead they are dominated by generally older, more evolved stars that are orbiting the common center of gravity in random directions. The stars contain low abundances of heavy elements because star formation ceases after the initial burst. In this sense they have some similarity to the much smaller globular clusters. The largest galaxies are giant ellipticals. Many elliptical galaxies are believed to form due to the interaction of galaxies, resulting in a collision and merger. They can grow to enormous sizes (compared to spiral galaxies, for example), and giant elliptical galaxies are often found near the core of large galaxy clusters. Starburst galaxies are one result of such a galactic collision, which can ultimately lead to the formation of an elliptical galaxy.

Spiral galaxies consist of a rotating disk of stars and interstellar medium, along with a central bulge of generally older stars. Extending outward from the bulge are relatively bright arms. In the Hubble classification scheme, spiral galaxies are listed as type S, followed by a letter (a, b, or c) that indicates the degree of tightness of the spiral arms and the size of the central bulge. An Sa galaxy has tightly wound, poorly defined arms and possesses a relatively large core region. At the other extreme, an Sc galaxy has open, well-defined arms and a small core region. A galaxy with poorly defined arms is sometimes referred to as a flocculent spiral galaxy, in contrast to the grand design spiral galaxy that has prominent and well-defined spiral arms. In spiral galaxies, the spiral arms have the shape of approximate logarithmic spirals, a pattern that can be theoretically shown to result from a disturbance in a uniformly rotating mass of stars. Like the stars, the spiral arms rotate around the center, but they do so with constant angular velocity. The spiral arms are thought to be areas of high-density matter, or "density waves". As stars move through an arm, the space velocity of each stellar system is modified by the gravitational force of the higher density. (The velocity returns to normal after the stars depart on the other side of the arm.) This effect is akin to a "wave" of slowdowns moving along a highway full of moving cars. The arms are visible because the high density facilitates star formation, and therefore they harbor many bright and young stars. A majority of spiral galaxies have a linear, bar-shaped band of stars that extends outward to either side of the core, then merges into the spiral arm structure. In the Hubble classification scheme, these are designated by an SB, followed by a lower-case letter (a, b or c) that indicates the form of the spiral arms (in the same manner as the categorization of normal spiral galaxies). Bars are thought to be temporary structures that can occur as a result of a density wave radiating outward from the core, or else due to a tidal interaction with another galaxy. Many barred spiral galaxies are active, possibly as a result of gas being channeled into the core along the arms. Our own galaxy, the Milky Way, is a large disk-shaped barred-spiral galaxy about 30 kiloparsecs in diameter and a kiloparsec thick. It contains about two hundred billion (2×10¹¹) stars and has a total mass of about six hundred billion (6×10¹¹) times the mass of the Sun.
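The statement above that spiral arms approximate logarithmic spirals can be made concrete with a minimal sketch. The scale r0 and pitch parameter k below are illustrative values, not fits to any real galaxy; loosely speaking, a smaller k corresponds to more tightly wound (Sa-like) arms and a larger k to more open (Sc-like) arms.

```python
import numpy as np

# Minimal sketch: points along a two-armed logarithmic spiral r = r0 * exp(k * theta).
# r0 (scale) and k (related to the pitch angle) are illustrative values only.
r0, k = 1.0, 0.25                      # pitch angle = arctan(k) ~ 14 degrees
theta = np.linspace(0.0, 4.0 * np.pi, 500)
r = r0 * np.exp(k * theta)

arms = []
for phase in (0.0, np.pi):             # two arms, offset by 180 degrees
    x = r * np.cos(theta + phase)
    y = r * np.sin(theta + phase)
    arms.append(np.column_stack([x, y]))

print(f"pitch angle: {np.degrees(np.arctan(k)):.1f} degrees")
print("points per arm:", [a.shape for a in arms])
print(f"outermost radius: {r[-1]:.1f} (in units of r0)")
```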
Other morphologies Peculiar galaxies are galactic formations that develop unusual properties due to tidal interactions with other galaxies. An example of this is the ring galaxy, which possesses a ring-like structure of stars and interstellar medium surrounding a bare core. A ring galaxy is thought to occur when a smaller galaxy passes through the core of a spiral galaxy. Such an event may have affected the Andromeda Galaxy, as it displays a multi-ring-like structure when viewed in infrared radiation. A lenticular galaxy is an intermediate form that has properties of both elliptical and spiral galaxies. These are categorized as Hubble type S0, and they possess ill-defined spiral arms with an elliptical halo of stars. (Barred lenticular galaxies receive Hubble classification SB0.) In addition to the classifications mentioned above, there are a number of galaxies that can not be readily classified into an elliptical or spiral morphology. These are categorized as irregular galaxies. An Irr-I galaxy has some structure but does not align cleanly with the Hubble classification scheme. Irr-II galaxies do not possess any structure that resembles a Hubble classification, and may have been disrupted. Nearby examples of (dwarf) irregular galaxies include the Magellanic Clouds. Despite the prominence of large elliptical and spiral galaxies, most galaxies in the Universe appear to be dwarf galaxies. These galaxies are relatively small when compared with other galactic formations, being about one hundredth the size of the Milky Way, containing only a few billion stars. Ultra-compact dwarf galaxies have recently been discovered that are only 100 parsecs across. Many dwarf galaxies may orbit a single larger galaxy; the Milky Way has at least a dozen such satellites, with an estimated 300–500 yet to be discovered. Dwarf galaxies may also be classified as elliptical, spiral, or irregular. Since small dwarf ellipticals bear little resemblance to large ellipticals, they are often called dwarf spheroidal galaxies instead. A study of 27 Milky Way neighbors found that in all dwarf galaxies, the central mass is approximately 10 million solar masses, regardless of whether the galaxy has thousands or millions of stars. This has led to the suggestion that galaxies are largely formed by dark matter, and that the minimum size may indicate a form of warm dark matter incapable of gravitational coalescence on a smaller scale. Unusual dynamics and activities The average separation between galaxies within a cluster is a little over an order of magnitude larger than their diameter. Hence interactions between these galaxies are relatively frequent, and play an important role in their evolution. Near misses between galaxies result in warping distortions due to tidal interactions, and may cause some exchange of gas and dust. Collisions occur when two galaxies pass directly through each other and have sufficient relative momentum not to merge. The stars within these interacting galaxies will typically pass straight through without colliding. However, the gas and dust within the two forms will interact. This can trigger bursts of star formation as the interstellar medium becomes disrupted and compressed. A collision can severely distort the shape of one or both galaxies, forming bars, rings or tail-like structures. At the extreme of interactions are galactic mergers. In this case the relative momentum of the two galaxies is insufficient to allow the galaxies to pass through each other. 
Instead, they gradually merge to form a single, larger galaxy. Mergers can result in significant changes to morphology, as compared to the original galaxies. In the case where one of the galaxies is much more massive, however, the result is known as cannibalism. In this case the larger galaxy will remain relatively undisturbed by the merger, while the smaller galaxy is torn apart. The Milky Way galaxy is currently in the process of cannibalizing the Sagittarius Dwarf Elliptical Galaxy and the Canis Major Dwarf Galaxy.

Stars are created within galaxies from a reserve of cold gas that forms into giant molecular clouds. Some galaxies have been observed to form stars at an exceptional rate, known as a starburst. Should they continue to do so, however, they would consume their reserve of gas in a time span shorter than the lifespan of the galaxy. Hence starburst activity usually lasts for only about ten million years, a relatively brief period in the history of a galaxy. Starburst galaxies were more common during the early history of the Universe, and, at present, still contribute an estimated 15% to the total star production rate. Starburst galaxies are characterized by dusty concentrations of gas and the appearance of newly formed stars, including massive stars that ionize the surrounding clouds to create H II regions. These massive stars produce supernova explosions, resulting in expanding remnants that interact powerfully with the surrounding gas. These outbursts trigger a chain reaction of star building that spreads throughout the gaseous region. Only when the available gas is nearly consumed or dispersed does the starburst activity come to an end. Starbursts are often associated with merging or interacting galaxies. The prototype example of such a starburst-forming interaction is M82, which experienced a close encounter with the larger M81. Irregular galaxies often exhibit spaced knots of starburst activity.

Active nucleus

A portion of the galaxies we can observe are classified as active; that is, a significant portion of the total energy output from the galaxy is emitted by a source other than the stars, dust, and interstellar medium. The standard model for an active galactic nucleus is based upon an accretion disc that forms around a supermassive black hole (SMBH) at the core region. The radiation from an active galactic nucleus results from the gravitational energy of matter as it falls toward the black hole from the disc. In about 10% of these objects, a diametrically opposed pair of energetic jets ejects particles from the core at velocities close to the speed of light. The mechanism for producing these jets is still not well understood. Active galaxies that emit high-energy radiation in the form of X-rays are classified as Seyfert galaxies or quasars, depending on the luminosity. Blazars are believed to be active galaxies with a relativistic jet that is pointed in the direction of the Earth. A radio galaxy emits radio frequencies from relativistic jets. A unified model of these types of active galaxies explains their differences based on the viewing angle of the observer. Possibly related to active galactic nuclei (as well as starburst regions) are low-ionization nuclear emission-line regions (LINERs). The emission from LINER-type galaxies is dominated by weakly ionized elements. Approximately one-third of nearby galaxies are classified as containing LINER nuclei.
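The brevity of starburst episodes mentioned above follows from a simple gas-budget estimate: the depletion time is roughly the cold gas reservoir divided by the star-formation rate. The numbers in the sketch below are illustrative assumptions, not measurements of any particular galaxy.

```python
# Rough gas-depletion timescale behind the "starbursts are brief" argument:
# t_dep ~ M_gas / SFR. Both numbers below are illustrative assumptions.
m_gas_solar = 1e8        # cold gas reservoir feeding the burst, in solar masses (assumed)
sfr_solar_per_yr = 10.0  # star-formation rate during the burst, in solar masses per year (assumed)

t_dep_yr = m_gas_solar / sfr_solar_per_yr
print(f"depletion time ~ {t_dep_yr:.1e} years")   # ~1e7 yr, i.e. about ten million years
```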
Formation and evolution

The study of galactic formation and evolution attempts to answer questions regarding how galaxies formed and their evolutionary path over the history of the Universe. Some theories in this field have now become widely accepted, but it is still an active area in astrophysics. Current cosmological models of the early Universe are based on the Big Bang theory. About 300,000 years after this event, atoms of hydrogen and helium began to form, in an event called recombination. Nearly all the hydrogen was neutral (non-ionized) and readily absorbed light, and no stars had yet formed. As a result, this period has been called the "Dark Ages". It was from density fluctuations (or anisotropic irregularities) in this primordial matter that larger structures began to appear. As a result, masses of baryonic matter started to condense within cold dark matter halos. These primordial structures would eventually become the galaxies we see today. Evidence for the early appearance of galaxies was found in 2006, when it was discovered that the galaxy IOK-1 has an unusually high redshift of 6.96, corresponding to just 750 million years after the Big Bang and making it, at the time, the most distant and primordial galaxy yet seen. While some scientists have claimed other objects (such as Abell 1835 IR1916) have higher redshifts (and therefore are seen in an earlier stage of the Universe's evolution), IOK-1's age and composition have been more reliably established. The existence of such early protogalaxies suggests that they must have grown in the so-called "Dark Ages". Nonetheless, in December 2012, astronomers reported that the galaxy UDFj-39546284 is the most distant galaxy known, with a redshift value of 11.9. The galaxy, estimated to have existed around 380 million years after the Big Bang (which was about 13.8 billion years ago), is about 13.42 billion light years away. The detailed process by which such early galaxy formation occurred is a major open question in astronomy. Theories can be divided into two categories: top-down and bottom-up. In top-down theories (such as the Eggen–Lynden-Bell–Sandage [ELS] model), protogalaxies form in a large-scale simultaneous collapse lasting about one hundred million years. In bottom-up theories (such as the Searle–Zinn [SZ] model), small structures such as globular clusters form first, and then a number of such bodies accrete to form a larger galaxy. Once protogalaxies began to form and contract, the first halo stars (called Population III stars) appeared within them. These were composed almost entirely of hydrogen and helium, and may have been massive. If so, these huge stars would have quickly consumed their supply of fuel and become supernovae, releasing heavy elements into the interstellar medium. This first generation of stars re-ionized the surrounding neutral hydrogen, creating expanding bubbles of space through which light could readily travel. Within a billion years of a galaxy's formation, key structures begin to appear. Globular clusters, the central supermassive black hole, and a galactic bulge of metal-poor Population II stars form. The creation of a supermassive black hole appears to play a key role in actively regulating the growth of galaxies by limiting the total amount of additional matter added. During this early epoch, galaxies undergo a major burst of star formation. During the following two billion years, the accumulated matter settles into a galactic disc.
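The redshift-to-age conversions quoted above (z = 6.96 for IOK-1 and z = 11.9 for UDFj-39546284) can be roughly reproduced with a flat Lambda-CDM age integral. The cosmological parameters in the sketch below are assumed round values (H0 = 70 km/s/Mpc, Ωm = 0.3, ΩΛ = 0.7), so the output only approximately matches the 750 and 380 million year figures quoted in the text.

```python
import numpy as np
from scipy.integrate import quad

# Sketch: age of the Universe at a given redshift in a flat Lambda-CDM model.
# The parameters are assumed round values, so the results are only approximate.
H0 = 70.0 * 1000.0 / 3.086e22          # Hubble constant, converted from km/s/Mpc to 1/s
OMEGA_M, OMEGA_L = 0.3, 0.7
SECONDS_PER_GYR = 3.156e16

def age_at_redshift(z: float) -> float:
    """Age in Gyr: integrate dt = dz' / ((1+z') H(z')) from z to infinity."""
    integrand = lambda zp: 1.0 / ((1.0 + zp) * H0 * np.sqrt(OMEGA_M * (1.0 + zp) ** 3 + OMEGA_L))
    t_seconds, _ = quad(integrand, z, np.inf)
    return t_seconds / SECONDS_PER_GYR

for name, z in [("IOK-1", 6.96), ("UDFj-39546284", 11.9)]:
    print(f"{name}: z = {z} -> ~{age_at_redshift(z) * 1000:.0f} Myr after the Big Bang")
```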
A galaxy will continue to absorb infalling material from high-velocity clouds and dwarf galaxies throughout its life. This matter is mostly hydrogen and helium. The cycle of stellar birth and death slowly increases the abundance of heavy elements, eventually allowing the formation of planets. The evolution of galaxies can be significantly affected by interactions and collisions. Mergers of galaxies were common during the early epoch, and the majority of galaxies were peculiar in morphology. Given the distances between the stars, the great majority of stellar systems in colliding galaxies will be unaffected. However, gravitational stripping of the interstellar gas and dust that makes up the spiral arms produces a long train of stars known as tidal tails. Examples of these formations can be seen in NGC 4676 or the Antennae Galaxies. As an example of such an interaction, the Milky Way galaxy and the nearby Andromeda Galaxy are moving toward each other at about 130 km/s, and, depending upon the lateral movements, the two may collide in about five to six billion years. Although the Milky Way has never collided with a galaxy as large as Andromeda before, evidence of past collisions of the Milky Way with smaller dwarf galaxies is increasing. Such large-scale interactions are rare. As time passes, mergers of two systems of equal size become less common. Most bright galaxies have remained fundamentally unchanged for the last few billion years, and the net rate of star formation probably also peaked approximately ten billion years ago.

Future trends

At present, most star formation occurs in smaller galaxies where cool gas is not so depleted. Spiral galaxies, like the Milky Way, only produce new generations of stars as long as they have dense molecular clouds of interstellar hydrogen in their spiral arms. Elliptical galaxies are already largely devoid of this gas, and so form no new stars. The supply of star-forming material is finite; once stars have converted the available supply of hydrogen into heavier elements, new star formation will come to an end. The current era of star formation is expected to continue for up to one hundred billion years, and then the "stellar age" will wind down after about ten trillion to one hundred trillion years (10¹³–10¹⁴ years), as the smallest, longest-lived stars, tiny red dwarfs, begin to fade. At the end of the stellar age, galaxies will be composed of compact objects: brown dwarfs, white dwarfs that are cooling or cold ("black dwarfs"), neutron stars, and black holes. Eventually, as a result of gravitational relaxation, all stars will either fall into central supermassive black holes or be flung into intergalactic space as a result of collisions.

Larger-scale structures

Deep sky surveys show that galaxies are often found in relatively close association with other galaxies. Solitary galaxies that have not significantly interacted with another galaxy of comparable mass during the past billion years are relatively scarce. Only about 5% of the galaxies surveyed have been found to be truly isolated; however, these isolated formations may have interacted and even merged with other galaxies in the past, and may still be orbited by smaller, satellite galaxies. Isolated galaxies[note 2] can produce stars at a higher rate than normal, as their gas is not being stripped by other nearby galaxies. On the largest scale, the Universe is continually expanding, resulting in an average increase in the separation between individual galaxies (see Hubble's law).
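Hubble's law, v = H0·d, gives a sense of why this expansion is irrelevant on the scale of the Local Group. The Hubble constant and Andromeda's distance in the sketch below are assumed round values, used only for comparison with the 130 km/s approach speed quoted above.

```python
# Sketch of Hubble's law, v = H0 * d, with assumed round numbers.
H0 = 70.0                      # km/s per megaparsec (assumed value)

def recession_velocity_km_s(distance_mpc: float) -> float:
    return H0 * distance_mpc

d_andromeda_mpc = 0.78                                    # assumed distance to Andromeda
v_expansion = recession_velocity_km_s(d_andromeda_mpc)    # ~55 km/s "outward" from expansion
v_approach = 130.0                                        # approach speed quoted in the text
print(f"Hubble-flow velocity at {d_andromeda_mpc} Mpc: ~{v_expansion:.0f} km/s")
print(f"Andromeda's gravitationally driven approach: ~{v_approach:.0f} km/s")
# Local gravitational attraction easily outweighs the expansion on this scale.
```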
Associations of galaxies can overcome this expansion on a local scale through their mutual gravitational attraction. These associations formed early in the Universe, as clumps of dark matter pulled their respective galaxies together. Nearby groups later merged to form larger-scale clusters. This on-going merger process (as well as an influx of infalling gas) heats the inter-galactic gas within a cluster to very high temperatures, reaching 30–100 megakelvins. About 70–80% of the mass in a cluster is in the form of dark matter, with 10–30% consisting of this heated gas and the remaining few percent of the matter in the form of galaxies. Most galaxies in the Universe are gravitationally bound to a number of other galaxies. These form a fractal-like hierarchy of clustered structures, with the smallest such associations being termed groups. A group of galaxies is the most common type of galactic cluster, and these formations contain a majority of the galaxies (as well as most of the baryonic mass) in the Universe. To remain gravitationally bound to such a group, each member galaxy must have a sufficiently low velocity to prevent it from escaping (see Virial theorem). If there is insufficient kinetic energy, however, the group may evolve into a smaller number of galaxies through mergers. Larger structures containing many thousands of galaxies packed into an area a few megaparsecs across are called clusters. Clusters of galaxies are often dominated by a single giant elliptical galaxy, known as the brightest cluster galaxy, which, over time, tidally destroys its satellite galaxies and adds their mass to its own. Superclusters contain tens of thousands of galaxies, which are found in clusters, groups and sometimes individually. At the supercluster scale, galaxies are arranged into sheets and filaments surrounding vast empty voids. Above this scale, the Universe appears to be isotropic and homogeneous. The Milky Way galaxy is a member of an association named the Local Group, a relatively small group of galaxies that has a diameter of approximately one megaparsec. The Milky Way and the Andromeda Galaxy are the two brightest galaxies within the group; many of the other member galaxies are dwarf companions of these two galaxies. The Local Group itself is a part of a cloud-like structure within the Virgo Supercluster, a large, extended structure of groups and clusters of galaxies centered on the Virgo Cluster. Multi-wavelength observation After galaxies external to the Milky Way were found to exist, initial observations were made mostly using visible light. The peak radiation of most stars lies here, so the observation of the stars that form galaxies has been a major component of optical astronomy. It is also a favorable portion of the spectrum for observing ionized H II regions, and for examining the distribution of dusty arms. The dust present in the interstellar medium is opaque to visual light. It is more transparent to far-infrared, which can be used to observe the interior regions of giant molecular clouds and galactic cores in great detail. Infrared is also used to observe distant, red-shifted galaxies that were formed much earlier in the history of the Universe. Water vapor and carbon dioxide absorb a number of useful portions of the infrared spectrum, so high-altitude or space-based telescopes are used for infrared astronomy. The first non-visual study of galaxies, particularly active galaxies, was made using radio frequencies. 
The atmosphere is nearly transparent to radio between 5 MHz and 30 GHz (the ionosphere blocks signals below this range). Large radio interferometers have been used to map the active jets emitted from active nuclei. Radio telescopes can also be used to observe neutral hydrogen (via 21 cm radiation), including, potentially, the non-ionized matter in the early Universe that later collapsed to form galaxies; a small numerical illustration of how the Doppler shift of the 21 cm line maps to a gas velocity is sketched after the notes below. Ultraviolet and X-ray telescopes can observe highly energetic galactic phenomena. An ultraviolet flare was observed when a star in a distant galaxy was torn apart by the tidal forces of a black hole. The distribution of hot gas in galactic clusters can be mapped by X-rays. The existence of super-massive black holes at the cores of galaxies was confirmed through X-ray astronomy.

See also
- Dark galaxy
- Galactic orientation
- Galaxy formation and evolution
- List of galaxies
- List of nearest galaxies
- Luminous infrared galaxy
- Supermassive black hole
- Timeline of knowledge about galaxies, clusters of galaxies, and large-scale structure

Notes
- Galaxies to the left side of the Hubble classification scheme are sometimes referred to as "early-type", while those to the right are "late-type".
- The term "field galaxy" is sometimes used to mean an isolated galaxy, although the same term is also used to describe galaxies that do not belong to a cluster but may be a member of a group of galaxies.
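As noted above, here is a small sketch of how a Doppler shift of the 21 cm hydrogen line translates into a gas velocity; the observed frequency used below is an illustrative value, not a measurement.

```python
# Doppler mapping with the 21 cm hydrogen line: v ~ c * (f_rest - f_obs) / f_rest
# for non-relativistic speeds. The observed frequency is an illustrative value.
C_KM_S = 2.998e5            # speed of light, km/s
F_REST_MHZ = 1420.40575     # rest frequency of the neutral-hydrogen line, MHz

def radial_velocity_km_s(f_obs_mhz: float) -> float:
    return C_KM_S * (F_REST_MHZ - f_obs_mhz) / F_REST_MHZ

f_obs = 1419.9              # MHz, assumed observed frequency of a gas cloud
print(f"radial velocity ~ {radial_velocity_km_s(f_obs):+.0f} km/s (positive = receding)")
```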
Since the gravitational force is experienced by all matter in the universe, from the largest galaxies down to the smallest particles, it is often called universal gravitation. (Based upon observations of distant supernovas around the turn of the 21st century, a repulsive force, termed dark energy, that opposes the self-attraction of matter has been proposed to explain the accelerated expansion of the universe.) Sir Isaac Newton was the first to fully recognize that the force holding any object to the earth is the same as the force holding the moon, the planets, and other heavenly bodies in their orbits. According to Newton's law of universal gravitation, the force between any two bodies is directly proportional to the product of their masses (see mass) and inversely proportional to the square of the distance between them. The constant of proportionality in this law is known as the gravitational constant; it is usually represented by the symbol G and has the value 6.670 × 10⁻¹¹ N·m²/kg² in the meter-kilogram-second (mks) system of units. Very accurate early measurements of the value of G were made by Henry Cavendish.

The Relativistic Explanation of Gravitation

Newton's theory of gravitation was long able to explain all observable gravitational phenomena, from the falling of objects on the earth to the motions of the planets. However, as centuries passed, very slight discrepancies were observed between the predictions of Newtonian theory and actual events, most notably in the motions of the planet Mercury. The general theory of relativity proposed in 1916 by Albert Einstein explained these differences and provided a geometric explanation for gravitational phenomena, holding that matter causes a curvature of the space-time framework in its immediate neighborhood.

The Search for Gravity Waves

Tantalizing evidence for the existence of gravity waves, which are predicted by Einstein's general theory of relativity and would be analogous to electromagnetic waves, comes from astronomical observations of the binary pulsar PSR 1913+16. The rate at which the two neutron stars in the binary rotate around each other is changing in a manner that is consistent with the emission of gravity waves. A hypothetical particle, given the name graviton, has been suggested as the mediator of the gravitational force; it is analogous to the photon, the particle embodying the quantum properties of electromagnetic waves (see quantum theory). The search for gravity waves continues with the building of large interferometers that would be sensitive enough to detect the faint waves directly (see interference). Millions of dollars have already been spent on the Laser Interferometer Gravitational Wave Observatory (LIGO), supported by the National Science Foundation, and work is beginning on the even more ambitious Laser Interferometer Space Antenna (LISA). The term gravity is commonly used synonymously with gravitation, but in correct usage a definite distinction is made. Whereas gravitation is the attractive force acting to draw any bodies together, gravity indicates that force in operation between the earth and other bodies, i.e., the force acting to draw bodies toward the earth. The force tending to hold objects to the earth's surface depends not only on the earth's gravitational field but also on other factors, such as the earth's rotation. The measure of the force of gravity on a given body is the weight of that body; although the mass of a body does not vary with location, its weight does vary.
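Newton's law and the quoted value of G can be illustrated with a short sketch; the masses, radius, and Earth-Moon distance below are standard approximate values, not figures given in the text.

```python
# Sketch of Newton's law of universal gravitation, F = G * m1 * m2 / r**2.
G = 6.670e-11            # N*m^2/kg^2 (value quoted in the text; modern value ~6.674e-11)
M_EARTH = 5.97e24        # kg (approximate)
M_MOON = 7.35e22         # kg (approximate)
R_EARTH = 6.371e6        # m (mean radius, approximate)
D_EARTH_MOON = 3.84e8    # m (mean separation, approximate)

def gravitational_force(m1: float, m2: float, r: float) -> float:
    return G * m1 * m2 / r ** 2

# With m2 = 1 kg, F = m*g gives the surface acceleration g = G*M/R^2 ~ 9.8 m/s^2.
g_surface = gravitational_force(M_EARTH, 1.0, R_EARTH)
f_earth_moon = gravitational_force(M_EARTH, M_MOON, D_EARTH_MOON)
print(f"g at Earth's surface ~ {g_surface:.2f} m/s^2")
print(f"Earth-Moon attraction ~ {f_earth_moon:.2e} N")
```

The first result, about 9.8 m/s², is the acceleration due to gravity discussed next.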
It is found that at any given location, all objects are accelerated equally by the force of gravity, observed differences being due to differences in air resistance, etc. Thus, the acceleration due to gravity, symbolized as g, provides a convenient measure of the strength of the earth's gravitational field at different locations. The value of g varies from about 9.832 meters per second per second (m/s²) at the poles to about 9.780 m/s² at the equator. Its value generally decreases with increasing altitude. Because variations in the value of g are not large, for ordinary calculations a value of 9.8 m/s², or 32 ft/s², is commonly used. See A. S. Eddington, Space, Time and Gravitation (1920); J. A. Wheeler, A Journey into Gravity and Spacetime (1990); M. Bartusiak, Einstein's Unfinished Symphony: Listening to the Sounds of Space-Time (2000).

Gravitation is the universal force of attraction that acts between all bodies that have mass. Though it is the weakest of the four known forces, it shapes the structure and evolution of stars, galaxies, and the entire universe. The laws of gravity describe the trajectories of bodies in the solar system and the motion of objects on Earth, where all bodies experience a downward gravitational force exerted by Earth's mass, the force experienced as weight. Isaac Newton was the first to develop a quantitative theory of gravitation, holding that the force of attraction between two bodies is proportional to the product of their masses and inversely proportional to the square of the distance between them. Albert Einstein proposed a whole new concept of gravitation, involving the four-dimensional continuum of space-time, which is curved by the presence of matter. In his general theory of relativity, he showed that a body undergoing uniform acceleration is indistinguishable from one that is stationary in a gravitational field.

Newton's law of gravitation is the statement that any particle of matter in the universe attracts any other with a force (F) that is proportional to the product of their masses (m₁ and m₂) and inversely proportional to the square of the distance (R) between them. In symbols: F = G(m₁m₂)/R², where G is the gravitational constant. Isaac Newton put forth the law in 1687 and used it to explain the observed motions of the planets and their moons, which had been reduced to mathematical form by Johannes Kepler early in the 17th century.

The terms gravitation and gravity are mostly interchangeable in everyday use, but a distinction may be made in scientific usage. "Gravitation" is a general term describing the phenomenon responsible for keeping the Earth and the other planets in their orbits around the Sun; for keeping the Moon in its orbit around the Earth; for the formation of tides; for convection (by which hot fluids rise); for heating the interiors of forming stars and planets to very high temperatures; and for various other phenomena that we observe. "Gravity", on the other hand, is described as the theoretical force responsible for the apparent attraction between a mass and the Earth. In general relativity, gravitation is defined as the curvature of spacetime which governs the motion of inertial objects.

In the 4th century BC, the Greek philosopher Aristotle believed that there was no effect without a cause, and therefore no motion without a force.
He hypothesized that everything tried to move towards its proper place in the crystalline spheres of the heavens, and that physical bodies fell toward the center of the Earth in proportion to their weight. Brahmagupta, in the Brahmasphuta Siddhanta (AD 628), responded to critics of the heliocentric system of Aryabhata (AD 476–550), stating that "all heavy things are attracted towards the center of the earth" and that "all heavy things fall down to the earth by a law of nature, for it is the nature of the earth to attract and to keep things, as it is the nature of water to flow, that of fire to burn, and that of wind to set in motion... The earth is the only low thing, and seeds always return to it, in whatever direction you may throw them away, and never rise upwards from the earth." In the 9th century, the eldest Banū Mūsā brother, Muhammad ibn Musa, in his Astral Motion and The Force of Attraction, hypothesized that there was a force of attraction between heavenly bodies, foreshadowing Newton's law of universal gravitation. In the 11th century, the Persian scientist Ibn al-Haytham (Alhacen), in the Mizan al-Hikmah, discussed the theory of attraction between masses, and it seems that he was aware of the magnitude of the acceleration due to gravity. In 1121, Al-Khazini, in The Book of the Balance of Wisdom, differentiated between force, mass, and weight, and theorized that gravity varies with the distance from the centre of the Earth, though he believed that the weight of heavy bodies increases as they are farther from the centre of the Earth. All these early attempts at explaining the force of gravity were philosophical in nature.

Newton's theory enjoyed its greatest success when it was used to predict the existence of Neptune based on motions of Uranus that could not be accounted for by the actions of the other planets. Calculations by John Couch Adams and Urbain Le Verrier both predicted the general position of the planet, and Le Verrier's calculations are what led Johann Gottfried Galle to the discovery of Neptune. Ironically, it was another discrepancy in a planet's orbit that helped to point out flaws in Newton's theory. By the end of the 19th century, it was known that the orbit of Mercury showed slight perturbations that could not be accounted for entirely under Newton's theory, but all searches for another perturbing body (such as a planet orbiting the Sun even closer than Mercury) had been fruitless. The issue was resolved in 1915 by Albert Einstein's new General Theory of Relativity, which accounted for the small discrepancy in Mercury's orbit. Although Newton's theory has been superseded, most modern non-relativistic gravitational calculations are still made using Newton's theory because it is a much simpler theory to work with than General Relativity, and it gives sufficiently accurate results for most applications.

Einstein proposed that spacetime is curved by matter, and that free-falling objects are moving along locally straight paths in curved spacetime. These straight lines are called geodesics. In analogy with Newton's first law, Einstein's theory states that if a force is applied to an object, it deviates from a geodesic in spacetime. For example, we are no longer following geodesics while standing still, because the mechanical resistance of the Earth exerts an upward force on us; thus, we are non-inertial on the ground. This explains why moving along geodesics in spacetime is considered inertial.
Einstein discovered the field equations of general relativity, which relate the presence of matter and the curvature of spacetime and are named after him. The Einstein field equations are a set of 10 simultaneous, non-linear, differential equations. The solutions of the field equations are the components of the metric tensor of spacetime. A metric tensor describes a geometry of spacetime. The geodesic paths for a spacetime are calculated from the metric tensor. Notable solutions of the Einstein field equations include the Schwarzschild solution (for the spacetime around a spherically symmetric, non-rotating, uncharged mass), the Reissner–Nordström and Kerr solutions (for charged and rotating masses, respectively), the Kerr–Newman solution, and the Friedmann–Lemaître–Robertson–Walker solution used in cosmology. General relativity has enjoyed much success because its predictions of phenomena that are not called for by Newton's theory of gravity have been regularly confirmed; examples include the anomalous precession of Mercury's perihelion, the deflection of starlight by the Sun (gravitational lensing), and the gravitational redshift of light. Several decades after the discovery of general relativity, it was realized that general relativity is incompatible with quantum mechanics. It is possible to describe gravity in the framework of quantum field theory like the other fundamental forces, such that the attractive force of gravity arises due to exchange of virtual gravitons, in the same way as the electromagnetic force arises from exchange of virtual photons. This reproduces general relativity in the classical limit. However, this approach fails at short distances of the order of the Planck length, where a more complete theory of quantum gravity (or a new approach to quantum mechanics) is required. Many believe the complete theory to be string theory, or, more currently, M-theory.

The strength of the gravitational field is numerically equal to the acceleration of objects under its influence, and its value at the Earth's surface, denoted g, is approximately expressed below as the standard average:

g = 9.8 m/s² = 32.2 ft/s²

This means that, ignoring air resistance, an object falling freely near the earth's surface increases its velocity by 9.8 m/s (32.2 ft/s or 22 mph) for each second of its descent. Thus, an object starting from rest will attain a velocity of 9.8 m/s (32.2 ft/s) after one second, 19.6 m/s (64.4 ft/s) after two seconds, and so on, adding 9.8 m/s (32.2 ft/s) to each resulting velocity. According to Newton's 3rd Law, the Earth itself experiences an equal and opposite force to that acting on the falling object, meaning that the Earth also accelerates towards the object. However, because the mass of the Earth is huge, the acceleration of the Earth by this same force is negligible, when measured relative to the system's center of mass. Under an assumption of constant gravity, Newton's law of gravitation simplifies to F = mg, where m is the mass of the body and g is a constant vector with an average magnitude of 9.81 m/s². The acceleration due to gravity is equal to this g. An initially stationary object which is allowed to fall freely under gravity drops a distance which is proportional to the square of the elapsed time. A stroboscopic photograph of a falling ball, spanning half a second and captured at 20 flashes per second, illustrates this: during the first 1/20th of a second the ball drops one unit of distance (here, a unit is about 12 mm); by 2/20ths it has dropped a total of 4 units; by 3/20ths, 9 units; and so on. Under the same constant-gravity assumptions, the potential energy, Ep, of a body at height h is given by Ep = mgh (or Ep = Wh, with W meaning weight). This expression is valid only over small distances h from the surface of the Earth. Similarly, the expression h = v²/(2g) for the maximum height reached by a vertically projected body with initial velocity v is useful for small heights and small initial velocities only.
In the case of large initial velocities, we have to use the principle of conservation of energy to find the maximum height reached. This same expression can be solved for v to determine the velocity of an object dropped from a height h immediately before hitting the ground, v = √(2gh), assuming negligible air resistance. In general relativity, gravitational radiation is generated in situations where the curvature of spacetime is oscillating, such as is the case with co-orbiting objects. The gravitational radiation emitted by the solar system is far too small to measure. However, gravitational radiation has been indirectly observed as an energy loss over time in binary pulsar systems such as PSR 1913+16. It is believed that neutron star mergers and black hole formation may create detectable amounts of gravitational radiation. Gravitational radiation observatories such as LIGO have been created to study the problem. No confirmed detections have been made of this hypothetical radiation, but as the science behind LIGO is refined and as the instruments themselves are endowed with greater sensitivity over the next decade, this may change. There are some observations that are not adequately accounted for, which may point to the need for better theories of gravity or perhaps be explained in other ways.
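The energy-conservation argument mentioned earlier in this section can be made concrete. Under the assumption of an inverse-square field and no air resistance, conservation of energy gives a maximum height h = v²R/(2gR − v²) for a body launched vertically at speed v from the surface, which reduces to the constant-gravity result v²/(2g) when v is small. The sketch below compares the two; the launch speeds are illustrative values.

```python
import math

# Maximum height of a vertically launched projectile, ignoring air resistance.
# Small speeds: h = v^2 / (2g). Large speeds: energy conservation with an
# inverse-square field gives h = v^2 * R / (2*g*R - v^2). R and g are approximate.
g = 9.81          # m/s^2, surface gravity
R = 6.371e6       # m, Earth's mean radius

def height_constant_gravity(v: float) -> float:
    return v ** 2 / (2.0 * g)

def height_energy_conservation(v: float) -> float:
    if v ** 2 >= 2.0 * g * R:                 # at or above escape speed (~11.2 km/s)
        return math.inf
    return v ** 2 * R / (2.0 * g * R - v ** 2)

for v in (100.0, 1000.0, 8000.0):             # launch speeds in m/s (illustrative)
    print(f"v = {v:6.0f} m/s: constant g -> {height_constant_gravity(v):12.0f} m, "
          f"energy conservation -> {height_energy_conservation(v):12.0f} m")
```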
In physics, in the area of electrodynamics, the Larmor formula (not to be confused with the Larmor precession from classical nuclear magnetic resonance) is used to calculate the total power radiated by a nonrelativistic point charge as it accelerates. It was first derived by J. J. Larmor in 1897, in the context of the wave theory of light. When accelerating or decelerating, any charged particle (such as an electron) radiates away energy in the form of electromagnetic waves. For velocities that are small relative to the speed of light, the total power radiated is given by the Larmor formula

P = q²a²/(6πε₀c³)  (SI units) = 2q²a²/(3c³)  (Gaussian units),

where a is the acceleration, q is the charge, and c is the speed of light. A relativistic generalization is given by the Liénard–Wiechert potentials.

Derivation 1: Mathematical Approach

We first need to find the form of the electric and magnetic fields. The fields can be written (for a fuller derivation see Liénard–Wiechert potential), in Gaussian units,

E = q[(n − β)(1 − β²)/((1 − β·n)³R²)]ret + (q/c)[n × ((n − β) × β̇)/((1 − β·n)³R)]ret,
B = n × E,

where β is the charge's velocity divided by c, β̇ is the charge's acceleration divided by c, n is a unit vector in the r − r₀ direction, R is the magnitude of r − r₀, and r₀ is the charge's location. The terms on the right are evaluated at the retarded time t_ret = t − R/c. These field equations divide themselves up into velocity and acceleration fields. The velocity field depends only upon β while the acceleration field depends on both β and β̇ and the angular relationship between the two. Since the velocity field is proportional to 1/R², it falls off very quickly with distance. On the other hand, the acceleration field is proportional to 1/R, which means that it falls off much more slowly with distance. Because of this, the acceleration field is representative of the radiation field and is responsible for carrying most of the energy away from the charge:

E_a = (q/c)[n × ((n − β) × β̇)/((1 − β·n)³R)]ret,  B_a = n × E_a,

where the 'a' subscripts emphasize that we are taking only the acceleration field. Substituting in the relation between the magnetic and electric fields, assuming that the particle is instantaneously at rest at the retarded time, and simplifying gives[note 1]

S = (q²/(4πc)) |n × (n × β̇)|²/R² n.

If we let the angle between the acceleration and the observation vector be equal to θ, and we introduce the acceleration a = cβ̇, then

dP/dΩ = q²a² sin²θ/(4πc³).

This quantity is the power radiated per unit solid angle by the charge. The total power radiated is found by integrating this quantity over all solid angles (that is, over θ and φ). This gives

P = 2q²a²/(3c³),

which is the Larmor result for a non-relativistic accelerated charge. It relates the power radiated by the particle to its acceleration. It clearly shows that the faster the charge accelerates the greater the radiation will be. We would expect this since the radiation field is dependent upon acceleration.

Derivation 2: Using Edward M. Purcell's Approach

The full derivation can be found in the "Purcell Simplified" reference below; here is an explanation which can help in understanding it. This approach is based on the finite speed of light. A charge moving with constant velocity has a radial electric field E_r (at distance R from the charge), always emerging from the future position of the charge, and there is no tangential component of the electric field (E_t = 0). This future position is completely deterministic as long as the velocity is constant. When the velocity of the charge changes (say it bounces back during a short time), the future position "jumps", so from this moment on the radial electric field emerges from a new position. Given the fact that the electric field must be continuous, a non-zero tangential component of the electric field E_t appears, which decreases like 1/R (unlike the radial component, which decreases like 1/R²).
Hence, at large distances from the charge, the radial component is negligible relative to the tangential component, and in addition to that, fields which behave like $1/R^2$ cannot radiate, because the Poynting vector associated with them will behave like $1/R^4$. The tangential component comes out (SI units):

$E_t = \frac{q a \sin\theta}{4\pi\varepsilon_0 c^2 R}$

And to obtain the Larmor formula, one has to integrate over all angles, at large distance $R$ from the charge, the Poynting vector associated with $E_t$, which is:

$\mathbf{S} = \frac{E_t^2}{\mu_0 c}\,\hat{\mathbf{n}} = \frac{q^2 a^2 \sin^2\theta}{16\pi^2\varepsilon_0 c^3 R^2}\,\hat{\mathbf{n}}$

giving (SI units)

$P = \frac{q^2 a^2}{6\pi\varepsilon_0 c^3}$

This is mathematically equivalent to:

$P = \frac{\mu_0 q^2 a^2}{6\pi c}$

Relativistic Generalisation

Covariant Form

We can do this by rewriting the Larmor formula in terms of momentum and then using the four-vector generalisation of momentum (see four-momentum), $p^\mu$. We know that the power is a Lorentz invariant, so all we have to show is that our generalisation is also invariant and that it reduces to the Larmor formula in the low-velocity limit. So, writing the Larmor formula in terms of the momentum $\mathbf{p} = m\mathbf{v}$:

$P = \frac{2}{3}\frac{q^2}{m^2 c^3}\,\frac{d\mathbf{p}}{dt}\cdot\frac{d\mathbf{p}}{dt}$

Assume the generalisation:

$P = -\frac{2}{3}\frac{q^2}{m^2 c^3}\,\frac{dp_\mu}{d\tau}\frac{dp^\mu}{d\tau}$

When we expand and rearrange the energy-momentum four-vector product we get:

$P = \frac{2}{3}\frac{q^2}{m^2 c^3}\left[\left(\frac{d\mathbf{p}}{d\tau}\right)^2 - \frac{1}{c^2}\left(\frac{dE}{d\tau}\right)^2\right]$

where I have used the fact that $p^\mu = (E/c, \mathbf{p})$. When you let $\beta$ tend to zero, $\gamma$ tends to one, so that $d\tau$ tends to dt. Thus we recover the non-relativistic case.

This is an interesting equation. It says that the power radiated by the particle into space depends upon its rate of change of momentum with respect to its time. It also says that the power radiated is proportional to the charge squared and inversely proportional to the mass squared. Thus for a highly charged, extremely small particle the radiation will be much greater than that for a large particle with a small charge.

Non-Covariant Form

To obtain the non-covariant form of the generalisation we first substitute $d\tau = dt/\gamma$, $\mathbf{p} = \gamma m\mathbf{v}$ and $E = \gamma m c^2$ into the above and then perform the differentiation as follows (for brevity I have omitted the constants from the calculation below):

$\gamma^2\left[\left(\frac{d\mathbf{p}}{dt}\right)^2 - \frac{1}{c^2}\left(\frac{dE}{dt}\right)^2\right] \;\rightarrow\; \gamma^4\dot{\boldsymbol{\beta}}^2 + \gamma^6(\boldsymbol{\beta}\cdot\dot{\boldsymbol{\beta}})^2$

Although the above is correct as it stands, it is not immediately obvious what sort of relationship the radiated power has to the velocity and the acceleration of the particle. If we make this relationship more explicit then it will be clear how the radiation depends on the particle's motion, and what happens in different cases. We can obtain this relation by adding and subtracting $\gamma^6\beta^2\dot{\boldsymbol{\beta}}^2$ to the above, giving:

$\gamma^6\left[\dot{\boldsymbol{\beta}}^2 - \big(\beta^2\dot{\boldsymbol{\beta}}^2 - (\boldsymbol{\beta}\cdot\dot{\boldsymbol{\beta}})^2\big)\right]$

If we apply the vector identity

$|\boldsymbol{\beta}\times\dot{\boldsymbol{\beta}}|^2 = \beta^2\dot{\boldsymbol{\beta}}^2 - (\boldsymbol{\beta}\cdot\dot{\boldsymbol{\beta}})^2$

then we obtain

$P = \frac{2}{3}\frac{q^2\gamma^6}{c}\left[\dot{\boldsymbol{\beta}}^2 - |\boldsymbol{\beta}\times\dot{\boldsymbol{\beta}}|^2\right]$

where I have replaced all the constants and the negative sign dropped earlier. This is the Liénard result, which was first obtained in 1898. The $\gamma^6$ means that when $\gamma$ is very close to one (i.e. $\beta \ll 1$) the radiation emitted by the particle is likely to be negligible. However, when $\gamma$ is much greater than one (i.e. $\beta \to 1$) the radiation explodes as the particle tries to lose its energy in the form of EM waves. It's also interesting that when the acceleration and velocity are orthogonal the power is reduced by a factor of $1-\beta^2 = 1/\gamma^2$. The faster the motion becomes the greater this reduction gets. In fact, it seems to imply that as $\beta$ tends to one the power radiated tends to zero (for orthogonal motion). This would suggest that a charge moving at the speed of light, in instantaneously circular motion, emits no radiation. However, it would be impossible to accelerate a charge to this speed because the $\gamma^6$ would explode to infinity, meaning that the particle would radiate a gigantic amount of energy, which would require you to put more and more energy in to keep accelerating it. This would imply that there is a cosmic speed limit, namely c. Such a connection was not made until 1905 when Einstein published his paper on Special Relativity. We can use Liénard's result to predict what sort of radiation losses to expect in different kinds of motion.
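The scalings above are easy to check numerically. Below is a minimal Python sketch, not part of the original article, that evaluates the SI form of the Larmor formula and the Liénard generalisation for an elementary charge; the acceleration and speed used in the example run are arbitrary placeholders chosen only for illustration.

import math
import numpy as np

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
C = 299_792_458.0         # speed of light, m/s
Q_E = 1.602176634e-19     # elementary charge, C

def larmor_power(q, a):
    """Non-relativistic Larmor power, P = q^2 a^2 / (6 pi eps0 c^3), in watts."""
    return q**2 * a**2 / (6.0 * math.pi * EPS0 * C**3)

def lienard_power(q, v, a):
    """Lienard's relativistic generalisation for lab-frame velocity v and
    acceleration a (3-vectors, SI units):
        P = q^2 gamma^6 / (6 pi eps0 c^3) * (|a|^2 - |v x a|^2 / c^2)
    """
    v = np.asarray(v, dtype=float)
    a = np.asarray(a, dtype=float)
    gamma = 1.0 / math.sqrt(1.0 - float(v @ v) / C**2)
    cross = np.cross(v, a)
    return (q**2 * gamma**6 / (6.0 * math.pi * EPS0 * C**3)
            * (float(a @ a) - float(cross @ cross) / C**2))

if __name__ == "__main__":
    a = 1.0e20        # illustrative acceleration in m/s^2 (assumed value)
    v = 0.9 * C       # illustrative speed (assumed value)
    print("Larmor (v << c):       ", larmor_power(Q_E, a), "W")
    print("Lienard, a parallel v: ", lienard_power(Q_E, [v, 0, 0], [a, 0, 0]), "W")
    print("Lienard, a perp. to v: ", lienard_power(Q_E, [v, 0, 0], [0, a, 0]), "W")

The parallel case comes out a factor of gamma^6 larger than the non-relativistic value, and the perpendicular case a factor of gamma^4 larger, which is exactly the reduction by 1/gamma^2 discussed above.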
Angular distribution

The angular distribution of radiated power is given by a general formula (applicable whether or not the particle is relativistic):

$\frac{dP}{d\Omega} = \frac{q^2}{4\pi c}\,\frac{\big|\hat{\mathbf{n}}\times\big((\hat{\mathbf{n}}-\boldsymbol{\beta})\times\dot{\boldsymbol{\beta}}\big)\big|^2}{(1-\hat{\mathbf{n}}\cdot\boldsymbol{\beta})^5}$

where $\hat{\mathbf{n}}$ is a unit vector pointing from the particle towards the observer. In the case of linear motion (velocity parallel to acceleration), this simplifies to

$\frac{dP}{d\Omega} = \frac{q^2 a^2}{4\pi c^3}\,\frac{\sin^2\theta}{(1-\beta\cos\theta)^5}$

where $\theta$ is the angle between the observer and the particle's motion.

Issues and implications

Radiation reaction

The radiation from a charged particle carries energy and momentum. In order to satisfy energy and momentum conservation, the charged particle must experience a recoil at the time of emission. The radiation must exert an additional force on the charged particle. This force is known as the Abraham-Lorentz force in the nonrelativistic limit and the Abraham-Lorentz-Dirac force in the relativistic limit.

Atomic physics

A classical electron orbiting a nucleus experiences acceleration and should radiate. Consequently the electron loses energy and should eventually spiral into the nucleus. Atoms, according to classical mechanics, are consequently unstable. This classical prediction is violated by the observation of stable electron orbits. The problem is resolved with a quantum mechanical or stochastic electrodynamic description of atomic physics.

See also
- Atomic theory
- Cyclotron radiation
- Electromagnetic wave equation
- Maxwell's equations in curved spacetime
- Radiation reaction
- Wave equation
- Wheeler-Feynman absorber theory

Notes
- The case where the particle is not instantaneously at rest is more complicated and is treated, for example, in Griffiths's Introduction to Electrodynamics.
- Purcell Simplified
- Jackson eq (14.38)
- Jackson eq (14.39)

References
- J. Larmor, "On a dynamical theory of the electric and luminiferous medium", Philosophical Transactions of the Royal Society 190, (1897) pp. 205–300 (Third and last in a series of papers with the same name).
- Jackson, John D. (1998). Classical Electrodynamics (3rd ed.). Wiley. ISBN 0-471-30932-X. (Section 14.2ff)
- Misner, Charles; Thorne, Kip S. & Wheeler, John Archibald (1973). Gravitation. San Francisco: W. H. Freeman. ISBN 0-7167-0344-0.
http://en.wikipedia.org/wiki/Larmor_formula
13
73
An Investigation of Triangle Centers

This page is designed so that high school students can explore triangles and their centers using Geometer's Sketchpad. They will investigate centroids, orthocenters, circumcenters, and incenters. Students may draw conclusions based on the constructions -- some of which will be theorems that are to be proven in class.

The centroid of a triangle involves constructing all of the medians of a triangle. Let's start with an acute triangle and construct the medians. Do you notice anything special? What would happen if we started with an obtuse triangle? What about a right triangle? [Instead of starting over with an obtuse triangle, choose a vertex of the acute triangle and drag it until it makes the triangle obtuse. Do the same for the right triangle.]

Construct any triangle and its centroid. You may want to hide the segments that are the medians. Now, construct segments from each vertex and midpoint to the centroid -- it should look exactly the same as before you hid the medians. Using the measure segment function in Geometer's Sketchpad, measure each of the segments. Do you see any relationship between the distance from the vertex to the centroid and the distance from the centroid to the midpoint? Make a hypothesis and try it out on several different triangles.

The centroid, G, of a triangle is the common intersection of the three medians. The medians of a triangle intersect in a point that is 2/3 of the distance from each vertex to the midpoint of the opposite side. The centroid is also called the center of gravity. If you were to cut a triangle out of cardboard and construct its centroid, the triangle would be perfectly balanced at that point.

The orthocenter of a triangle involves constructing the altitudes of the triangle. Construct an acute triangle and its altitudes (altitudes are lines that are perpendicular to a side of the triangle and go through the opposite vertex). Do you notice anything special about the intersection of the altitudes? What would happen if we had started with an obtuse triangle or a right triangle? Make a hypothesis about each type of triangle and its orthocenter and then try it out for several triangles.

The orthocenter, H, of a triangle is the common intersection of the three lines containing the altitudes. If the triangle is acute, the orthocenter will lie inside the triangle. If the triangle is obtuse, the orthocenter will lie outside the triangle. If the triangle is right, the orthocenter will lie on the vertex that corresponds with the right angle.

The circumcenter of a triangle involves constructing the perpendicular bisectors of a triangle. Again, we will start with an acute triangle and construct the perpendicular bisector of each side (construct the midpoint of each side and then choose the side and the midpoint to construct the perpendicular line). By now, you should realize that the lines will intersect at the same point. Do you notice anything special about the intersection of the perpendicular bisectors? Using this intersection and a vertex of the triangle, construct a circle. What do you notice now? Does changing the original triangle to an obtuse or a right triangle change anything?

Construct any triangle and its circumcenter. You may hide the perpendicular bisectors needed to find the circumcenter. Now, construct the segments from each vertex to the circumcenter. Using the measure segment function in Geometer's Sketchpad, measure each segment. Do you notice any special relationship? Make a hypothesis and test it on several different triangles.
Does it work for acute triangles? obtuse triangles? right triangles?

Let's expand on this in the case of the right triangle. Construct the circumcenter on a right triangle. Construct a segment from each vertex of the hypotenuse to the circumcenter. What relationship do you notice about the circumcenter and the hypotenuse? Make a hypothesis and test it for several right triangles.

The circumcenter, C, of a triangle is the point in the plane equidistant from the three vertices of the triangle. The circumcenter will always coincide with the midpoint of the hypotenuse of a right triangle.

The incenter of a triangle involves constructing the angle bisectors. We will start by constructing an acute triangle with its angle bisectors. Do you notice anything special about the intersection of the angle bisectors? What if we had started with an obtuse triangle or a right triangle? Now, hide the angle bisectors. Construct a perpendicular segment from the incenter to any side. Using the incenter as the center and the point of intersection of the perpendicular line with the side as another point, make a circle. What do you notice about the circle? Make a hypothesis and test it for several triangles.

The incenter, I, of a triangle is the point on the interior of the triangle that is equidistant from the three sides.

Now, let's see if there are any relationships between these centers of a triangle. Construct any triangle with the centroid, orthocenter, circumcenter, and incenter, labeling them G, H, C, and I, respectively. Construct a segment between each pair of points. Your construction may be too cluttered, so you may delete the incenter for now -- we will return to it later. Do you notice a special relationship among these points?

The centroid, circumcenter, and orthocenter always lie on a straight line -- the Euler Line. Do you notice any special relationship between the distances between the points? The distance from the orthocenter to the centroid is always twice as long as the distance from the centroid to the circumcenter. Now, add the incenter to the construction. Do you notice anything special about the incenter and its relation to the Euler Line? The incenter lies on the Euler Line if the triangle is isosceles. What happens to the points if the triangle is equilateral? If the triangle is equilateral, the points coincide.
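For readers who want to check these relationships outside of Geometer's Sketchpad, here is a small Python sketch (not part of the original activity) that computes the four centers for an arbitrarily chosen triangle and tests the Euler Line facts above numerically. The vertex coordinates are just example values.

import numpy as np

def triangle_centers(A, B, C):
    """Return (centroid G, circumcenter O, orthocenter H, incenter I)."""
    A, B, C = (np.asarray(p, dtype=float) for p in (A, B, C))

    # Centroid: the average of the vertices.
    G = (A + B + C) / 3.0

    # Circumcenter: the point equidistant from the three vertices.
    d = 2.0 * (A[0] * (B[1] - C[1]) + B[0] * (C[1] - A[1]) + C[0] * (A[1] - B[1]))
    O = np.array([
        ((A @ A) * (B[1] - C[1]) + (B @ B) * (C[1] - A[1]) + (C @ C) * (A[1] - B[1])) / d,
        ((A @ A) * (C[0] - B[0]) + (B @ B) * (A[0] - C[0]) + (C @ C) * (B[0] - A[0])) / d,
    ])

    # Orthocenter: intersection of two altitudes, each perpendicular to a side.
    M = np.array([C - B, A - C])
    H = np.linalg.solve(M, np.array([(C - B) @ A, (A - C) @ B]))

    # Incenter: vertices weighted by the lengths of the opposite sides.
    a, b, c = np.linalg.norm(B - C), np.linalg.norm(C - A), np.linalg.norm(A - B)
    I = (a * A + b * B + c * C) / (a + b + c)

    return G, O, H, I

if __name__ == "__main__":
    G, O, H, I = triangle_centers((0, 0), (7, 1), (2, 6))   # any scalene triangle
    hg, go = H - G, O - G
    print("collinearity test (should be ~0):", hg[0] * go[1] - hg[1] * go[0])
    print("HG / GC ratio (should be ~2):    ", np.linalg.norm(hg) / np.linalg.norm(go))

Dragging the vertices in Sketchpad corresponds to changing the three coordinate pairs here; the collinearity test and the 2:1 ratio hold for every triangle, while the incenter I joins the line only for isosceles input.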
http://jwilson.coe.uga.edu/EMT668/EMAT6680.2002.Fall/Ledford/ledford4/tricent.html
13
68
CHAPTER 05.05: SPLINE METHOD: Quadratic Spline Interpolation: Example: Part 1 of 2 I'm going to take an example of quadratic spline interpolation. We have already gone through the theory. So the example which we're going to take is that of a rocket which is going up as a function of time. So in the table you are given the velocity as a function of time at 0, 10, 15, 20, 22.5, and 30 seconds, and what you are asked to do is to use quadratic splines, and but you are asked to do three things. You are asked to not only find the velocity a point which is not given to you. So as you can see there that the value of the velocity is not given at 16, so you're asked to find the value of the velocity at 16 seconds, but also there are other parts to the problem. You're asked to find the acceleration at t equal to 16 seconds, which will involve some kind of a differentiation of the velocity expression, and then the distance covered between two times, t equal to 11 and t equal to 16, which again, will involve some integration of the velocity expression. So let's go ahead and see how that is done. Now, here is the . . . here are the data which I just showed you for velocity versus function of time. I'm just showing you the plot here for the velocity as a function of time. So you're going to basically what you're going to do is you're going to draw a quadratic spline from 0 to 10, then another quadratic spline from 10 to 15, then another from 15 to 20, then the next one from 20 to 22.5, and the next one from 22.5 to 30. So you will have one quadratic spline here, then the second quadratic spline here, then the third quadratic spline here, then the fourth one right here, and the fifth one right here. So you can see that you'll have five quadratic splines which will go through these six consecutive data points. So these are the five . . . five consecutive quadratic splines splines which we are talking about. Each spline will be a quadratic equation, so you've got to understand that we have three unknowns in each spline. So let's suppose the first spline is going from point 0 to 10, and it has three unknowns, a1, b1, c1. So similarly, the next spline has three unknowns, and so on and so forth. So you are, what you are finding out is that you have fifteen unknowns. You have fifteen unknowns, because you have five splines going through the six consecutive data points, and each spline has three unknowns, ai, bi, and ci, so you have fifteen unknowns, so you've got to somehow set up fifteen equations to be able to solve fifteen equations and fifteen unknowns. So let's go ahead and set up these fifteen equations and fifteen unknowns, and see how where we get those from. Now, the first thing which you have to realize is that each spline will go through two consecutive data points. What do we mean by that? It is as follows, that if I look at the first spline right here, that's my first spline, and that is this particular spline right here. So if I look at the first spline, I have three unknowns, a1, b1, c1. I cannot find all three unknowns by simply saying that this spline goes through point t equal to 0 and t equal to 10, because that will only set up two equations, but it is a start here that the first spline is going through the points 0 and 10, so points 0 and 10, the first spline is going through it. So this equation which you are seeing here is the equation for the spline going through time, t equal to 0, and this equation which you are seeing here is for the same spline going through time, t equal to 10. 
So since we have two equations coming from each spline going through two consecutive data points, and since we have five splines, we will get ten equations like that. The reason why we get ten equations is because we have five splines, and each spline is going through two consecutive data points, setting up two equations just like this one and this one, and hence it will result in ten equations. So let's go ahead and see what those ten equations are. These are the ten equations which you are getting. So this is . . . these two equations are coming from that the second spline is going through 10 and 15 . . . 10 and 15. Then let's look at the next spline. Next spline is saying that, hey, you have these two equations, and these two equations are coming from that the third spline is going through time, t equal to 15 and 20 data points, so that will give you two more equations. And then we have the next two, which are these two equations right here. Those are coming from this data point and this data point, because you can very well see that time, t equal to 20 and 22.5 have been substituted into that spline. That gives you two more equations. And then we're left with the last two data points, and that is that, hey, these two equations which you have from the fifth spline are obtained from that the spline is going through time, t equal to 22.5 and 30. So that . . . this whole thing is setting up ten equations. So we have ten equations, as we said that we have to set up fifteen equations, fifteen unknowns. So we already have set up ten equations by saying that the . . . each spline is going through consecutive data points. Now, the other equations which you're going to set up is by saying that the derivatives are continuous at the interior data points. So what that means is that if you look at the first spline, and you take that, its derivative, and you look at the second spline, and you take its derivative, that derivative has to be same at the common point, which is 10 there. So that's what we are showing right here, that we are going to take the derivative of the first spline, we're going to take the derivative of the second spline. Keep in mind that the two derivatives are not same everywhere, they're only same at the common point, which is t equal to 10, which ensures that the slope of the two splines is same through the . . . through the data points. So when I take the derivative, I get this for the first spline, I get this for the second spline, and then I'll have to substitute the value of t equal to 10, because that's where the splines are the same. So this is what I get as my final equation. Now I take everything which is on the right-hand side to the left-hand side. The reason why that is so is because every time you write an equation, you always take the unknowns the left-hand side, and since a2 and b2 are unknowns, we need to take them to the left-hand side. So this sets up one equation at time, t equal to 10. Now, similarly you'll see that, hey, you will have other equations, you'll have the equation at time, t equal to 15, time, t equal to 20, and time, t equal to 22.5, you will get similar equations, that the derivatives are continuous of the consecutive splines. So you have four equations coming from here, so you've got four equations coming from the fact that your derivative is continuous at the interior data points. 
You're only able to set up four equations, although you have six data points, but you're only able to set up four equations, because there are only four interior data points. The other two data points, t equal to 0 and t equal to 30, t equal to 0 and 30, they are external data points, and we cannot say that the first derivative is continuous at t equal to 0 and 30, because we don't have adjoining . . . we don't have a spline . . . we don't have two splines going through those two data points. So what we have now is that we needed fifteen equations, we needed fifteen equations, we got ten equations from saying that, hey, each spline is going through consecutive data points, we got four equations saying that each spline has continuous first derivative at the interior data points, so we end up with fourteen equations. So since we have fourteen equations now, we need one more equation, because we needed fifteen, because we have fifteen unknowns, so the last equation which comes from is the fact that we're going to assume a1 equal to 0. a1 equal to 0 simply means that you are assuming that the first spline is linear, because your spline is like this, a1 t squared, plus b1 t, plus c1. So if you're going to assume a1 equal to 0, you're only left with these terms, which means that you're assuming that the first spline is linear, and you have to do something like that, because you need fifteen equations, fifteen unknowns, and you're not violating anything by saying that the last equation a1 is equal to 0, you still have your splines going through consecutive data points, you still have the first slop same at the interior data points. So this is what you get as the resulting equations once you take the fifteen equations, fifteen unknowns. This might not be visible to you when you are seeing this, but I just wanted to put it up there. You can look at this in the PowerPoint presentation. So let me just skip through this, and then what we do is we solve those fifteen equations, fifteen unknowns, and these are the resulting unknowns, so we have ais in the first column, bis in the second column, cis in the third column. So these are the unknowns which we get. So, for example, if you're going to look at this particular i equal to 2, that is your second spline, so the second spline will be 0.888 t squared, so that is this number, plus 4.928 t, that is this number here, then plus 88.88, and this is the second spline, which is going . . . which is going through the points 10 and 15. So similarly, you can set up the other splines by looking at what the coefficients are. And that's the end of this segment, and in the next segment, I will show you how we are using this information to be able to get the values which we are looking for.
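As a companion to the transcript, here is a short Python sketch of how those fifteen equations can be assembled and solved. The knot times are the ones quoted above; the velocity values below are placeholders (the transcript does not list them), so substitute the actual data before relying on the resulting coefficients.

import numpy as np

# Knot times from the transcript; velocities are assumed placeholder values.
t = [0.0, 10.0, 15.0, 20.0, 22.5, 30.0]
v = [0.0, 227.0, 363.0, 517.0, 603.0, 902.0]   # not from the transcript

n = len(t) - 1                 # number of splines (5)
A = np.zeros((3 * n, 3 * n))   # unknowns ordered a1, b1, c1, a2, b2, c2, ...
b = np.zeros(3 * n)
row = 0

# 1) Each spline passes through its two end data points -> 2n equations.
for i in range(n):
    for tk, vk in ((t[i], v[i]), (t[i + 1], v[i + 1])):
        A[row, 3 * i:3 * i + 3] = [tk**2, tk, 1.0]
        b[row] = vk
        row += 1

# 2) First derivatives of adjacent splines match at the interior knots -> n-1 equations.
for i in range(1, n):
    A[row, 3 * (i - 1):3 * (i - 1) + 3] = [2 * t[i], 1.0, 0.0]
    A[row, 3 * i:3 * i + 3] = [-2 * t[i], -1.0, 0.0]
    row += 1

# 3) The first spline is assumed linear: a1 = 0 -> the fifteenth equation.
A[row, 0] = 1.0

coeffs = np.linalg.solve(A, b).reshape(n, 3)   # each row is (a_i, b_i, c_i)
for i, (ai, bi, ci) in enumerate(coeffs, start=1):
    print(f"spline {i}: {ai:.4f} t^2 + {bi:.4f} t + {ci:.4f}")

The row count works out exactly as in the lecture: ten interpolation equations, four derivative-continuity equations, and the a1 = 0 assumption make fifteen equations for fifteen unknowns.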
http://nm.mathforcollege.com/videos/youtube/05inp/spline/0505_05_Quad_Spline_Interpol_Ex_1_of_2.htm
13
84
- slide 1 of 3
Before learning how to find the diameter of a circle, it's necessary to know the following definitions:
- Diameter of a Circle - a straight line that passes through the center of the circle and touches the circle at two opposite points.
- Radius - a straight line from the center of a circle to any point on the circle's perimeter.
- Circumference - a fancy word for the circle's perimeter or boundary.
- Area of a Circle - the total surface enclosed by the circle. The area's measurement will always be squared -- in², ft², m², for example.
- Center of a Circle - the point inside the circle that is the same exact distance from every point on the circle's perimeter.
- Pi/π - the number pi goes on indefinitely. For our purposes we'll round it to the hundredths place, so pi will be 3.14 in these calculations.
- slide 2 of 3
Calculating a Circle's Diameter
How to find the circle's diameter depends on the information available.
- If the circle is drawn to scale or if it's an actual circle, you can calculate the diameter by measuring from one side of the circle to another, making sure the ruler passes through the circle's center. Because finding the exact center may prove difficult, it's best to take several measurements to ensure accuracy.
- If you know the radius of the circle, multiply it by 2 to find the diameter.
- If the radius of a circle is 2.47 cm, the diameter is 2.47 cm * 2, which equals 4.94 cm.
- If you know the circumference of a circle, you can divide it by pi (3.14) to get the diameter (circumference = pi * diameter).
- If you know the area of a circle, you must manipulate the formula for a circle's area by doing the following:
- Know the formula for the area of a circle -- Area = pi * radius² (A = πr²).
- Solve for the radius by dividing both sides by 3.14 (pi) and taking the square root. The resulting formula is radius = the square root of (area ÷ pi).
- Once you know the radius, multiply it by 2 to get the diameter.
- Here's an example of how to find the diameter of a circle if the circle's area is given:
- If the area of a circle is 12.56 m², the radius = the square root of (12.56 m² ÷ 3.14). 12.56 m² ÷ 3.14 = 4 m², and the square root of 4 m² equals 2 meters.
- If the radius of a circle equals 2 meters, multiply it by 2 to get the diameter. The diameter of the circle in this example is 4 meters.
- Diameter = 2 * radius
- Diameter = circumference ÷ pi
- Diameter = 2 * (square root of (area ÷ pi))
- slide 3 of 3
Now you can figure out the diameter of a circle as long as you know the radius or the circumference or the area.
Public domain images courtesy of Wikimedia Commons.
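The three rules can also be written as a few lines of Python for checking answers. This is only a sketch; it uses math.pi rather than the rounded 3.14, so results differ slightly from the worked example above.

import math

def diameter_from_radius(radius):
    return 2 * radius

def diameter_from_circumference(circumference):
    # circumference = pi * diameter, so diameter = circumference / pi
    return circumference / math.pi

def diameter_from_area(area):
    # area = pi * radius**2, so radius = sqrt(area / pi) and diameter = 2 * radius
    return 2 * math.sqrt(area / math.pi)

print(diameter_from_radius(2.47))    # 4.94, matching the radius example
print(diameter_from_area(12.56))     # about 4.0, matching the area example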
http://www.brighthubeducation.com/homework-math-help/101920-how-to-find-the-diameter-of-a-circle/
13
114
Program Arcade GamesWith Python And Pygame Now that you can create loops, it is time to move on to learning how to create graphics. This chapter covers: - How the computer handles x, y coordinates. It isn't like the coordinate system you learned in math class. - How to specify colors. With millions of colors to choose from, telling the computer what color to use isn't as easy as just saying “red.” - How to open a blank window for drawing. Every artist needs a canvas. - How to draw lines, rectangles, ellipses, and arcs. The Cartesian coordinate system, shown in Figure 5.1 (Wikimedia Commons), is the system most people are used to when plotting graphics. This is the system taught in school. The computer uses a similar, but somewhat different, coordinate system. Understanding why it is different requires a quick bit of computer history. During the early '80s, most computer systems were text-based and did not support graphics. Figure 5.2 (Wikimedia Commons) shows an early spreadsheet program run on an Apple ][ computer that was popular in the '80s. When positioning text on the screen, programmers started at the top calling it line 1. The screen continued down for 24 lines and across for 40 characters. Even with plain text, it was possible to make rudimentary graphics by just using characters on the keyboard. See this kitten shown in Figure 5.3 and look carefully at how it is drawn. When making this art, characters were still positioned starting with line 1 at the top. Later the character set was expanded to include boxes and other primitive drawing shapes. Characters could be drawn in different colors. As shown in Figure 5.4 the graphics got more advanced. Search the web for “ASCII art” and many more examples can be found. Once computers moved to being able to control individual pixels for graphics, the text-based coordinate system stuck. The $x$ coordinates work the same as the Cartesian coordinates system. But the $y$ coordinates are reversed. Rather than the zero $y$ coordinate at the bottom of the graph like in Cartesian graphics, the zero $y$ coordinate is at the top of the screen with the computer. As the $y$ values go up, the computer coordinate position moved down the screen, just like lines of text rather than standard Cartesian graphics. See Figure 5.5. Also, note the screen covers the lower right quadrant, where the Cartesian coordinate system usually focuses on the upper right quadrant. It is possible to draw items at negative coordinates, but they will be drawn off-screen. This can be useful when part of a shape is off screen. The computer figures out what is off-screen and the programmer does not need to worry too much about it. To make graphics easier to work with, we'll use the Pygame. Pygame is a library of code other people have written, and makes it simple to: - Draw graphic shapes - Display bitmapped images - Interact with keyboard, mouse, and gamepad - Play sound - Detect when objects collide The first code a Pygame program needs to do is load and initialize the Pygame library. Every program that uses Pygame should start with these lines: # Import a library of functions called 'pygame' import pygame # Initialize the game engine pygame.init() If you haven't installed Pygame yet, directions for installing Pygame are available in the before you begin section. If Pygame is not installed on your computer, you will get an error when trying to run import pygame. Important: The import pygame looks for a library file named pygame. 
If a programmer creates a new program named pygame.py, the computer will import that file instead! This will prevent any pygame programs from working until that pygame.py file is deleted. Next, we need to add variables that define our program's colors. Colors are defined in a list of three colors: red, green, and blue. Have you ever heard of an RGB monitor? This is where the term comes. Red-Green-Blue. With older monitors, you could sit really close to the monitor and make out the individual RGB colors. At least before your mom told you not to sit so close to the TV. This is hard to do with today's high resolution monitors. Each element of the RGB triad is a number ranging from 0 to 255. Zero means there is none of the color, and 255 tells the monitor to display as much of the color as possible. The colors combine in an additive way, so if all three colors are specified, the color on the monitor appears white. (This is different than how ink and paint work.) Lists in Python are surrounded by either square brackets or parentheses. (Chapter 7 covers lists in detail and the difference between the two types.) Individual numbers in the list are separated by commas. Below is an example that creates variables and sets them equal to lists of three numbers. These lists will be used later to specify colors. # Define some colors black = ( 0, 0, 0) white = ( 255, 255, 255) green = ( 0, 255, 0) red = ( 255, 0, 0) Using the interactive shell in IDLE, try defining these variables and printing them If the five colors above aren't the colors you are looking for, you can define your own. To pick a color, find an on-line “color picker” like the one shown in Figure 5.6. One such color picker is at: Extra: Some color pickers specify colors in hexadecimal. You can enter hexadecimal numbers if you start them with 0x. For example: white = (0xFF, 0xFF, 0xFF) Eventually the program will need to use the value of $\pi$ when drawing arcs, so this is a good time in our program to define a variable that contains the value of $\pi$. (It is also possible to import this from the math library as math.pi.) pi = 3.141592653 So far, the programs we have created only printed text out to the screen. Those programs did not open any windows like most modern programs do. The code to open a window is not complex. Below is the required code, which creates a window sized to a width of 700 pixels, and a height of 500: # Set the width and height of the screen size = (700,500) screen = pygame.display.set_mode(size) Why set_mode? Why not open_window? The reason is that this command can actually do a lot more than open a window. It can also create games that run in a full-screen mode. This removes the start menu, title bars, and gives the game control of everything on the screen. Because this mode is slightly more complex to use, and most people prefer windowed games anyway, we'll skip a detailed discussion on full-screen games. But if you want to find out more about full-screen games, check out the documentation on pygame's display command. Also, why size=(700,500) and not size=700,500? The same reason why we put parentheses around the color definitions. Python can't normally store two numbers (a height and width) into one variable. The only way it can is if the numbers are stored as a list. Lists need either parentheses or square brackets. (Technically, parenthesis surrounding a set of numbers is more accurately called a tuple or an immutable list. Lists surrounded by square brackets are just called lists. 
An experienced Python developer would cringe at calling a list of numbers surrounded by parentheses a list rather than a tuple.) Lists are covered in detail in Chapter 7. To set the title of the window (which shown in the title bar) use the following line of code: pygame.display.set_caption("Professor Craven's Cool Game") With just the code written so far, the program would create a window and immediately hang. The user can't interact with the window, even to close it. All of this needs to be programmed. Code needs to be added so that the program waits in a loop until the user clicks “exit.” This is the most complex part of the program, and a complete understanding of it isn't needed yet. But it is necessary to have an idea of what it does, so spend some time studying it and asking questions. #Loop until the user clicks the close button. done = False # Used to manage how fast the screen updates clock = pygame.time.Clock() # -------- Main Program Loop ----------- while done == False: # ALL EVENT PROCESSING SHOULD GO BELOW THIS COMMENT for event in pygame.event.get(): # User did something if event.type == pygame.QUIT: # If user clicked close done = True # Flag that we are done so we exit this loop # ALL EVENT PROCESSING SHOULD GO ABOVE THIS COMMENT # ALL GAME LOGIC SHOULD GO BELOW THIS COMMENT # ALL GAME LOGIC SHOULD GO ABOVE THIS COMMENT # ALL CODE TO DRAW SHOULD GO BELOW THIS COMMENT # ALL CODE TO DRAW SHOULD GO ABOVE THIS COMMENT # Limit to 20 frames per second clock.tick(20) Eventually we will add code to handle the keyboard and mouse clicks. That code will go between the comments for event processing. Code for determining when bullets are fired and how objects move will go between the comments for game logic. We'll talk about that in later chapters. Code to draw will go in between the appropriate draw-code comments. Alert! One of the most frustrating problems programmers have is to mess up the event processing loop. This “event processing” code handles all the keystrokes, mouse button clicks, and several other types of events. For example your loop might look like: for event in pygame.event.get(): if event.type == pygame.QUIT: print("User asked to quit.") if event.type == pygame.KEYDOWN: print("User pressed a key.") if event.type == pygame.KEYUP: print("User let go of a key.") if event.type == pygame.MOUSEBUTTONDOWN: print("User pressed a mouse button") The events (like pressing keys) all go together in a list. The program uses a for loop to loop through each event. Using a chain of if statements the code figures out what type of event occured, and the code to handle that event goes in the if statement. All the if statements should go together, in one for loop. A common mistake when doing copy and pasting of code is to not merge loops from two programs, but to have two event loops. # Here is one event loop for event in pygame.event.get(): if event.type == pygame.QUIT: print("User asked to quit.") if event.type == pygame.KEYDOWN: print("User pressed a key.") if event.type == pygame.KEYUP: print("User let go of a key.") # Here the programmer has copied another event loop # into the program. This is BAD. The events were already # processed. for event in pygame.event.get(): if event.type == pygame.QUIT: print("User asked to quit.") if event.type == pygame.MOUSEBUTTONDOWN: print("User pressed a mouse button") The for loop on line 2 grabbed all of the user events. The for loop on line 13 won't grab any events because they were already processed in the prior loop. 
Another typical problem is to start drawing, and then try to finish the event loop:

for event in pygame.event.get():
    if event.type == pygame.QUIT:
        print("User asked to quit.")
    if event.type == pygame.KEYDOWN:
        print("User pressed a key.")

pygame.draw.rect(screen, green, [50,50,100,100])

# This is code that processes events. But it is not in the
# 'for' loop that processes events. It will not act reliably.
if event.type == pygame.KEYUP:
    print("User let go of a key.")
if event.type == pygame.MOUSEBUTTONDOWN:
    print("User pressed a mouse button")

This will cause the program to ignore some keyboard and mouse commands. Why? The for loop processes all the events in a list. So if there are two keys that are hit, the for loop will process both. In the example above, the last two if statements are not in the for loop. If there are multiple events, those if statements will only run for the last event, rather than all events.

The basic logic and order for each frame of the game:
- While not done:
  - For each event (keypress, mouse click, etc.):
    - Use a chain of if statements to run code to handle each event.
  - Run calculations to determine where objects move, what happens when objects collide, etc.
  - Clear the screen
  - Draw everything

It makes the program easier to read and understand if these steps aren't mixed together. Don't do some calculations, some drawing, some more calculations, some more drawing. Also, see how this is similar to the calculator done in chapter one. Get user input, run calculations, and output the answer. That same pattern applies here.

The code for drawing the image to the screen happens inside the while loop. With the clock tick set at 10, the contents of the window will be drawn 10 times per second. If it happens too fast the computer is sluggish because all of its time is spent updating the screen. If it isn't in the loop at all, the screen won't redraw properly. If the drawing is outside the loop, the screen may initially show the graphics, but the graphics won't reappear if the window is minimized, or if another window is placed in front.

Right now, clicking the "close" button of a window while running this Pygame program in IDLE will still cause the program to crash. This is a hassle because it requires a lot of clicking to close a crashed program. The problem is, even though the loop has exited, the program hasn't told the computer to close the window. By calling the command below, the program will close any open windows and exit as desired.

The following code clears whatever might be in the window with a white background. Remember that the variable white was defined earlier as a list of 3 RGB values.

# Clear the screen and set the screen background
screen.fill(white)

This should be done before any drawing command is issued. Clearing the screen after the program draws graphics results in the user only seeing a blank screen. When a window is first created it has a black background. It is still important to clear the screen because there are several things that could occur to keep this window from starting out cleared. A program should not assume it has a blank canvas to draw on.

Very important! You must flip the display after you draw. The computer will not display the graphics as you draw them because it would cause the screen to flicker. This waits to display the screen until the program has finished drawing. The command below "flips" the graphics to the screen.
Failure to include this command will mean the program just shows a blank screen. Any drawing code after this flip will not display. # Go ahead and update the screen with what we've drawn. pygame.display.flip() Let's bring everything we've talked about into one full program. This code can be used as a base template for a Pygame program. It opens up a blank window and waits for the user to press the close button. # Sample Python/Pygame Programs # Simpson College Computer Science # http://programarcadegames.com/ # http://simpson.edu/computer-science/ # Explanation video: http://youtu.be/vRB_983kUMc import pygame # Define some colors black = ( 0, 0, 0) white = ( 255, 255, 255) green = ( 0, 255, 0) red = ( 255, 0, 0) pygame.init() # Set the width and height of the screen [width,height] size = [700,500] screen = pygame.display.set_mode(size) pygame.display.set_caption("My Game") #Loop until the user clicks the close button. done = False # Used to manage how fast the screen updates clock = pygame.time.Clock() # -------- Main Program Loop ----------- while done == False: # ALL EVENT PROCESSING SHOULD GO BELOW THIS COMMENT for event in pygame.event.get(): # User did something if event.type == pygame.QUIT: # If user clicked close done = True # Flag that we are done so we exit this loop # ALL EVENT PROCESSING SHOULD GO ABOVE THIS COMMENT # ALL GAME LOGIC SHOULD GO BELOW THIS COMMENT # ALL GAME LOGIC SHOULD GO ABOVE THIS COMMENT # ALL CODE TO DRAW SHOULD GO BELOW THIS COMMENT # First, clear the screen to white. Don't put other drawing commands # above this, or they will be erased with this command. screen.fill(white) # ALL CODE TO DRAW SHOULD GO ABOVE THIS COMMENT # Go ahead and update the screen with what we've drawn. pygame.display.flip() # Limit to 20 frames per second clock.tick(20) # Close the window and quit. # If you forget this line, the program will 'hang' # on exit if running from IDLE. pygame.quit() Here is a list of things that you can draw: A program can draw things like rectangles, polygons, circles, ellipses, arcs, and lines. We will also cover how to display text with graphics. Bitmapped graphics such as images are covered in Chapter 12. If you decide to look at that pygame reference, you might see a function definition like this: pygame.draw.rect(Surface, color, Rect, width=0): return Rect A frequent cause of confusion is the part of the line that says width=0. What this means is that if you do not supply a width, it will default to zero. Thus this function call: pygame.draw.rect(screen, red, [55,500,10,5]) Is the same as this function call: pygame.draw.rect(screen, red, [55,500,10,5], 0) The : return Rect is telling you that the function returns a rectangle, the same one that was passed in. You can just ignore this part. What will not work, is attempting to copy the line and put width=0 in the quotes. # This fails and the error the computer gives you is # really hard to understand. pygame.draw.rect(screen, red, [55,500,10,5], width=0) The code example below shows how to draw a line on the screen. It will draw on the screen a green line from (0,0) to (100,100) that is 5 pixels wide. Remember that green is a variable that was defined earlier as a list of three RGB values. # Draw on the screen a green line from (0,0) to (100,100) # that is 5 pixels wide. pygame.draw.line(screen,green,[0,0],[100,100],5) Use the base template from the prior example and add the code to draw lines. Read the comments to figure out exactly where to put the code. 
Try drawing lines with different thicknesses, colors, and locations. Draw several lines. Programs can repeat things over and over. The next code example draws a line over and over using a loop. Programs can use this technique to do multiple lines, and even draw an entire car. Putting a line drawing command inside a loop will cause multiple lines being drawn to the screen. But here's the catch. If each line has the same starting and ending coordinates, then each line will draw on top of the other line. It will look like only one line was drawn. To get around this, it is necessary to offset the coordinates each time through the loop. So the first time through the loop the variable y_offset is zero. The line in the code below is drawn from (0,10) to (100,110). The next time through the loop y_offset increased by 10. This causes the next line to be drawn to have new coordinates of (0,20) and (100,120). This continues each time through the loop shifting the coordinates of each line down by 10 pixels. # Draw on the screen several green lines from (0,10) to (100,110) # 5 pixels wide using a while loop y_offset = 0 while y_offset < 100: pygame.draw.line(screen,red,[0,10+y_offset],[100,110+y_offset],5) y_offset = y_offset+10 This same code could be done even more easily with a for loop: # Draw on the screen several green lines from (0,10) to (100,110) # 5 pixels wide using a for loop for y_offset in range(0,100,10): pygame.draw.line(screen,red,[0,10+y_offset],[100,110+y_offset],5) Run this code and try using different changes to the offset. Try creating an offset with different values. Experiment with different values until exactly how this works is obvious. For example, here is a loop that uses sine and cosine to create a more complex set of offsets and produces the image shown in Figure 5.7. for i in range(200): radians_x = i/20 radians_y = i/6 x=int( 75 * math.sin(radians_x)) + 200 y=int( 75 * math.cos(radians_y)) + 200 pygame.draw.line(screen,black,[x,y],[x+5,y], 5) Multiple elements can be drawn in one for loop, such as this code which draws the multiple X's shown in Figure 5.8. for x_offset in range(30,300,30): pygame.draw.line(screen,black,[x_offset,100],[x_offset-10,90], 2 ) pygame.draw.line(screen,black,[x_offset,90],[x_offset-10,100], 2 ) When drawing a rectangle, the computer needs coordinates for the upper left rectangle corner (the origin), and a height and width. Figure 5.9 shows a rectangle (and an ellipse, which will be explained later) with the origin at (20,20), a width of 250 and a height of 100. When specifying a rectangle the computer needs a list of these four numbers in the order of (x, y, width, height). The next code example draws this rectangle. The first two numbers in the list define the upper left corner at (20,20). The next two numbers specify first the width of 250 pixels, and then the height of 100 pixels. The 2 at the end specifies a line width of 2 pixels. The larger the number, the thicker the line around the rectangle. If this number is 0, then there will not be a border around the rectangle. Instead it will be filled in with the color specified. # Draw a rectangle pygame.draw.rect(screen,black,[20,20,250,100],2) An ellipse is drawn just like a rectangle. The boundaries of a rectangle are specified, and the computer draws an ellipses inside those boundaries. The most common mistake in working with an ellipse is to think that the starting point specifies the center of the ellipse. In reality, nothing is drawn at the starting point. 
It is the upper left of a rectangle that contains the ellipse. Looking back at Figure 5.9 one can see an ellipse 250 pixels wide and 100 pixels tall. The upper left corner of the 250x100 rectangle that contains it is at (20,20). Note that nothing is actually drawn at (20,20). With both drawn on top of each other it is easier to see how the ellipse is specified. # Draw an ellipse, using a rectangle as the outside boundaries pygame.draw.ellipse(screen,black,[20,20,250,100],2) What if a program only needs to draw part of an ellipse? That can be done with the arc command. This command is similar to the ellipse command, but it includes start and end angles for the arc to be drawn. The angles are in radians. The code example below draws four arcs showing four difference quadrants of the circle. Each quadrant is drawn in a different color to make the arcs sections easier to see. The result of this code is shown in Figure 5.10. # Draw an arc as part of an ellipse. Use radians to determine what # angle to draw. pygame.draw.arc(screen,green,[100,100,250,200], pi/2, pi, 2) pygame.draw.arc(screen,black,[100,100,250,200], 0, pi/2, 2) pygame.draw.arc(screen,red, [100,100,250,200],3*pi/2, 2*pi, 2) pygame.draw.arc(screen,blue, [100,100,250,200], pi, 3*pi/2, 2) The next line of code draws a polygon. The triangle shape is defined with three points at (100,100) (0,200) and (200,200). It is possible to list as many points as desired. Note how the points are listed. Each point is a list of two numbers, and the points themselves are nested in another list that holds all the points. This code draws what can be seen in Figure 5.11. # This draws a triangle using the polygon command pygame.draw.polygon(screen,black,[[100,100],[0,200],[200,200]],5) Text is slightly more complex. There are three things that need to be done. First, the program creates a variable that holds information about the font to be used, such as what typeface and how big. Second, the program creates an image of the text. One way to think of it is that the program carves out a “stamp” with the required letters that is ready to be dipped in ink and stamped on the paper. The third thing that is done is the program tells where this image of the text should be stamped (or “blit'ed”) to the screen. Here's an example: # Select the font to use. Default font, 25 pt size. font = pygame.font.Font(None, 25) # Render the text. "True" means anti-aliased text. # Black is the color. The variable black was defined # above as a list of [0,0,0] # Note: This line creates an image of the letters, # but does not put it on the screen yet. text = font.render("My text",True,black) # Put the image of the text on the screen at 250x250 screen.blit(text, [250,250]) Want to print the score to the screen? That is a bit more complex. This does not work: text = font.render("Score: ",score,True,black) Why? A program can't just add extra items to font.render like the print statement. Only one string can be sent to the command, therefore the actual value of score needs to be appended to the “Score: ” string. But this doesn't work either: text = font.render("Score: "+score,True,black) If score is an integer variable, the computer doesn't know how to add it to a string. You, the programmer, must convert the score to a string. Then add the strings together like this: text = font.render("Score: "+str(score),True,black) Now you know how to print the score. If you want to print a timer, that requires print formatting, discussed in a chapter later on. 
Check in the example code for section on-line for the timer.py example: This is a full listing of the program discussed in this chapter. This program, along with other programs, may be downloaded from: # Sample Python/Pygame Programs # Simpson College Computer Science # http://programarcadegames.com/ # http://simpson.edu/computer-science/ # Import a library of functions called 'pygame' import pygame # Initialize the game engine pygame.init() # Define the colors we will use in RGB format black = [ 0, 0, 0] white = [255,255,255] blue = [ 0, 0,255] green = [ 0,255, 0] red = [255, 0, 0] pi = 3.141592653 # Set the height and width of the screen size = [400,500] screen = pygame.display.set_mode(size) pygame.display.set_caption("Professor Craven's Cool Game") #Loop until the user clicks the close button. done = False clock = pygame.time.Clock() while done == False: # This limits the while loop to a max of 10 times per second. # Leave this out and we will use all CPU we can. clock.tick(10) for event in pygame.event.get(): # User did something if event.type == pygame.QUIT: # If user clicked close done=True # Flag that we are done so we exit this loop # All drawing code happens after the for loop and but # inside the main while done==False loop. # Clear the screen and set the screen background screen.fill(white) # Draw on the screen a green line from (0,0) to (100,100) # 5 pixels wide. pygame.draw.line(screen,green,[0,0],[100,100],5) # Draw on the screen several green lines from (0,10) to (100,110) # 5 pixels wide using a loop for y_offset in range(0,100,10): pygame.draw.line(screen,red,[0,10+y_offset],[100,110+y_offset],5) # Draw a rectangle pygame.draw.rect(screen,black,[20,20,250,100],2) # Draw an ellipse, using a rectangle as the outside boundaries pygame.draw.ellipse(screen,black,[20,20,250,100],2) # Draw an arc as part of an ellipse. # Use radians to determine what angle to draw. pygame.draw.arc(screen,black,[20,220,250,200], 0, pi/2, 2) pygame.draw.arc(screen,green,[20,220,250,200], pi/2, pi, 2) pygame.draw.arc(screen,blue, [20,220,250,200], pi,3*pi/2, 2) pygame.draw.arc(screen,red, [20,220,250,200],3*pi/2, 2*pi, 2) # This draws a triangle using the polygon command pygame.draw.polygon(screen,black,[[100,100],[0,200],[200,200]],5) # Select the font to use. Default font, 25 pt size. font = pygame.font.Font(None, 25) # Render the text. "True" means anti-aliased text. # Black is the color. This creates an image of the # letters, but does not put it on the screen text = font.render("My text",True,black) # Put the image of the text on the screen at 250x250 screen.blit(text, [250,250]) # Go ahead and update the screen with what we've drawn. # This MUST happen after all the other drawing commands. pygame.display.flip() # Be IDLE friendly pygame.quit() Click here for a multiple-choice review quiz. After answering the review questions below, try writing a computer program that creates an image of your own design. For details, see the Create-a-Picture lab. - Before a program can use any functions like pygame.display.set_mode(), what two things must happen? - What does the pygame.display.set_mode() function do? - What is pygame.time.Clock used for? - What does this for event in pygame.event.get() loop do? - For this line of code: - What does screen do? - What does [0,0] do? What does [100,100] do? - What does 5 do? - Explain how the computer coordinate system differs from the standard Cartesian coordinate system. - Explain how white = ( 255, 255, 255) represents a color. 
- When drawing a rectangle, what happens if the specified line width is zero? - Sketch the ellipse drawn in the code below, and label the origin coordinate, the length, and the width: - Describe, in general, what are the three steps needed when printing text to the screen using graphics? - What are the coordinates of the polygon that the code below draws? - What does pygame.display.flip() do? - What does pygame.quit() do? Complete Lab 3 “Create-a-Picture” to create your own picture, and show you understand how to use loops and graphics. You are not logged in. Log in here and track your progress.
http://programarcadegames.com/index.php?showpart=5
13
60
A polynomial is a sum of terms which are coefficients times various powers of a “base” variable. For example, ‘2 x^2 + 3 x - 4’ is a polynomial in ‘x’. Some formulas can be considered polynomials in several different variables: ‘1 + 2 x + 3 y + 4 x y^2’ is a polynomial in both ‘x’ and ‘y’. Polynomial coefficients are often numbers, but they may in general be any formulas not involving the base variable.

The a f (calc-factor) [factor] command factors a polynomial into a product of terms. For example, the polynomial ‘x^3 + 2 x^2 + x’ is factored into ‘x*(x+1)^2’. As another example, ‘a c + b d + b c + a d’ is factored into the product ‘(a + b) (c + d)’.

Calc currently has three algorithms for factoring. Formulas which are linear in several variables, such as the second example above, are merged according to the distributive law. Formulas which are polynomials in a single variable, with constant integer or fractional coefficients, are factored into irreducible linear and/or quadratic terms. The first example above factors into three linear terms (‘x’, ‘x+1’, and ‘x+1’ again). Finally, formulas which do not fit the above criteria are handled by the algebraic rewrite mechanism.

Calc's polynomial factorization algorithm works by using the general root-finding command (a P) to solve for the roots of the polynomial. It then looks for roots which are rational numbers or complex-conjugate pairs, and converts these into linear and quadratic terms, respectively. Because it uses floating-point arithmetic, it may be unable to find terms that involve large integers (whose number of digits approaches the current precision). Also, irreducible factors of degree higher than quadratic are not found, and polynomials in more than one variable are not treated. (A more robust factorization algorithm may be included in a future version of Calc.)

The rewrite-based factorization method uses rules stored in the variable FactorRules. See Rewrite Rules, for a discussion of the operation of rewrite rules. The default FactorRules are able to factor quadratic forms symbolically into two linear terms, ‘(a x + b) (c x + d)’. You can edit these rules to include other cases if you wish. To use the rules, Calc builds the formula ‘thecoefs(x, [a, b, c, ...])’ where x is the polynomial base variable and a, b, c, etc., are polynomial coefficients (which may be numbers or formulas). The constant term is written first, i.e., in the a position. When the rules complete, they should have changed the formula into the form ‘thefactors(x, [f1, f2, f3, ...])’ where each fi should be a factored term, e.g., ‘x - ai’. Calc then multiplies these terms together to get the complete factored form of the polynomial. If the rules do not change the thecoefs call to a thefactors call, a f leaves the polynomial alone on the assumption that it is unfactorable. (Note that the function names thecoefs and thefactors are used only as placeholders; there are no actual Calc functions by those names.)

The H a f [factors] command also factors a polynomial, but it returns a list of factors instead of an expression which is the product of the factors. Each factor is represented by a sub-vector of the factor, and the power with which it appears. For example, ‘x^5 + x^4 - 33 x^3 + 63 x^2’ factors to ‘(x + 7) x^2 (x - 3)^2’ in a f, or to ‘[ [x, 2], [x+7, 1], [x-3, 2] ]’ in H a f. If there is an overall numeric factor, it always comes first in the list. The factor and factors functions allow a second argument when written in algebraic form; ‘factor(x,v)’ factors ‘x’ with respect to the specific variable ‘v’.
The default is to factor with respect to all the variables that appear in ‘x’.

The a c (calc-collect) [collect] command rearranges a formula as a polynomial in a given variable, ordered in decreasing powers of that variable. For example, given ‘1 + 2 x + 3 y + 4 x y^2’ on the stack, a c x would produce ‘(2 + 4 y^2) x + (1 + 3 y)’, and a c y would produce ‘(4 x) y^2 + 3 y + (1 + 2 x)’. The polynomial will be expanded out using the distributive law as necessary: Collecting ‘x’ in ‘(x - 1)^3’ produces ‘x^3 - 3 x^2 + 3 x - 1’. Terms not involving ‘x’ will not be expanded.

The “variable” you specify at the prompt can actually be any expression: a c ln(x+1) will collect together all terms multiplied by ‘ln(x+1)’ or integer powers thereof. If ‘x’ also appears in the formula in a context other than ‘ln(x+1)’, a c will treat those occurrences as unrelated to ‘ln(x+1)’, i.e., as constants.

The a x (calc-expand) [expand] command expands an expression by applying the distributive law everywhere. It applies to products, quotients, and powers involving sums. By default, it fully distributes all parts of the expression. With a numeric prefix argument, the distributive law is applied only the specified number of times, then the partially expanded expression is left on the stack.

The a x and j D commands are somewhat redundant. Use a x if you want to expand all products of sums in your formula. Use j D if you want to expand a particular specified term of the formula. There is an exactly analogous correspondence between a f and j M. (The j D and j M commands also know many other kinds of expansions, such as ‘exp(a + b) = exp(a) exp(b)’, which a x and a f do not do.)

Calc's automatic simplifications will sometimes reverse a partial expansion. For example, the first step in expanding ‘(x+1)^3’ is to write ‘(x+1) (x+1)^2’. If a x stops there and tries to put this formula onto the stack, though, Calc will automatically simplify it back to ‘(x+1)^3’ form. The solution is to turn simplification off first (see Simplification Modes), or to run a x without a numeric prefix argument so that it expands all the way in one step.

The a a (calc-apart) [apart] command expands a rational function by partial fractions. A rational function is the quotient of two polynomials; apart pulls this apart into a sum of rational functions with simple denominators. In algebraic form, the apart function allows a second argument that specifies which variable to use as the “base”; by default, Calc chooses the base variable automatically.

The a n (calc-normalize-rat) [nrat] command attempts to arrange a formula into a quotient of two polynomials. For example, given ‘1 + (a + b/c) / d’, the result would be ‘(b + a c + c d) / c d’. The quotient is reduced, so that a n will simplify ‘(x^2 + 2x + 1) / (x^2 - 1)’ by dividing out the common factor ‘x + 1’, yielding ‘(x + 1) / (x - 1)’.

The a \ (calc-poly-div) [pdiv] command divides two polynomials ‘u’ and ‘v’, yielding a new polynomial ‘q’. If several variables occur in the inputs, the inputs are considered multivariate polynomials. (Calc divides by the variable with the largest power in ‘u’ first, or, in the case of equal powers, chooses the variables in alphabetical order.) For example, dividing ‘x^2 + 3 x + 2’ by ‘x + 2’ yields ‘x + 1’. The remainder from the division, if any, is reported at the bottom of the screen and is also placed in the Trail along with the quotient.
When pdiv is written in algebraic notation, you can give a further argument to specify the particular variable to be used as the base; if pdiv is given only two arguments (as is always the case with the a \ command), it does a multivariate division as outlined above. The a % [prem] command divides two polynomials and keeps the remainder ‘r’. The quotient ‘q’ is discarded. For any formulas ‘a’ and ‘b’, the results of a \ and a % satisfy ‘a = q b + r’. (This is analogous to plain \ and %, which compute the integer quotient and remainder from dividing two numbers.) The a / command divides two polynomials and reports both the quotient and the remainder as a vector ‘[q, r]’. The H a / command divides two polynomials and constructs the formula ‘q + r/b’ on the stack. (Naturally if the remainder is zero, this will immediately simplify to ‘q’.) The a g [pgcd] command computes the greatest common divisor of two polynomials. (The GCD actually is unique only to within a constant multiplier; Calc attempts to choose a GCD which will be unsurprising.) For example, the a n command uses a g to take the GCD of the numerator and denominator of a quotient, then divides each by the result using a \. (The definition of GCD ensures that this division can take place without leaving a remainder.) While the polynomials used in operations like a / and a g often have integer coefficients, this is not required. Calc can also deal with polynomials over the rationals or floating-point reals. Polynomials with modulo-form coefficients are also useful in many applications; if you enter ‘(x^2 + 3 x - 1) mod 5’, Calc automatically transforms this into a polynomial over the field of integers mod 5: ‘(1 mod 5) x^2 + (3 mod 5) x + (4 mod 5)’. Congratulations and thanks go to Ove Ewerlid, who contributed many of the polynomial routines used in the above commands. See Decomposing Polynomials, for several useful functions for extracting the individual coefficients of a polynomial.
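For readers who want to try the same examples outside of Calc, the sketch below reproduces them in Python with SymPy. SymPy is only an outside illustration assumed for this note, not something Calc uses or depends on; its functions factor, collect, expand, apart, cancel, div and gcd roughly parallel the a f, a c, a x, a a, a n, a / and a g commands described above.

# Rough SymPy parallels to the Calc polynomial commands discussed above.
import sympy as sp

x, y, a, b, c, d = sp.symbols('x y a b c d')

# a f (factor): factor into irreducible terms
print(sp.factor(x**3 + 2*x**2 + x))                # x*(x + 1)**2
print(sp.factor(a*c + b*d + b*c + a*d))            # (a + b)*(c + d)

# a c (collect): rearrange as a polynomial in one variable
print(sp.collect(1 + 2*x + 3*y + 4*x*y**2, x))     # x*(4*y**2 + 2) + 3*y + 1

# a x (expand): apply the distributive law everywhere
print(sp.expand((x + 1)**3))                       # x**3 + 3*x**2 + 3*x + 1

# a a (apart): partial fractions; a n (cancel): reduced polynomial quotient
print(sp.apart((x**2 + 2*x + 1) / (x**2 - 1), x))  # 1 + 2/(x - 1)
print(sp.cancel((x**2 + 2*x + 1) / (x**2 - 1)))    # (x + 1)/(x - 1)

# a / (division with remainder), checking that a = q b + r as stated above
u, v = x**2 + 3*x + 2, x + 2
q, r = sp.div(u, v, x)
print(q, r)                                        # x + 1  0
print(sp.expand(q*v + r - u))                      # 0

# a g (pgcd): greatest common divisor of two polynomials
print(sp.gcd(x**2 + 2*x + 1, x**2 - 1))            # x + 1

The printed forms may differ cosmetically from Calc's (SymPy orders terms its own way), but the results are the same expressions quoted in the text.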
http://www.gnu.org/software/emacs/manual/html_node/calc/Polynomials.html
13
123
π is the ratio of the circumference of a circle to the diameter. (Topic 9.) But how shall we compare a curved line to a straight line? The answer is that we cannot do it directly. We can only relate straight lines to straight lines, and so we must approximate a curved line by a series of straight lines. In this case, we approximate the circle by an inscribed polygon or a circumscribed polygon. The perimeter of the inscribed polygon will be less than the circumference of the circle, while the perimeter of the circumscribed polygon will be greater. We can then form the ratio of each perimeter to the diameter. That will produce a lesser and a greater approximation to π. Clearly, the more sides we take, the better the value. An inscribed polygon. Each side of an inscribed polygon is a chord of the circle. The perimeter of the polygon -- the approximation to the circumference -- will be the sum of all the chords. From the following theorem we are able to evaluate π: the ratio of a chord of a circle to the diameter is given by the sine of half the central angle that the chord subtends. We say that a chord subtends -- literally, stretches under -- a central angle. Thus if AB is a chord of a circle, and CD the diameter, then AB/CD = sin (half the central angle subtended by AB). Before proving this theorem, let us give some examples. Example 1. A chord subtends a central angle of 100°. What ratio has the chord to the diameter? Answer. According to the theorem, the ratio is sin 50°, which is approximately .766. This means that the chord is 766 thousandths -- or a bit more than three fourths -- of the diameter. Problem 1. What ratio to the diameter has a chord that subtends a central angle of 60°? Answer. The ratio is sin 30° = ½. This chord is half of the diameter; that is, this chord is equal to the radius! Example 2. A regular polygon of 8 sides is inscribed in a circle. What central angle does each side subtend? What ratio has each side to the diameter? What ratio has the entire perimeter to the diameter? Answer. Since the polygon has 8 sides, then each central angle is an eighth of the entire circle, that is, an eighth of 360°. 360° ÷ 8 = 45°. Each side is therefore sin 22.5° ≈ .383 of the diameter. As for the entire perimeter, it is made up of 8 such chords. Therefore, the ratio of the perimeter to the diameter will be 8 × .383 = 3.064. This is an approximation to π, for π is the ratio of the circumference itself to the diameter. The approximation is not a very good one, because we have approximated the circumference with a polygon of only 8 sides. Problem 2. Let a regular polygon of 20 sides be inscribed in a circle. a) Each side subtends what central angle? 360° ÷ 20 = 18°. b) What ratio has each side to the diameter? sin 9° ≈ .156 (Table). The entire perimeter is made up of 20 such chords. Therefore, the ratio of the perimeter to the diameter will be 20 × .156 = 3.12. We can generalize what we have done as follows. Let us inscribe in a circle a polygon of n sides. Then each side will subtend a central angle θ = 360°/n, and the ratio of each chord to the diameter will be sin (θ/2) = sin (180°/n). Finally, since the sum of those n chords approximates the circumference, the ratio of those n chords to the diameter is an approximation to π: π ≈ n · sin (180°/n). We shall use this below to prove that the area A of a circle is A = πr². Here is the proof of the ratio of a chord to the diameter. Theorem. The ratio of a chord of a circle to the diameter is given by the sine of half the central angle that the chord subtends. Let E be the center of a circle with chord AB, diameter CD, and central angle AEB, which we will call θ; then AB/CD = sin (θ/2). Draw EF so that it bisects angle θ. Then EF is also the perpendicular bisector of AB, because EA and EB are radii, and so triangle AEB is isosceles (Theorem 2). In the right triangle AEF, then, AF = EA · sin (θ/2) = r · sin (θ/2), where r is the radius; hence AB = 2·AF = 2r · sin (θ/2). And since the diameter CD = 2r, AB/CD = sin (θ/2). This is what we set out to prove. In the previous Lesson, we saw how to find that area by the method of rearranging.
Here, we will prove the formula by means of inscribed polygons. Theorem. The area A of a circle is A = πr² = ¼πD², where r is the radius of the circle, and D is the diameter. Proof. Let a regular polygon of n sides be inscribed in a circle of radius r, and let s be the length of each side. Let us divide the polygon into n isosceles triangles, and let us denote the area of each triangle by AT. Then the area of those n triangles will approximate the area of the circle. Now the area of each triangle is half the base s times the height h: AT = ½ s h. . . (1) We will now express both s and h in terms of the radius r, and then substitute those expressions in line (1). Each central angle is 360°/n. Since the side of each isosceles triangle is the radius r, then h/r is the cosine of half that central angle: h = r cos (180°/n). . . (2) Also, in each isosceles triangle, half the base divided by r is the sine of half the central angle, so that s = 2r sin (180°/n). . . (3) Upon substituting lines (3) and (2) into line (1), we have AT = ½ · 2r sin (180°/n) · r cos (180°/n) = r² sin (180°/n) cos (180°/n). That is the area of one of the triangles. The area A of the entire circle is approximated by all n triangles: A ≈ n · AT = r² cos (180°/n) · n sin (180°/n). But we have seen that n sin (180°/n) is approximately π. Therefore A ≈ πr² cos (180°/n). Suppose now that the number of sides n is an enormously large number -- more than the number of stars in a million galaxies. Then 180°/n will be indistinguishable from 0°. We will have A = πr² cos 0°. But cos 0° = 1. Therefore, A = πr². This is what we wanted to prove. When n is an extremely large number, then, in the language of calculus, "the limit as n becomes infinite of πr² cos (180°/n) is πr²."
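The numbers worked out above are easy to check by machine. The short Python sketch below evaluates the perimeter ratio n·sin(180°/n) and the area factor cos(180°/n) for a few values of n; it is only a numerical illustration of the limits described in the proof.

# Inscribed-polygon approximations to pi, as derived above:
# perimeter/diameter = n*sin(180°/n), and area ≈ pi*r^2*cos(180°/n).
import math

for n in (8, 20, 1000):
    half_angle = math.pi / n                    # half the central angle, in radians
    perimeter_ratio = n * math.sin(half_angle)  # approximation to pi
    area_factor = math.cos(half_angle)          # A ≈ pi*r^2 times this factor
    print(n, round(perimeter_ratio, 4), round(area_factor, 6))

# n = 8    -> 3.0615  (the 3.064 above comes from rounding sin 22.5° to .383)
# n = 20   -> 3.1287  (compare 3.12 from the rounded table value .156)
# n = 1000 -> 3.1416, with cos(180°/n) ≈ 0.999995, so A is effectively pi*r^2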
http://www.themathpage.com/atrig/pi.htm
13
51
What is a spiral? A spiral is a curve in the plane or in space which runs around a centre in a special way. Different spirals follow; most of them are produced by formulas. Spirals by Polar Equations. Archimedean Spiral. You can make a spiral by two motions of a point: there is a uniform motion in a fixed direction and a motion in a circle with constant speed. Both motions start at the same point. (1) The uniform motion on the left moves a point to the right. - There are nine snapshots. (2) The motion with a constant angular velocity moves the point on a spiral at the same time. - There is a point at every eighth of a turn. (3) The spiral appears as a curve if you plot the point at every step. You get formulas analogous to those of the circle. You give a point by a pair (radius OP, angle t) in the (simple) polar equation. The radius is the distance of the point from the origin (0|0). The angle lies between the radius and the positive x-axis, with its vertex at the origin. Let P be a point of a circle with radius R, given by an equation with the centre at the origin. There are three essential descriptions of the circle: (1) Central equation: x² + y² = R² [or y = sqr(R² - x²) and y = -sqr(R² - x²)], (2) Parameter form: x(t) = R cos(t), y(t) = R sin(t), (3) Polar equation: r(t) = R. The radius r(t) and the angle t are proportional for the simplest spiral, the spiral of Archimedes. Therefore the equation is: (3) Polar equation: r(t) = at [a is constant]. From this follows (2) Parameter form: x(t) = at cos(t), y(t) = at sin(t), (1) Central equation: x² + y² = a²[arctan(y/x)]². The Archimedean spiral starts in the origin and winds outward. The distances between the spiral branches are the same. More exactly: the distances of the intersection points along a line through the origin are the same. If you reflect an Archimedean spiral on a straight line, you get a new spiral with the opposite direction. Both spirals go outwards; if you look at them, the left one forms a curve going to the left, the right one a curve going to the right. If you connect both spirals by a straight (red) or a bowed curve, a double spiral develops. Equiangular Spiral (Logarithmic Spiral, Bernoulli's Spiral). (1) Polar equation: r(t) = exp(t). (2) Parameter form: x(t) = exp(t) cos(t), y(t) = exp(t) sin(t). (3) Central equation: y = x tan[ln(sqr(x²+y²))]. The logarithmic spiral also goes outwards. The spiral has a characteristic feature: each line starting in the origin (red) cuts the spiral at the same angle. More Spirals. If you replace the term r(t) = at of the Archimedean spiral by other terms, you get a number of new spirals. There are six spirals, which you can describe with the functions f(x) = x^a [a = 2, 1/2, -1/2, -1] and f(x) = exp(x), f(x) = ln(x). You distinguish two groups depending on how the parameter t grows from 0. If the absolute value of the function r(t) is increasing, the spirals run from inside to outside and go above all limits. Spiral 1 is called the parabolic spiral or Fermat's spiral. I chose equations for the different spiral formulas suitable for plotting. If the absolute value of the function r(t) is decreasing, the spirals run from outside to inside. They generally run towards the centre, but they don't reach it; there is a pole. Spiral 2 is called the lituus (crooked staff). Clothoid (Cornu Spiral). The clothoid or double spiral is a curve whose curvature grows with the distance from the origin. The radius of curvature is inversely proportional to the arc length measured from the origin.
The parameter form consists of two equations with Fresnel's integrals, which can only be solved approximately. You use the Cornu spiral to describe the energy distribution of Fresnel's diffraction at a single slit in the wave theory. Spirals Made of Arcs. Half circle spirals: you can add half circles, growing step by step, to get spirals. The radii have the ratios 1 : 1.5 : 2 : 2.5 : ... The Fibonacci spiral is called after its numbers. If you take the lengths of the square sides in order, you get the sequence 1, 1, 2, 3, 5, 8, 13, 21, ... These are the Fibonacci numbers, which you can find by the recursive formula a(n) = a(n-1) + a(n-2) with [a(1) = 1, a(2) = 1, n > 2]. Draw two small squares on top of each other. Add a sequence of growing squares counter-clockwise. Draw quarter circles inside the squares (black). They form the Fibonacci spiral. Spirals Made of Line Segments. The first spiral is made by line segments with the lengths 1, 1, 2, 2, 3, 3, 4, 4, ... The lines meet one another at right angles. Next, draw a spiral in a crossing of four intersecting straight lines, which form 45° angles. Start with the horizontal line 1 and bend the next line perpendicularly to the straight line. The line segments form a geometric sequence with the common ratio sqr(2). If you draw a spiral into a straight line bundle, you approach the logarithmic spiral if the angles become smaller and smaller. The next spiral is formed by a chain of right-angled triangles, which have a common side. The hypotenuse of one triangle becomes the leg of the next. The first link is a 1-1-sqr(2) triangle. The free legs form the spiral. What is special is that the triangles touch in line segments whose lengths are the square roots of the natural numbers. You can prove this with the Pythagorean theorem. This figure is called the root spiral or root snail or wheel of Theodorus. This picture reminds me of the programming language LOGO of the early days of computing (C64 nostalgia). Squares are turned around their centre by 10° and compressed at the same time, so that their corners stay on the sides of their preceding squares. Result: the corners form four spiral arms. The spiral is similar to the logarithmic spiral if the angles get smaller and smaller. You can also turn other regular polygons, e.g. an equilateral triangle; you get similar figures. If you draw a circle with x = cos(t) and y = sin(t) and pull it evenly in the z-direction, you get a spatial spiral called a cylindrical spiral or helix. The picture pair makes a 3D view possible. Reflect the 3D spiral on a vertical plane. You get a new spiral (red) with the opposite direction. If you hold your right hand around the right spiral and your thumb points in the direction of the spiral axis, the spiral runs clockwise upward. It is right circular. You must use your left hand for the left spiral. It is left circular; the rotation is counter-clockwise. Example: nearly all screws have a clockwise rotation, because most people are right-handed. In the "technical" literature the right circular spiral is explained as follows: you wind a right-angled triangle around a cylinder. A clockwise rotating spiral develops if the triangle increases to the right. You can make the conical helix with the Archimedean spiral or the equiangular spiral. The picture pairs make 3D views possible. Generally there is a loxodrome on every solid made by rotation about an axis. The loxodrome is a curve on the sphere which cuts the meridians at a constant angle. They appear on the Mercator projection as straight lines.
The parametric representation is x = cos(t) cos[arctan(at)], y = sin(t) cos[arctan(at)], z = -sin[arctan(at)] (a is constant). You can check that x² + y² + z² = 1. This equation means that the loxodrome lies on the sphere. Making of Spirals. You use this effect to decorate the ends of synthetic materials, such as the narrow colourful strips or ribbons used in gift-wrapping. A strip of paper becomes a spiral if you pull the strip between your thumb and the edge of a knife, pressing hard. The spiral becomes a curl where gravity is present. I suppose that you have to explain this effect in the same way as a bimetallic bar. You create a bimetallic bar by glueing together two strips, each made of a different metal. Once this bimetallic bar is heated, one metal strip expands more than the other, causing the bar to bend. The reason that the strip of paper bends is not so much to do with a difference in temperature between the top and bottom side: the knife changes the structure of the surface of the paper, and this side becomes 'shorter'. Incidentally, a strip of paper will bend slightly if you hold it in the heat of a candle flame. Forming curls reminds me of an old children's game: take a dandelion flower and cut the stem into two or four strips, keeping the head intact. If you place the flower into some water, so that the head floats on the surface, the strips of the stem will curl up. (Mind the spots.) A possible explanation: perhaps the different absorption of water on each side of the strips causes them to curl up. Mandelbrot Set Spirals. The coordinates belong to the centres of the pictures. You also find nice spirals as Julia sets; here is an example. You find more about these graphics on my page about the Mandelbrot set. Spirals Made of Metal. You find nice spirals as a decoration of barred windows, fences, gates or doors. You can see them everywhere, if you look around. I found spirals worth showing at New Ulm, Minnesota, USA. Americans with German ancestry built a copy of the Herman monument near Detmold, Germany in about 1900. Iron railings with many spirals decorate the stairs (photo). More about the American and German Herman on the Wikipedia pages (URL below). Costume jewellery also uses spirals as ornaments. Spirals, Spirals, Spirals: Ammonites, antlers of wild sheep, Archimedes' water spiral, area of high or low pressure, arrangement of the sunflower cores, @, bimetal thermometer, bishop staff, Brittany sign, circles of a sea-eagle, climbs, clockwise rotating lactic acid, clouds of smoke, coil, coil spring, corkscrew, creepers (plants), curl, depression in meteorology, disc of Festós, double filament of the bulb, double helix of the DNA, double spiral, electron rays in the magnetic longitudinal field, electrons in cyclotron, Exner spiral, finger mark, fir cone, glider ascending, groove of a record, head of the music instrument violin, heating wire inside a hotplate, heat spiral, herb spiral, inflation spiral, intestine of a tadpole, knowledge spiral, licorice snail, life spiral, Lorenz attractor, minaret at Samarra (Iraq), music instrument horn, pendulum body of the Galilei pendulum, relief strip of the Trajan's column at Rome or the Bernward column at Hildesheim, poppy snail, road of a cone mountain, roll (wire, thread, cable, hose, tape measure, paper, bandage), screw threads, simple pendulum with friction, snake in resting position, snake of Aesculapius, snail of the inner ear, scrolls, screw alga, snail-shell, spider net, spiral exercise book, spiral nebula, spiral staircase (e.g.
the two spiral stairs in the glass dome of the Reichstag in Berlin), Spirallala ;-), Spirelli noodles, Spirills (e.g. Cholera bacillus), springs of a mattress, suction trunk (lower jaw) of the cabbage white butterfly, tail of the sea-horse, taps of conifers, tongue and tail of the chamaeleon, traces on CD or DVD, treble clef, tusks of giants, viruses, volute, watch spring and balance spring of mechanical clocks, whirlpool, whirlwind.
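Returning to the polar equations at the top of this page, the following Python sketch turns them into (x, y) points that any plotting tool can draw. The constants a and b are free choices made here only so the curves fit nicely on a plot; they are not prescribed by the page.

# Points of the Archimedean spiral r(t) = a*t and a logarithmic spiral
# r(t) = exp(b*t), using x(t) = r(t)*cos(t), y(t) = r(t)*sin(t).
import math

def archimedean_point(t, a=1.0):
    r = a * t
    return r * math.cos(t), r * math.sin(t)

def logarithmic_point(t, b=0.2):   # b chosen small so the curve grows gently
    r = math.exp(b * t)
    return r * math.cos(t), r * math.sin(t)

# sample a little over six turns of each spiral
ts = [k * 0.05 for k in range(0, 800)]
arch = [archimedean_point(t) for t in ts]
log_ = [logarithmic_point(t) for t in ts]
print(arch[100], log_[100])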
http://www.mathematische-basteleien.de/spiral.htm
13
116
The Euler angles are three angles introduced by Leonhard Euler to describe the orientation of a rigid body. To describe such an orientation in 3-dimensional Euclidean space three parameters are required. They can be given in several ways, Euler angles being one of them; see charts on SO(3) for others. Euler angles are also used to represent the orientation of a frame of reference (typically, a coordinate system or basis) relative to another. They are typically denoted as α, β, γ, or . Euler angles also represent a sequence of three elemental rotations, i.e. rotations about the axes of a coordinate system. For instance, a first rotation about z by an angle α, a second rotation about x by an angle β, and a last rotation again about z, by an angle γ. These rotations start from a known standard orientation. In physics, this standard initial orientation is typically represented by a motionless (fixed, global, or world) coordinate system. In linear algebra, by a standard basis. Any orientation can be achieved by composing three elemental rotations. The elemental rotations can either occur about the axes of the fixed coordinate system (extrinsic rotations) or about the axes of a rotating coordinate system, which is initially aligned with the fixed one, and modifies its orientation after each elemental rotation (intrinsic rotations). The rotating coordinate system may be imagined to be rigidly attached to a rigid body. In this case, it is sometimes called a local coordinate system. Without considering the possibility of using two different conventions for the definition of the rotation axes (intrinsic or extrinsic), there exist twelve possible sequences of rotation axes, divided in two groups: - Euler angles (z-x-z, x-y-x, y-z-y, z-y-z, x-z-x, y-x-y) - Tait–Bryan angles (x-y-z, y-z-x, z-x-y, x-z-y, z-y-x, y-x-z). Tait-Bryan angles are also called Cardan angles, nautical angles, heading, elevation, and bank, or yaw, pitch, and roll. Sometimes, both kinds of sequences are called "Euler angles". In that case, the sequences of the first group are called proper or classic Euler angles. Proper Euler angles Euler angles are a means of representing the spatial orientation of any reference frame (coordinate system or basis) as a composition of three elemental rotations starting from a known standard orientation, represented by another frame (sometimes referred to as the original or fixed reference frame, or standard basis). The reference orientation can be imagined to be an initial orientation from which the frame virtually rotates to reach its actual orientation. In the following, the axes of the original frame are denoted as x,y,z and the axes of the rotated frame are denoted as X,Y,Z. In geometry and physics, the rotated coordinate system is often imagined to be rigidly attached to a rigid body. In this case, it is called a "local" coordinate system, and it is meant to represent both the position and the orientation of the body. The geometrical definition (referred sometimes as static) of the Euler angles is based on the axes of the above-mentioned (original and rotated) reference frames and an additional axis called the line of nodes. The line of nodes (N) is defined as the intersection of the xy and the XY coordinate planes. In other words, it is a line passing through the origin of both frames, and perpendicular to the zZ plane, on which both z and Z lie. The three Euler angles are defined as follows: - α (or ) is the angle between the x-axis and the N-axis. 
- β (or ) is the angle between the z-axis and the Z-axis. - γ (or ) is the angle between the N-axis and the X-axis. This definition implies that: - α represents a rotation around the z-axis, - β represents a rotation around the N-axis, - γ represents a rotation around the Z-axis. If β is zero, there is no rotation about N. As a consequence, Z coincides with z, α and γ represent rotations about the same axis (z), and the final orientation can be obtained with a single rotation about z, by an angle equal to α+γ. The rotated frame XYZ may be imagined to be initially aligned with xyz, before undergoing to the three elemental rotations represented by Euler angles. Its successive orientations may be denoted as follows: - x-y-z, or x0-y0-z0 (initial) - x’-y’-z’, or x1-y1-z1 (after first rotation) - x″-y″-z″, or x2-y2-z2 (after second rotation) - X-Y-Z, or x3-y3-z3 (final) For the above-listed sequence of rotations, the line of nodes N can be simply defined as the orientation of X after the first elemental rotation. Hence, N can be simply denoted x’. Moreover, since the third elemental rotation occurs about Z, it does not change the orientation of Z. Hence Z coincides with z″. This allows us to simplify the definition of the Euler angles as follows: - α (or ) represents a rotation around the z-axis, - β (or ) represents a rotation around the x’-axis, - γ (or ) represents a rotation around the z″-axis. Different authors may use different sets of rotation axes to define Euler angles, or different names for the same angles. Therefore any discussion employing Euler angles should always be preceded by their definition. Unless otherwise stated, this article will use the convention described above. The three elemental rotations may occur either about the axes xyz of the original coordinate system, which is assumed to remain motionless (extrinsic rotations), or about the axes of the rotating coordinate system XYZ, which changes its orientation after each elemental rotation (intrinsic rotations). The definition above uses intrinsic rotations. There are six possibilities of choosing the rotation axes for proper Euler angles. In all of them, the first and third rotation axis are the same. The six possible sequences are: - z-x’-z″ (intrinsic rotations) or z-x-z (extrinsic rotations) - x-y’-x″ (intrinsic rotations) or x-y-x (extrinsic rotations) - y-z’-y″ (intrinsic rotations) or y-z-y (extrinsic rotations) - z-y’-z″ (intrinsic rotations) or z-y-z (extrinsic rotations) - x-z’-x″ (intrinsic rotations) or x-z-x (extrinsic rotations) - y-x’-y″ (intrinsic rotations) or y-x-y (extrinsic rotations) Euler angles between two reference frames are defined only if both frames have the same handedness. Signs and ranges Angles are commonly defined according to the right hand rule. Namely, they have positive values when they represent a rotation that appears counter-clockwise when observed from a point laying on the positive part of the rotation axis, and negative values when the rotation appears clockwise. The opposite convention (left hand rule) is less frequently adopted. About the ranges: - for α and γ, the range is defined modulo 2π radians. A valid range could be [−π, π]. - for β, the range covers π radians (but can't be said to be modulo π). For example could be [0, π] or [−π/2, π/2]. The angles α, β and γ are uniquely determined except for the singular case that the xy and the XY planes are identical, the z axis and the Z axis having the same or opposite directions. 
Indeed, if the z-axis and the Z-axis are the same, β = 0 and only (α + γ) is uniquely defined (not the individual values), and, similarly, if the z-axis and the Z-axis are opposite, β = π and only (α − γ) is uniquely defined (not the individual values). These ambiguities are known as gimbal lock in applications. The fastest way to get the Euler angles of a given frame is to write the three given vectors as columns of a matrix and compare it with the expression of the theoretical matrix (see the later table of matrices). Hence the three Euler angles can be calculated. Nevertheless, the same result can be reached avoiding matrix algebra, in a way which is more geometrical. Assuming a frame with unit vectors (X, Y, Z) as in the main diagram, the cosine of β can be read off directly as the projection of Z onto the z-axis (the double projection of a unit vector). A similar construction works for the other two angles: the relevant axis is projected first onto the plane defined by the z-axis and the line of nodes, and the angle between the planes involved brings in a factor of sin β, which leads to expressions for cos α and cos γ in terms of the components of the frame divided by sin β; finally, the angles themselves are obtained with the inverse cosine function. It is interesting to note that the inverse cosine function yields two possible values for the angle. In this geometrical description only one of the solutions is valid. When Euler angles are defined as a sequence of rotations all the solutions can be valid, but there will be only one inside the angle ranges. This is because the sequence of rotations to reach the target frame is not unique if the ranges are not previously defined. For computational purposes, it may be useful to represent the angles using atan2(y,x), which removes the sign ambiguity; a numerical sketch of this approach follows the Tait–Bryan discussion below. The definitions and notations used for Tait–Bryan angles are similar to those described above for proper Euler angles (classic definition, alternative definition). The only difference is that Tait–Bryan angles represent rotations about three distinct axes (e.g. x-y-z, or x-y’-z″), while proper Euler angles use the same axis for both the first and third elemental rotations (e.g., z-x-z, or z-x’-z″). This implies a different definition for the line of nodes. In the first case it was defined as the intersection between two homologous Cartesian planes (parallel when Euler angles are zero; e.g. xy and XY). In the second one, it is defined as the intersection of two non-homologous planes (perpendicular when Euler angles are zero; e.g. xy and YZ). The three elemental rotations may occur either about the axes of the original coordinate system, which remains motionless (extrinsic rotations), or about the axes of the rotating coordinate system, which changes its orientation after each elemental rotation (intrinsic rotations). There are six possibilities of choosing the rotation axes for Tait–Bryan angles. The six possible sequences are: - x-y’-z″ (intrinsic rotations) or x-y-z (extrinsic rotations) - y-z’-x″ (intrinsic rotations) or y-z-x (extrinsic rotations) - z-x’-y″ (intrinsic rotations) or z-x-y (extrinsic rotations) - x-z’-y″ (intrinsic rotations) or x-z-y (extrinsic rotations) - z-y’-x″ (intrinsic rotations) or z-y-x (extrinsic rotations) - y-x’-z″ (intrinsic rotations) or y-x-z (extrinsic rotations) Tait–Bryan angles are also known as nautical angles, because they can be used to describe the orientation of a ship or aircraft, or Cardan angles, after the Italian mathematician and physician Gerolamo Cardano (French: Jérôme Cardan; 24 September 1501 – 21 September 1576) who first described in detail the Cardan suspension and the Cardan joint. They are also called heading, elevation and bank, or yaw, pitch and roll.
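The extraction just described can be checked numerically. The sketch below assumes the article's default conventions (intrinsic z-x′-z″ rotations, active, pre-multiplying column vectors) and uses the arccos/atan2 recovery that those conventions imply; it is an illustration written for this note, not code taken from the article.

import numpy as np

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def Rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def euler_zxz_to_matrix(alpha, beta, gamma):
    # intrinsic z-x'-z'' composition, pre-multiplying column vectors
    return Rz(alpha) @ Rx(beta) @ Rz(gamma)

def matrix_to_euler_zxz(R):
    # cos(beta) is the double projection of Z onto z, i.e. the (3,3) entry;
    # alpha and gamma come from the third column and third row via atan2
    beta = np.arccos(np.clip(R[2, 2], -1.0, 1.0))
    alpha = np.arctan2(R[0, 2], -R[1, 2])
    gamma = np.arctan2(R[2, 0], R[2, 1])
    return alpha, beta, gamma

angles = (0.3, 1.1, -0.7)
R = euler_zxz_to_matrix(*angles)
print(np.allclose(matrix_to_euler_zxz(R), angles))  # True (away from beta = 0 or pi)

At β = 0 or β = π the atan2 arguments vanish and only α + γ (or α − γ) is determined, which is exactly the gimbal-lock ambiguity noted above.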
Notice that the second set of terms above (yaw, pitch and roll) is also used for the three aircraft principal axes. Relationship with physical motions (see also Givens rotations). Intrinsic rotations are elemental rotations that occur about the axes of the rotating coordinate system XYZ, which changes its orientation after each elemental rotation. The XYZ system rotates, while xyz is fixed. Starting with XYZ overlapping xyz, a composition of three intrinsic rotations can be used to reach any target orientation for XYZ. The Euler or Tait–Bryan angles (α, β, γ) are the amplitudes of these elemental rotations. For instance, the target orientation can be reached as follows: - The XYZ-system rotates by α about the Z-axis (which coincides with the z-axis). The X-axis now lies on the line of nodes. - The XYZ-system rotates about the now rotated X-axis by β. The Z-axis is now in its final orientation, and the X-axis remains on the line of nodes. - The XYZ-system rotates a third time about the new Z-axis by γ. The above-mentioned notation allows us to summarize this as follows: the three elemental rotations of the XYZ-system occur about z, x’ and z″. Indeed, this sequence is often denoted z-x’-z″. Sets of rotation axes associated with both proper Euler angles and Tait–Bryan angles are commonly named using this notation (see above for details). Sometimes, the same sequence is simply called z-x-z, Z-X-Z, or 3-1-3, but this notation may be ambiguous as it may be identical to that used for extrinsic rotations. In this case, it becomes necessary to separately specify whether the rotations are intrinsic or extrinsic. Rotation matrices can be used to represent a sequence of intrinsic rotations. For instance, the product of the corresponding elemental rotation matrices represents a composition of intrinsic rotations about axes x-y’-z″ if used to pre-multiply column vectors, while its transpose represents exactly the same composition when used to post-multiply row vectors. See Ambiguities in the definition of rotation matrices for more details. Extrinsic rotations are elemental rotations that occur about the axes of the fixed coordinate system xyz. The XYZ system rotates, while xyz is fixed. Starting with XYZ overlapping xyz, a composition of three extrinsic rotations can be used to reach any target orientation for XYZ. The Euler or Tait–Bryan angles (α, β, γ) are the amplitudes of these elemental rotations. For instance, the target orientation can be reached as follows: - The XYZ-system rotates about the z-axis by α. The X-axis is now at angle α with respect to the x-axis. - The XYZ-system rotates again about the x-axis by β. The Z-axis is now at angle β with respect to the z-axis. - The XYZ-system rotates a third time about the z-axis by γ. In sum, the three elemental rotations occur about z, x and z. Indeed, this sequence is often denoted z-x-z (or 3-1-3). Sets of rotation axes associated with both proper Euler angles and Tait–Bryan angles are commonly named using this notation (see above for details). Rotation matrices can be used to represent a sequence of extrinsic rotations. For instance, the product of the corresponding elemental rotation matrices represents a composition of extrinsic rotations about axes x-y-z if used to pre-multiply column vectors, while its transpose represents exactly the same composition when used to post-multiply row vectors. See Ambiguities in the definition of rotation matrices for more details. Conversion between intrinsic and extrinsic rotations. Any extrinsic rotation is equivalent to an intrinsic rotation by the same angles but with inverted order of elemental rotations, and vice-versa. For instance, the intrinsic rotations x-y’-z″ by angles α, β, γ are equivalent to the extrinsic rotations z-y-x by angles γ, β, α. Both are represented by the same matrix if it is used to pre-multiply column vectors, and by the transpose of that matrix if it is used to post-multiply row vectors. See Ambiguities in the definition of rotation matrices for more details. Euler rotations are defined as the movement obtained by changing one of the Euler angles while leaving the other two constant. Euler rotations are never expressed in terms of the external frame, or in terms of the co-moving rotated body frame, but in a mixture. They constitute a mixed axes of rotation system, where the first angle moves the line of nodes around the external axis z, the second rotates around the line of nodes and the third one is an intrinsic rotation around an axis fixed in the body that moves. These rotations are called precession, nutation, and intrinsic rotation (spin). As an example, consider a top. The top spins around its own axis of symmetry; this corresponds to its intrinsic rotation. It also rotates around its pivotal axis, with its center of mass orbiting the pivotal axis; this rotation is a precession. Finally, the top can wobble up and down; the inclination angle is the nutation angle. While all three are rotations when applied over individual frames, only precession is valid as a rotation operator, and only precession can be expressed in general as a matrix in the basis of the space. If we suppose a set of frames, each able to move with respect to the former according to just one angle, like a gimbal, there will exist an external fixed frame, one final frame and two frames in the middle, which are called "intermediate frames". The two in the middle work as two gimbal rings that allow the last frame to reach any orientation in space. The gimbal rings indicate some intermediate frames. They can be defined statically too. Taking some vectors i, j and k over the axes x, y and z, and vectors I, J, K over X, Y and Z, and a vector N over the line of nodes, some intermediate frames can be defined using the vector cross product, as follows: - origin: [i,j,k] (where k = i × j) - first: [N,k × N,k] - second: [N,K × N,K] - final: [I,J,K] These intermediate frames are equivalent to those of the gimbal. They are such that each differs from the previous one in just a single elemental rotation. This proves that: - Any target frame can be reached from the reference frame just by composing three rotations. - The values of these three rotations are exactly the Euler angles of the target frame. Relationship to other representations. Euler angles are one way to represent orientations. There are others, and it is possible to change to and from other conventions. Any orientation can be achieved by composing three elemental rotations, starting from a known standard orientation. Equivalently, any rotation matrix R can be decomposed as a product of three elemental rotation matrices. For instance, a product of three elemental rotation matrices (one per axis and angle) is a rotation matrix that may be used to represent a composition of intrinsic rotations about axes x-y’-z″. However, both the definition of the elemental rotation matrices X, Y, Z, and their multiplication order depend on the choices taken by the user about the definition of both rotation matrices and Euler angles (see, for instance, Ambiguities in the definition of rotation matrices). Unfortunately, different sets of conventions are adopted by users in different contexts.
The following table was built according to this set of conventions: - Each matrix is meant to operate by pre-multiplying column vectors (see Ambiguities in the definition of rotation matrices). - Each matrix is meant to represent an active rotation (the composing and composed matrices are supposed to act on the coordinates of vectors defined in the initial fixed reference frame and give as a result the coordinates of a rotated vector defined in the same reference frame). - Each matrix is meant to represent the composition of intrinsic rotations (around the axes of the rotating reference frame). - Right-handed reference frames are adopted, and the right-hand rule is used to determine the sign of the angles α, β, γ. For the sake of simplicity, the table uses the following nomenclature: - 1, 2, 3 represent the angles α, β, γ. - X, Y, Z are the matrices representing the elemental rotations about the axes x, y, z of the fixed frame (e.g., X1 represents a rotation about x by an angle α). - s and c represent sine and cosine (e.g., s1 represents the sine of α). - Each matrix is denoted by the formula used to calculate it; for example, the matrix obtained as the product Z1 X2 Z3 is simply named Z1X2Z3. (The tables of matrices for the proper Euler angle sequences and for the Tait–Bryan sequences are not reproduced here.) To change the formulas for the opposite direction of rotation, change the signs of the sine functions. To change the formulas for passive rotations, transpose the matrices (then each matrix transforms the initial coordinates of a vector remaining fixed to the coordinates of the same vector measured in the rotated reference system; same rotation axis, same angles, but now the coordinate system rotates, rather than the vector). Expressing rotations in 3D as unit quaternions instead of matrices has some advantages: - Concatenating rotations is computationally faster and numerically more stable. - Extracting the angle and axis of rotation is simpler. - Interpolation is more straightforward. See for example slerp. Another representation comes from geometric algebra (GA). GA is a higher-level abstraction, in which the quaternions are an even subalgebra. The principal tool in GA is the rotor, which is built from the angle of rotation, the rotation axis (a unit vector) and the pseudoscalar (the trivector of three-dimensional space). The Euler angles form a chart on all of SO(3), the special orthogonal group of rotations in 3D space. The chart is smooth except for a polar-coordinate-style singularity along β = 0. See charts on SO(3) for a more complete treatment. The space of rotations is called in general "the hypersphere of rotations", though this is a misnomer: the group Spin(3) is isometric to the hypersphere S3, but the rotation space SO(3) is instead isometric to the real projective space RP3, which is a 2-fold quotient space of the hypersphere. This 2-to-1 ambiguity is the mathematical origin of spin in physics. The Haar measure for Euler angles has the simple form sin(β) dα dβ dγ, usually normalized by a factor of 1/8π². For example, to generate uniformly randomized orientations, let α and γ be uniform from 0 to 2π, let z be uniform from −1 to 1, and let β = arccos(z). It is possible to define parameters analogous to the Euler angles in dimensions higher than three. The number of degrees of freedom of a rotation matrix is always less than the dimension of the matrix squared. That is, the elements of a rotation matrix are not all completely independent. For example, the rotation matrix in dimension 2 has only one degree of freedom, since all four of its elements depend on a single angle of rotation. A rotation matrix in dimension 3 (which has nine elements) has three degrees of freedom, corresponding to each independent rotation, for example by its three Euler angles or a magnitude-one (unit) quaternion. In SO(4) the rotation matrix is defined by two quaternions, and is therefore 6-parametric (three degrees of freedom for every quaternion). The 4x4 rotation matrices therefore have 6 out of 16 independent components. Any set of 6 parameters that defines the rotation matrix could be considered an extension of Euler angles to dimension 4. In general, the number of Euler angles in dimension D is quadratic in D; since any one rotation consists of choosing two dimensions to rotate between, the total number of independent rotation planes available in dimension D is D(D − 1)/2, which for D = 4 yields 6. Vehicles and moving frames. The main advantage of Euler angles over other orientation descriptions is that they are directly measurable from a gimbal mounted in a vehicle. As gyroscopes keep their rotation axis constant, angles measured in a gyro frame are equivalent to angles measured in the lab frame. Therefore gyros are used to know the actual orientation of moving spacecraft, and Euler angles are directly measurable. The intrinsic rotation angle cannot be read from a single gimbal, so there has to be more than one gimbal in a spacecraft. Normally there are at least three for redundancy. There is also a relation to the well-known gimbal lock problem of mechanical engineering. The most popular application is to describe aircraft attitudes, normally using a Tait–Bryan convention so that zero degrees elevation represents the horizontal attitude. Tait–Bryan angles represent the orientation of the aircraft with respect to a reference axis system (world frame) with three angles which in the context of an aircraft are normally called Heading, Elevation and Bank. When dealing with vehicles, different axes conventions are possible. When studying rigid bodies in general, one calls the xyz system space coordinates, and the XYZ system body coordinates. The space coordinates are treated as unmoving, while the body coordinates are considered embedded in the moving body. Calculations involving acceleration, angular acceleration, angular velocity, angular momentum, and kinetic energy are often easiest in body coordinates, because then the moment of inertia tensor does not change in time. If one also diagonalizes the rigid body's moment of inertia tensor (with nine components, six of which are independent), then one has a set of coordinates (called the principal axes) in which the moment of inertia tensor has only three components. The angular velocity of a rigid body takes a simple form using Euler angles in the moving frame. Also, Euler's rigid-body equations are simpler because the inertia tensor is constant in that frame. Euler angles, normally in the Tait–Bryan convention, are also used in robotics for speaking about the degrees of freedom of a wrist. They are also used in electronic stability control in a similar way. Gun fire control systems require corrections to gun-order angles (bearing and elevation) to compensate for deck tilt (pitch and roll). In traditional systems, a stabilizing gyroscope with a vertical spin axis corrects for deck tilt, and stabilizes the optical sights and radar antenna. However, gun barrels point in a direction different from the line of sight to the target, to anticipate target movement and fall of the projectile due to gravity, among other factors. Gun mounts roll and pitch with the deck plane, but also require stabilization.
Gun orders include angles computed from the vertical gyro data, and those computations involve Euler angles. Euler angles are also used extensively in the quantum mechanics of angular momentum. In quantum mechanics, explicit descriptions of the representations of SO(3) are very important for calculations, and almost all the work has been done using Euler angles. In the early history of quantum mechanics, when physicists and chemists had a sharply negative reaction towards abstract group theoretic methods (called the Gruppenpest), reliance on Euler angles was also essential for basic theoretical work. In materials science, crystallographic texture (or preferred orientation) can be described using Euler angles. In texture analysis, the Euler angles provide the necessary mathematical depiction of the orientation of individual crystallites within a polycrystalline material, allowing for the quantitative description of the macroscopic material. The most common definition of the angles is due to Bunge and corresponds to the ZXZ convention. It is important to note, however, that the application generally involves axis transformations of tensor quantities, i.e. passive rotations. Thus the matrix that corresponds to the Bunge Euler angles is the transpose of that shown in the table above. Many mobile computing devices contain accelerometers which can determine these devices' Euler angles with respect to the earth's gravitational attraction. These are used in applications such as games, bubble level simulations, and kaleidoscopes. - 3D projection - Axis-angle representation - Conversion between quaternions and Euler angles - Euler's rotation theorem - Quaternions and spatial rotation - Rotation formalisms in three dimensions - Spherical coordinate system - Novi Commentarii academiae scientiarum Petropolitanae 20, 1776, pp. 189–207 (E478) pdf - Mathworld does a good job describing this issue - Gregory G. Slabaugh, Computing Euler angles from a rotation matrix - (Italian) A generalization of Euler Angles to n-dimensional real spaces - The relation between the Euler angles and the Cardan suspension is explained in chap. 11.7 of the following textbook: U. Krey, A. Owen, Basic Theoretical Physics – A Concise Overview, New York, London, Berlin, Heidelberg, Springer (2007) . - Kocks, U.F.; Tomé, C.N.; Wenk, H.-R. (2000), Texture and Anisotropy: Preferred Orientations in Polycrystals and their effect on Materials Properties, Cambridge, ISBN 978-0-521-79420-6 - Bunge, H. (1993), Texture Analysis in Materials Science: Mathematical Methods, CUVILLIER VERLAG, ASIN B0014XV9HU - Biedenharn, L. C.; Louck, J. D. (1981), Angular Momentum in Quantum Physics, Reading, MA: Addison–Wesley, ISBN 978-0-201-13507-7 - Goldstein, Herbert (1980), Classical Mechanics (2nd ed.), Reading, MA: Addison–Wesley, ISBN 978-0-201-02918-5 - Gray, Andrew (1918), A Treatise on Gyrostatics and Rotational Motion, London: Macmillan (published 2007), ISBN 978-1-4212-5592-7 - Rose, M. E. (1957), Elementary Theory of Angular Momentum, New York, NY: John Wiley & Sons (published 1995), ISBN 978-0-486-68480-2 - Symon, Keith (1971), Mechanics, Reading, MA: Addison-Wesley, ISBN 978-0-201-07392-8 - Landau, L.D.; Lifshitz, E. M. (1996), Mechanics (3rd ed.), Oxford: Butterworth-Heinemann, ISBN 978-0-7506-2896-9 |Wikimedia Commons has media related to: Euler angles| - Weisstein, Eric W., "Euler Angles", MathWorld. - Java applet for the simulation of Euler angles available at http://www.parallemic.org/Java/EulerAngles.html. 
- EulerAngles - An iOS app for visualizing in 3D the three rotations associated with Euler angles. - http://sourceforge.net/projects/orilib – A collection of routines for rotation / orientation manipulation, including special tools for crystal orientations. - Online tool to compose rotation matrices available at http://www.vectoralgebra.info/eulermatrix.html
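The uniform-sampling recipe given in the properties discussion above (α and γ uniform on [0, 2π), β = arccos(z) with z uniform on [−1, 1], matching the Haar measure sin β dα dβ dγ) is easy to verify numerically. The sketch below is an illustration written for this note rather than material from the article.

import numpy as np

rng = np.random.default_rng(0)

def random_euler_zxz(n):
    # alpha, gamma uniform on [0, 2*pi); beta = arccos(z) with z uniform on [-1, 1]
    alpha = rng.uniform(0.0, 2.0 * np.pi, n)
    gamma = rng.uniform(0.0, 2.0 * np.pi, n)
    beta = np.arccos(rng.uniform(-1.0, 1.0, n))
    return alpha, beta, gamma

alpha, beta, gamma = random_euler_zxz(100_000)
# For orientations uniform under the Haar measure, cos(beta) is uniform on
# [-1, 1], so its sample mean should be close to zero.
print(abs(np.cos(beta).mean()) < 0.01)   # True with overwhelming probability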
http://en.wikipedia.org/wiki/Euler_angles
13
87
Confederate States of America
- Motto: Deo Vindice (Latin), "Under God, our Vindicator"
- Map caption: Confederate States of America in 1862
- Languages: English (de facto)
- Legislature, lower house: House of Representatives
- Historical era: American Civil War
- Confederacy formed: February 4, 1861
- Constitution created: March 11, 1861
- Battle of Fort Sumter: April 12, 1861
- Siege of Vicksburg: May 18, 1863
- Military collapse: April 9, 1865
- Confederacy dissolved: May 5, 1865
- Area (1860): 1,995,392 km² (770,425 sq mi)
- Density: 4.6 /km² (11.8 /sq mi)
The Confederate States of America (CSA), also known as the Confederacy, was a government set up on February 8, 1861 by six of the seven southern slave states that had declared their secession from the United States. (On March 2 four of seven Texas delegates to the Provisional Confederate Congress, already in session, arrived in Montgomery, Alabama to add their signatures to the Confederate Constitution, which had been adopted on February 8.) The Confederacy went on to recognize as member states eleven states that had formally declared secession, two additional states with less formal declarations, and one new territory. Secessionists argued that the United States Constitution was a compact that each state could abandon without consultation; the United States (the Union) rejected secession as illegal. The American Civil War began with the 1861 Confederate attack upon Fort Sumter, a Union fort within territory claimed by the CSA. By 1865, after very heavy fighting, largely on Confederate soil, CSA forces were defeated and the Confederacy collapsed. No foreign nation officially recognized the Confederacy as an independent country, but several had granted belligerent status. The Confederate Constitution's seven state signatories (South Carolina, Mississippi, Florida, Alabama, Georgia, Louisiana, and Texas) formed a "permanent federal government" in Montgomery, Alabama, in 1861. Four additional slave-holding states (Virginia, Arkansas, Tennessee, and North Carolina) declared their secession and joined the Confederacy following a call by U.S. President Abraham Lincoln for troops from each state to recapture Sumter and other lost federal properties in the South. Missouri and Kentucky were represented by partisan factions from those states. Also aligned with the Confederacy were two of the "Five Civilized Tribes" and a new Confederate Territory of Arizona. Efforts to secede in Maryland were halted by martial law, while Delaware, though of divided loyalty, did not attempt it. A Unionist government in western parts of Virginia organized the new state of West Virginia, which was admitted to the Union on June 20, 1863. The Confederate government in Richmond, Virginia, had an uneasy relationship with its member states due to issues related to control of manpower, although the South mobilized nearly its entire white male population for war. Confederate control over its claimed territory and population steadily shrank from 73% to 34% during the course of the Civil War due to the Union's successful overland campaigns, its control of the inland waterways into the South, and its blockade of the Southern seacoast. These created an insurmountable disadvantage in men, supply, and finance. Public support of Confederate President Jefferson Davis's administration eroded over time with repeated military reverses, economic hardship, and allegations of autocratic government.
After four years of Union campaigning, Richmond fell in April 1865, and shortly afterward, Confederate General Robert E. Lee surrendered to Ulysses S. Grant-with that the Confederacy effectively collapsed. President Davis was captured on May 10, 1865, at Irwinville, Georgia. Four years later, the U. S. Supreme Court ruled in Texas v. White that secession was illegal and that the Confederacy had never legally existed. The U.S. Congress began a decade-long process known as Reconstruction which some scholars treat as an extension of the Civil War. It lasted throughout the administrations of Lincoln, Andrew Johnson, and Grant and saw the adoption of the Thirteenth Amendment to free the slaves, the Fourteenth to guarantee dual U.S. and state citizenship to all, and the Fifteenth to guarantee the right to vote in states. The war left the South economically devastated by military action, ruined infrastructure, and exhausted resources. The region remained well below national levels of prosperity until after World War II. The Confederacy was established in the Montgomery Convention in February 1861 by state delegations sent from seven of the United States. Following Lincoln's inauguration, four additional border states were represented, and subsequently two states and two territories gained seats in the Confederate Congress in accordance with their Secessionist resolves. The government existed from Spring 1861 to Spring 1865 during a Civil War initiated by Confederate firing on U.S. Fort Sumter. Many southern whites had considered themselves more Southern than American and would fight for their state and their region to be independent of the larger nation. That regionalism became a Southern nationalism, or the "Cause". For the duration of its existence, the Confederacy underwent trial by war. The "Southern Cause" transcended the ideology of "states' rights", tariff policy or internal improvements. It was based on lifestyle, values and belief system. Its "way of life" became sacred to its adherents. Everything of the South became a moral question, commingling love of things Southern and hatred of things Yankee (the North). Not only did national political parties split, but national churches and interstate families as well divided along sectional lines as the war approached. In no states were the whites unanimous. There were minority views everywhere and the upland plateau regions in every state had strongholds of Unionist support, especially western Virginia and eastern Tennessee. South of the Mason–Dixon Line voter support for the three pro-Union candidates in 1860 ranged from 37% in Florida to 71% in Missouri. It was an American tragedy, the Brothers' War according to some scholars, "brother against brother, father against son, kith against kin of every degree". The Confederate States of America was created by secessionists in Southern slave states who refused to remain in a nation that they believed was turning them into second–class citizens. The agent of the change was seen as abolitionists and anti-slavery elements in the Republican Party who they believed used repeated insult and injury to subject them to intolerable "humiliation and degradation". The "Black Republicans" (as the Southerners called them) and their allies now threatened to become a majority in the United States House, Senate and Presidency. On the Supreme Court, Chief Justice Roger B. Taney (a presumed supporter of slavery) was 83 and ailing. 
During the campaign for president in 1860, some secessionists threatened disunion should Lincoln be elected, most notably William L. Yancey. Yancey toured the North calling for secession as Stephen A. Douglas toured the South calling for union in the event of Lincoln's election. To Secessionists the Republican intent was clear: the elimination or restriction of slavery. A Lincoln victory forced them to a momentous choice even before his inauguration, "The Union without slavery, or slavery without the Union." Historian Emory Thomas reconstructed the Confederacy's self–image by studying the correspondence sent by the Confederate government in 1861–62 to foreign governments. He found that Confederate diplomacy projected multiple contradictory self images: The Southern nation was by turns a guileless people attacked by a voracious neighbor, an 'established' nation in some temporary difficulty, a collection of bucolic aristocrats making a romantic stand against the banalities of industrial democracy, a cabal of commercial farmers seeking to make a pawn of King Cotton, an apotheosis of nineteenth-century nationalism and revolutionary liberalism, or the ultimate statement of social and economic reaction." By 1860, sectional disagreements between North and South revolved primarily around the maintenance or expansion of slavery. Historian Drew Gilpin Faust observed that "leaders of the secession movement across the South cited slavery as the most compelling reason for southern independence." Although this may seem strange, given that the majority of white Southerners did not own slaves, most white Southerners supported slavery. It has been supposed that this is because they did not want to be at the bottom of the social ladder. Related and intertwined secondary issues also fueled the dispute; these secondary differences included issues of free speech, runaway slaves, expansion into Cuba and states' rights. The immediate spark for secession came from the victory of the Republican Party and the election of Abraham Lincoln in the 1860 elections. Civil War historian James M. McPherson wrote: To southerners the election's most ominous feature was the magnitude of Republican victory north of the 41st parallel. Lincoln won more than 60 percent of the vote in that region, losing scarcely two dozen counties. Three-quarters of the Republican congressmen and senators in the next Congress would represent this "Yankee" and antislavery portion of the free states. The New Orleans Crescent saw these facts as "full of portentous significance". "The idle canvas prattle about Northern conservatism may now be dismissed," agreed the Richmond Examiner. "A party founded on the single sentiment ... of hatred of African slavery, is now the controlling power." No one could any longer "be deluded ... that the Black Republican party is a moderate" party, pronounced the New Orleans Delta. "It is in fact, essentially, a revolutionary party." In what later became known as the Cornerstone Speech, C.S. Vice President Alexander Stephens declared that the "cornerstone" of the new government "rest[ed] upon the great truth that the negro is not equal to the white man; that slavery-subordination to the superior race-is his natural and normal condition. This, our new government, is the first, in the history of the world, based upon this great physical, philosophical, and moral truth". 
In later years, however, Stephens made efforts to qualify his remarks, claiming they were extemporaneous, metaphorical, and never meant to literally reflect "the principles of the new Government on this subject." Four of the seceding states, the Deep South states of South Carolina, Mississippi, Georgia, and Texas, issued formal declarations of causes, each of which identified the threat to slaveholders' rights as the cause of, or a major cause of, secession. Georgia also claimed a general Federal policy of favoring Northern over Southern economic interests. Texas mentioned slavery 21 times, but also listed the failure of the federal government to live up to its obligations, in the original annexation agreement, to protect settlers along the exposed western frontier. Texas further stated: We hold as undeniable truths that the governments of the various States, and of the confederacy itself, were established exclusively by the white race, for themselves and their posterity; that the African race had no agency in their establishment; that they were rightfully held and regarded as an inferior and dependent race, and in that condition only could their existence in this country be rendered beneficial or tolerable. That in this free government all white men are and of right ought to be entitled to equal civil and political rights [emphasis in the original]; that the servitude of the African race, as existing in these States, is mutually beneficial to both bond and free, and is abundantly authorized and justified by the experience of mankind, and the revealed will of the Almighty Creator, as recognized by all Christian nations; while the destruction of the existing relations between the two races, as advocated by our sectional enemies, would bring inevitable calamities upon both and desolation upon the fifteen slave-holding states. The Fire-Eaters, calling for immediate secession, were opposed by two elements. "Cooperationists" in the Deep South would have delayed secession until several states seceded together, perhaps in a Southern Convention. Under the influence of men such as Texas Governor Sam Houston, delay had the effect of sustaining the Union. "Unionists", especially in the Border South, often former Whigs, appealed to sentimental attachment to the United States. Their favorite was John Bell of Tennessee. Secessionists were active politically. Governor William Henry Gist of South Carolina corresponded secretly with other Deep South governors, and most governors exchanged clandestine commissioners. Charleston's secessionist "1860 Association" published over 200,000 pamphlets to persuade the youth of the South. The top three were South Carolinian John Townsend's "The Doom of Slavery" and "The South Alone Should Govern the South", and James D. B. De Bow's "The Interest of Slavery of the Southern Non-slaveholder". Developments in South Carolina started a chain of events. The foreman of a jury refused to recognize the legitimacy of the federal courts, so Federal Judge Andrew Magrath ruled that U.S. judicial authority in South Carolina was vacated. A mass meeting in Charleston celebrating the Charleston and Savannah railroad and state cooperation led the South Carolina legislature to call for a Secession Convention. U.S. Senator James Chesnut, Jr. resigned, as did Senator James Henry Hammond. Elections for secessionist conventions were heated to "an almost raving pitch, no one dared dissent," says historian Freehling.
Even once-respected voices, including the Chief Justice of South Carolina, John Belton O'Neall, lost election to the Secession Convention on a Cooperationist ticket. Across the South mobs expelled Yankees and (in Texas) killed Germans suspected of loyalty to the United States. Generally, the seceding conventions which followed did not submit their ordinances to a ratifying referendum, although Texas, Arkansas, and Tennessee did, as did Virginia's second convention. Missouri and Kentucky declared neutrality. The first secession state conventions from the Deep South sent representatives to meet at the Montgomery Convention in Montgomery, Alabama, on February 4, 1861. There the fundamental documents of government were promulgated, a provisional government was established, and a representative Congress met for the Confederate States of America. The new 'provisional' Confederate President Jefferson Davis, a former "Cooperationist" who had insisted on delaying secession until a united South could move together, issued a call for 100,000 men from the states' militias to defend the newborn nation. Previously John B. Floyd, U.S. Secretary of War under President James Buchanan, had moved arms south out of northern U.S. armories. To economize War Department expenditures, Floyd and Congressional elements had persuaded Buchanan not to put the armaments for southern forts into place. These were now appropriated to the Confederacy along with bullion and coining dies at the U.S. mints in Charlotte, North Carolina; Dahlonega, Georgia; and New Orleans. The Confederate capital was moved to Richmond, Virginia, in May 1861. On February 22, 1862 (George Washington's birthday), Davis was inaugurated as permanent president with a term of six years, having been elected in November 1861. Five days later, Davis extended the martial law declared earlier in Norfolk and Portsmouth to ten miles beyond Richmond. In his first inaugural address, Abraham Lincoln tried to contain the expansion of the Confederacy. To quiet the rising calls for secession in additional slave-holding states, he assured the Border States that slavery would be preserved in the states where it existed, and he raised no objection to the proposed Thirteenth "Corwin Amendment", then under consideration, which would have explicitly protected slavery in the Constitution. The newly inaugurated Confederate Administration pursued a policy of national territorial integrity, continuing earlier state efforts in 1860 and early 1861 to remove U.S. government presence from within their boundaries. These efforts included taking possession of U.S. courts, custom houses, post offices, and most notably, arsenals and forts. But after the Confederate attack on Fort Sumter, Lincoln called up 75,000 of the states' militia to muster under his command. The stated purpose was to re-occupy U.S. properties throughout the South, as the U.S. Congress had not authorized their abandonment. The resistance at Fort Sumter signaled his change of policy from that of the Buchanan Administration. Lincoln's response ignited a firestorm of emotion. The people both North and South demanded war, and young men rushed to their colors in the hundreds of thousands. Four more states (Virginia, North Carolina, Tennessee, and Arkansas) declared their secession, while Kentucky tried to remain neutral. Secessionists argued that the United States Constitution was a compact among states that could be abandoned at any time without consultation and that each state had a right to secede.
After intense debates and statewide votes, seven Deep South cotton states passed secession ordinances by February 1861 (before Abraham Lincoln took office as president), while secession efforts failed in the other eight slave states. Delegates from those seven formed the C.S.A. in February 1861, selecting Jefferson Davis as the provisional president. Unionist talk of reunion failed and Davis began raising a 100,000-man army. Initially, secessionists hoped for a peaceful departure that would include all the slave-holding states then in the Union. Moderates in the Confederate Constitutional Convention included a provision against importation of slaves from Africa to appeal to the Upper South. Non-slave states might join, but the radicals secured a two-thirds requirement for their admission. Seven states declared their secession from the United States before Lincoln took office on March 4, 1861. After the Confederate attack on Fort Sumter April 12, 1861 and Lincoln's subsequent call for troops on April 15, four more states declared their secession: Virginia, Arkansas, Tennessee, and North Carolina. Kentucky declared neutrality, but after Confederate troops moved in, the state government asked for Union troops to drive them out. The Confederate state government of Kentucky relocated to accompany western Confederate armies and never controlled the state population. In Missouri, on October 31, 1861, a pro-CSA remnant of the General Assembly met and passed an ordinance of secession. The Confederate state government was never able to control much Missouri territory. It had its capital first at Neosho, then at Cassville, before being driven out of the state. For the remainder of the war, it operated as a government in exile at Marshall, Texas. Neither Kentucky nor Missouri was declared in rebellion in the Emancipation Proclamation. The Confederacy recognized the pro-Confederate claimants in both Kentucky and Missouri and laid claim to those states, granting them Congressional representation and adding two stars to the Confederate flag. In Virginia the populous counties along the Ohio and Pennsylvania borders rejected the Confederacy. Unionists held a Convention in Wheeling in June 1861, establishing a "restored government" with a rump legislature, but sentiment in the region remained deeply divided. In the 50 counties that would make up the state of West Virginia, voters from 24 counties had voted for disunion in Virginia's May 23 referendum on the ordinance of secession. In the 1860 Presidential election "Constitutional Democrat" Breckinridge had outpolled "Constitutional Unionist" Bell in the 50 counties by 1,900 votes, 44% to 42%. Regardless of scholarly disputes over election procedures and results county by county, altogether the counties simultaneously supplied over 20,000 soldiers to each side of the conflict. Representatives for most of the counties were seated in both state legislatures at Wheeling and at Richmond for the duration of the war. Attempts to secede from the Confederacy by some counties in East Tennessee were checked by martial law. Although slave-holding Delaware and Maryland did not secede, citizens from those states exhibited divided loyalties. Maryland regiments fought in Lee's Army of Northern Virginia. Delaware never produced a full regiment for the Confederacy, but neither did it emancipate slaves as did Missouri and West Virginia.
District of Columbia citizens made no attempt to secede, and through the war years Lincoln-sponsored referendums approved systems of compensated emancipation and slave confiscation from "disloyal citizens". Citizens at Mesilla and Tucson in the southern part of New Mexico Territory formed a secession convention, which voted to join the Confederacy on March 16, 1861, and appointed Lewis Owings as the new territorial governor. They won the Battle of Mesilla and established a territorial government with Mesilla serving as its capital. The Confederacy proclaimed the Confederate Arizona Territory on February 14, 1862, extending north to the 34th parallel. Marcus H. MacWillie served in both Confederate Congresses as Arizona's delegate. In 1862 the Confederate New Mexico Campaign to take the northern half of the U.S. territory failed, and the Confederate territorial government in exile relocated to San Antonio, Texas. Confederate supporters in the trans-Mississippi west also claimed portions of United States Indian Territory after the United States evacuated the federal forts and installations. Over half of the American Indian troops participating in the Civil War from the Indian Territory supported the Confederacy; troops and one general were enlisted from each tribe. On July 12, 1861, the Confederate government signed a treaty with both the Choctaw and Chickasaw Indian nations. After several battles Northern armies moved back into the territory. Indian Territory was never formally ceded into the Confederacy by American Indian councils, but like Missouri and Kentucky, the Five Civilized Nations received representation in the Confederate Congress and their citizens were integrated into regular Confederate Army units. After 1863 the tribal governments sent representatives to the Confederate Congress: Elias Cornelius Boudinot representing the Cherokee and Samuel Benton Callahan representing the Seminole and Creek people. The Cherokee Nation, in aligning with the Confederacy, alleged northern violations of the Constitution, a war waged by northern commercial and political interests against slavery, efforts to abolish slavery in the Indian Territory, and northern intentions to seize additional Indian lands. Montgomery, Alabama, served as the capital of the Confederate States of America from February 4 until May 29, 1861. Six states created the Confederate States of America there on February 8, 1861. The Texas delegation was seated at the time, so it is counted in the "original seven" states of the Confederacy; but it had no roll call vote until after its referendum made secession "operative". Two sessions of the Provisional Congress were held in Montgomery, adjourning May 21. The Permanent Constitution was adopted there on March 12, 1861. The permanent capital provided for in the Confederate Constitution called for a state cession of a ten-mile-square (100 square mile) district to the central government. Atlanta, which had not yet supplanted Milledgeville as Georgia's state capital, put in a bid noting its central location and rail connections, as did Opelika, Alabama, noting its strategically interior situation, rail connections and nearby deposits of coal and iron. Richmond, Virginia, was chosen as the interim capital. The move was used by Vice President Stephens and others to encourage other border states to follow Virginia into the Confederacy. In the political moment it was a show of "defiance and strength".
The war for southern independence was sure to be fought largely in Virginia, which also had the largest military-aged white population in the South, along with the infrastructure, resources and supplies required to sustain a war. The Davis Administration's policy was that "It must be held at all hazards." The naming of Richmond as the new capital took place on May 30, 1861, and the last two sessions of the Provisional Congress were held in the new capital. The Permanent Confederate Congress and President were elected in the states and army camps on November 6, 1861. The First Congress met in four sessions in Richmond February 18, 1862 – February 17, 1864. The Second Congress met there in two sessions, May 2, 1864 – March 18, 1865. As war dragged on, Richmond became crowded with training and transfers, logistics and hospitals. Prices rose dramatically despite government efforts at price regulation. A movement in Congress led by Henry S. Foote of Tennessee argued for removing the capital from Richmond. At the approach of Federal armies in early summer 1862, the government's archives were readied for removal. As the Wilderness Campaign progressed, Congress authorized Davis to remove the executive department and call Congress into session elsewhere in 1864 and again in 1865. Shortly before the end of the war, the Confederate government evacuated Richmond, planning to relocate farther south. Little came of these plans before Lee's surrender at Appomattox Court House, Virginia on April 9, 1865. Davis and most of his Cabinet fled to Danville, Virginia, which served as the last Confederate capital for about one week. During the four years of its existence under trial by war, the Confederate States of America asserted its independence and appointed dozens of diplomatic agents abroad. The United States government regarded the southern states as being in rebellion and so refused any formal recognition of their status. Even before Fort Sumter, U.S. Secretary of State William H. Seward issued formal instructions to the American minister to the United Kingdom: Make "no expressions of harshness or disrespect, or even impatience concerning the seceding States, their agents, or their people, [those States] must always continue to be, equal and honored members of this Federal Union, [their citizens] still are and always must be our kindred and countrymen." If the British seemed inclined to recognize the Confederacy, or even waver in that regard, they were to receive a sharp warning, with a strong hint of war: "[if Britain is] tolerating the application of the so-called seceding States, or wavering about it, [they cannot] remain friends with the United States ... if they determine to recognize [the Confederacy], [Britain] may at the same time prepare to enter into alliance with the enemies of this republic." The United States government never declared war on those "kindred and countrymen", but conducted its military efforts beginning with a presidential proclamation issued April 15, 1861, calling for troops to recapture forts and suppress a rebellion. Mid-war parleys between the two sides occurred without formal political recognition, though the laws of war predominantly governed military relationships on both sides of uniformed conflict. On the part of the Confederacy, immediately following Fort Sumter the Confederate Congress proclaimed "... war exists between the Confederate States and the Government of the United States, and the States and Territories thereof ..."
A state of war was not to formally exist between the Confederacy and those states and territories in the United States allowing slavery, although Confederate Rangers were compensated for destruction they could effect there throughout the war. Concerning the international status and nationhood of the Confederate States of America, in 1869 the United States Supreme Court in Texas v. White ruled that Texas' declaration of secession was legally null and void. Jefferson Davis, former President of the Confederacy, and Alexander Stephens, its former Vice-President, both wrote postwar arguments in favor of secession's legality and the international legitimacy of the Government of the Confederate States of America, most notably Davis' The Rise and Fall of the Confederate Government. Once the war with the United States began, the Confederacy pinned its hopes for survival on military intervention by the United Kingdom and France. The Confederates who had believed that "cotton is king", that is, that Britain had to support the Confederacy to obtain cotton, proved mistaken. The British had stocks to last over a year and had been developing alternative sources of cotton, most notably India and Egypt. They were not about to go to war with the U.S. to acquire more cotton at the risk of losing the large quantities of food imported from the North. The Confederate government sent repeated delegations to Europe, but historians give them low marks for their poor diplomacy. James M. Mason went to London and John Slidell traveled to Paris. They were unofficially interviewed, but neither secured official recognition for the Confederacy. In late 1861, the U.S. Navy's seizure of Confederate diplomats from a British ship outraged Britain and led to a war scare, the Trent Affair. Recognition of the Confederacy seemed at hand, but Lincoln released the two detained Confederate diplomats, tensions cooled, and the Confederacy gained no advantage. Throughout the early years of the war, British foreign secretary Lord John Russell, Emperor Napoleon III of France, and, to a lesser extent, British Prime Minister Lord Palmerston, showed interest in recognition of the Confederacy or at least mediation of the war. The Union victory at the Battle of Antietam (Sharpsburg) and abolitionist opposition in Britain put an end to these plans. The cost to Britain of a war with the U.S. would have been high: the immediate loss of American grain shipments, the end of exports to the U.S., and the seizure of billions of pounds invested in American securities. War would have meant higher taxes, another invasion of Canada, and full-scale worldwide attacks on the British merchant fleet. While outright recognition would have meant certain war with the United States, in the summer of 1862 fears of a race war like the one that had transpired in Haiti led Britain to consider intervention for humanitarian reasons. Lincoln's Emancipation Proclamation did not lead to interracial violence, let alone a bloodbath, but it did give the friends of the Union strong talking points in the arguments that raged across Britain. The British government did allow blockade runners to be built in Britain and operated by British seamen. Several European nations maintained diplomats in place who had been appointed to the U.S., but no country appointed any diplomat to the Confederacy. However, those nations did recognize the Union and Confederate sides as belligerents.
In 1863, the Confederacy expelled the European diplomatic missions for advising their resident subjects to refuse to serve in the Confederate army. Both Confederate and Union agents were allowed to work openly in British territories. Some state governments in northern Mexico negotiated local agreements to cover trade on the Texas border. Pope Pius IX wrote a letter to Jefferson Davis in which he addressed Davis as the "Honorable President of the Confederate States of America," but the Holy See never released a formal statement supporting or recognizing the Confederacy. The Confederacy was seen internationally as a serious attempt at nationhood, and European governments sent military observers, both official and unofficial, to assess the de facto establishment of independence. These included Arthur Fremantle of the British Coldstream Guards, Fitzgerald Ross of the Austrian Hussars, and Justus Scheibert of the Prussian army. European travelers visited and wrote accounts for publication. Notably, in 1862 the Frenchman Charles Girard's Seven Months in the Rebel States During the North American War testified that "this government ... is no longer a trial government ... but really a normal government, the expression of popular will". Due in part to Lincoln's covert support of Mexican President Benito Juarez, by late spring of 1863 France was in need of Confederate cotton and other Caribbean commerce to sustain the French conquest of Mexico, an effort to reestablish France's North American empire. News of Lee's decisive victory at Chancellorsville had reached Europe, and French Emperor Napoleon III assured Confederate diplomat John Slidell that he would make a "direct proposition" to the United Kingdom for joint recognition. The Emperor made the same assurance to Members of Parliament John A. Roebuck and John A. Lindsay. Roebuck in turn publicly prepared a bill to submit to Parliament on June 30 supporting joint Anglo-French recognition of the Confederacy. Preparations for Lee's incursion into Pennsylvania were underway, intended in part to influence Northern elections. Confederate independence and nationhood were at a turning point. "Southerners had a right to be optimistic, or at least hopeful, that their revolution would prevail, or at least endure". By December 1864, Davis considered sacrificing slavery in order to enlist recognition and aid from Paris and London; he secretly sent Duncan F. Kenner to Europe with a message that the war was fought solely for "the vindication of our rights to self-government and independence" and that "no sacrifice is too great, save that of honor." The message stated that if the French or British governments made their recognition conditional on anything at all, the Confederacy would consent to such terms. Davis's message could not explicitly acknowledge that slavery was on the bargaining table due to still-strong domestic support for slavery among the wealthy and politically influential. Although Louis-Napoleon responded receptively to the message in March 1865, he would not commit without the cooperation of Great Britain. Lord Palmerston, however, withheld support: the war had turned against the Confederacy, and Britain could not side with a lost cause. Southern Civil War historian E. Merton Coulter noted that for those who would secure its independence, "The Confederacy was unfortunate in its failure to work out a general strategy for the whole war". Aggressive strategy called for offensive force concentration.
Defensive strategy sought dispersal to meet the demands of locally minded governors. The controlling philosophy evolved into a combination "dispersal with a defensive concentration around Richmond". The Davis administration considered the war purely defensive, a "simple demand that the people of the United States would cease to war upon us." Historian James McPherson is a critic of Lee's offensive strategy: "Lee pursued a faulty military strategy that ensured Confederate defeat". As the Confederate government lost control of territory in campaign after campaign, it was said that "the vast size of the Confederacy would make its conquest impossible". The enemy would be struck down by the same elements which so often debilitated or destroyed visitors and transplants in the South. Heat exhaustion, sunstroke, and endemic diseases such as malaria and typhoid would match the destructive effectiveness of the Moscow winter on the invading armies of Napoleon. But despite the Confederacy's essentially defensive stance, in the early stages of the war there were offensive visions of seizing the Rocky Mountains or cutting the North in two by marching to Lake Erie. Then, at a time when both sides believed that one great battle would decide the conflict, the Confederates won a great victory at the First Battle of Manassas. It drove the Confederate people "insane with joy"; the public demanded a forward movement to capture Washington, D.C., relocate the Confederate capital there, and admit Maryland to the Confederacy. A council of war by the victorious Confederate generals decided not to advance against larger numbers of fresh Federal troops in defensive positions. Davis did not countermand it. Following the Confederate incursion halted at the Battle of Antietam (Sharpsburg), in October 1862 generals proposed concentrating forces from state commands to re-invade the North. Nothing came of it. Again in mid-1863, during his incursion into Pennsylvania, Lee requested of Davis that Beauregard simultaneously attack Washington with troops taken from the Carolinas. But the troops there remained in place during the Gettysburg Campaign. Without counting their enslaved men, the eleven states of the Confederacy were outnumbered by the North about four to one in military population. The South was overmatched far more in military equipment, the ability to produce and procure it, railroads for transport, and wagons supplying the front. Big guns were out-ranged and small arms were less effective. Confederate military policy innovated to compensate. Booby-trapped land mines were laid in the path of invading armies. Harbors, inlets and inland waterways were laced with sunken "torpedo" mines and covered by mobile artillery batteries. Partisan rangers were sent to disrupt and destroy the supplies of invading armies until the rangers were disbanded; the task then fell to the "dashing cavalry". The Confederacy relied on external sources for war materials. The first came from trade with the enemy. "Vast amounts of war supplies" came through Kentucky, and thereafter western armies were "to a very considerable extent" provisioned with illicit trade via Federal agents and northern private traders. But that trade was interrupted in the first year of war by Admiral Porter's river gunboats as they gained dominance along navigable rivers north–south and east–west. Overseas blockade running then came to be of "outstanding importance". On April 17, 1861, President Davis called on privateer raiders, the "militia of the sea", to make war on U.S. seaborne commerce.
Despite noteworthy effort, over the course of the war the Confederacy proved unable to match the Union in ships, seamanship, materials and marine construction. Perhaps the most implacable obstacle to success in the 19th-century warfare of mass armies was the Confederacy's lack of manpower: sufficient numbers of disciplined, equipped troops in the field at the point of contact with the enemy. During the winter of 1862–1863, Lee observed that none of his famous victories had resulted in the destruction of the opposing army. He lacked reserve troops to exploit an advantage on the battlefield as Napoleon had done. Lee explained, "More than once have most promising opportunities been lost for want of men to take advantage of them, and victory itself had been made to put on the appearance of defeat, because our diminished and exhausted troops have been unable to renew a successful struggle against fresh numbers of the enemy." The Confederate military leadership included many veterans from the United States Army and United States Navy who had resigned their Federal commissions and had won appointment to senior positions in the Confederate armed forces. Many had served in the Mexican-American War (including Robert E. Lee and Jefferson Davis), but some, such as Leonidas Polk (who had graduated from West Point but resigned his commission almost immediately), had little or no military experience. The Confederate officer corps consisted of men from both slave-owning and non-slave-owning families. The Confederacy appointed junior and field grade officers by election from the enlisted ranks. Although no Army service academy was established for the Confederacy, some colleges (such as The Citadel and Virginia Military Institute) maintained cadet corps that trained Confederate military leadership. A naval academy was established at Drewry's Bluff, Virginia in 1863, but no midshipmen graduated before the Confederacy's end. The soldiers of the Confederate armed forces consisted mainly of white males aged between 16 and 28. The median year of birth was 1838, so half the soldiers were 23 or older by 1861. The Confederacy adopted conscription in 1862. Many thousands of slaves served as laborers, cooks, and pioneers. Some freed blacks and men of color served in local state militia units of the Confederacy, primarily in Louisiana and South Carolina, but their officers deployed them for "local defense, not combat." Depleted by casualties and desertions, the military suffered chronic manpower shortages. In the spring of 1865, the Confederate Congress, influenced by the public support of General Lee, approved the recruitment of black infantry units. Contrary to Lee's and Davis's recommendations, the Congress refused "to guarantee the freedom of black volunteers." No more than two hundred black troops were ever raised. The immediate onset of war meant that it was fought by the "Provisional" or "Volunteer Army". State governors resisted concentrating a national effort. Several wanted a strong state army for self-defense. Others feared large "Provisional" armies answering only to Davis. In filling the Confederate government's call for 100,000 men, another 200,000 were turned away because only those enlisting "for the duration" or twelve-month volunteers who brought their own arms or horses were accepted. It was important to raise troops; it was just as important to provide capable officers to command them. With few exceptions the Confederacy secured excellent general officers.
Efficiency in the lower officers was "greater than could have been reasonably expected". As with the Federals, political appointees could be indifferent. Otherwise, the officer corps was governor-appointed or elected by the unit's enlisted men. Promotion to fill vacancies was made internally regardless of merit, even if better officers were immediately available. Anticipating the need for more "duration" men, in January 1862 Congress provided for company-level recruiters to return home for two months, but their efforts met little success on the heels of Confederate battlefield defeats in February. Congress allowed Davis to require numbers of recruits from each governor to supply the volunteer shortfall. States responded by passing their own draft laws. The veteran Confederate army of early 1862 was mostly twelve-month volunteers with terms about to expire. Reorganization elections among the enlisted men disrupted the army for two months. Officers pleaded with the ranks to re-enlist, but a majority did not. Those remaining elected majors and colonels whose performance led to officer review boards in October. The boards caused a "rapid and widespread" thinning out of 1,700 incompetent officers. Troops thereafter would elect only second lieutenants. In early 1862, the popular press suggested the Confederacy required a million men under arms. But veteran soldiers were not re-enlisting, and earlier secessionist volunteers did not reappear to serve in war. One Macon, Georgia, newspaper asked how two million brave fighting men of the South were about to be overcome by four million northerners who were said to be cowards. The Confederacy passed the first American law of national conscription on April 16, 1862. The white males of the Confederate States from 18 to 35 were declared members of the Confederate army for three years, and all men then enlisted were extended to a three-year term. They would serve only in units and under officers of their state. Those under 18 and over 35 could serve as substitutes for conscripts; in September, those from 35 to 45 became conscripts themselves. The cry of "rich man's war and a poor man's fight" led Congress to abolish the substitute system altogether in December 1863. All principals benefiting earlier were made eligible for service. By February 1864, the age bracket was made 17 to 50, with those under eighteen and over forty-five limited to in-state duty. Confederate conscription was not universal; it was actually a selective service. The First Conscription Act of April 1862 exempted occupations related to transportation, communication and industry, as well as ministers, teachers, and those lacking physical fitness. The Second Conscription Act of October 1862 expanded exemptions in industry, agriculture and conscientious objection. Exemption fraud proliferated in medical examinations, army furloughs, churches, schools, apothecaries and newspapers. Rich men's sons were appointed to the socially outcast "overseer" occupation, but the measure was received in the country with "universal odium". The legislative vehicle was the controversial Twenty Negro Law that specifically exempted one white overseer or owner for every plantation with at least 20 slaves. Backpedalling six months later, Congress provided that overseers under 45 could be exempted only if they had held the occupation before the first Conscription Act. The number of officials exempted under state authority through governors' patronage appointments expanded significantly.
By law, substitutes could not be subject to conscription, but instead of adding to Confederate manpower, unit officers in the field reported that substitutes over 50 and under 17 years old accounted for up to 90% of desertions. The Conscription Act of February 1864 "radically changed the whole system" of selection. It abolished industrial exemptions, placing the authority to grant details in President Davis's hands. As the shame of conscription was greater than a felony conviction, the system brought in "about as many volunteers as it did conscripts." Many men in otherwise "bombproof" positions were enlisted in one way or another, nearly 160,000 additional volunteers and conscripts in uniform. Still there was shirking. To administer the draft, a Bureau of Conscription was set up to use state officers, as state Governors would allow. It had a checkered career of "contention, opposition and futility". Armies appointed alternative military "recruiters" to bring in the out-of-uniform 17–50 year old conscripts and deserters. Nearly 3,000 officers were tasked with the job. By fall 1864, Lee was calling for more troops. "Our ranks are constantly diminishing by battle and disease, and few recruits are received; the consequences are inevitable." By March 1865 conscription was to be administered by generals of the state reserves calling out men over 45 and under 18 years old. All exemptions were abolished. These regiments were assigned to recruit conscripts ages 17–50, recover deserters, and repel enemy cavalry raids. Men who had lost an arm or a leg were retained for service in the home guards. In April 1865, Lee surrendered an army of 50,000. Conscription had been a failure. The survival of the Confederacy depended on a strong base of civilians and soldiers devoted to victory. The soldiers performed well, though increasing numbers deserted in the last year of fighting, and the Confederacy never succeeded in replacing casualties as the Union could. The civilians, although enthusiastic in 1861–62, seem to have lost faith in the future of the Confederacy by 1864, and instead looked to protect their homes and communities. As Rable explains, "This contraction of civic vision was more than a crabbed libertarianism; it represented an increasingly widespread disillusionment with the Confederate experiment." The American Civil War broke out in April 1861 with the Battle of Fort Sumter in Charleston. In December 1860, Federal troops had withdrawn to the island fort from others in Charleston Harbor soon after South Carolina's declaration of secession to avoid soldier-civilian street confrontations. In January, President James Buchanan had attempted to resupply the garrison with the Star of the West, but South Carolina shore batteries drove it away. In March, President Lincoln notified Governor Pickens that, absent Confederate resistance to resupply, there would be no military reinforcement without further notice, but Lincoln prepared to force resupply if it were not allowed. Confederate President Davis in cabinet decided to capture Fort Sumter before the relief fleet arrived, and on April 12, 1861, General Beauregard opened fire and forced its surrender. Following Fort Sumter, Lincoln directed the states to provide 75,000 troops for three months to recapture the Charleston Harbor forts and all other federal property that had been seized, as Congress had not authorized their abandonment. In May, Federal troops crossed into Confederate territory along the entire border from the Chesapeake Bay to New Mexico.
The Confederate victory at Fort Sumter was followed by Confederate victories at the battles of Big Bethel (Bethel Church), Virginia, in June; First Bull Run (First Manassas) in July; and, in August, Wilson's Creek (Oak Hills) in southwest Missouri. At all three, Confederate forces could not follow up their victory due to inadequate supply and shortages of fresh troops to exploit their successes. Following each battle, Federals maintained a military presence and their occupation of Washington, D.C., Fort Monroe, Virginia, and Springfield, Missouri. Both North and South began training up armies for major fighting the next year. Confederate commerce-raiding just south of the Chesapeake Bay was ended in August with the loss of Hatteras, North Carolina. In early November a Union expedition at sea secured Port Royal and Beaufort, South Carolina, south of Charleston, seizing Confederate-burned cotton fields along with escaped and owner-abandoned "contraband" field hands. December saw the loss of Georgetown, South Carolina, north of Charleston. Federals there began a war-long policy of burning grain supplies up rivers into the interior wherever they could not occupy. The victories of 1861 were followed by a series of defeats east and west in early 1862. To restore the Union by military force, the Federal intent was to (1) secure the Mississippi River, (2) seize or close Confederate ports and (3) march on Richmond. To secure independence, the Confederate intent was to (1) repel the invader on all fronts, costing him blood and treasure, and (2) carry the war into the North by two offensives in time to impact the mid-term elections. Much of northwestern Virginia was under Federal control. In February and March, most of Missouri and Kentucky were Union "occupied, consolidated, and used as staging areas for advances further South". Following the repulse of the Confederate counter-attack at the Battle of Shiloh (Pittsburg Landing), Tennessee, permanent Federal occupation expanded west, south and east. Confederate forces then repositioned south along the Mississippi River to Memphis, where at the naval Battle of Memphis its River Defense Fleet was sunk; the Confederates then withdrew from northern Mississippi and northern Alabama. New Orleans was captured April 29 by a combined Army-Navy force under U.S. Admiral Farragut, and the Confederacy lost control of the mouth of the Mississippi River, conceding large agricultural resources that supported the Union's sea-supplied logistics base. Although Confederates had suffered major reverses everywhere but Virginia, as of the end of April the Confederacy still controlled 72% of its population. Federal forces disrupted Missouri and Arkansas; they had broken through in western Virginia, Kentucky, Tennessee and Louisiana. Along the Confederacy's shores, Union forces had closed ports and made garrisoned lodgments in every coastal Confederate state but Alabama and Texas. Although scholars sometimes assess the Union blockade as ineffectual under international law until the last few months of the war, from the first months it disrupted Confederate privateers, making it "almost impossible to bring their prizes into Confederate ports". Nevertheless, British firms developed small fleets of blockade-running companies, such as John Fraser and Company, and the Confederate Ordnance Department secured its own blockade runners for dedicated munitions cargoes. The Civil War saw the advent of fleets of armored warships deployed in sustained blockades at sea.
After some success against the Union blockade in March, the ironclad CSS Virginia was later forced into port and burned by the Confederates at their retreat. Despite several attempts mounted from their port cities, C.S. naval forces were unable to break the Union blockade; among them were attempts by Commodore Josiah Tattnall's ironclads from Savannah, including the CSS Atlanta in 1862. Secretary of the Navy Stephen Mallory placed his hopes in a European-built ironclad fleet, but they were never realized. On the other hand, four new English-built commerce raiders saw Confederate service, and several fast blockade runners were sold in Confederate ports, then converted into commerce-raiding cruisers manned by their British crews. In the east, Union forces could not close on Richmond. General McClellan landed his army on the Lower Peninsula of Virginia. Lee subsequently ended that threat from the east, then Union General John Pope attacked overland from the north only to be repulsed at Second Bull Run (Second Manassas). Lee's strike north was turned back at Antietam, Maryland; then Burnside's offensive was disastrously ended at Fredericksburg, Virginia, in December. Both armies then turned to winter quarters to recruit and train for the coming spring. In an attempt to seize the initiative, reprovision, protect farms in mid-growing season and influence the U.S. Congressional elections, two major Confederate incursions into Union territory had been launched in August and September 1862. Both Braxton Bragg's invasion of Kentucky and Lee's invasion of Maryland were decisively repulsed, leaving the Confederacy in control of but 63% of its population. Civil War scholar Allan Nevins argues that 1862 was the strategic high-water mark of the Confederacy. The failures of the two invasions were attributed to the same irrecoverable shortcomings: lack of manpower at the front, lack of supplies including serviceable shoes, and exhaustion after long marches without adequate food. The failed Middle Tennessee campaign ended January 2, 1863, at the inconclusive Battle of Stones River (Murfreesboro), with both sides losing the largest percentage of casualties they suffered during the war. It was followed by another strategic withdrawal by Confederate forces. The Confederacy won a significant victory in the spring of 1863, repulsing the Federal advance on Richmond at Chancellorsville, but the Union consolidated positions along the Virginia coast and the Chesapeake Bay. Without an effective answer to Federal gunboats, river transport and supply, the Confederacy lost the Mississippi River following the capture of Vicksburg, Mississippi, and Port Hudson in July, ending Southern access to the trans-Mississippi West. July brought short-lived counters: Morgan's Raid into Ohio and the New York City draft riots. Robert E. Lee's strike into Pennsylvania was repulsed at Gettysburg despite Pickett's famous charge and other acts of valor. Southern newspapers assessed the campaign as "The Confederates did not gain a victory, neither did the enemy." September and November left Confederates yielding Chattanooga, Tennessee, the gateway to the lower South. For the remainder of the war fighting was restricted inside the South, resulting in a slow but continuous loss of territory. In early 1864, the Confederacy still controlled 53% of its population, but it withdrew further to reestablish defensive positions. Union offensives continued with Sherman's March to the Sea to take Savannah and Grant's Wilderness Campaign to encircle Richmond and besiege Lee's army at Petersburg.
In April 1863, the C.S. Congress authorized a uniformed Volunteer Navy, many of whose officers and men were British. Wilmington and Charleston had more shipping while "blockaded" than before the beginning of hostilities. The Confederacy had altogether eighteen commerce-destroying cruisers, which seriously disrupted Federal commerce at sea and increased shipping insurance rates 900 percent. Commodore Tattnall again unsuccessfully attempted to break the Union blockade on the Savannah River in Georgia with an ironclad in 1863. However, beginning in April 1864 the ironclad CSS Albemarle engaged Union gunboats on the Roanoke River in North Carolina, sinking or driving them off for six months. The Federals closed Mobile Bay by sea-based amphibious assault in August, ending Gulf coast trade east of the Mississippi River. In December, the Battle of Nashville ended Confederate operations in the western theater. The first three months of 1865 saw the Federal Carolinas Campaign, devastating a wide swath of the remaining Confederate heartland. The "breadbasket of the Confederacy" in the Great Valley of Virginia was occupied by Philip Sheridan. Union forces captured Fort Fisher, North Carolina, and Sherman finally took Charleston, South Carolina, by land approach. The Confederacy controlled no ports, harbors or navigable rivers. Railroads were captured or had ceased operating. Its major food-producing regions had been war-ravaged or occupied. Its administration survived in only three pockets of territory holding one-third of its population. Its armies were defeated or disbanding. At the February 1865 Hampton Roads Conference with Lincoln, senior Confederate officials rejected his invitation to restore the Union with compensation for emancipated slaves. The Davis policy was independence or nothing, while Lee's army was wracked by disease and desertion, barely holding the trenches defending Jefferson Davis's capital. The Confederacy's last remaining blockade-running port, Wilmington, North Carolina, was lost. When the Union broke through Lee's lines at Petersburg, Richmond fell immediately. Lee surrendered the Army of Northern Virginia at Appomattox Court House, Virginia, on April 9, 1865. "The Surrender" marked the end of the Confederacy. The CSS Stonewall sailed from Europe to break the Union blockade in March; on reaching Havana, Cuba, it surrendered. Some high officials escaped to Europe, but President Davis was captured May 10; all remaining Confederate forces surrendered by June 1865. The U.S. Army took control of the Confederate areas without post-surrender insurgency or guerrilla warfare against them, but peace was subsequently marred by a great deal of local violence, feuding and revenge killings. Historian Gary Gallagher concluded that the Confederacy capitulated in the spring of 1865 because northern armies crushed "organized southern military resistance." The Confederacy's population, soldier and civilian, had suffered material hardship and social disruption. They had expended and extracted a profusion of blood and treasure until collapse; "the end had come". Jefferson Davis's assessment in 1890 was that, "With the capture of the capital, the dispersion of the civil authorities, the surrender of the armies in the field, and the arrest of the President, the Confederate States of America disappeared ... their history henceforth became a part of the history of the United States." Historian Frank Lawrence Owsley argued that the Confederacy "died of states' rights."
The central government was denied requisitioned soldiers and money by governors and state legislatures because they feared that Richmond would encroach on the rights of the states. Georgia's governor Joseph Brown warned of a secret conspiracy by Jefferson Davis to destroy states' rights and individual liberty. The first conscription act in North America, authorizing Davis to draft soldiers, was said to be the "essence of military despotism." Vice President Alexander Stephens feared losing the very form of republican government. Allowing President Davis to threaten "arbitrary arrests" to draft hundreds of governor-appointed "bomb-proof" bureaucrats conferred "more power than the English Parliament had ever bestowed on the king. History proved the dangers of such unchecked authority." The abolition of draft exemptions for newspaper editors was interpreted as an attempt by the Confederate government to muzzle presses, such as the Raleigh, North Carolina, Standard, to control elections and to suppress the peace meetings there. As Rable concludes, "For Stephens, the essence of patriotism, the heart of the Confederate cause, rested on an unyielding commitment to traditional rights" without considerations of military necessity, pragmatism or compromise. In 1863 Governor Pendleton Murrah of Texas determined that state troops were required for defense against Plains Indians and against Union forces advancing from the free state of Kansas. He refused to send them East. Governor Zebulon Vance of North Carolina showed intense opposition to conscription, limiting recruitment success. Vance's faith in states' rights drove him into repeated, stubborn opposition to the Davis administration. Despite political differences within the Confederacy, no national political parties were formed because they were seen as illegitimate. "Anti-partyism became an article of political faith." Without a two-party system building alternative sets of national leaders, electoral protests tended to be narrowly state-based, "negative, carping and petty". The 1863 mid-term elections became mere expressions of futile and frustrated dissatisfaction. According to historian David M. Potter, this lack of a functioning two-party system caused "real and direct damage" to the Confederate war effort since it prevented the formulation of any effective alternatives to the conduct of the war by the Davis administration. The enemies of President Davis proposed that the Confederacy "died of Davis." He was unfavorably compared to George Washington by critics such as E. A. Pollard, editor of the Richmond Examiner. Coulter summarizes, "The American Revolution had its Washington; the Southern Revolution had its Davis ... one succeeded and the other failed." Apart from an early honeymoon period, Davis was never popular. He unwittingly caused much internal dissension from early on. His ill health and temporary bouts of blindness disabled him for days at a time. Coulter says Davis was heroic and his will was indomitable. But his "tenacity, determination, and will power" stirred up lasting opposition from enemies Davis could not shake. He failed to overcome "petty leaders of the states" who made the term "Confederacy" into a label for tyranny and oppression, preventing the "Stars and Bars" from becoming a symbol of larger patriotic service and sacrifice. Instead of campaigning to develop nationalism and gain support for his administration, he rarely courted public opinion, assuming an aloofness, "almost like an Adams". Davis attended to too many details.
He protected his friends after their failures were obvious. He spent too much time on military affairs versus his civil responsibilities. Coulter concludes he was not the ideal leader for the Southern Revolution, but he showed "fewer weaknesses than any other" contemporary character available for the role. Robert E. Lee's assessment of Davis as President was, "I knew of none that could have done as well." The Southern leaders met in Montgomery, Alabama, to write their constitution. Much of the Confederate States Constitution replicated the United States Constitution verbatim, but it contained several explicit protections of the institution of slavery including provisions for the recognition and protection of negro slavery in any new state admitted to the Confederacy. It maintained the existing ban on international slave-trading while protecting the existing internal trade of slaves among slaveholding states. In certain areas, the Confederate Constitution gave greater powers to the states (or curtailed the powers of the central government more) than the U.S. Constitution of the time did, but in other areas, the states actually lost rights they had under the U.S. Constitution. Although the Confederate Constitution, like the U.S. Constitution, contained a commerce clause, the Confederate version prohibited the central government from using revenues collected in one state for funding internal improvements in another state. The Confederate Constitution's equivalent to the U.S. Constitution's general welfare clause prohibited protective tariffs (but allowed tariffs for providing domestic revenue), and spoke of "carry[ing] on the Government of the Confederate States" rather than providing for the "general welfare". State legislatures had the power to impeach officials of the Confederate government in some cases. On the other hand, the Confederate Constitution contained a Necessary and Proper Clause and a Supremacy Clause that essentially duplicated the respective clauses of the U.S. Constitution. The Confederate Constitution also incorporated each of the 12 amendments to the U.S. Constitution that had been ratified up to that point. The Confederate Constitution did not specifically include a provision allowing states to secede; the Preamble spoke of each state "acting in its sovereign and independent character" but also of the formation of a "permanent federal government". During the debates on drafting the Confederate Constitution, one proposal would have allowed states to secede from the Confederacy. The proposal was tabled with only the South Carolina delegates voting in favor of considering the motion. The Confederate Constitution also explicitly denied States the power to bar slaveholders from other parts of the Confederacy from bringing their slaves into any state of the Confederacy or to interfere with the property rights of slave owners traveling between different parts of the Confederacy. In contrast with the language of the United States Constitution, the Confederate Constitution overtly asked God's blessing ("... invoking the favor and guidance of Almighty God ..."). The Montgomery Convention to establish the Confederacy and its executive met February 4, 1861. Each state as a sovereignty had one vote, with the same delegation size as it held in the U.S. Congress, and generally 41 to 50 members attended. Offices were "provisional", limited to a term not to exceed one year. One name was placed in nomination for president, one for vice president. Both were elected unanimously, 6–0. 
Jefferson Davis was elected provisional president. His U.S. Senate resignation speech had greatly impressed with its clear rationale for secession and its plea for a peaceful departure from the Union to independence. Although he had made it known that he wanted to be commander-in-chief of the Confederate armies, when elected he assumed the office of Provisional President. Three candidates for provisional Vice President were under consideration the night before the February 9 election. All were from Georgia, and the various delegations, meeting in different places, determined that two of them would not do, so Alexander Stephens was elected unanimously provisional Vice President, though with some privately held reservations. Stephens was inaugurated February 11, Davis February 18. Davis and Stephens were elected President and Vice President, unopposed, on November 6, 1861. They were inaugurated on February 22, 1862. Historian E. M. Coulter observed, "No president of the U.S. ever had a more difficult task." Washington was inaugurated in peacetime. Lincoln inherited an established government of long standing. The creation of the Confederacy was accomplished by men who saw themselves as fundamentally conservative. Although they referred to their "Revolution", it was in their eyes more a counter-revolution against changes away from their understanding of U.S. founding documents. In his inaugural address, Davis explained that the Confederacy was not a French-style revolution but a transfer of rule. The Montgomery Convention had assumed all the laws of the United States until superseded by the Confederate Congress. The Permanent Constitution provided for a President of the Confederate States of America, elected to serve a six-year term but without the possibility of re-election. Unlike the United States Constitution, the Confederate Constitution gave the president the ability to subject a bill to a line item veto, a power also held by some state governors. The Confederate Congress could overturn either the general or the line item vetoes with the same two-thirds majorities that are required in the U.S. Congress. In addition, appropriations not specifically requested by the executive branch required passage by a two-thirds vote in both houses of Congress. The only person to serve as president was Jefferson Davis, the Confederacy having been defeated before the completion of his term.

The Davis Cabinet
Vice President: Alexander Stephens (1861–1865)
Secretary of State: Robert Toombs (1861); Robert M. T. Hunter (1861–1862); Judah P. Benjamin (1862–1865)
Secretary of the Treasury: Christopher Memminger (1861–1864); John H. Reagan (1865)
Secretary of War: Leroy Pope Walker (1861); Judah P. Benjamin (1861–1862); George W. Randolph (1862); John C. Breckinridge (1865)
Secretary of the Navy: Stephen Mallory (1861–1865)
Postmaster General: John H. Reagan (1861–1865)
Attorney General: Judah P. Benjamin (1861); Thomas H. Watts (1862–1863)

The only two "formal, national, functioning, civilian administrative bodies" in the Civil War South were the Jefferson Davis administration and the Confederate Congresses. The Confederacy was begun by the Provisional Congress in convention at Montgomery, Alabama, in February 1861. It had one vote per state in a unicameral assembly. The Permanent Confederate Congress was elected and began its first session February 18, 1862. The Permanent Congress for the Confederacy followed the United States forms with a bicameral legislature. The Senate had two members per state, twenty-six Senators in all.
The House numbered 106 representatives apportioned by the free and slave populations within each state. Two Congresses sat in six sessions until March 18, 1865. The political influence of the civilian vote, the soldier vote, and appointed representatives reflected the divided political geography of a diverse South. These divisions in turn changed over time relative to Union occupation and disruption, the war's impact on the local economy, and the course of the war. Without political parties, key candidate identification related to adopting secession before or after Lincoln's call for volunteers to retake Federal property. Previous party affiliation played a part in voter selection, predominantly secessionist Democrat or unionist Whig. The absence of political parties made individual roll-call voting all the more important, as Confederate "freedom of roll-call voting [was] unprecedented in American legislative history." Key issues throughout the life of the Confederacy related to (1) suspension of habeas corpus, (2) military concerns such as control of state militia, conscription and exemption, (3) economic and fiscal policy, including impressment of slaves and goods and scorched-earth measures, and (4) support of the Jefferson Davis administration in its foreign affairs and in negotiating peace. The Confederate Constitution outlined a judicial branch of the government, but the ongoing war and resistance from states-rights advocates, particularly on the question of whether it would have appellate jurisdiction over the state courts, prevented the creation or seating of the "Supreme Court of the Confederate States"; the state courts generally continued to operate as they had done, simply recognizing the Confederate States as the national government. Confederate district courts were authorized by Article III, Section 1, of the Confederate Constitution, and President Davis appointed judges within the individual states of the Confederate States of America. In many cases, the same US Federal District Judges were appointed as Confederate States District Judges. Confederate district courts began reopening in the spring of 1861, handling many of the same types of cases as before. Prize cases, in which Union ships were captured by the Confederate Navy or raiders and sold through court proceedings, were heard until the blockade of southern ports made this impossible. After a Sequestration Act was passed by the Confederate Congress, the Confederate district courts heard many cases in which enemy aliens (typically Northern absentee landlords owning property in the South) had their property sequestered (seized) by Confederate Receivers. When the matter came before the Confederate court, the property owner could not appear because he was unable to travel across the front lines between Union and Confederate forces. Thus, the District Attorney won the case by default, the property was typically sold, and the money was used to further the Southern war effort. Eventually, because there was no Confederate Supreme Court, sharp attorneys like South Carolina's Edward McCrady began filing appeals. This prevented their clients' property from being sold until a supreme court could be constituted to hear the appeal, which never occurred.
Where Federal troops gained control over parts of the Confederacy and re-established civilian government, US district courts sometimes resumed jurisdiction. A Confederate Supreme Court was never established. When the Confederacy was formed and its seceding states broke from the Union, it was at once confronted with the arduous task of providing its citizens with a mail delivery system, and, in the midst of the American Civil War, the newly formed Confederacy created and established the Confederate Post Office. One of the first undertakings in establishing the Post Office was the appointment of John H. Reagan to the position of Postmaster General by Jefferson Davis in 1861, making him the first Postmaster General of the Confederate Post Office as well as a member of Davis' presidential cabinet. Through resourcefulness and remarkable industry, Reagan had his department assembled, organized and in operation before the other presidential cabinet members had their departments fully operational. When the war began, the US Post Office still delivered mail from the seceded states for a brief period of time. Mail that was postmarked after the date of a state's admission into the Confederacy through May 31, 1861, and bearing US postage was still delivered. After this time, private express companies still managed to carry some of the mail across enemy lines. Later, mail that crossed lines had to be sent by 'Flag of Truce' and was allowed to pass at only two specific points. Mail sent from the South to the North was received, opened and inspected at Fortress Monroe on the Virginia coast before being passed on into the U.S. mail stream. Mail sent from the North to the South passed at City Point, also in Virginia, where it was likewise inspected before being sent on. With the chaos of the war, a working postal system was more important than ever for the Confederacy. The Civil War had divided family members and friends, and consequently letter writing increased dramatically across the entire divided nation, especially to and from the men who were away serving in an army. Mail delivery was also important for the Confederacy for a myriad of business and military reasons. Because of the Union blockade, basic supplies were always in demand, and so getting mailed correspondence out of the country to suppliers was imperative to the successful operation of the Confederacy. Volumes of material have been written about the blockade runners who evaded Union ships on blockade patrol, usually at night, and who moved cargo and mail in and out of the Confederate States throughout the course of the war. Of particular interest to students and historians of the American Civil War are prisoner-of-war mail and blockade mail, as these items were often involved with a variety of military and other wartime activities. The postal history of the Confederacy, along with surviving Confederate mail, has helped historians document the various people, places and events that were involved in the American Civil War as it unfolded. The Confederacy actively used the army to arrest people suspected of loyalty to the United States. Historian Mark Neely found 4,108 names of men arrested and estimated a much larger total. The Confederacy arrested pro-Union civilians in the South at about the same rate as the Union arrested pro-Confederate civilians in the North. Neely concludes: The Confederate citizen was not any freer than the Union citizen – and perhaps no less likely to be arrested by military authorities.
In fact, the Confederate citizen may have been in some ways less free than his Northern counterpart. For example, freedom to travel within the Confederate states was severely limited by a domestic passport system. Most whites were subsistence farmers who traded their surpluses locally. The plantations of the South, with white ownership and an enslaved labor force, produced substantial wealth from cash crops. It supplied two-thirds of the world's cotton, which was in high demand for textiles, along with tobacco, sugar, and naval stores (such as turpentine). These raw materials were exported to factories in Europe and the Northeast. Planters reinvested their profits in more slaves and fresh land, for cotton and tobacco depleted the soil. There was little manufacturing or mining; shipping was controlled by outsiders. The plantations that employed over three million black slaves were the principal source of wealth, but those slaves were also the source of general tension and white racial solidarity. William Freehling and Steven A. Channing have documented the race-based system of enslavement as "prone to insurrection and racial upheaval" inside the South, and by midcentury, its maintenance there was coming under increasing attacks from outside. Slave labor was applied in industry in a limited way in the Upper South and in a few port cities. One reason for the regional lag in industrial development was "top-heavy income distribution". Mass production requires mass markets, and slave-labor living in packed-earth cabins, using self-made tools and outfitted with one suit of work clothes each year of inferior fabric, did not generate consumer demand to sustain local manufactures of any description in the same way a mechanized family farm of free labor did in the North. The Southern economy was "pre-capitalist" in that slaves were employed in the largest revenue producing enterprises, not free labor. That labor system as practiced in the American South encompassed paternalism, whether abusive or indulgent, and that meant labor management considerations apart from productivity. Approximately 85% of both North and South white populations lived on family farms, both regions were predominantly agricultural, and mid-century industry in both was mostly domestic. But the Southern economy was uniquely pre-capitalist in its overwhelming reliance on the agriculture of cash crops to produce wealth. Southern cities and industries grew faster than ever before, but the thrust of the rest of the country's exponential growth elsewhere was toward urban industrial development along transportation systems of canals and railroads. The South was following the dominant currents of the American economic mainstream, but at a "great distance" as it lagged in the all-weather modes of transportation that brought cheaper, speedier freight shipment and forged new, expanding inter-regional markets. A third count of southern pre-capitalist economy relates to the cultural setting. The South and southerners did not adopt a frenzied work ethic, nor the habits of thrift that marked the rest of the country. It had access to the tools of capitalism, but it did not adopt its culture. The Southern Cause as a national economy in the Confederacy was grounded in "slavery and race, planters and patricians, plain folk and folk culture, cotton and plantations". The Confederacy started its existence as an agrarian economy with exports, to a world market, of cotton, and, to a lesser extent, tobacco and sugarcane. 
Local food production included grains, hogs, cattle, and gardens. The cash came from exports, but the Southern people spontaneously stopped exports in spring 1861 to hasten the impact of "King Cotton." When the blockade was announced, commercial shipping practically ended (the ships could not get insurance), and only a trickle of supplies came via blockade runners. The 11 states had produced $155 million in manufactured goods in 1860, chiefly from local grist-mills and from lumber, processed tobacco, cotton goods and naval stores such as turpentine. The main industrial areas were border cities such as Baltimore, Wheeling, Louisville and St. Louis, which were never under Confederate control. The Confederacy adopted a tariff of 15 percent and imposed it on all imports from other countries, including the United States. The tariff mattered little; the Union blockade minimized commercial traffic through the Confederacy's ports, and very few people paid taxes on goods smuggled from the North. The Confederate government in its entire history collected only $3.5 million in tariff revenue. The lack of adequate financial resources led the Confederacy to finance the war through printing money, which led to high inflation. The Confederacy underwent an economic revolution by centralization and standardization, but it was too little too late, as its economy was systematically strangled by blockade and raids. In peacetime, the extensive and connected systems of navigable rivers and coastal access allowed for cheap and easy transportation of agricultural products. The railroad system in the South had been built as a supplement to the navigable rivers to enhance the all-weather shipment of cash crops to market. The railroads tied plantation areas to the nearest river or seaport and so made supply more dependable, lowered costs and increased profits. In the event of invasion, the vast geography of the Confederacy made logistics difficult for the Union. Wherever Union armies invaded, they assigned many of their soldiers to garrison captured areas and to protect rail lines. At the onset of the Civil War, the Southern rail network was disjointed and plagued by changes in track gauge as well as lack of interchange. Locomotives and freight cars had fixed axles and could not roll on tracks of different gauges (widths). Railroads of different gauges leading to the same city required all freight to be off-loaded onto wagons and transported to the connecting railroad station, where it would await freight cars and a locomotive to proceed. Cities where such transfers were necessary included Vicksburg, New Orleans, Montgomery, Wilmington and Richmond. In addition, most rail lines led from coastal or river ports to inland cities, with few lateral railroads. Due to this design limitation, the relatively primitive railroads of the Confederacy were unable to offset the Union naval blockade of the South's crucial intra-coastal and river routes. The Confederacy had no plan to expand, protect or encourage its railroads. Refusal to export the cotton crop in 1861 left railroads bereft of their main source of income. Many lines had to lay off employees; many critical skilled technicians and engineers were permanently lost to military service. For the early years of the war, the Confederate government had a hands-off approach to the railroads. Only in mid-1863 did the Confederate government initiate a national policy, and it was confined solely to aiding the war effort. Railroads came under the de facto control of the military.
In contrast, the U.S. Congress had authorized military administration of railroads and telegraphs in January 1862, imposed a standard gauge, and built railroads into the South using that gauge. Successful Confederate armies reoccupying territory could not be resupplied directly by rail as they advanced. The C.S. Congress formally authorized military administration of railroads in February 1865. In the last year before the end of the war, the Confederate railroad system stood permanently on the verge of collapse. There was no new equipment, and raids by both sides systematically destroyed key bridges, as well as locomotives and freight cars. Spare parts were cannibalized; feeder lines were torn up to get replacement rails for trunk lines, and the heavy use of rolling stock wore it out. The army was always short of horses and mules, and requisitioned them with dubious promissory notes from local farmers and breeders. Union forces paid in real money and found ready sellers in the South. Horses were needed for cavalry and artillery. Mules pulled the wagons. The supply was undermined by an unprecedented epidemic of glanders, a fatal disease that baffled veterinarians. After 1863 the policy of the Union Army was to shoot all the horses and mules it did not need, to keep them out of Confederate hands. The army and farmers experienced a growing shortage of horses and mules, which hurt the economy and the Confederate war effort. The South lost half its 2.5 million horses and mules; many farmers ended the war with none left. Army horses were used up by hard work, malnourishment, disease and battle wounds; their life expectancy was about seven months. Both the individual Confederate states and later the Confederate government printed Confederate States of America dollars as paper currency in various denominations, much of it signed by the Treasurer Edward C. Elmore. During the course of the war these notes severely depreciated and eventually became worthless. Many bills still exist, although in recent years copies have proliferated. The Confederate government initially wanted to finance its war mostly through tariffs on imports, export taxes, and voluntary donations of gold. However, after the spontaneous imposition of an embargo on cotton sales to Europe in 1861, these sources of revenue dried up and the Confederacy increasingly turned to issuing debt and printing money to pay for war expenses. Confederate politicians were worried about angering the general population with hard taxes. A tax increase might disillusion many Southerners, so the Confederacy resorted to printing more money. As a result, inflation increased and remained a problem for the southern states throughout the rest of the war. At the time of their secession, the states (and later the Confederate government) took over the national mints in their territories: the Charlotte Mint in North Carolina, the Dahlonega Mint in Georgia, and the New Orleans Mint in Louisiana. During 1861, the first two produced small amounts of gold coinage, the latter half dollars. Since the mints used the current dies on hand, these issues remain indistinguishable from those minted by the Union. However, in New Orleans the Confederacy did use its own reverse design to strike four half dollars. Since these have a small die break on the obverse, which is also seen on some of the regular 1861-O coins, it is possible that they were minted under CSA authority. By summer 1861, the Union naval blockade had virtually shut down the export of cotton and the import of manufactured goods.
Food that formerly came overland was cut off. In response, the governor and legislature pleaded with planters to grow less cotton and more food. Most refused, some believing that the Yankees would not or could not fight. When cotton prices soared in Europe, expectations were that Europe would soon intervene to break the blockade. Neither proved true, and the myth of the omnipotent "King Cotton" died hard. The Georgia legislature imposed cotton quotas, making it a crime to grow an excess. But food shortages only worsened, especially in the towns. The overall decline in food supplies, made worse by the inadequate transportation system, led to serious shortages and high prices in urban areas. When bacon reached a dollar a pound in 1864, the poor women of Richmond, Atlanta and many other cities began to riot; they broke into shops and warehouses to seize food. The women expressed their anger at ineffective state relief efforts, speculators, merchants and planters. As wives and widows of soldiers, they were hurt by the inadequate welfare system. By the end of the war, deterioration of the Southern infrastructure was widespread. The number of civilian deaths is unknown. Most of the war was fought in Virginia and Tennessee, but every Southern state was affected, as were Maryland, West Virginia, Kentucky, Missouri, and the Indian Territory. Texas and Florida saw the least military action. Part of the damage was caused by military action, but most was caused by lack of repairs and upkeep and by the deliberate using up of resources. Historians have recently estimated how much of the devastation was caused by military action. Military operations were conducted in 56% of 645 counties in nine Confederate states (excluding Texas and Florida). These counties contained 63% of the 1860 white population and 64% of the slaves. By the time the fighting took place, undoubtedly some people had fled to safer areas, so the exact population exposed to war is unknown. The eleven Confederate states in the 1860 census had 297 towns and cities with 835,000 people; of these, 162 with 681,000 people were at one point occupied by Union forces. Eleven were destroyed or severely damaged by war action, including Atlanta (with an 1860 population of 9,600), Charleston, Columbia, and Richmond (with prewar populations of 40,500, 8,100, and 37,900, respectively); the eleven contained 115,900 people in the 1860 census, or 14% of the urban South. Historians have not estimated what their actual population was when Union forces arrived. The number of people (as of 1860) who lived in the destroyed towns represented just over 1% of the Confederacy's 1860 population. In addition, 45 court houses were burned (out of 830). The South's agriculture was not highly mechanized. The value of farm implements and machinery in the 1860 Census was $81 million; by 1870, there was 40% less, worth just $48 million. Many old tools had broken through heavy use; new tools were rarely available; even repairs were difficult. The economic losses affected everyone. Banks and insurance companies were mostly bankrupt. Confederate currency and bonds were worthless. The billions of dollars invested in slaves vanished. However, most debts were left behind. Most farms were intact, but most had lost their horses, mules and cattle; fences and barns were in disrepair. Paskoff shows the loss of farm infrastructure was about the same whether or not fighting took place nearby.
The loss of infrastructure and productive capacity meant that rural widows throughout the region faced not only the absence of able-bodied men, but a depleted stock of material resources that they could manage and operate themselves. During four years of warfare, disruption, and blockades, the South used up about half its capital stock. The North, by contrast, absorbed its material losses so effortlessly that it appeared richer at the end of the war than at the beginning. The rebuilding would take years and was hindered by the low price of cotton after the war. Outside investment was essential, especially in railroads. One historian has summarized the collapse of the transportation infrastructure needed for economic recovery. About 250,000 men never came home, or 30% of all white men aged 18 to 40 in 1860. Widows who were overwhelmed often abandoned the farm and merged into the households of relatives, or even became refugees living in camps with high rates of disease and death. In the Old South, being an "old maid" was something of an embarrassment to the woman and her family. Now it became almost a norm. Some women welcomed the freedom of not having to marry. Divorce, while never fully accepted, became more common. The concept of the "New Woman" emerged: she was self-sufficient and independent, and stood in sharp contrast to the "Southern Belle" of antebellum lore. The first official flag of the Confederate States of America, called the "Stars and Bars", originally had seven stars, representing the first seven states that formed the Confederacy. As more states seceded, more stars were added, until the total was 13 (two stars were added for the divided states of Kentucky and Missouri). However, during the First Battle of Bull Run (First Manassas), it sometimes proved difficult to distinguish the Stars and Bars from the Union flag. To rectify the situation, a separate "Battle Flag" was designed for use by troops in the field. Also known as the "Southern Cross", it gave rise to many variations of the original square configuration. Although it was never officially adopted by the Confederate government, the popularity of the Southern Cross among both soldiers and the civilian population was a primary reason why it was made the main color feature when a new national flag was adopted in 1863. This new standard, known as the "Stainless Banner", consisted of a lengthened white field area with a Battle Flag canton. This flag too had its problems when used in military operations, as on a windless day it could easily be mistaken for a flag of truce or surrender. Thus, in 1865, a modified version of the Stainless Banner was adopted. This final national flag of the Confederacy kept the Battle Flag canton, but shortened the white field and added a vertical red bar to the fly end. Because of its depiction in 20th-century popular media, many people consider the rectangular battle flag with the dark blue bars to be synonymous with "the Confederate Flag". This flag, however, was never adopted as a Confederate national flag. The "Confederate Flag" has a color scheme similar to the official Battle Flag, but is rectangular, not square. (Its design and shape match the Naval Jack, but the blue bars are darker.) The "Confederate Flag" is the most recognized symbol of the South in the United States today, and continues to be a controversial icon.

Secession and admission to the Confederacy (date of secession; date of admission; readmission to the U.S. Congress where given):
- South Carolina: Dec. 20, 1860; Feb. 8, 1861
- Mississippi: Jan. 9, 1861; Feb. 8, 1861
- Florida: Jan. 10, 1861; Feb. 8, 1861
- Alabama: Jan. 11, 1861; Feb. 8, 1861
- Georgia: Jan. 19, 1861; Feb. 8, 1861 (second readmission to the U.S. Congress July 15, 1870)
- Louisiana: Jan. 26, 1861; Feb. 8, 1861
- Texas: Feb. 1, 1861; March 2, 1861
- Virginia: April 17, 1861; May 7, 1861 (West Virginia admitted to the Union June 20, 1863)
- Arkansas: May 6, 1861; May 18, 1861
- North Carolina: May 20, 1861; May 21, 1861
- Tennessee: June 8, 1861; July 2, 1861
- Missouri (Confederate government in exile): Oct. 31, 1861; Nov. 28, 1861
- Kentucky (Confederate government in exile): Nov. 20, 1861; Dec. 10, 1861

The Confederate States of America claimed a total of 2,919 miles (4,698 km) of coastline, thus a large part of its territory lay on the seacoast with level and often sandy or marshy ground. Most of the interior portion consisted of arable farmland, though much was also hilly and mountainous, and the far western territories were deserts. The lower reaches of the Mississippi River bisected the country, with the western half often referred to as the Trans-Mississippi. The highest point (excluding Arizona and New Mexico) was Guadalupe Peak in Texas at 8,750 feet (2,670 m). Much of the area claimed by the Confederate States of America had a humid subtropical climate with mild winters and long, hot, humid summers. The climate and terrain varied from vast swamps (such as those in Florida and Louisiana) to semi-arid steppes and arid deserts west of longitude 100 degrees west. The subtropical climate made winters mild but allowed infectious diseases to flourish. Consequently, on both sides more soldiers died from disease than were killed in combat, a fact hardly atypical of pre–World War I conflicts. The United States Census of 1860 gives a picture of the overall 1860 population of the areas that joined the Confederacy. Note that the population figures exclude non-assimilated Indian tribes.

Age structure of the free black population (figures for Virginia include the future West Virginia; rows may not total 100% due to rounding):
- Free black males: 45% aged 0–14, 50% aged 15–59, 5% aged 60 and over
- Free black females: 40% aged 0–14, 54% aged 15–59, 6% aged 60 and over

In 1860 the areas that later formed the 11 Confederate States (including the future West Virginia) had 132,760 (1.46%) free blacks. Males made up 49.2% of the total population and females 50.8% (whites: 48.60% male, 51.40% female; slaves: 50.15% male, 49.85% female; free blacks: 47.43% male, 52.57% female). The area claimed by the Confederate States of America consisted overwhelmingly of rural land. Few urban areas had populations of more than 1,000 – the typical county seat had a population of fewer than 500 people. Cities were rare. Of the twenty largest U.S. cities in the 1860 census, only New Orleans lay in Confederate territory – and the Union captured New Orleans in 1862. Only 13 Confederate-controlled cities ranked among the top 100 U.S. cities in 1860, most of them ports whose economic activities vanished or suffered severely in the Union blockade. The population of Richmond swelled after it became the Confederate capital, reaching an estimated 128,000 in 1864. Other Southern cities in the border slave-holding states, such as Baltimore MD, Washington DC, Wheeling VA/WV, Alexandria VA, Louisville KY, and St. Louis MO, never came under the control of the Confederate government. The cities of the Confederacy included, most prominently and in order of population size:
- 1. New Orleans, Louisiana: 1860 population 168,675 (U.S. rank 6); returned to U.S. control in 1862
- 2. Charleston, South Carolina: 1860 population 40,522 (U.S. rank 22); returned to U.S. control in 1865
- 13. Wilmington, North Carolina: 1860 population 9,553 (U.S. rank 100); returned to U.S. control in 1865

(See also Atlanta in the Civil War, Charleston, South Carolina, in the Civil War, Nashville in the Civil War, New Orleans in the Civil War, Wilmington, North Carolina, in the American Civil War, and Richmond in the Civil War.)

As the telegraph chattered reports of the attack on Sumter April 12 and its surrender next day, huge crowds poured into the streets of Richmond, Raleigh, Nashville, and other upper South cities to celebrate this victory over the Yankees. These crowds waved Confederate flags and cheered the glorious cause of southern independence. They demanded that their own states join the cause. Scores of demonstrations took place from April 12 to 14, before Lincoln issued his call for troops. Many conditional unionists were swept along by this powerful tide of southern nationalism; others were cowed into silence. – McPherson, p. 278

Historian Daniel W. Crofts disagrees with McPherson. Crofts wrote: "The bombardment of Fort Sumter, by itself, did not destroy Unionist majorities in the upper South. Because only three days elapsed before Lincoln issued the proclamation, the two events, viewed retrospectively, appear almost simultaneous. Nevertheless, close examination of contemporary evidence ... shows that the proclamation had a far more decisive impact." (Crofts, p. 336) Crofts further noted that many concluded ... that Lincoln had deliberately chosen 'to drive off all the Slave states, in order to make war on them and annihilate slavery.' (Crofts, pp. 337–338, quoting the North Carolina politician Jonathan Worth, 1802–1869)
A V plotter is a minimalistic design which uses a pair of steppers, some string, and a pen head to create a plotter. These are sometimes made by students or technology sector employees as a way to avoid "real work". In this article, I dig into the math behind these machines, and also write a program to calculate the configuration of a V setup needed to produce a working device.

Requirements

What is the optimal configuration of control lines for an area to be plotted? Obviously, we can't have a drawing area above the control lines — our friend gravity sees to that. But, can we do better than hand waving and "somewhere below the control lines" for the plot area? Yes: we think up some constraints and model them with math and code (two more friends!):

- Tension: We can imagine that the control lines have to be under some tension in order to be effective. For the purposes of this article we say that both string tensions must be in the range [m/2, m*1.5], where m is the mass of the plotter head. Lines can neither be too slack nor too heavily loaded. The effect of this constraint is to prevent any line from being too close to horizontal or too close to vertical.
- Resolution: There is a change in resolution when we map a change of length in one or both of the strings into X and Y coordinates. That is, coordinate system conversion causes a non-uniform step resolution. We say that, for each control line, a one unit change causes at most a 1.4 unit change in the X,Y coordinate system. We limit plotting to the area of reasonable resolution; here our definition of reasonable is a 40% change.

Line Tension Calculation

Consider a mass m suspended by two lines. Each line can (and usually will) have a different angle to the X axis; call these angles a1 and a2 and the corresponding tensions t1 and t2. We wish to calculate the tension of each line.

To describe the horizontal forces along the X axis (in balance), we write:

    t1·cos(a1) = t2·cos(a2)

To describe the force m along the Y axis, caused by the weight of the plotter head assembly, we write:

    t1·sin(a1) + t2·sin(a2) = m

Solving these two equations in terms of tension, we get:

    t1 = m·cos(a2) / (cos(a1)·sin(a2) + sin(a1)·cos(a2))
    t2 = m·cos(a1) / (cos(a1)·sin(a2) + sin(a1)·cos(a2))

Note that the tension equations have denominators in common.

Angle Length Cartesian Conversion

We wish to translate between coordinate systems:
- From an angle and a length (here based from the origin), we wish to find an X,Y coordinate.
- From an X,Y coordinate pair, we wish to find the angle and length (here to the origin).

Trigonometry tells us that:

    x = length·cos(angle),  y = length·sin(angle)
    angle = atan2(y, x),    length = sqrt(x² + y²)

See the Wikipedia article on atan2. Adjusting these formulas for non-origin locations involves a simple addition or subtraction adjustment. I don't think there is a name for the coordinate system using two control lines. The closest I can find is the biangular coordinate system, but this has more to do with the angles of the lines to the X axis than the length of the lines. As long as I am in confession mode, I must also say that I have not seen "V plotter" being used as the name of this kind of device. I am not able to find any established name for this "thing", so necessity became the mother of invention.

Law of Cosines

The law of cosines can be used for many applications; here we use it to find an angle when we know the lengths of three legs. The law of cosines allows us to calculate the position of the print head when a line changes length. The basic form of the law is:

    a² = b² + c² - 2·b·c·cos(α)

Solving for α (alpha) we get:

    α = acos((b² + c² - a²) / (2·b·c))

The code uses this to map where the resolution of the print head is within an acceptable range. At certain points, a small change in line length will result in too large a change in the x,y coordinate system.
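Before moving on to the resolution check and the full program, here is a quick numeric sanity check of the tension formulas above. It is a sketch of my own (the function name tensions is hypothetical, not part of the article's program), with the head weight normalized to 1 as in the simulation code below:

    from math import sin, cos, radians

    def tensions(a1, a2, m=1.0):
        # t1*cos(a1) = t2*cos(a2)  and  t1*sin(a1) + t2*sin(a2) = m
        d = cos(a1)*sin(a2) + sin(a1)*cos(a2)
        return m*cos(a2)/d, m*cos(a1)/d

    # symmetric case: both lines at 45 degrees share the load equally (about 0.707 each)
    print(tensions(radians(45), radians(45)))
    # asymmetric case: the nearly horizontal line at 10 degrees carries only about 0.17,
    # which fails the [0.5, 1.5] tension constraint from the requirements
    print(tensions(radians(10), radians(80)))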
To check for resolution, we adjust a or b, then use the law of cosines to find α (alpha). Knowing α (alpha), length b and position A, we can use simple sine and cosine (see the previous section) to find position C.

Code Simulation

Now we have enough math to code a V plotter simulation. The code below is divided into sections; each section is introduced with information to know when reading the code itself. The code sections can be pasted together to produce a complete Python program to run the simulation and draw the plotting area. First we reference some libraries and set some constants.

    #!/usr/bin/env python
    import sys, Image, ImageDraw
    from math import sqrt, sin, cos, acos, atan2, degrees, fabs

    # setup the constants
    version = 1.7
    outputFile = "out.png"
    width, height = 800, 600
    border = 32

    # V line end points
    v1 = border/2, border/2
    v2 = width-border/2-1, border/2

Here we draw the fixed parts of the picture: the crosses showing the end points of the control lines, and the background for the drawing. Note that this drawing package has the origin in the upper left hand corner and a Y axis with positive values progressing downward.

    def cross(draw, p, n):
        c = "#000000"
        draw.line((p[0], p[1]-n, p[0], p[1]+n), c)
        draw.line((p[0]-n, p[1], p[0]+n, p[1]), c)

    def drawFixtures(draw):
        # border of calculation pixels
        draw.rectangle([border-1, border-1, width-border, height-border], "#FFFFFF", "#000000")
        # V line end points
        cross(draw, v1, border/4)
        cross(draw, v2, border/4)

There is a one to one correspondence between the tension calculation code here and the tension calculation derivation in the first math section.

    def lineTensions(a1, a2):
        d = cos(a1)*sin(a2) + sin(a1)*cos(a2)
        return cos(a2)/d, cos(a1)/d

    def tensionOk(p):
        # find angles
        a1 = atan2(p[1]-v1[1], p[0]-v1[0])
        a2 = atan2(p[1]-v2[1], v2[0]-p[0])
        # string tension check
        t1, t2 = lineTensions(a1, a2)
        lo, hi = .5, 1.5
        return lo < t1 < hi and lo < t2 < hi

Similarly, the resolution check code here is an implementation of the math in the previous section. In addition, there is a "sanity check" to verify that the calculated point for the triangle is the same as the point passed in to the calculation.

    def dx(p1, p2):
        return sqrt((p1[0]-p2[0])**2 + (p1[1]-p2[1])**2)

    def calcPointB(a, b, c):
        alpha = acos((b**2 + c**2 - a**2) / (2*b*c))
        return b*cos(alpha) + v1[0], b*sin(alpha) + v1[1]

    def resolutionOk(p):
        max = 1.4
        # law of cosines calculation and nomenclature
        c = dx(v1, v2)
        b = dx(v1, p)
        a = dx(v2, p)
        # sanity check
        err = .00000000001
        pc = calcPointB(a, b, c)
        assert p[0]-err < pc[0] < p[0]+err
        assert p[1]-err < pc[1] < p[1]+err
        # calculate mapped differences
        db = dx(p, calcPointB(a, b+1, c))  # extend left line by 1 unit
        da = dx(p, calcPointB(a+1, b, c))  # extend right line by 1 unit
        return db < max and da < max       # line pull of 1 unit does not move x,y by more than max

Each pixel in the drawing area is assigned a color based on the tension and resolution calculations. Dots are written on the terminal window to indicate that the calculation is underway.

    def calcPixel(draw, p):
        t = tensionOk(p)
        r = resolutionOk(p)
        if not t and not r:
            draw.point(p, "#3A5FBD")
        if not t and r:
            draw.point(p, "#4876FF")
        if t and not r:
            draw.point(p, "#FF7F24")
        # default to background color

    def drawPixels(draw):
        for y in range(border, height-border):
            sys.stdout.write('.')
            sys.stdout.flush()
            for x in range(border, width-border):
                calcPixel(draw, (x, y))
        sys.stdout.write('\n')

The main section of the program prepares an image for the calculation and writes it to disk when done.
    def main():
        print "V plotter map, version", version
        image = Image.new("RGB", (width, height), "#D0D0D0")
        draw = ImageDraw.Draw(image)
        drawFixtures(draw)
        drawPixels(draw)
        image.save(outputFile, "PNG")
        print "map image written to", outputFile
        print "done."

    if __name__ == "__main__":
        main()

To read code "top down" (more general to more detailed), start reading at the bottom section and work your way up. Oops, it's too late for me to tell you that now. Sorry.

Plot Area Map

The output of the program is a picture mapping locations to how well they met the specified requirements.
- Orange: poor resolution
- Light Blue: too little tension in one of the lines
- Dark Blue: too much tension in one of the lines (and poor resolution)
- White: drawing area candidate

It is interesting to see that the area with poor resolution is a superset of (entirely covers) the area with too much line tension. Thinking about it makes some intuitive sense: this is similar to a lever with a heavy weight on the short arm making the long arm move up or down a greater distance. We see light blue sections under each "+" line control point. This also makes some intuitive sense, because supporting a weight with a vertical line leaves no work (or tension) for an adjacent line. The light blue area on each side shows the loss of line tension on the side opposite. This is unfortunate because there is also line sag in longer lines (not covered in the above math or simulation) which is exacerbated by the loss of tension. Given V plotter control lines sourced at the "+" markers, reasonable plotting can be done in the white region. For a square or rectangular physical plotting surface, we can make a panel with the top side touching the orange area and the bottom corners touching the light blue area, for example the green area in the picture below. The ends of the control lines in the above picture seem to be further away from the plotting surface than V plotters commonly seen on the internet. Specifying different constraints will give different possibilities for drawing areas. We now have the tools in hand to parametrically design a V plotter matching the constraints we desire.
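As a companion to the simulation, the same distance helpers can be used for the operation a real V plotter controller performs: converting a target head position into the two control line lengths. The sketch below is my own addition, not part of the original program (the names lineLengthsFor and plotAreaOk are hypothetical); it reuses dx, tensionOk and resolutionOk from the code above.

    def lineLengthsFor(p):
        # lengths of the left and right control lines for head position p
        return dx(v1, p), dx(v2, p)

    def plotAreaOk(p):
        # a point is usable when it passes both checks from the simulation
        return tensionOk(p) and resolutionOk(p)

    target = (width/2, height/2)
    if plotAreaOk(target):
        print "line lengths:", lineLengthsFor(target)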
E2 Equilibrium of drop-forming substances

Fluid level in communicating vessels

In vessels which are interconnected below the fluid level by channels (Fig. 183), the fluid has the same level everywhere, provided, of course, they contain the same fluid. Why? A fluid which is only acted upon by gravity demands for its equilibrium that the pressure has the same magnitude (dyn/cm²) on all planes at the same level. The magnitude of the force acting on a single cm² depends only on how far it lies below the free surface. In other words: the equilibrium does not depend on the number of cm², that is, on the width and shape of the vessel (disregarding capillary tubes, which are discussed below). The same force must act on every cm² of the same horizontal plane, for example at AB as at CD as at EF, etc. This pressure is numerically equal to the product of 1 cm², the height in cm and the specific weight (1·h·s). However, the specific weight is the same in all vessels, whence 1·h·s can only have the same value everywhere if h also has the same value, that is, if the fluid stands equally high over every cm² of the cross-section, and hence equally high in all vessels. Thus, communicating vessels effectively represent a single vessel; their free surfaces lie in the same horizontal plane. The water gauge glass on a steam boiler depends on this principle. The tube communicates with the boiler; the height of the water in the tube indicates that in the boiler. The levelling instrument of surveyors, which employs two communicating vessels filled with the same fluid, also depends on this principle (Fig. 184). With it you fix the point M, which must lie in the same free surface as D and E. You can also measure differences in height with this gadget and a ruler. However, the free surfaces do not lie in the same horizontal plane, but at different heights, if they belong to fluids of different specific weight; communicating vessels can then not be viewed as a single vessel. In fact, the surface in one vessel lies the lower with respect to that in another, the larger the specific weight of the fluid below its free surface. Pour water into a U-shaped vessel (Fig. 185) with communicating legs S1 and S2, and subsequently pour into S1, on top of the free surface, a fluid which is lighter than water and does not mix with it, for example oil. The fluids will touch each other at the cross-section a. Then the free surface in S2 has water below it, that in S1 oil, and the oil level O lies higher than that of the water W. Since there is equilibrium, the pressure (dyn/cm²) at a equals that at b. The pressure of the oil column at a (its height is h1, its specific weight s1) must therefore equal that of the water column at b (height h2, specific weight s2): h1·s1 = h2·s2, that is, h1/h2 = s2/s1. The heights above the horizontal plane at which the two fluids meet are thus inversely proportional to the specific weights. This mechanism can be used to compare specific weights. Dulong and Alexis Thérèse Petit (1791-1820) compared in this way the specific weight of mercury at different temperatures. What is the pressure on a plane area that is not horizontal - the side pressure? Every point of such an area experiences the same pressure as acts at a point at the same horizontal level. Hence there acts on such an area a set of differing, parallel, equally directed forces, the magnitudes and points of action of which are known. Our task is then: compute from the individual, known, parallel forces the magnitude and point of action of their resultant.
An example of the task of finding the point of action will clarify the problem. A hollow cube (Fig. 186) is filled to its rim with water. What is the pressure on the entire vertical wall AB, and at which point must you apply a force against this wall if it is not connected rigidly to the other walls and the bottom and you want to keep it in place against the acting pressure? The computation yields: the magnitude of the resultant, that is, the pressure on the (not horizontal) area, equals the pressure to which the area would be exposed if it were lying horizontally at that level of the fluid at which its centre of gravity is located. The point of action of the resultant lies lower than the centre of gravity of the area; its location must be computed separately. It cannot be identical to the centre of gravity, because that is the centre of equally large parallel forces; in this case, the forces are not equally large. The solution of the task in Fig. 186 follows: since the wall is a square, it is subject to the same pressure as the horizontal cross-section through its centre S - this cross-section is equal in size to the wall, because the vessel is a cube - that is, the cross-section which halves the vessel and on which half the fluid presses. The point of action of the pressure force lies vertically under the centre of the wall; its distance from the bottom is one third of the cube's edge length. This is where the force must act from outside. Buoyancy is a fundamental aspect of the Physics of Fluids. For example, it allows us to explain the natural swimming of bodies, when they are supported by a fluid at rest. Natural swimming - for swimming by swimming motions is artificial: like your own swimming, it is a lasting fight against sinking. Rudders, sails and propellers are means of propulsion of naturally swimming bodies. At rest! A body can also be supported by an upwards jet. However, it does not swim then, but dances. A body which is freely movable in a fluid at rest is pulled vertically downwards by gravity and pushed upwards by its buoyancy. Its behaviour depends on the relative magnitudes of these two forces. If its weight is larger than its buoyancy, it sinks below - it drops; if its buoyancy is equal to its weight, it can neither rise nor fall - it floats. For the sake of simplicity, let the body (Fig. 187) be a rectangular prism and let its base lie horizontally, parallel to the free level of the fluid. (The treatment of arbitrarily shaped and located bodies demands infinitesimal calculus!) Every point of the surface of the prism is subject to a pressure which is determined by its depth below the free surface of the fluid. However, the pressure against its sides contributes nothing, because at the same level the pressures on opposite sides are the same, but opposite, and therefore balance. Only the pressures on the horizontal faces need be considered. The pressure on the top face is q·k·s, that on the bottom face q·(k + h)·s, and the weight of the body q·h·S, where q is the prism's cross-section, h its height, k the depth of the upper face below the free surface of the fluid, s the specific weight of the fluid and S that of the prism. Hence the vertical forces are q·k·s + q·h·S downwards and q·(k + h)·s upwards. The result depends on whether q·k·s + q·h·S is larger than, equal to or smaller than q·(k + h)·s. The prism has the weight q·h·S; a body with the volume q·h of the prism, but with the density of the fluid, has the weight q·h·s.
However, in order that the prism can occupy its place in the fluid, it must displace an equal volume of fluid: q·h·s is therefore the weight of the fluid displaced by it. Hence everything depends on whether the weight of the submerged body is larger than, equal to, or smaller than the weight of the fluid it displaces. The weight of a body, on submersion in a fluid, drops by as much as the weight of the fluid which it displaces (Archimedes' Principle). We have chosen a rectilinear prism because the demonstration of the principle is then simpler. However, it can be shown theoretically and experimentally that it is valid for bodies of any shape, so that q·h = V can be interpreted as the volume of any body. A proof of Archimedes' Principle is given by an equal-armed lever balance of special shape, the hydrostatic balance (Fig. 188). The body to be weighed hangs below one pan and is immersed completely in the fluid in which its loss of weight is to be found. C is a hollow cylinder, the internal volume of which equals that of the solid cylinder D. You first establish equilibrium of the balance while D is surrounded by air and C is empty. If you now place the container with the fluid below D, so that it is completely immersed, the balance deflects to the right, that is, D has lost weight. If you now fill C completely with the same fluid as that surrounding D, equilibrium is restored. The loss of weight is thus compensated by the weight of a volume of fluid which is equal to the volume of the cylinder D. However, that is the volume of the fluid which D has displaced by taking its place. Referring now to the work of the preceding section, you have the results: 1. If qhS > qhs, that is, the body is heavier than the fluid it displaces, a net downward force acts: the body sinks. 2. If qhS = qhs, that is, the body has the same weight as the fluid it displaces, the two forces are equal: the body floats in the fluid. 3. If qhS < qhs, that is, the body is lighter than the fluid it displaces, a net upward force acts: the body begins to rise, sticks out of the fluid and then displaces less fluid than when it is fully immersed. The volume of the part of the body sticking out of the fluid reduces the buoyancy qhs. In the end, so much of the body sticks out of the fluid that the fluid displaced by the still submerged part weighs as much as the entire body; it then rises no further, but swims on the surface of the fluid. In order to show that the fluid displaced by the submerged part of a body weighs as much as the entire swimming body, you fill the vessel V (Fig. 189) up to the opening o with fluid and then place into it a body A which will swim on the fluid. By weighing the displaced fluid, you convince yourself that the fluid which flowed out of the vessel at o and the body have the same weight. The densities of a body and a fluid decide how much of the volume of a body gets immersed and how much sticks out, as, for example, in the case of an iceberg.¹ At 0º, ice (frozen fresh water) has the density 0.9167, sea water with 3.4% salt the density 1.0273. Let V be the volume of the iceberg; its weight is then V·0.9167·g. Let V·x denote the submerged part of the iceberg, where x is a proper fraction; then the weight of the displaced sea water is V·x·1.0273·g. From the equality of the two weights follows that x = 0.9167/1.0273 = ~9/10. Hence only 1/10 of the volume of an iceberg sticks out of the ocean.
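The iceberg calculation generalizes: for any body that swims naturally, equality of weight and buoyancy, V·S·g = V·x·s·g, gives the submerged fraction x = S/s, the ratio of the specific weights. A small Python illustration of my own (the function name submerged_fraction is hypothetical, not from the text):

    def submerged_fraction(S, s):
        # S = specific weight of the body, s = specific weight of the fluid;
        # a body with S >= s sinks (or at best floats fully submerged)
        if S >= s:
            return 1.0
        return S / s

    # the iceberg example: ice 0.9167, sea water 1.0273
    print(submerged_fraction(0.9167, 1.0273))   # about 0.89, roughly 9/10 submerged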
However, the slimmer, less massive part of the volume will stick out, and the broader, more massive part will be submerged, because the iceberg (by lowering its centre of gravity) will always assume the position of maximum possible stability. As a rule, the visible part of an iceberg is estimated at 1/7 - 1/8 of the total height (frequently 40 - 60 m). ¹ Icebergs are the ends of glaciers (fresh water) which have broken off and been carried away by currents and wind; these glaciers lie on polar mainland and islands (inland ice). You can even make a material with specific weight larger than that of a fluid swim naturally in it by giving it a suitable shape. For example, an iron plate does not swim on or float in water, even if it is fully submerged, since it weighs more than the water it displaces. But in the shape of a ship it will swim, because the submerged part of the ship, with its curved form, displaces a volume of water which is equal to its weight. A swimming elastic plate which is deformed by a weight at its centre exhibits a strange phenomenon (Hertz 1884). Computations show that the buoyancy which the water exerts on the plate due to its deformation equals the weight on it. Hertz says: "However large the weight is, it will always be carried by the buoyancy which a plane unloaded plate experiences. If you place a small round disk of stiff paper on water, you can deposit several hundred grams on it, while the buoyancy of the paper is only a few grams. Thus, if someone swims on a large plate of ice, it is, strictly speaking, more correct to say that he swims because the plate of ice is deformed by his weight into a very shallow boat than to say that he swims because the ice is light enough to carry him in addition to its own weight. For he would swim anyhow, even if the ice were not lighter than the water; if instead of a person you place arbitrarily large weights on the ice, they might break through the ice and sink, but they would never sink with the ice. The limit of the load depends on the strength of the plate and not on the weight of the ice. It is quite different when persons or weights are distributed uniformly over the area." (Hertz's Collected Works, 1, 292) Since a living being weighs more than the water it displaces (its empty spaces may be disregarded), it will sink in water.¹ It compensates for its sinking with swimming motions, by which it exerts a pressure downwards, and the resistance of the water against this pressure lifts its body; that is, it swims artificially. Mainly through the gases developing in its interior, a dead person is specifically lighter than water and swims naturally. Birds swim naturally in their cover of feathers and air. The floating of fish is artificial: the pressure of their muscles on their swim bladder is required for rising and sinking, as is seen from the fact that dead fish (even ones not yet decaying) swim naturally on the water surface. There are fish which are heavier than water (those without swim bladders, such as sharks and rays) and fish whose weight equals that of the water they displace, because they balance the excess weight of their bodies by means of an air-filled swim bladder (most of the bony fish, Teleostei²). The former, for example sharks, sink when they do not move along (like an aeroplane); the latter can stop in place, for example goldfish and carp (like balloons).
A bony fish can swim as slowly as it likes; a shark must maintain a minimum velocity in order to generate a resistance of the water against its lower half, the upward component of which balances its excess weight and thus carries it - the same difference as arises between aeroplanes and airships (Hesse). ¹ Except in the Dead Sea, whose salt content of 25% makes the water too heavy (ordinary sea water has a content of 3.5%). However, one can drown in it anyhow, because ordinary swimming is next to impossible: the legs cannot submerge and a person has little control over his body. ² Teleostei have bony skeletons.

Stability of the swimming body. Meta-centre

Its own weight and buoyancy act all the time on a swimming body, whence it is continuously acted upon by two forces which are equally large and have opposite directions. Their equality settles only the vertical position of the body; it still admits rotation. For the body to remain at rest, the forces must yet fulfil one condition, as the following example shows (Figs. 190/191). The weight of a swimming body is replaced by a force Gt which acts vertically downwards at its centre of gravity G. The buoyancy equals the weight of the fluid volume displaced by the body. This weight can also be replaced by a force. If the centre of gravity of the fluid prior to being displaced by the body lies at A, the swimming body is acted upon at A by the vertically upwards force AB (= Gt). For it to remain at rest, the two forces AB and Gt must lie on the same straight line; in other words, the centre of gravity of the swimming body and the point of action of the buoyancy must lie vertically above one another (Figs. 190a/191a); otherwise they form a couple and will rotate the body. Imagine that a body has been displaced from its position of rest, say by sudden wind action, to the position of Figs. 190b/191b, so that G and A no longer lie on the same vertical line. The centre of gravity G maintains, of course, its position in the body, but the point of action of the buoyancy does not, for in every new position of the body its submerged portion has another form, that is, the displaced fluid volume changes its shape and therefore the position of its centre of gravity. It now lies at A'. The forces Gt and A'B' then form a couple and cause the body to rotate. The two cases of Figs. 190b/191b differ totally. In the second case, the couple tends to return the body to its position of rest, that is, to straighten it out; in the first case, it tries to turn it over. Thus, in the first case the equilibrium is unstable, in the second case stable. If a ship were to swim in an unstable state, as in Fig. 190a, the smallest gust would turn it over, whence we demand that it swim in a stable state as in Fig. 191a. You can formulate the condition for a body to swim in a stable state as follows: place in the body, deflected from its position of rest (Figs. 190b/191b), a straight line through G and through the earlier point of action A of the buoyancy. It intersects (in the deflected body) the line of action of the buoyancy at M, the meta-centre (introduced by Pierre Bouguer, 1698-1758, in 1746). If the equilibrium is stable, the centre of gravity of the body lies below the meta-centre; if it is unstable, above it. In order to make a swimming ship stable, one must place the centre of gravity as low as possible (for example, by means of ballast), so that it will still lie below the meta-centre when the ship leans over very much.

Hydro-static equilibrium of Earth's crust (Isostasy)
Two equally heavy massive cylinders which swim on a fluid displace two equally heavy cylinders of fluid. If the cylinders of displaced fluid have the same diameter, they also have the same height, that is, the bodies sink equally deep into the fluid. If such cylinders have different specific weights, they stick out of the fluid in inverse proportion to their specific weights. Fig. 192 shows several equally heavy cylinders with the same cross-section but different specific weights swimming in a supporting fluid. They demonstrate the fundamental geophysical problem of isostasy: Earth's interior is most probably to a certain degree plastic, so that the firm strata of its surface effectively swim on the lower layers, which must therefore have a larger specific weight. (The geologists speak of two layers in Earth's crust: the upper, Sal, consisting predominantly of rocks with silicon and aluminium, and the lower, Sima, of rocks with silicon and magnesium, the former swimming on the latter.) Submergence of the lighter matter in the heavier one generates a mass defect below the projecting part. Accordingly, the visible elevations of mass above Earth's surface correspond to subterranean mass defects. The visible elevations, according to Archimedes' Principle, are equal to what is missing below or, in other words, the mass defects are equal to the visible masses; they compensate each other. All of the subterranean masses, the defects of which compensate all the elevations, are bounded by a surface which corresponds to a common depth of submergence as in Fig. 192. It is called the compensation surface and is defined by the fact that on each unit of its area lies the same mass. Following C.E. Dutton 1841-1912, this state of equilibrium is referred to as isostasy. The compensation surface lies most probably 118 km below Earth's surface. The surfaces of equal density then coincide with the level surfaces - only not in the top layers of Earth with their mixture of masses of different forms and densities. One level surface will therefore be the last (counting from inside outwards) which corresponds to hydro-static equilibrium; it must have the property that the same pressure acts on each of its units of area - the characteristic of the compensation layer. Geological events (formation of mountains, volcanism, fracture formation) take place above it, in Earth's crust.
Measurement of density according to Archimedes' Principle. You call the density of a body the ratio of its mass to its volume (dimensional formula: m·l⁻³), that is, in the cm-g-sec system, it is expressed in g/cm³. In order to obtain its density, you must therefore 1. find its mass in grams, 2. find its volume in cm³, 3. divide the number of grams by the number of cm³. You determine its mass by weighing; its volume, if it cannot be found by direct measurement, indirectly: you find out how much weight it loses when it is fully submerged in a fluid (Fig. 188). For example: If a piece of copper weighs 11.378 g in air and 10.100 g in distilled water at 4ºC, then its loss of weight in water is 1.278 g, that is, it has displaced 1.278 g of water at 4ºC, that is, 1.278 cm³, and thus has itself a volume of 1.278 cm³. 1.278 cm³ of copper contain 11.378 g, whence the weight of 1 cm³ of copper is 11.378/1.278 = 8.903 g; the density of copper is 8.903 g/cm³. Methods of determining the density of solids differ essentially in the means employed for the determination of their loss of weight in a fluid, that is, in the method of determination of their volume.
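The copper example can be checked with a minimal Python sketch of the same arithmetic; the function and variable names are ours, and the fluid is assumed to be distilled water at 4ºC with a density of 1 g/cm³:

    def density_from_weighings(weight_in_air_g, weight_in_fluid_g, fluid_density_g_per_cm3=1.0):
        """Density of a solid from its weight in air and its apparent weight
        when fully submerged, by Archimedes' principle."""
        loss_g = weight_in_air_g - weight_in_fluid_g        # weight of the displaced fluid
        volume_cm3 = loss_g / fluid_density_g_per_cm3       # volume of displaced fluid = volume of the body
        return weight_in_air_g / volume_cm3                 # g/cm^3

    # the copper example: 11.378 g in air, 10.100 g in water at 4 deg C
    print(round(density_from_weighings(11.378, 10.100), 3))  # -> 8.903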
You use for this purpose a hydrostatic balance (or spring balance), a weight hydrometer, or a container hydrometer. 1. You use the hydrostatic balance (Fig. 188) to determine a body's loss of weight by weighing it normally and then again when it is submerged in a fluid. The spring balance of Philipp von Jolly 1809-1884 (Fig. 193) is also a hydrostatic balance: underneath its scale hangs a second scale submerged in the fluid in which the body is to be submerged. The lower end of the spring has a marker which is displaced along a scale during weighing. When you place the body in the upper scale, the marker moves to a definite position on the scale. You then determine a) how many grams you have to place in the upper scale instead of the body in order to move the marker to the same place on the scale - that is, you find the weight of the body in air - and b) how many more grams you have to add when the body lies in the lower scale, that is, how much weight it loses by buoyancy. 2. The weight hydrometer (Fig. 194) is a float B consisting of two rigidly interconnected scales A and C lying above each other, the lower (as in Jolly's spring balance) in the fluid, the upper in air. The marker O, which you bring just to the level of the fluid by loading the float, lies between A and B. The two weighings proceed as for Jolly's instrument. 3. The container hydrometer is a small bottle which you fill to its rim with a fluid. If you then introduce a small body, it displaces fluid, which overflows. Hence, if you weigh it a) filled to the rim with the fluid and with the body next to it on the balance, and b) with the body inside the bottle, you find from the difference of the two results the weight of the fluid displaced by the body.
Measurement of the density of fluids (Mohr's scale hydrometer). In order to measure the density of a fluid, you determine the loss of weight of a solid body first in water and then in the fluid. Its loss of weight in water yields its volume. Its loss of weight in the fluid, which is equal to the weight of the displaced fluid, thus yields the weight of a known volume (found by the first measurement) of this fluid. You can always employ the same body (in Figs. 195/196 a small glass container with mercury), whence you need not find its loss of weight in water each time; its volume is determined only once in order to know the volume of fluid displaced in the second measurement. Thus, the measurement of the density of a fluid is reduced to that of the loss of weight of the small glass bottle in it. Here too you employ a hydrostatic balance, Jolly's spring balance, a weight hydrometer or a scale hydrometer. For this purpose, the hydrostatic balance of Friedrich Mohr 1806-1879 (Fig. 195) is used. The weighing of the glass bottle for the determination of its loss of weight is done by moving sliding weights along the lever, which is subdivided into 10 equal sections. The hydrometer of Gabriel Daniel Fahrenheit 1686-1736 (Fig. 196) is a hollow glass float which carries (instead of the lower scale with a load which is the same for all weighings) a mass of mercury, mostly the bulb of a thermometer, since one must take into account the temperature of the fluid. The instrument is loaded for each measurement in such a way that it dips into the fluid to a definite mark.
If it weighs P g in air and, when it swims in water, has to be loaded additionally with p g in order to submerge to the marker, it receives in water a buoyancy of (P + p) g, since the displaced water volume is as heavy as the swimming body; that is, it displaces (P + p) cm³ of water, in other words, it dips in with a volume of (P + p) cm³. If it must be loaded in the fluid to be investigated with p' g in order to dip in as far as the marker - that is, again to dip in with the volume (P + p) cm³ - it displaces (P + p') g of the fluid. Thus (P + p) cm³ contain (P + p') g of the fluid, whence 1 cm³ contains (P + p')/(P + p) g; this ratio is the density of the fluid. Scale hydrometers differ from weight hydrometers in the same way as automatic balances differ from non-automatic ones: they only require the reading of a scale. A scale hydrometer (Fig. 197) - always a thermometer-like float - is a hydrometer with an empirically subdivided and numbered scale. You let it swim in the fluid and read off the number on its scale down to which it dips in. (The same weight of fluid is always displaced.) The number read off the scale does not always represent the density. Its significance depends on the purpose for which the hydrometer has been calibrated. For example, scale hydrometers are calibrated as alcoholometers to determine the percentage by weight of pure alcohol in a mixture of alcohol and water (spirits) or the percentage by volume of absolute alcohol (Gay-Lussac hydrometer), as alkalimeters for the determination of the alkali content in lyes, as lactometers for the water content in milk, etc. There exist also scale hydrometers with arbitrary subdivisions like that of Antoine Baumé 1728-1804. Concentrated sulphuric acid should have 66º B., that is, its density should be such that the Baumé hydrometer dips in up to the subdivision 66; the density of nitric acid in the trade should correspond to 36º B. In order to transform degrees B. into density, you use a table. If n is the number of degrees and d the density, then, depending on whether the fluid (at 12.5ºC) is heavier or lighter than water, d = 146/(146 - n) or d = 146/(146 + n).
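The two relations above can be made concrete with a short Python sketch; the function names are ours, and the water displaced at the reference temperature is taken to be 1 g per cm³:

    def density_fahrenheit(P, p, p_prime):
        """Weight hydrometer: P = weight of the instrument in air (g),
        p = extra load needed in water, p_prime = extra load needed in the
        test fluid, each chosen so that it sinks exactly to the marker."""
        return (P + p_prime) / (P + p)          # density of the fluid in g/cm^3

    def density_from_baume(n, heavier_than_water=True):
        """Baume degrees to density at 12.5 deg C, using d = 146/(146 -/+ n)."""
        return 146.0 / (146.0 - n) if heavier_than_water else 146.0 / (146.0 + n)

    print(round(density_from_baume(66), 3))     # concentrated sulphuric acid, about 1.825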
http://mpec.sc.mahidol.ac.th/RADOK/physmath/PHYSICS/e2.htm
Arc Measure vs Arc Length In geometry, an arc is a commonly encountered and useful figure. Generally, the term arc is used to refer to any smooth curve, and the length along the curve from the starting point to the end point is known as the arc length. More specifically, the term arc is used for a portion of a circle along its circumference. The size of the arc is usually given either by the angle subtended by the arc at the center or by the length of the arc. The angle subtended at the center is also known as the angle measure of the arc or, informally, the arc measure; it is measured in degrees or radians. The length of the arc differs from the arc measure: the length depends on both the radius of the curve and the angle measure of the arc. This relation between the arc length and the arc measure is expressed by the formula S = rθ, where S is the arc length, r is the radius and θ is the angle measure of the arc in radians (this is a direct result of the definition of the radian). From this relation, the formula for the perimeter of a circle, its circumference, follows at once: since the circumference is the arc length corresponding to an angle measure of 2π radians, C = 2πr. These formulas are important at every level of mathematics, and many applications can be derived from these simple ideas; in fact, the definition of the radian is based on the relation above. When the term arc refers to a curved line other than a circular one, calculus has to be employed to calculate the arc length: the arc length is the definite integral, taken between the two end points, of the arc length element along the curve described by the function. What is the difference between Arc Measure and Arc Length? • The size of an arc is measured by the length of the arc or by the angle measure of the arc (arc measure). Arc length is the length along the curve, while the angle measure of the arc is the angle subtended at the center by the arc. • The arc length is measured in units of length, while the arc measure is measured in units of angle. • The relation between the arc length and the angle measure of the arc is given by S = rθ.
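Both ideas can be sketched in a few lines of Python; the helper names are ours, and the general-curve case is approximated numerically by summing many small chords rather than evaluating the integral symbolically:

    import math

    def circular_arc_length(radius, angle_rad):
        """S = r * theta, with theta in radians."""
        return radius * angle_rad

    def arc_length(f, a, b, n=100_000):
        """Approximate arc length of y = f(x) on [a, b] by summing short chords."""
        xs = [a + (b - a) * i / n for i in range(n + 1)]
        ys = [f(x) for x in xs]
        return sum(math.hypot(xs[i + 1] - xs[i], ys[i + 1] - ys[i]) for i in range(n))

    print(circular_arc_length(1.0, 2 * math.pi))                 # circumference of a unit circle, 2*pi
    print(arc_length(lambda x: math.sqrt(1 - x * x), -1.0, 1.0))  # upper half of the unit circle, approximately pi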
http://www.differencebetween.com/difference-between-arc-measure-and-vs-arc-length/
Linear equations can have one or more variables. Linear equations occur with great regularity in applied mathematics. While they arise quite naturally when modeling many phenomena, they are particularly useful since many non-linear equations may be reduced to linear equations by assuming that quantities of interest vary to only a small extent from some "background" state. Linear equations do not include exponents on their variables.
Linear equations in two variables A common form of a linear equation in the two variables x and y is y = mx + b, where m and b designate constants (parameters). The origin of the name "linear" comes from the fact that the set of solutions of such an equation forms a straight line in the plane. In this particular equation, the constant m determines the slope or gradient of that line, and the constant term b determines the point at which the line crosses the y-axis, otherwise known as the y-intercept. Since terms of linear equations cannot contain products of distinct or equal variables, nor any power (other than 1) or other function of a variable, equations involving terms such as xy, x², y^(1/3), and sin(x) are nonlinear.
Forms for 2D linear equations Linear equations can be rewritten using the laws of elementary algebra into several different forms. These equations are often referred to as the "equations of the straight line." In what follows, x, y, t, and θ are variables; other letters represent constants (fixed numbers).
General (or standard) form - In the general (or standard) form the linear equation is written as: Ax + By = C, - where A and B are not both equal to zero. The equation is usually written so that A ≥ 0, by convention. The graph of the equation is a straight line, and every straight line can be represented by an equation in the above form. If A is nonzero, then the x-intercept, that is, the x-coordinate of the point where the graph crosses the x-axis (where y is zero), is C/A. If B is nonzero, then the y-intercept, that is the y-coordinate of the point where the graph crosses the y-axis (where x is zero), is C/B, and the slope of the line is −A/B. The general form is sometimes written as: ax + by + c = 0, - where a and b are not both equal to zero. The two versions can be converted from one to the other by moving the constant term to the other side of the equal sign.
Slope-intercept form - y = mx + b, - where m is the slope of the line and b is the y-intercept, which is the y-coordinate of the location where the line crosses the y axis. This can be seen by letting x = 0, which immediately gives y = b. It may be helpful to think about this in terms of y = b + mx, where the line passes through the point (0, b) and extends to the left and right at a slope of m. Vertical lines, having undefined slope, cannot be represented by this form.
Point-slope form - y − y1 = m(x − x1), - where m is the slope of the line and (x1, y1) is any point on the line. - The point-slope form expresses the fact that the difference in the y coordinate between two points on a line (that is, y − y1) is proportional to the difference in the x coordinate (that is, x − x1). The proportionality constant is m (the slope of the line).
Two-point form - y − y1 = ((y2 − y1)/(x2 − x1))·(x − x1), - where (x1, y1) and (x2, y2) are two points on the line with x2 ≠ x1. This is equivalent to the point-slope form above, where the slope is explicitly given as (y2 − y1)/(x2 − x1). Multiplying both sides of this equation by (x2 − x1) yields a form of the line generally referred to as the symmetric form: (y2 − y1)·(x − x1) = (x2 − x1)·(y − y1).
Intercept form - x/a + y/b = 1, - where a and b must be nonzero. The graph of the equation has x-intercept a and y-intercept b.
The intercept form is in standard form with A/C = 1/a and B/C = 1/b. Lines that pass through the origin or which are horizontal or vertical violate the nonzero condition on a or b and cannot be represented in this form.
Matrix form Using the order of the standard form Ax + By = C, one can rewrite the equation in matrix form: [A B]·[x; y] = [C]. Further, this representation extends to systems of linear equations. Since it extends easily to higher dimensions, it is a common representation in linear algebra and in computer programming. There are named methods for solving systems of linear equations, like Gauss-Jordan elimination, which can be expressed as elementary row operations on the matrix.
Parametric form - x = T·t + U and y = V·t + W. - Two simultaneous equations in terms of a variable parameter t, with slope m = V / T, x-intercept (VU − WT) / V and y-intercept (WT − VU) / T. - This can also be related to the two-point form, where T = p − h, U = h, V = q − k, and W = k: x = (p − h)·t + h and y = (q − k)·t + k. - In this case t varies from 0 at point (h, k) to 1 at point (p, q), with values of t between 0 and 1 providing interpolation and other values of t providing extrapolation.
Polar form - r = b / (sin θ − m·cos θ), - where m is the slope of the line and b is the y-intercept. The graph is undefined where the denominator vanishes, that is, in the direction parallel to the line (tan θ = m). The equation can be rewritten to eliminate the discontinuity: r·sin θ = m·r·cos θ + b.
Normal form - The normal segment for a given line is defined to be the line segment drawn from the origin perpendicular to the line. This segment joins the origin with the closest point on the line to the origin. The normal form of the equation of a straight line is given by: x·cos θ + y·sin θ = p, - where θ is the angle of inclination of the normal segment, and p is the (signed) length of the normal segment. The normal form can be derived from the general form ax + by + c = 0 by dividing all of the coefficients by (|c|/−c)·sqrt(a² + b²). - This form is also called the Hesse standard form, after the German mathematician Ludwig Otto Hesse. - Unlike the slope-intercept and intercept forms, this form can represent any line, and it requires only two finite parameters, θ and p, to be specified. Note that if the line is through the origin (c = 0, p = 0), one drops the |c|/−c factor and computes sin θ and cos θ directly from a and b.
2D vector determinant form The equation of a line can also be written as the determinant of two vectors. If P1 = (x1, y1) and P2 = (x2, y2) are unique points on the line, then P = (x, y) will also be a point on the line if the following is true: det(P − P1, P2 − P1) = 0. - One way to understand this formula is to use the fact that the determinant of two vectors on the plane will give the area of the parallelogram they form. Therefore, if the determinant equals zero then the parallelogram has no area, and that will happen when the two vectors are on the same line. To expand on this, we can write P − P1 = (x − x1, y − y1) and P2 − P1 = (x2 − x1, y2 − y1), so the above equation becomes: (x − x1)·(y2 − y1) − (y − y1)·(x2 − x1) = 0. Then dividing both sides by (x2 − x1) would result in the "Two-point form" shown above, but leaving it here allows the equation to still be valid when x1 = x2.
Horizontal line - y = b. This is a special case of the standard form where A = 0 and B = 1, or of the slope-intercept form where the slope m = 0. The graph is a horizontal line with y-intercept equal to b. There is no x-intercept, unless b = 0, in which case the graph of the line is the x-axis, and so every real number is an x-intercept.
Vertical line - x = a. This is a special case of the standard form where A = 1 and B = 0. The graph is a vertical line with x-intercept equal to a. The slope is undefined. There is no y-intercept, unless a = 0, in which case the graph of the line is the y-axis, and so every real number is a y-intercept.
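How these forms interconvert can be illustrated with a small Python sketch (the function names and sample points are ours): it takes two points, builds the general form Ax + By = C, and reports the slope-intercept and intercept forms when they exist.

    def line_through(p1, p2):
        """Return coefficients (A, B, C) of Ax + By = C through two points."""
        (x1, y1), (x2, y2) = p1, p2
        A, B = y2 - y1, x1 - x2
        return A, B, A * x1 + B * y1

    def describe(p1, p2):
        A, B, C = line_through(p1, p2)
        print(f"general form: {A}x + ({B})y = {C}")
        if B != 0:                                    # not a vertical line
            print(f"slope-intercept form: y = {-A / B}x + {C / B}")
        if A != 0 and B != 0 and C != 0:              # intercept form needs nonzero intercepts
            print(f"intercept form: x/{C / A} + y/{C / B} = 1")

    describe((1, 2), (3, 6))    # y = 2x, through the origin: no intercept form
    describe((0, 1), (2, 0))    # x/2 + y/1 = 1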
Connection with linear functions A linear equation, written in the form y = f(x), whose graph crosses through the origin, that is, whose y-intercept is 0, has the following properties: f(x1 + x2) = f(x1) + f(x2) and f(a·x) = a·f(x), where a is any scalar. A function which satisfies these properties is called a linear function (or linear operator, or more generally a linear map). However, linear equations that have non-zero y-intercepts, when written in this manner, produce functions which have neither property above and hence are not linear functions in this sense. They are known as affine functions. (A short numerical spot-check of these two properties appears at the end of this article, after the external links.)
Linear equations in more than two variables A linear equation can involve more than two variables. The general linear equation in n variables is: a1·x1 + a2·x2 + … + an·xn = b. In this form, a1, a2, …, an are the coefficients, x1, x2, …, xn are the variables, and b is the constant. When dealing with three or fewer variables, it is common to replace x1 with just x, x2 with y, and x3 with z, as appropriate. In vector notation, this can be expressed as: n·(x − x0) = 0, where n is a vector normal to the plane, x are the coordinates of any point on the plane, and x0 are the coordinates of a point on the plane taken as its origin.
See also - Quadratic equation (degree = 2) - Cubic equation (degree = 3) - Quartic equation (degree = 4) - Quintic equation (degree = 5)
References - Barnett, Ziegler & Byleen 2008, pg. 15 - Barnett, R.A.; Ziegler, M.R.; Byleen, K.E. (2008), College Mathematics for Business, Economics, Life Sciences and the Social Sciences (11th ed.), Upper Saddle River, N.J.: Pearson, ISBN 0-13-157225-3
External links - Algebraic Equations at EqWorld: The World of Mathematical Equations. - Video tutorial on solving one step to multistep equations. - Linear Equations and Inequalities: Open Elementary Algebra textbook chapter on linear equations and inequalities. - Hazewinkel, Michiel, ed. (2001), "Linear equation", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
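As promised above, here is a small numerical spot-check of the additivity and homogeneity properties; the function is_linear and its sample values are our own illustrative choices, not part of the article:

    def is_linear(f, samples=((1.0, 2.0), (-3.0, 0.5)), scalars=(2.0, -1.5), tol=1e-9):
        """Spot-check additivity f(x1+x2) = f(x1)+f(x2) and homogeneity f(a*x) = a*f(x)."""
        add_ok = all(abs(f(x1 + x2) - (f(x1) + f(x2))) < tol for x1, x2 in samples)
        hom_ok = all(abs(f(a * x) - a * f(x)) < tol for a in scalars for x, _ in samples)
        return add_ok and hom_ok

    print(is_linear(lambda x: 3 * x))        # True: y-intercept 0, a linear function
    print(is_linear(lambda x: 3 * x + 1))    # False: nonzero y-intercept, an affine function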
http://en.wikipedia.org/wiki/Linear_equation
Theory of Operation The scope clock uses a highly uncommon method called Lissajous figures to draw characters on the screen. Whereas most every other oscilloscope display in the world uses short line segments or dots to form characters, the scope clock sweeps the electron beam with sine and cosine waves. Always moving, the beam produces smooth, gentle curves with no bright spots or jaggies. A Lissajous figure is produced when an oscilloscope beam is moved in the X axis by a sine wave, and a matching sine wave that is 90 degrees out of phase (called quadrature) is applied to the Y axis. Most people who have taken an electronics or physics class have done this experiment, then wondered what possible use it could have in the real world. Well, here's that use. The scope clock has special circuitry called a circle generator. This consists of five parts: the wave shaper, the shape selector, the DACs, the X-Y summer and the arc blanker. The wave shaper starts with a 38.4 KHz square wave that is produced by dividing the 19.6608 MHz processor clock by 512. This square wave is sent through a series of low-pass and band-pass filters to turn it into a 38.4 KHz sine wave, then through more filters to shift its phase by 90 degrees. The sine wave is also inverted to produce a negative wave, needed to make a backslash. The shape selector determines which version of the wave is sent to the Y axis. The X axis always receives the original sine wave. If that wave is sent to the Y axis also, then a forward slash, a line with positive slope, will be produced. If the inverted wave is sent to the Y axis, then a backslash will be produced. If a 90 degree quadrature wave is sent to the Y axis, then a circle will be produced. The DACs are four digital-to-analog converters whose job is to scale the circle size and position the center of the circle on the screen. These are special DACs called multiplying DACs. A multiplying DAC will multiply, in the analog domain, the voltage applied to its reference input by a scale factor equal to the number loaded into the DAC divided by its input number range. For example, if a 1 volt peak-to-peak sine wave is fed into the DAC and its register is loaded with 64, which is 1/4 of its range of 256, then the output will be a 1/4 volt peak-to-peak sine wave. Two of the DACs are used to scale the sine waves for the X and Y axes. The other two have 5 volts as their input, and simply produce X and Y offset voltages to move the circle on the screen. There are two summers, one each for the X and Y axes. A summer adds the offset voltage, the sine wave, and another offset voltage from the centering knob to produce a signal that is amplified and sent to the oscilloscope tube deflection plates. The arc blanker switches the electron beam on and off as the sine waves sweep out the circle on the screen. It is made from an 8-way multiplexer that selects one bit of the arc code byte at a time as the circle is swept. The selection is done by using three higher-frequency bits from the frequency divider that produces the 38.4 KHz square wave. The selected control bit is passed to the blanking circuit, which shifts it in voltage and applies it to the grid of the CRT. Because the circle generator hardware does most of the work, only a very simple 8-bit microcontroller is used to guide it. This is a Freescale (nee Motorola) HC908GP32 processor with 32 Kbytes of ROM and 512 bytes of RAM.
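The signal chain can be sketched as a short Python simulation (the function name, the sample count and the 256-unit screen coordinates are our own illustrative assumptions, not the SC200 firmware): quadrature sine/cosine waves stand in for the wave shaper, the multiplying-DAC scaling is a per-sample multiply, and the summer adds the offset values.

    import math

    def circle_points(cx, cy, diameter, samples=512):
        """One revolution of the circle generator: X gets the sine, Y gets the
        90-degree (cosine) wave; the size DACs scale the waves, the offset DACs centre them."""
        scale = diameter / 2.0
        pts = []
        for i in range(samples):
            phase = 2 * math.pi * i / samples      # one 38.4 kHz period
            x = cx + scale * math.sin(phase)       # X offset + scaled sine
            y = cy + scale * math.cos(phase)       # Y offset + scaled quadrature wave
            pts.append((x, y))
        return pts

    # a 20-unit-radius circle centred at (128, 128) on a 256-unit screen
    beam = circle_points(128, 128, 40)
    print(beam[0], beam[128])   # (128.0, 148.0), then roughly (148, 128) a quarter turn later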
The job of the microcontroller in the scope clock is to set up the circle-generating hardware with the needed parameters for each character segment. The code draws characters as a series of segments. Each segment is a single circle, arc or line. A typical character has two to five segments. A sector blanking control allows the creation of arcs. There are eight sectors, each from one center axis of the circle to the nearest 45 degree point. Any of the 256 combinations of eight sectors may be blanked. The brightness of a segment is a function of the display time and the segment size (height and width). It is necessary to make the segment display time an integer multiple of the circle drawing period. This is accomplished by using a programmed delay loop. The circle generator runs at 38.4 KHz. This means that the minimum display time (one revolution of the circle generator) is 26 microseconds long. For a standard segment of perhaps 20 units radius, the display time is 260 microseconds. This translates to the ability to display about 40 small characters in 1/60 second. The 3" diameter screen is 256 units square, which makes each unit about 0.01" or 1/4 millimeter. The maximum diameter is 255 units (2.5"), the minimum is 1 unit (0.01"). The minimum usable circle diameter is about 1/32" or about 4 units. A basic font has been designed at 20 tall x 12 wide, proportionally spaced. This is big enough for 8 lines of text of 16 characters each. A double sized font would be more readable from a distance, giving 4 lines of 8 characters each. Doubling in size again yields the time display size, giving two lines of 4 characters each. The font is stored as two tables: an index into the segment table sorted by ASCII value, and a segment table with a series of segment groups, each terminated by a flag containing the character width. Each segment requires six bytes of data: X,Y position (circle center), X,Y diameter, a shape code (\ or O or /) and the eight arc bits.
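A rough sketch of the segment encoding and the timing arithmetic described above, in Python; the field order inside the six bytes and the helper names are illustrative assumptions (the text only says each segment takes six bytes and that display time is a whole number of 26 µs revolutions):

    REV_US = 1_000_000 / 38_400          # one circle-generator revolution, about 26 microseconds

    def pack_segment(x, y, dx, dy, shape, arc_bits):
        """Six bytes per segment: centre X/Y, X/Y diameter, shape code, arc-blank bits."""
        assert shape in (ord('\\'), ord('O'), ord('/'))
        return bytes([x, y, dx, dy, shape, arc_bits])

    def display_time_us(revolutions):
        """Segment display time must be an integer number of revolutions."""
        return revolutions * REV_US

    seg = pack_segment(128, 128, 40, 40, ord('O'), 0b11111111)   # a full 20-unit-radius circle
    print(len(seg), round(display_time_us(10)))                  # 6 bytes, about 260 microseconds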
http://www.cathodecorner.com/sc200theory.html
Fin (extended surface) In the study of heat transfer, a fin is a surface that extends from an object to increase the rate of heat transfer to or from the environment by increasing convection. The amount of conduction, convection, or radiation of an object determines the amount of heat it transfers. Increasing the temperature difference between the object and the environment, increasing the convection heat transfer coefficient, or increasing the surface area of the object increases the heat transfer. Sometimes it is not economical or not feasible to change the first two options. Adding a fin to an object, however, increases the surface area and can sometimes be an economical solution to heat transfer problems.
Simplified case To create a simplified equation for the heat transfer of a fin, many assumptions need to be made: - Steady state - Constant material properties (independent of temperature) - No internal heat generation - One-dimensional conduction - Uniform cross-sectional area - Uniform convection across the surface area With these assumptions, conservation of energy can be used to create an energy balance for a differential cross section of the fin. Fourier's law states that q_x = −k·A_c·(dT/dx), where A_c is the cross-sectional area of the differential element. Therefore the conduction rate at x + dx can be expressed as q_(x+dx) = q_x + (dq_x/dx)·dx. Hence, it can also be expressed as q_(x+dx) = −k·A_c·(dT/dx) − k·(d/dx)(A_c·dT/dx)·dx. Since the equation for heat flux is q″ = h·(T − T∞), the convection from the element, dq_conv, is equal to h·dA_s·(T − T∞), where dA_s is the surface area of the differential element. By substitution it is found that (d/dx)(A_c·dT/dx) − (h/k)·(dA_s/dx)·(T − T∞) = 0. This is the general equation for convection from extended surfaces. Applying certain boundary conditions will allow this equation to simplify.
Convecting Tip There is convection occurring all over a fin: the surface along its length as well as the surface of its tip undergo convection. This calls for a correction of the area of the fin to adequately calculate the heat transfer of the fin. The area of the fin can be calculated by adding the surface area of the sides of the fin and the surface area of the tip; the length of the fin can then be redefined to combine the two areas. The total surface area of the fin is written as the product of the perimeter P and a corrected length L_c, that is, A_fin = P·L_c = P·L + A_tip. Then, rearranging the expression to solve for L_c: L_c = L + A_tip/P. Expanding the equation using the area of a rectangular fin (A_tip = w·t, P ≈ 2w for a thin fin) we find: L_c = L + t/2, where L is the length of the rectangle, w is its width, and t is its thickness. Expanding the equation using the area of a cylindrical fin (A_tip = πD²/4, P = πD) we find: L_c = L + D/4, where D is the diameter of the cylinder and L is its length. Now the length can be replaced in the insulated-tip equation derived next with the corrected length, to make the equation more accurate for the fin with the convecting tip: q_fin ≈ M·tanh(m·L_c).
Adiabatic Tip When comparing the surface area of the fin to the surface area of the tip, it can be seen that the surface area of the tip is fairly negligible when calculating the heat transfer. In this instance, it is assumed that the tip is insulated. If the tip is completely insulated, there will be no heat loss from the tip. The boundary condition for the adiabatic tip can be expressed as: dθ/dx = 0 at x = L. From this, the general solution θ(x) = C1·e^(m·x) + C2·e^(−m·x) (with θ = T − T∞ and m² = h·P/(k·A_c), as derived below for a uniform cross-section) can be altered to: C1·m·e^(m·L) − C2·m·e^(−m·L) = 0. Then solving for C1: C1 = C2·e^(−2·m·L). Substituting that value, together with the base condition θ(0) = θ_b, and solving for θ(x), using the hyperbolic cosine function the equation becomes: θ(x)/θ_b = cosh(m·(L − x))/cosh(m·L), with fin heat rate q_fin = M·tanh(m·L), where M = sqrt(h·P·k·A_c)·θ_b. If the fin is very long, the hyperbolic tangent function will approach 1. This will cause the heat transfer equation to simplify again to: q_fin = sqrt(h·P·k·A_c)·θ_b. The heat transfer in this case is approximately the same as the calculations for the case of an infinitely long fin.
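As a numerical illustration of the adiabatic-tip result just derived, here is a small Python sketch; the property values are made-up but plausible, and the symbols follow the text (m² = hP/(kA_c), M = sqrt(hPkA_c)·θ_b):

    import math

    def adiabatic_tip_fin(h, k, P, A_c, L, theta_b):
        """Heat rate and tip excess temperature for a uniform fin with an insulated tip."""
        m = math.sqrt(h * P / (k * A_c))
        M = math.sqrt(h * P * k * A_c) * theta_b
        q_fin = M * math.tanh(m * L)               # tends to M itself as L grows (infinite fin)
        theta_tip = theta_b / math.cosh(m * L)     # temperature excess at x = L
        return q_fin, theta_tip

    # illustrative aluminium pin fin: D = 5 mm, L = 50 mm, h = 100 W/m^2K, k = 200 W/mK, base 75 K above ambient
    D, L = 0.005, 0.05
    q, t_tip = adiabatic_tip_fin(100, 200, math.pi * D, math.pi * D**2 / 4, L, 75)
    print(round(q, 1), round(t_tip, 1))            # heat rate in W, tip excess temperature in K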
Infinitely Long Fin As the tip of the fin is approached, the temperature can be seen to approach the ambient temperature of the air. In the case that the fin is assumed to be infinitely long, it can also be assumed that the temperature of the tip is equal to the temperature of the air. Due to this assumption, the boundary condition can be written as: θ → 0 as x → ∞, that is, T(L → ∞) = T∞. In order to make this equation true, C1 must equal zero, and from the base condition we find that C2 is equal to θ_b. Thus, the first term can be eliminated and the equation simplified once more to: θ(x) = θ_b·e^(−m·x). The total heat transfer from the fin is equal to the heat which enters the fin. All of the heat that enters the fin comes from conduction from the heat source to the base, which eventually conducts from the base into the fin. All of this heat must leave the fin through convection, making the equation: q_fin = −k·A_c·(dT/dx) at x = 0 = sqrt(h·P·k·A_c)·θ_b.
Uniform cross-sectional area For all four cases considered here, the above general equation simplifies because the cross-sectional area is constant and dA_s = P·dx, where P is the perimeter of the cross-sectional area. Thus, with θ = T − T∞ and m² = h·P/(k·A_c), the general equation for convection from extended surfaces with constant cross-sectional area simplifies to d²θ/dx² − m²·θ = 0. The solution to the simplified equation is θ(x) = C1·e^(m·x) + C2·e^(−m·x). The constants C1 and C2 can be found by applying the proper boundary conditions. All four cases have the boundary condition θ(0) = θ_b for the temperature at the base. The boundary condition at x = L, however, is different for each of them, where L is the length of the fin. For the first case, the second boundary condition is that there is free convection at the tip. Therefore h·A_c·θ(L) = −k·A_c·(dθ/dx) at x = L, which simplifies to h·θ(L) = −k·(dθ/dx) at x = L; the equations can be combined to produce h·(C1·e^(m·L) + C2·e^(−m·L)) = k·m·(C2·e^(−m·L) − C1·e^(m·L)) and can be solved to produce the temperature distribution, which is in the table below. Then applying Fourier's law at the base of the fin, the heat transfer rate can be found. Similar mathematical methods can be used to find the temperature distributions and heat transfer rates for the other cases. For the second case, the tip is assumed to be adiabatic or completely insulated. Therefore at x = L, dθ/dx = 0, because heat flux is 0 at an adiabatic tip. For the third case, the temperature at the tip is held constant. Therefore the boundary condition is: θ(L) = θ_L. For the fourth and final case, the fin is assumed to be infinitely long. Therefore the boundary condition is: θ → 0 as L → ∞. The temperature distributions and heat transfer rates can then be found for each case (with M = sqrt(h·P·k·A_c)·θ_b):
Case A - Tip condition (x = L): convection heat transfer, h·θ(L) = −k·dθ/dx. Temperature distribution: θ/θ_b = [cosh(m(L − x)) + (h/(m·k))·sinh(m(L − x))] / [cosh(mL) + (h/(m·k))·sinh(mL)]. Fin heat transfer rate: q = M·[sinh(mL) + (h/(m·k))·cosh(mL)] / [cosh(mL) + (h/(m·k))·sinh(mL)].
Case B - Tip condition (x = L): adiabatic, dθ/dx = 0. Temperature distribution: θ/θ_b = cosh(m(L − x))/cosh(mL). Fin heat transfer rate: q = M·tanh(mL).
Case C - Tip condition (x = L): prescribed temperature, θ(L) = θ_L. Temperature distribution: θ/θ_b = [(θ_L/θ_b)·sinh(mx) + sinh(m(L − x))]/sinh(mL). Fin heat transfer rate: q = M·[cosh(mL) − θ_L/θ_b]/sinh(mL).
Case D - Tip condition: infinite fin length, θ(L → ∞) = 0. Temperature distribution: θ/θ_b = e^(−mx). Fin heat transfer rate: q = M.
Fin performance Fin performance can be described in three different ways. The first is fin effectiveness. It is the ratio of the fin heat transfer rate to the heat transfer rate of the object if it had no fin. The formula for this is ε_f = q_fin/(h·A_(c,b)·θ_b), where A_(c,b) is the fin cross-sectional area at the base. Fin performance can also be characterized by fin efficiency. This is the ratio of the fin heat transfer rate to the heat transfer rate of the fin if the entire fin were at the base temperature: η_f = q_fin/(h·A_f·θ_b), where A_f in this equation is equal to the surface area of the fin. Fin efficiency will always be less than one, because assuming the temperature throughout the fin to be at the base temperature would increase the heat transfer rate. The third way fin performance can be described is with the overall surface efficiency η_o = q_t/(h·A_t·θ_b), where A_t is the total area and q_t is the sum of the heat transfer rates of all the fins. This is the efficiency for an array of fins.
Fin uses Fins are most commonly used in heat exchanging devices such as radiators in cars and heat exchangers in power plants.
They are also used in newer technology such as hydrogen fuel cells. Nature has also taken advantage of the phenomenon of fins: the ears of jackrabbits and fennec foxes act as fins to release heat from the blood that flows through them.
References - "Conservation of Energy". Donald E. Richards. Retrieved 2006-09-14. - "Fourier's Law of Heat Conduction". Dr Ulrich Faul. Retrieved 2006-09-18. - "Radiator Fin Machine or Machinery". FinTool International. Retrieved 2006-09-18. - "The Design of Chart Heat Exchangers". Chart. Archived from the original on 2006-10-11. Retrieved 2006-09-16. - "VII.H.4 Development of a Thermal and Water Management System for PEM Fuel Cells". Guillermo Pont. Retrieved 2006-09-17. - "Jackrabbit ears: surface temperatures and vascular responses". sciencemag.org. Retrieved 2006-09-19.
http://en.wikipedia.org/wiki/Fin_(extended_surface)
In the history of the United States, the term Reconstruction Era has two senses: the first covers the complete history of the entire U.S. from 1865 to 1877 following the Civil War; the second sense focuses on the transformation of the Southern United States from 1863 to 1877, as directed by Washington, with the reconstruction of state and society. Although in strict terms Reconstruction ended in 1877, the phrase "Reconstruction Era" includes more eclectically, in certain contexts, several years even after the disputed presidential election of 1876, with respect to pockets of the Deep South where Republicans continued to hold sway. Reconstruction freed and enfranchised African Americans, giving them the right to vote for the first time in 1867.
Other names: Reconstruction; Radical Reconstruction. Participants: Presidents Abraham Lincoln, Andrew Johnson, Ulysses S. Grant, and Rutherford B. Hayes. Dates: January 1, 1863 to March 31, 1877. Associated events: the New Orleans Riot; the South Carolina Riots.
From 1863 to 1869, Presidents Abraham Lincoln and Andrew Johnson (who became president on April 15, 1865) took a moderate position designed to bring the South back to normal as soon as possible, while the Radical Republicans (as they called themselves) used Congress to block the moderate approach, impose harsh terms, and upgrade the rights of the Freedmen (former slaves). The views of Lincoln and Johnson prevailed until the election of 1866, which enabled the Radicals to take control of policy, remove former Confederates from power, and enfranchise the Freedmen. A Republican coalition came to power in nearly all the southern states and set out to transform the society by setting up a free labor economy, with support from the Army and the Freedmen's Bureau. The Radicals, upset at President Johnson's opposition to Congressional Reconstruction, filed impeachment charges, but the action failed by one vote in the Senate. President Ulysses S. Grant supported Radical Reconstruction and enforced the protection of African Americans in the South through the use of the Force Acts passed by Congress. President Grant used both the U.S. Justice Department and the U.S. military to suppress white insurgency and support Republican reconstructed states. Southern Democrats, who strongly opposed African American equality to whites, alleged widespread corruption, counterattacked, and regained power in each state by 1877. President Rutherford B. Hayes blocked efforts to overturn Reconstruction legislation. The deployment of the U.S. military was central to the establishment of Southern Reconstructed state governments and the suppression of violence against black and white voters.
Reconstruction was a significant chapter in the history of civil rights in the United States, but most historians consider it a failure because the region became a poverty-stricken backwater and whites re-established their supremacy, making the Freedmen second-class citizens by the start of the 20th century. Historian Eric Foner argues, "What remains certain is that Reconstruction failed, and that for blacks its failure was a disaster whose magnitude cannot be obscured by the genuine accomplishments that did endure." In the different states Reconstruction began and ended at different times; federal Reconstruction finally ended with the Compromise of 1877. In recent decades most historians follow Foner (1988) in dating the Reconstruction of the South as starting in 1863 (with emancipation) rather than 1865; the usual ending has always been 1877. Reconstruction policies were debated in the North when the war began, and commenced in earnest after the Emancipation Proclamation, issued on January 1, 1863. Reconstruction policies were implemented when a Confederate state came under the control of the US Army. President Abraham Lincoln set up reconstructed governments in several southern states during the war, including Tennessee, Arkansas, and Louisiana. He experimented with giving land to former slaves in South Carolina. Following Lincoln's assassination in April 1865, President Andrew Johnson tried to follow Lincoln's policies and appointed new governors in the summer of 1865. Johnson quickly declared that the war goals of national unity and the ending of slavery had been achieved, so that reconstruction was completed. Republicans in Congress refused to accept Johnson's terms and rejected the new members of Congress elected by the South; in 1865 and 1866, they broke with the president. A sweeping Republican victory in the 1866 Congressional elections in the North gave the Radical Republicans enough control of Congress to override Johnson's vetoes and began what is called "Radical Reconstruction" in 1867. Congress removed civilian governments in the South in 1867 and put the former Confederacy under the rule of the U.S. Army. The army conducted new elections in which the freed slaves could vote, while whites who had held leading positions under the Confederacy were temporarily denied the vote and were not permitted to run for office. In ten states, coalitions of freedmen, recent black and white arrivals from the North (carpetbaggers), and white Southerners who supported Reconstruction (scalawags) cooperated to form Republican biracial state governments. They introduced various reconstruction programs, including the founding of public schools in most states for the first time, and the establishment of charitable institutions. They raised taxes, which historically had been low as planters preferred to make private investments for their own purposes; offered massive aid to support railroads to improve transportation and shipping. Conservative opponents charged that Republican regimes were marred by widespread corruption. Violent opposition towards freedmen and whites who supported Reconstruction emerged in numerous localities under the name of the Ku Klux Klan (KKK), a secret vigilante organization, which led to federal intervention by President Ulysses S. Grant in 1871 that suppressed the Klan. White Democrats calling themselves "Redeemers" regained control state by state, sometimes using fraud and violence to control state elections. 
A deep national economic depression following the Panic of 1873 led to major Democratic gains in the North, the collapse of many railroad schemes in the South, and a growing sense of frustration in the North. The end of Reconstruction was a staggered process, and the period of Republican control ended at different times in different states. With the Compromise of 1877, Army intervention in the South ceased and Republican control collapsed in the last three state governments in the South. This was followed by a period that white Southerners labeled Redemption, in which white-dominated state legislatures enacted Jim Crow laws and (after 1890) disenfranchised most blacks and many poor whites through a combination of constitutional amendments and electoral laws. The white Democrat Southerners' memory of Reconstruction played a major role in imposing the system of white supremacy and second-class citizenship for blacks, known as the age of Jim Crow. Reconstruction addressed how the eleven seceding states would regain self-government and be reseated in Congress, the civil status of the former leaders of the Confederacy, and the Constitutional and legal status of freedmen, especially their civil rights and whether they should be given the right to vote. Violent controversy erupted throughout the South over these issues. The laws and constitutional amendments that laid the foundation for the most radical phase of Reconstruction were adopted from 1866 to 1871. By the 1870s, Reconstruction had officially provided freedmen with equal rights under the constitution, and blacks were voting and taking political office. Republican legislatures, coalitions of whites and blacks, established the first public school systems and numerous charitable institutions in the South. Beginning in 1874, however, there was a rise in white paramilitary organizations, such as the White League and Red Shirts in the Deep South, whose political aim was to drive out the Republicans. They also disrupted political organizing and terrorized blacks to bar them from the polls in Louisiana, Mississippi, North and South Carolina. From 1873 to 1877, conservative white Democrats (calling themselves "Redeemers") regained power in the states. In the 1860s and 1870s the terms "radical" and "conservative" had distinctive meanings. "Conservatism" in this context generally indicates the mindset of the ruling elite of the planter class. Many leaders who had been Whigs were committed to modernization. Most of the "radical" Republicans in the North were men who believed in free enterprise and industrialization; most were also modernizers and former Whigs. The "Liberal Republicans" of 1872 shared the same outlook except they were especially opposed to the corruption they saw around President Grant, and believed that the goals had been achieved so that the federal intervention could now end. Passage of the 13th, 14th, and 15th Amendments is the constitutional legacy of Reconstruction. These Reconstruction Amendments established the rights that, through extensive litigation, led to Supreme Court rulings starting in the early 20th century that struck down discriminatory state laws. A "Second Reconstruction", sparked by the Civil Rights Movement, led to civil rights laws in 1964 and 1965 that protected and enforced full civic rights of African Americans. Reconstruction played out against a backdrop of a once prosperous economy in ruins.
The Confederacy in 1861 had 297 towns and cities with a combined population of 835,000; of these, 162 locations with 681,000 total residents were at one point occupied by Union forces. Eleven were destroyed or severely damaged by war action, including Atlanta, Georgia; Charleston, South Carolina; Columbia, South Carolina; and Richmond, Virginia; these eleven contained 115,900 people in the 1860 census, or 14% of the urban South. The number of people who lived in the destroyed towns represented just over 1% of the Confederacy's combined urban and rural populations. In addition, 45 courthouses were burned (out of 830), destroying the documentation for the legal relationships in the affected communities. Farms were in disrepair, and the prewar stock of horses, mules and cattle was much depleted; two-fifths of the South's livestock had been killed. The South's farms were not highly mechanized, but the value of farm implements and machinery in the 1860 Census was $81 million and was reduced by 40% by 1870. The transportation infrastructure lay in ruins, with little railroad or riverboat service available to move crops and animals to market. Railroad mileage was located mostly in rural areas and over two-thirds of the South's rails, bridges, rail yards, repair shops and rolling stock were in areas reached by Union armies, which systematically destroyed what they could. Even in untouched areas, the lack of maintenance and repair, the absence of new equipment, the heavy over-use, and the deliberate relocation of equipment by the Confederates from remote areas to the war zone ensured the system would be ruined at war's end. Restoring the infrastructure — especially the railroad system — became a high priority for Reconstruction state governments. The enormous cost of the Confederate war effort took a high toll on the South's economic infrastructure. The direct costs to the Confederacy in human capital, government expenditures, and physical destruction from the war totaled $3.3 billion. By 1865, the Confederate dollar was worthless due to massive inflation, and people in the South had to resort to bartering services for goods, or else use scarce Union dollars. With the emancipation of the southern slaves, the entire economy of the South had to be rebuilt. Having lost their enormous investment in slaves, white planters had minimal capital to pay freedmen workers to bring in crops. As a result, a system of sharecropping was developed where landowners broke up large plantations and rented small lots to the freedmen and their families. The South was transformed from a prosperous minority of landed gentry slaveholders into a tenant farming agriculture system. The end of the Civil War was accompanied by a large migration of new freedpeople to the cities. In the cities, African Americans were relegated to the lowest paying jobs such as unskilled and service labor. Men worked as rail workers, rolling and lumber mills workers, and hotels workers. The large population of slave artisans during the antebellum period had not been translated into a large number of freemen artisans during Reconstruction. Black women were largely confined to domestic work employed as cooks, maids, and child nurses. Others worked in hotels. A large number became laundresses. Over a quarter of Southern white men of military age — meaning the backbone of the South's white workforce — died during the war, leaving countless families destitute. Per capita income for white southerners declined from $125 in 1857 to a low of $80 in 1879. 
By the end of the 19th century and well into the 20th century, the South was locked into a system of poverty. How much of this failure was caused by the war and by previous reliance on agriculture remains the subject of debate among economists and historians. During the Civil War, the Radical Republican leaders argued that slavery and the Slave Power had to be permanently destroyed, and that all forms of Confederate nationalism had to be suppressed. Moderates said this could be easily accomplished as soon as Confederate armies surrendered and the Southern states repealed secession and accepted the 13th Amendment – most of which happened by December 1865. President Lincoln was the leader of the moderate Republicans and wanted to speed up Reconstruction and reunite the nation painlessly and quickly. Lincoln formally began Reconstruction in late 1863 with his Ten percent plan, which went into operation in several states but which Radical Republicans opposed. Lincoln pocket vetoed the Radical plan, the Wade–Davis Bill of 1864, which was much more strict than the Ten-Percent Plan. The opposing faction of Radical Republicans was skeptical of Southern intentions and demanded stringent federal action. Congressman Thaddeus Stevens of Pennsylvania and Senator Charles Sumner of Massachusetts led the Radicals. Sumner argued that secession had destroyed statehood but the Constitution still extended its authority and its protection over individuals, as in existing U.S. territories. Thaddeus Stevens and his followers viewed secession as having left the states in a status like new territories. The Republicans sought to prevent Southern politicians from "restoring the historic subordination of Negroes". Since slavery was abolished, the three-fifths compromise no longer applied to counting the population of blacks. After the 1870 census, the South would gain numerous additional representatives in Congress, based on the population of freedmen. One Illinois Republican expressed a common fear that if the South were allowed to simply restore its previous established powers, that the "reward of treason will be an increased representation". Upon Lincoln's assassination in April 1865, Andrew Johnson of Tennessee, who had been elected with Lincoln in 1864 as the latter's vice president, became president. Johnson rejected the Radical program of harsh, lengthy Reconstruction and instead appointed his own governors and tried to finish reconstruction by the end of 1865. Thaddeus Stevens vehemently opposed President Johnson's plans for an abrupt end to Reconstruction, insisting that Reconstruction must "revolutionize Southern institutions, habits, and manners… The foundations of their institutions… must be broken up and relaid, or all our blood and treasure have been spent in vain." By early 1866, full-scale political warfare existed between Johnson (now allied with the Democrats) and the Radical Republicans; he vetoed laws and issued orders that contradicted Congressional legislation. Congress rejected Johnson's argument that he had the war power to decide what to do, since the war was over. Congress decided it had the primary authority to decide how Reconstruction should proceed, because the Constitution stated the United States had to guarantee each state a republican form of government. The Radicals insisted that meant Congress decided how Reconstruction should be achieved. The issues were multiple: who should decide, Congress or the president? How should republicanism operate in the South? 
What was the status of the Confederate states? What was the citizenship status of men who had supported the Confederacy? What was the citizenship and suffrage status of freedmen? The election of 1866 decisively changed the balance of power, giving the Republicans two-thirds majorities in both houses of Congress, and enough votes to overcome Johnson's vetoes. They moved to impeach Johnson because of his constant attempts to thwart Radical Reconstruction measures, by using the Tenure of Office Act. Johnson was acquitted by one vote, but he lost the influence to shape Reconstruction policy. The Republican Congress established military districts in the South and used Army personnel to administer the region until new governments loyal to the Union could be established. Congress temporarily suspended the ability to vote of approximately 10,000 to 15,000 white men who had been Confederate officials or senior officers, while constitutional amendments gave full citizenship and suffrage to former slaves. With the power to vote, freedmen started participating in politics. While many slaves were illiterate, educated blacks (including escaped slaves) moved down from the North to aid them, and natural leaders also stepped forward. They elected white and black men to represent them in constitutional conventions. A Republican coalition of freedmen, southerners supportive of the Union (derisively called scalawags by white Democrats), and northerners who had migrated to the South (derisively called carpetbaggers) — some of whom were returning natives, but were mostly Union veterans -, organized to create constitutional conventions. They created new state constitutions to set new directions for southern states. The issue of loyalty emerged in the debates over the Wade–Davis Bill of 1864. The bill required voters to take the "ironclad oath", swearing they had never supported the Confederacy or been one of its soldiers. Pursuing a policy of "malice toward none" announced in his second inaugural address, Lincoln asked voters only to support the Union. The Radicals lost support following Lincoln's veto of the Wade–Davis Bill but regained strength after Lincoln's assassination in April 1865. Congress had to consider how to restore to full status and representation within the Union those southern states that had declared their independence from the United States and had withdrawn their representation. Suffrage for former Confederates was one of two main concerns. A decision needed to be made whether to allow just some or all former Confederates to vote (and to hold office). The moderates wanted virtually all of them to vote, but the Radicals resisted. They repeatedly tried to impose the ironclad oath, which would effectively have allowed no former Confederates to vote. Radical Republican leader Thaddeus Stevens proposed, unsuccessfully, that all former Confederates lose the right to vote for five years. The compromise that was reached disenfranchised many former Confederate civil and military leaders. No one knows how many temporarily lost the vote, but one estimate was 10,000 to 15,000. Second, and closely related, was the issue of whether freedmen should be allowed to vote. The issue was how to receive the four million former slaves as citizens. If they were to be fully counted as citizens, some sort of representation for apportionment of seats in Congress had to be determined. Before the war, the population of slaves had been counted as three-fifths of a comparable number of free whites. 
By having four million freedmen counted as full citizens, the South would gain additional seats in Congress. If blacks were denied the vote and the right to hold office, then only whites would represent them. Many conservatives, including most white southerners, northern Democrats, and some northern Republicans, opposed black voting. Some northern states that had referenda on the subject limited the ability of their own small populations of blacks to vote. Lincoln had supported a middle position to allow some black men to vote, especially army veterans. Johnson also believed that such service should be rewarded with citizenship. Lincoln proposed giving the vote to "the very intelligent, and especially those who have fought gallantly in our ranks." In 1864, Governor Johnson said, "The better class of them will go to work and sustain themselves, and that class ought to be allowed to vote, on the ground that a loyal negro is more worthy than a disloyal white man." As President in 1865, Johnson wrote to the man he appointed as governor of Mississippi, recommending, "If you could extend the elective franchise to all persons of color who can read the Constitution in English and write their names, and to all persons of color who own real estate valued at least two hundred and fifty dollars, and pay taxes thereon, you would completely disarm the adversary [Radicals in Congress], and set an example the other states will follow." Charles Sumner and Thaddeus Stevens, leaders of the Radical Republicans, were initially hesitant to enfranchise the largely illiterate former slave population. Sumner preferred at first impartial requirements that would have imposed literacy restrictions on blacks and whites. He believed that he would not succeed in passing legislation to disfranchise illiterate whites who already had the vote. In the South, many poor whites were illiterate as there was almost no public education before the war. In 1880, for example, the white illiteracy rate was about 25% in Tennessee, Kentucky, Alabama, South Carolina, and Georgia; and as high as 33% in North Carolina. This compares with the 9% national rate, and a black rate of illiteracy that was over 70% in the South. By 1900, however, with emphasis within the black community on education, the majority of blacks had achieved literacy. Sumner soon concluded that "there was no substantial protection for the freedman except in the franchise." This was necessary, he stated, "(1) For his own protection; (2) For the protection of the white Unionist; and (3) For the peace of the country. We put the musket in his hands because it was necessary; for the same reason we must give him the franchise." The support for voting rights was a compromise between moderate and Radical Republicans. The Republicans believed that the best way for men to get political experience was to be able to vote and to participate in the political system. They passed laws allowing all male freedmen to vote. In 1867, black men voted for the first time. Over the course of Reconstruction, more than 1,500 African Americans held public office in the South; some of them were men who had escaped to the North and gained educations, and returned to the South. They did not hold office in numbers representative of their proportion in the population, but often elected whites to represent them. The question of women's suffrage was also debated but was rejected. 
From 1890 to 1908, southern states passed new constitutions and laws that disfranchised most blacks and tens of thousands of poor whites with new voter registration and electoral rules. While establishing new requirements such as subjectively administered literacy tests, some states also used "grandfather clauses" to enable illiterate whites to vote. The Five Civilized Tribes that had been relocated to Indian Territory (now part of Oklahoma) held black slaves and signed treaties supporting the Confederacy. During the war, fighting between pro- and anti-Union Indians had raged. Congress passed a statute that gave the President the authority to suspend the appropriations of any tribe if the tribe is "in a state of actual hostility to the government of the United States… and, by proclamation, to declare all treaties with such tribe to be abrogated by such tribe" (25 USC Sec. 72). As a component of Reconstruction, the Interior Department ordered a meeting of representatives from all Indian tribes which had affiliated with the Confederacy. The council, known as the Southern Treaty Commission, first met in Ft. Smith, Arkansas, in September 1865 and was attended by hundreds of Indians representing dozens of tribes. Over the next several years the commission negotiated treaties with tribes that resulted in additional relocations to Indian Territory and the de facto creation (initially by treaty) of an unorganized Oklahoma Territory. President Lincoln signed two Confiscation Acts into law, the first on August 6, 1861, and the second on July 17, 1862, safeguarding fugitive slaves from the Confederacy who came over into Union lines and giving them indirect emancipation if their masters continued insurrection against the United States. The laws allowed the confiscation of lands for colonization from those who aided and supported the rebellion. However, these laws had limited effect as they were poorly funded by Congress and poorly enforced by Attorney General Edward Bates. In August 1861, Maj. Gen. John C. Frémont, Union commander of the Western Department, declared martial law in Missouri, confiscated Confederate property, and emancipated their slaves. President Lincoln immediately ordered Frémont to rescind his emancipation declaration, stating, "I think there is great danger that ... the liberating slaves of traitorous owners, will alarm our Southern Union friends, and turn them against us – perhaps ruin our fair prospect for Kentucky." After Frémont refused to rescind the emancipation order, President Lincoln terminated him from active duty on November 2, 1861. Lincoln was concerned that border states would bolt from the Union if slaves were given their freedom. On May 26, 1862, Union Maj. Gen. David Hunter issued an order emancipating slaves in South Carolina, Georgia, and Florida, declaring all "persons ... heretofore held as slaves ... forever free." Lincoln, embarrassed by the order, rescinded Hunter's declaration and canceled the emancipations. On April 16, 1862, Lincoln signed a bill into law outlawing slavery in Washington, D.C., and freeing the estimated 3,500 slaves in the city, and on June 19, 1862, he signed legislation outlawing slavery in all U.S. territories. In July 1862, under the authority of the Confiscation Acts and an amended Force Bill of 1795, he authorized the recruitment of freed slaves into the Union army and seizure of any Confederate property for military purposes.
In an effort to keep border states in the Union, President Lincoln as early as 1861 designed gradual compensated emancipation programs paid for by government bonds. Lincoln desired Delaware, Maryland, Kentucky, and Missouri to "adopt a system of gradual emancipation which should work the extinction of slavery in twenty years." On March 26, 1862, Lincoln met with Senator Charles Sumner and recommended that a special joint session of Congress be convened to discuss giving financial aid to any border states that initiated a gradual emancipation plan. In April 1862, the joint session of Congress met; however, the border states were not interested and made no response to Lincoln or to any Congressional emancipation proposal. Lincoln advocated compensated emancipation during the 1865 River Queen steamer conference. In August 1862, President Lincoln met with African American leaders and urged them to colonize some place in Central America. Lincoln planned to free the Southern slaves in the Emancipation Proclamation, and he was concerned that freedmen would not be well treated in the United States by whites in both the North and South. Although Lincoln gave assurances that the United States government would support and protect any colonies, the leaders declined the offer of colonization. Many free blacks had been opposed to colonization plans in the past and wanted to remain in the United States. President Lincoln persisted in his colonization plan, believing that emancipation and colonization were part of the same program. By April 1863, Lincoln had succeeded in sending black colonists to Haiti and 453 to Chiriqui in Central America; however, none of the colonies was able to remain self-sufficient. Frederick Douglass, a prominent 19th-century American civil rights activist, criticized Lincoln as "showing all his inconsistencies, his pride of race and blood, his contempt for Negroes and his canting hypocrisy." African Americans, according to Douglass, wanted citizen rights rather than to be colonized. Historians debate whether Lincoln gave up on African American colonization at the end of 1863 or whether he actually planned to continue this policy up until 1865. Starting in March 1862, in an effort to forestall Reconstruction by the Radicals in Congress, President Lincoln installed military governors in certain rebellious states under Union military control. Although the states would not be recognized by the Radicals until an undetermined time, installation of military governors kept the administration of Reconstruction under Presidential control, rather than that of the increasingly unsympathetic Radical Congress. On March 3, 1862, Lincoln installed the loyalist Democrat Senator Andrew Johnson as Military Governor, with the rank of Brigadier General, in his home state of Tennessee. In May 1862, Lincoln appointed Edward Stanly Military Governor of the coastal region of North Carolina with the rank of Brigadier General. Stanly resigned almost a year later when he angered Lincoln by closing two schools for black children in New Bern. After Lincoln installed Brigadier General George F. Shepley as Military Governor of Louisiana in May 1862, Shepley sent two anti-slavery representatives, Benjamin Flanders and Michael Hahn, elected in December 1862, to the House, which capitulated and voted to seat them. In July 1862, Lincoln installed Colonel John S. Phelps as Military Governor of Arkansas, though he resigned soon after due to poor health.
In July 1862, President Lincoln became convinced that "a military necessity" was needed to strike at slavery in order to win the Civil War for the Union. The Confiscation Acts were having only a minimal effect in ending slavery. On July 22, he wrote a first draft of the Emancipation Proclamation that freed the slaves in states in rebellion. Lincoln decided not to release the document until there was a Union victory on the battlefield. After he showed his cabinet the document, slight alterations were made in the wording. Then, after George McClellan turned back Robert E. Lee at Antietam on September 17, 1862, the second draft of the Emancipation Proclamation was issued to the public on September 22. On January 1, 1863, the second part of the Emancipation Proclamation was issued, specifically naming ten states in which slaves would be "forever free". The proclamation did not name the states of Tennessee, Kentucky, Missouri, Maryland, and Delaware, and specifically excluded numerous counties in some other states. Eventually, as the Union armies advanced into the Confederacy, millions of slaves were set free. Many of these freedmen joined the Union army and fought in battles against the Confederate forces. Yet hundreds of thousands of freed slaves died during emancipation from illnesses that also devastated army regiments. Freed slaves suffered from smallpox, yellow fever, and malnutrition. President Abraham Lincoln wanted to effect a speedy restoration of the Confederate states to the Union after the Civil War. In 1863, President Lincoln proposed a moderate plan for the Reconstruction of the captured Confederate State of Louisiana. The plan granted amnesty to Rebels who took an oath of loyalty to the Union. Black freedmen workers were tied to labor on plantations for one year at pay of $10 a month. Only 10% of the state's electorate had to take the loyalty oath in order for the state to be readmitted to representation in the U.S. Congress. The state was required to abolish slavery in its new constitution. Identical Reconstruction plans would be adopted in Arkansas and Tennessee. By December 1864, the Lincoln plan of Reconstruction had been enacted in Louisiana and the legislature sent two Senators and five Representatives to take their seats in Washington. However, Congress refused to count any of the votes from Louisiana, Arkansas, and Tennessee, in essence rejecting Lincoln's moderate Reconstruction plan. Congress, at this time controlled by the Radicals, proposed the Wade–Davis Bill, which required a majority of a state's electorate to take the oath of loyalty in order for the state to be admitted to Congress. Lincoln pocket-vetoed the bill, and the rift widened between the moderates, who wanted to save the Union and win the war, and the Radicals, who wanted to effect a more complete change within Southern society. Frederick Douglass denounced Lincoln's 10% electorate plan as undemocratic, since state admission and loyalty depended on only a minority vote. Before 1864, slave marriages had not been recognized legally; emancipation did not affect them. When freed, many former slaves made official marriages. Before emancipation, slaves could not enter into contracts, including the marriage contract. After emancipation, former slaves and whites both began to view the lack of officially recognized marriage for their unions as problematic. Not all free people formalized their unions. Some continued to have common-law marriages or community-recognized relationships.
The acknowledgement of marriage by the state increased the state's recognition of freedpeople as legal actors and eventually helped make the case for parental rights for freedpeople against the practice of apprenticeship of black children. These children were legally taken away from their families under the guise of "providing them with guardianship and 'good' homes until they reached the age of consent at twenty-one" under acts such as the Georgia 1866 Apprentice Act. Such children were generally used as sources of unpaid labor. On March 3, 1865, the Freedmen's Bureau Bill became law, sponsored by the Republicans to aid freedmen and white refugees. A federal Bureau was created to provide food, clothing, fuel, and advice on negotiating labor contracts. It attempted to oversee new relations between freedmen and their former masters in a free labor market. The Act, without regard to a person's color, authorized the Bureau to lease confiscated land for a period of three years and to sell it in portions of up to 40 acres per buyer. The Bureau was to expire one year after the termination of the War. Lincoln was assassinated before he could appoint a commissioner of the Bureau. A popular myth was that the Act offered 40 acres and a mule, or that slaves had been promised this. With the help of the Bureau, the recently freed slaves began voting, forming political parties, and assuming the control of labor in many areas. The Bureau helped to start a change of power in the South that drew national attention from the Republicans in the North to the conservative Democrats in the South. This was especially evident in the 1868 presidential election, in which almost 700,000 black voters cast ballots and swayed the election by 300,000 votes in Grant's favor. Even with the benefits that it gave to the freedmen, the Freedmen's Bureau failed to protect and care for former slaves in certain areas. Because the Bureau only provided help with labor, food, and housing, medical attention for the former slaves was severely lacking. Furthermore, neither the Bureau nor other government institutions were able to protect the former slaves from groups like the KKK. Terrorizing freedmen for trying to vote, hold a political office, or own land, the KKK was the antithesis of the Freedmen's Bureau. The Bureau, however, seemed unable to address the issue of the hate groups that permeated the South. Other legislation was signed that broadened equality and rights for African Americans. Lincoln outlawed discrimination on account of color in carrying U.S. mail, in riding on public street cars in Washington, D.C., and in pay for soldiers. Lincoln and Secretary of State William H. Seward met with three southern representatives to discuss the peaceful reconstruction of the Union and the Confederacy on February 3, 1865, in Hampton Roads, Virginia. The southern delegation included Confederate Vice President Alexander H. Stephens, John A. Campbell, and Robert M.T. Hunter. The southerners proposed the Union recognition of the Confederacy, a joint Union–Confederate attack on Mexico to oust dictator Maximilian, and an alternative subordinate status of servitude for blacks rather than slavery. Lincoln flatly refused recognition of the Confederacy, and said that the slaves covered by his Emancipation Proclamation would not be re-enslaved. He said that the Union states were about to pass the Thirteenth Amendment outlawing slavery.
Lincoln urged the governor of Georgia to remove Confederate troops and "ratify this Constitutional Amendment prospectively, so as to take effect—say in five years ... Slavery is doomed." Lincoln also urged compensated emancipation for the slaves, as he thought the North should be willing to share the costs of freedom. Although the meeting was cordial, the parties did not settle on agreements. Lincoln continued to advocate his Louisiana Plan as a model for all states up until his assassination on April 14, 1865. The plan successfully started the Reconstruction process of ratifying the Thirteenth Amendment in all states. Lincoln is typically portrayed as taking the moderate position and fighting the Radical positions. There is considerable debate over how well Lincoln, had he lived, would have handled Congress during the Reconstruction process that took place after the Civil War ended. One historical camp argues that Lincoln's flexibility, pragmatism, and superior political skills with Congress would have solved Reconstruction with far less difficulty. The other camp believes the Radicals would have attempted to impeach Lincoln, just as they did his successor, Andrew Johnson, in 1868. Northern anger over the assassination of Lincoln and the immense human cost of the war led to vengeful demands for harsh policies. Vice President Andrew Johnson had taken a hard line and spoke of hanging rebel Confederates, but when he succeeded Lincoln as President, Johnson took a much softer line, pardoning many Confederate leaders and former Confederates. Jefferson Davis was held in prison for two years, but other Confederate leaders were not. There were no treason trials. Only one person—Captain Henry Wirz, the commandant of the prison camp in Andersonville, Georgia—was executed for war crimes. According to historian Eric Foner, Andrew Johnson's conservative view of Reconstruction did not include the involvement of blacks or former slaves in government, and he refused to heed Northern concerns when southern state legislatures implemented Black Codes that lowered the status of the freedmen to a condition similar to slavery. President Andrew Johnson's Reconstruction policy would be known primarily for the nonenforcement and defiance of Reconstruction laws passed by the U.S. Congress, and it placed him in constant constitutional conflict with the Radicals in Congress over the status of freedmen and whites in the defeated South. Although resigned to the abolition of slavery, many former Confederates were unwilling to accept either the social changes or political domination by former slaves. The defeated were unwilling to acknowledge that their society had changed. In the words of Benjamin F. Perry, President Johnson's choice as the provisional governor of South Carolina: "First, the Negro is to be invested with all political power, and then the antagonism of interest between capital and labor is to work out the result." The fears, however, of the mostly conservative planter elite and other leading white citizens were partly assuaged by the actions of President Johnson, who ensured that a wholesale land redistribution from the planters to the freedmen did not occur. President Johnson ordered that confiscated or abandoned lands administered by the Freedmen's Bureau would not be redistributed to the freedmen but would be returned to pardoned owners. Land was returned that would have been forfeited under the Confiscation Acts passed by Congress in 1861 and 1862. Southern state governments quickly enacted the restrictive "black codes".
However, they were abolished in 1866 and seldom had effect, because the Freedman's Bureau (not the local courts) handled the legal affairs of freedmen. The Black Codes indicated the plans of the southern whites for the former slaves. The freedmen would have more rights than did free blacks before the war, but they still had only a limited set of second-class civil rights, no voting rights and no citizenship. They could not own firearms, serve on a jury in a lawsuit involving whites or move about without employment. The Black Codes outraged northern opinion. They were overthrown by the Civil Rights Act of 1866 that gave the freedmen full legal equality (except for the right to vote). The freedmen, with the strong backing of the Freedman's Bureau, rejected gang-labor work patterns that had been used in slavery. Instead of gang labor, freedpeople preferred family-based labor groups. They forced planters to bargain for their labor. Such bargaining soon led to the establishment of the system of sharecropping, which gave the freedmen greater economic independence and social autonomy than gang labor. However, because they lacked capital and the planters continued to own the means of production (tools, draft animals and land), the freedmen were forced into producing cash crops (mainly cotton) for the land-owners and merchants, and they entered into a crop-lien system. Widespread poverty, disruption to an agricultural economy too dependent on cotton, and the falling price of cotton, led within decades to the routine indebtedness of the majority of the freedmen, and poverty by many planters. Northern officials gave varying reports on conditions for the freedmen in the South. One harsh assessment came from Carl Schurz, who reported on the situation in the states along the Gulf Coast. His report documented dozens of extra-judicial killings and claimed that hundreds or thousands more African Americans were killed. The number of murders and assaults perpetrated upon Negroes is very great; we can form only an approximative estimate of what is going on in those parts of the South which are not closely garrisoned, and from which no regular reports are received, by what occurs under the very eyes of our military authorities. As to my personal experience, I will only mention that during my two days sojourn at Atlanta, one Negro was stabbed with fatal effect on the street, and three were poisoned, one of whom died. While I was at Montgomery, one negro was cut across the throat evidently with intent to kill, and another was shot, but both escaped with their lives. Several papers attached to this report give an account of the number of capital cases that occurred at certain places during a certain period of time. It is a sad fact that the perpetration of those acts is not confined to that class of people which might be called the rabble. Carl Schurz, "Report on the Condition of the South", December 1865 (U.S. Senate Exec. Doc. No. 2, 39th Congress, 1st session). The report included sworn testimony from soldiers and officials of the Freedman's Bureau. In Selma, Alabama, Major J.P. Houston noted that whites who killed 12 African Americans in his district never came to trial. Many more killings never became official cases. Captain Poillon described white patrols in southwestern Alabama who board some of the boats; after the boats leave they hang, shoot, or drown the victims they may find on them, and all those found on the roads or coming down the rivers are almost invariably murdered. 
The bewildered and terrified freedmen know not what to do—to leave is death; to remain is to suffer the increased burden imposed upon them by the cruel taskmaster, whose only interest is their labor, wrung from them by every device an inhuman ingenuity can devise; hence the lash and murder is resorted to intimidate those whom fear of an awful death alone cause to remain, while patrols, Negro dogs and spies, disguised as Yankees, keep constant guard over these unfortunate people. Much of the violence that was perpetrated against African Americans was shaped by gendered prejudices. Black women were in a particularly vulnerable situation. To convict a white man of sexually assaulting black women in this period was exceedingly difficult. Black women were socially constructed as sexually avaricious, and since they were portrayed as having little virtue, society held that they could not be raped. One report records two freedwomen, Frances Thompson and Lucy Smith, describing their violent sexual assault during the Memphis Riots of 1866. However, black women were vulnerable even in times of relative normalcy. Sexual assaults on African American women were so pervasive, particularly on the part of their white employers, that black men sought to reduce the contact between white males and black females by having the women in their family avoid doing work that was closely overseen by whites. Black men were construed as being extremely sexually aggressive, and their supposed or rumored threats to white women were often used as a pretext for lynching and castrations. During the fall of 1865, in response to the Black Codes and worrisome signs of Southern recalcitrance, the Radical Republicans blocked the readmission of the former rebellious states to the Congress. Johnson, however, was content with allowing former Confederate states into the Union as long as their state governments adopted the 13th Amendment abolishing slavery. By December 6, 1865, the amendment was ratified and Johnson considered Reconstruction over. Johnson was following the moderate Lincoln Presidential Reconstruction policy to get the states readmitted as soon as possible. Congress, however, controlled by the Radicals, had other plans. The Radicals were led by Charles Sumner in the Senate and Thaddeus Stevens in the House of Representatives. Congress, on December 4, 1865, rejected Johnson's moderate Presidential Reconstruction, and organized the Joint Committee on Reconstruction, a 15-member panel to devise Reconstruction requirements for the Southern states to be restored to the Union. In January 1866, Congress voted to renew the Freedmen's Bureau; however, Johnson vetoed the Freedmen's Bureau Bill in February 1866. Although Johnson had sympathy for the plight of the freedmen, he was against federal assistance. An attempt to override the veto failed on February 20, 1866. This veto shocked the Congressional Radicals. In response, both the Senate and House passed a joint resolution not to allow any Senator or Representative to be seated until Congress decided when Reconstruction was finished. Congress then took up a civil rights bill. As its sponsor explained, "laws are to be enacted and enforced depriving persons of African descent of privileges which are essential to freemen ... A law that does not allow a colored person to go from one county to another, and one that does not allow him to hold property, to teach, to preach, are certainly laws in violation of the rights of a freeman ... The purpose of this bill is to destroy all these discriminations."
The key to the bill was the opening section: All persons born in the United States ... are hereby declared to be citizens of the United States; and such citizens of every race and color, without regard to any previous condition of slavery ... shall have the same right in every State ... to make and enforce contracts, to sue, be parties, and give evidence, to inherit, purchase, lease, sell, hold, and convey real and personal property, and to full and equal benefit of all laws and proceedings for the security of person and property, as is enjoyed by white citizens, and shall be subject to like punishment, pains, and penalties and to none other, any law, statute, ordinance, regulation, or custom to the Contrary notwithstanding. Congress quickly passed the Civil Rights bill; the Senate on February 2 voted 33–12; the House on March 13 voted 111–38. Although strongly urged by moderates in Congress to sign the Civil Rights bill, Johnson broke decisively with them by vetoing it on March 27, 1866. His veto message objected to the measure because it conferred citizenship on the freedmen at a time when eleven out of thirty-six states were unrepresented and attempted to fix by Federal law "a perfect equality of the white and black races in every State of the Union." Johnson said it was an invasion by Federal authority of the rights of the States; it had no warrant in the Constitution and was contrary to all precedents. It was a "stride toward centralization and the concentration of all legislative power in the national government." The Democratic Party, proclaiming itself the party of white men, north and south, supported Johnson. However the Republicans in Congress overrode his veto (the Senate by the close vote of 33:15, the House by 122:41) and the Civil Rights bill became law. Congress also passed a toned-down Freedmen's Bureau Bill; Johnson quickly vetoed as he had done to the previous bill. This time, however, Congress had enough support and overrode Johnson's veto. The last moderate proposal was the Fourteenth Amendment, whose principal drafter was Representative John Bingham. It was designed to put the key provisions of the Civil Rights Act into the Constitution, but it went much further. It extended citizenship to everyone born in the United States (except visitors and Indians on reservations), penalized states that did not give the vote to freedmen, and most importantly, created new federal civil rights that could be protected by federal courts. It guaranteed the Federal war debt would be paid (and promised the Confederate debt would never be paid). Johnson used his influence to block the amendment in the states since three-fourths of the states were required for ratification (the amendment was later ratified.). The moderate effort to compromise with Johnson had failed, and a political fight broke out between the Republicans (both Radical and moderate) on one side, and on the other side, Johnson and his allies in the Democratic Party in the North, and the conservative groupings (which used different names) in each southern state. Concerned that President Johnson viewed Congress as an "illegal body" and wanted to overthrow the government, Republicans in Congress took control of Reconstruction policies after the election of 1866. Johnson ignored the policy mandate, and he openly encouraged southern states to deny ratification of the 14th Amendment (except for Tennessee, all former Confederate states did refuse to ratify, as did the border states of Delaware, Maryland and Kentucky). 
Radical Republicans in Congress, led by Stevens and Sumner, opened the way to suffrage for male freedmen. They were generally in control, although they had to compromise with the moderate Republicans (the Democrats in Congress had almost no power). Historians generally refer to this period as Radical Reconstruction. The South's white leaders, who held power in the immediate postwar era before the vote was granted to the freedmen, renounced secession and slavery, but not white supremacy. People who had previously held power were angered in 1867 when new elections were held. New Republican lawmakers were elected by a coalition of white Unionists, freedmen and northerners who had settled in the South. Some leaders in the South tried to accommodate to new conditions. Three Constitutional amendments, known as the Reconstruction Amendments, were adopted. The 13th Amendment abolishing slavery was ratified in 1865. The 14th Amendment was proposed in 1866 and ratified in 1868, guaranteeing United States citizenship to all persons born or naturalized in the United States and granting them federal civil rights. The 15th Amendment, proposed in late February 1869 and ratified in early February 1870, decreed that the right to vote could not be denied because of "race, color, or previous condition of servitude". The amendment did not declare the vote an unconditional right; it prohibited these types of discrimination. States would still determine voter registration and electoral laws. The amendments were directed at ending slavery and providing full citizenship to freedmen. Northern Congressmen believed that providing black men with the right to vote would be the most rapid means of political education and training. Many blacks took an active part in voting and political life, and rapidly continued to build churches and community organizations. Following Reconstruction, white Democrats and insurgent groups used force to regain power in the state legislatures, and pass laws that effectively disfranchised most blacks and many poor whites in the South. Around the start of the 20th century, from 1890 to 1910, southern states passed new constitutions that completed the disfranchisement of blacks. U.S. Supreme Court rulings upheld many of these new southern constitutions and laws, and most blacks were prevented from voting in the South until the 1960s. Full federal enforcement of the Fourteenth and Fifteenth Amendments did not occur until after passage of legislation in the mid-1960s as a result of the African-American Civil Rights Movement (1955–1968). The Reconstruction Acts, as originally passed, were initially called "An act to provide for the more efficient Government of the Rebel States". The legislation was enacted by the 39th Congress on March 2, 1867. It was vetoed by President Johnson, and the veto was overridden by a two-thirds majority in both the House and the Senate the same day. In 1867, Congress also clarified the scope of the federal writ of habeas corpus to allow federal courts to vacate unlawful state court convictions or sentences (28 U.S.C. §2254). With the Radicals in control, Congress passed a further Reconstruction Act on July 19, 1867. The first Reconstruction Act, authored by Oregon Sen. George H. Williams, a Radical Republican, placed ten Confederate states under military control, grouping them into five military districts. Some 20,000 U.S. troops were deployed to enforce the Act.
Tennessee was not made part of a military district (having already been readmitted to the Union), and therefore federal controls did not apply. The ten Southern state governments were re-constituted under the direct control of the United States Army. One major purpose was to recognize and protect the right of African Americans to vote. There was little or no combat, but rather a state of martial law in which the military closely supervised local government, supervised elections, and tried to protect office holders and freedmen from violence. Blacks were enrolled as voters; former Confederate leaders were excluded for a limited period. No one state was entirely representative. Randolph Campbell describes what happened in Texas: The first critical step ... was the registration of voters according to guidelines established by Congress and interpreted by Generals Sheridan and Charles Griffin. The Reconstruction Acts called for registering all adult males, white and black, except those who had ever sworn an oath to uphold the Constitution of the United States and then engaged in rebellion ... Sheridan interpreted these restrictions stringently, barring from registration not only all pre-1861 officials of state and local governments who had supported the Confederacy but also all city officeholders and even minor functionaries such as sextons of cemeteries. In May Griffin ... appointed a three-man board of registrars for each county, making his choices on the advice of known scalawags and local Freedman's Bureau agents. In every county where practicable a freedman served as one of the three registrars ... Final registration amounted to approximately 59,633 whites and 49,479 blacks. It is impossible to say how many whites were rejected or refused to register (estimates vary from 7,500 to 12,000), but blacks, who constituted only about 30 percent of the state's population, were significantly overrepresented at 45 percent of all voters. All Southern states were readmitted to representation in Congress by the end of 1870, the last being Georgia. All but 500 top Confederate leaders were pardoned when President Grant signed the Amnesty Act of 1872. During the Civil War, many in the North believed that fighting for the Union was a noble cause – for the preservation of the Union and the end of slavery. After the war ended, with the North victorious, the fear among Radicals was that President Johnson too quickly assumed that slavery and Confederate nationalism were dead and that the southern states could return. The Radicals sought out a candidate for President who represented their viewpoint. In 1868, the Republicans unanimously chose Ulysses S. Grant to be the Republican Presidential candidate. Grant won favor with the Radicals after he allowed Edwin M. Stanton, a Radical, to be reinstated as Secretary of War. As early as 1862, during the Civil War, Grant had appointed the Ohio military chaplain John Eaton to protect and gradually incorporate refugee slaves in west Tennessee and northern Mississippi into the Union War effort, and pay them for their labor. It was the beginning of his vision for the Freedmen's Bureau. Grant opposed President Johnson by supporting the Reconstruction Acts passed by the Radicals. Immediately upon Inauguration in 1869, Grant bolstered Reconstruction by prodding Congress to readmit Virginia, Mississippi, and Texas into the Union, while ensuring their constitutions protected every citizen's voting rights. 
Grant met with prominent black leaders for consultation, and signed a bill into law that guaranteed equal rights to both blacks and whites in Washington D.C. In Grant's two terms he strengthened Washington's legal capabilities. He worked with Congress to create the Department of Justice and Office of Solicitor General, led by Attorney General Amos Akerman and the first Solicitor General Benjamin Bristow, who both prosecuted thousands of Klansmen under the Force Acts. Grant sent additional federal troops to nine South Carolina counties to suppress Klan violence in 1871. In 1872, Grant was the first American President to legally recognize an African American governor, P. B. S. Pinchback of Louisiana. Grant also used military pressure to ensure that African Americans could maintain their new electoral status; won passage of the Fifteenth Amendment giving African Americans the right to vote; and signed the Civil Rights Act of 1875 giving people access to public facilities regardless of race. To counter vote fraud in the Democratic stronghold of New York City, Grant sent in tens of thousands of armed, uniformed federal marshals and other election officials to regulate the 1870 and subsequent elections. Democrats across the North then mobilized to defend their base and attacked Grant's entire set of policies. On October 21, 1876 President Grant deployed troops to protect black and white Republican voters in Petersburg, Virginia. Grant's support from Congress and the nation declined due to presidential scandals during his administration and the political resurgence of the Democrats in the North and South. By 1870, most Republicans felt the war goals had been achieved, and they turned their attention to other issues such as financial and monetary policies. On April 20, 1871, the U.S. Congress launched a 21-member investigation committee on the status of the Southern Reconstruction states: North Carolina, South Carolina, Georgia, Mississippi, Alabama, and Florida. Congressional members on the committee included Rep. Benjamin Butler, Sen. Zachariah Chandler, and Sen. Francis P. Blair. Subcommittee members traveled into the South to interview the people living in their respective states. Those interviewed included top-ranking officials, such as Wade Hampton, former South Carolina Gov. James L. Orr, and Nathan B. Forrest, a former Confederate general and prominent Ku Klux Klan leader. Others southerners interviewed included farmers, doctors, merchants, teachers, and clergymen. The committee heard numerous reports of white violence against blacks, while many whites denied Klan membership or knowledge of violent activities. The majority report by Republicans concluded that the government would not tolerate any Southern "conspiracy" to resist violently the Congressional Reconstruction. The committee completed its 13-volume report in February 1872. While Grant had been able to suppress the KKK through the Force Acts, other paramilitary insurgents organized, including the White League in 1874, active in Louisiana; and the Red Shirts, with chapters active in Mississippi and the Carolinas. They used intimidation and outright attacks to run Republicans out of office and repress voting by blacks, leading to white Democrats regaining power by the elections of the mid-to-late 1870s. Republicans took control of all Southern state governorships and state legislatures, except for Virginia. 
The Republican coalition elected numerous African Americans to local, state, and national offices; though they did not dominate any electoral offices, the presence of black men voting as representatives in state and federal legislatures marked a drastic social change. At the beginning of 1867, no African American in the South held political office, but within three or four years "about 15 percent of the officeholders in the South were black—a larger proportion than in 1990." About 137 black officeholders had lived outside the South before the Civil War. Some who had escaped from slavery to the North and had become educated returned to help the South advance in the postwar era. Others were free blacks before the war, who had achieved education and positions of leadership elsewhere. Other African-American men who served were already leaders in their communities, including a number of preachers. As happened in white communities, not all leadership depended upon wealth and literacy. There were few African Americans elected or appointed to national office. African Americans voted for white candidates and for blacks. The Fifteenth Amendment to the United States Constitution guaranteed the right to vote, but did not guarantee that the vote would be counted, that the districts would be apportioned equally, or that voters would be free from intimidation and violence. As a result, states with majority-African-American populations often elected only one or two African-American representatives in Congress. Exceptions included South Carolina; at the end of Reconstruction, four of its five Congressmen were African American. W. E. B. Du Bois argued that the freedmen had a deep commitment to education and that African Americans in the Republican coalition played a critical role in establishing the principle of universal public education in state constitutions during congressional Reconstruction. Some slaves had learned to read from white playmates, although formal education of slaves was not allowed by law; African Americans started "native schools" before the end of the war, and Sabbath schools were another widespread means that freedmen created for teaching literacy. When they gained suffrage, black politicians took this commitment to public education to state constitutional conventions. African Americans and white Republicans joined to build education at the state level. They created a system of public schools, which were segregated by race everywhere except New Orleans. Generally, elementary and a few secondary schools were built in most cities, and occasionally in the countryside, but the South had few cities. The rural areas faced many difficulties opening and maintaining public schools. In the country, the public school was often a one-room affair that attracted about half the younger children. The teachers were poorly paid, and their pay was often in arrears. Conservatives contended the rural schools were too expensive and unnecessary for a region where the vast majority of people were cotton or tobacco farmers. They had no vision of a better future for the region's residents. One historian found that the schools were less effective than they might have been because "poverty, the inability of the states to collect taxes, and inefficiency and corruption in many places prevented successful operation of the schools." Numerous private academies and colleges for freedmen were established by northern missionaries.
Every state created state colleges for freedmen, such as Alcorn State University in Mississippi. The state colleges created generations of teachers who were critical in the education of African American children. In 1890, the black state colleges started receiving federal funds as land grant schools. They received state funds after Reconstruction ended because, as Lynch explains, "there are very many liberal, fair-minded and influential Democrats in the State who are strongly in favor of having the State provide for the liberal education of both races." Every Southern state subsidized railroads, which modernizers felt could haul the South out of isolation and poverty. Millions of dollars in bonds and subsidies were fraudulently pocketed. One ring in North Carolina spent $200,000 in bribing the legislature and obtained millions in state money for its railroads. Instead of building new track, however, it used the funds to speculate in bonds, reward friends with extravagant fees, and enjoy lavish trips to Europe. Taxes were quadrupled across the South to pay off the railroad bonds and the school costs. There were complaints among taxpayers, because taxes had historically been low, since there was so little commitment to public works or public education. Taxes historically had been much lower than in the North, reflecting a lack of public investment in the communities. Nevertheless thousands of miles of lines were built as the Southern system expanded from 11,000 miles (17,700 km) in 1870 to 29,000 miles (46,700 km) in 1890. The lines were owned and directed overwhelmingly by Northerners. Railroads helped create a mechanically skilled group of craftsmen and broke the isolation of much of the region. Passengers were few, however, and apart from hauling the cotton crop when it was harvested, there was little freight traffic. As Franklin explains, "numerous railroads fed at the public trough by bribing legislators ... and through the use and misuse of state funds." The effect, according to one businessman, "was to drive capital from the State, paralyze industry, and demoralize labor." Reconstruction changed the tax structure of the South. In the U.S. from the earliest days until today, a major source of state revenue was the property tax. In the South, wealthy landowners were allowed to self-assess the value of their own land. These fraudulent assessments were almost valueless, and pre-war property tax collections were lacking due to property value misrepresentation. State revenues came from fees and from sales taxes on slave auctions. Some states assessed property owners by a combination of land value and a capitation tax, a tax on each worker employed. This tax was often assessed in a way to discourage a free labor market, where a slave was assessed at 75 cents, while a free white was assessed at a dollar or more, and a free African American at $3 or more. Some revenue also came from poll taxes. These taxes were more than poor people could pay, with the designed and inevitable consequence that they did not vote. During Reconstruction, new spending on schools and infrastructure, combined with fraudulent spending and a collapse in state credit because of huge deficits, forced the states to dramatically increase property tax rates. In places, the rate went up to ten times higher—despite the poverty of the region. The infrastructure of much of the South—roads, bridges, and railroads—scarce and deficient even before the war—had been destroyed during the war. 
In addition, there were other new expenditures, because pre-war southern states did not educate their citizens or build and maintain much infrastructure. In part, the new tax system was designed to force owners of large estates with huge tracts of uncultivated land either to sell or to have it confiscated for failure to pay taxes. The taxes would serve as a market-based system for redistributing the land to the landless freedmen and white poor. The table below gives property tax rates for South Carolina and Mississippi. Note that many local town and county assessments effectively doubled the tax rates reported in the table. These taxes were still levied upon the landowners' own sworn testimony as to the value of their land, which remained the dubious and exploitable system used by wealthy landholders in the South well into the 20th century.

State Property Tax Rates during Reconstruction
| Year | South Carolina | Mississippi |
| 1869 | 5 mills (0.5%) | 1 mill (0.1%) (lowest rate between 1822 and 1898) |
| 1870 | 9 mills | 5 mills |
| 1871 | 7 mills | 4 mills |
| 1872 | 12 mills | 8.5 mills |
| 1873 | 12 mills | 12.5 mills |
| 1874 | 10.3–8 mills | 14 mills (1.4%), "a rate which virtually amounted to confiscation" (highest rate between 1822 and 1898) |
| Source | J. S. Reynolds, Reconstruction in South Carolina, 1865–1877 (Columbia, SC: The State Co., 1905), p. 329. | J. H. Hollander, Studies in State Taxation with Particular Reference to the Southern States (Baltimore: Johns Hopkins Press, 1900), p. 192. |

Called upon to pay an actual tax on their property, angry plantation owners revolted. The conservatives shifted their focus away from race to taxes. Former Congressman John R. Lynch, a black Republican leader from Mississippi, also commented on the tax question. The fact that their former slaves now held political and military power angered many whites. According to Steedman, they self-consciously defended their own actions within the framework of an Anglo-American discourse of resistance against tyrannical government, and they broadly succeeded in convincing fellow white citizens. They formed new political parties (often called the "Conservative" party) and supported or tolerated violent activist groups that intimidated both black and white Republican leaders at election time. By the mid-1870s, the Conservatives and Democrats had aligned with the national Democratic Party, which enthusiastically supported their cause even as the national Republican Party was losing interest in Southern affairs. Historian Walter Lynwood Fleming describes the mounting anger of Southern whites: "The Negro troops, even at their best, were everywhere considered offensive by the native whites ... The Negro soldier, impudent by reason of his new freedom, his new uniform, and his new gun, was more than Southern temper could tranquilly bear, and race conflicts were frequent." Often, these parties called themselves the "Conservative Party" or the "Democratic and Conservative Party" in order to distinguish themselves from the national Democratic Party and to obtain support from former Whigs. These parties sent delegates to the 1868 Democratic National Convention and abandoned their separate names by 1873 or 1874. Most [white] members of both the planter/business class and common farmer class of the South opposed black power, Carpetbaggers and military rule, and sought white supremacy. Democrats nominated blacks for political office and tried to lure other blacks away from the Republican side.
When these attempts to combine with the blacks failed, the planters joined the common farmers in simply trying to displace the Republican governments. The planters and their business allies dominated the self-styled "conservative" coalition that finally took control in the South. They were paternalistic toward the blacks but feared they would use power to raise taxes and slow business development. Fleming is a typical example of the conservative interpretation of Reconstruction. His work defended some roles in opposing military oppression by the white supremacist group the Ku Klux Klan (KKK) but denounced the Klan's violence. Fleming accepted as necessary the disenfranchisement of African Americans because he thought their votes were bought and sold by Carpetbaggers. Fleming described the first results of the movement as "good" and the later ones as "both good and bad." According to Fleming (1907) the KKK "quieted the Negroes, made life and property safer, gave protection to women, stopped burnings, forced the Radical leaders to be more moderate, made the Negroes work better, drove the worst of the Radical leaders from the country and started the whites on the way to gain political supremacy." The evil result, Fleming said, was that lawless elements "made use of the organization as a cloak to cover their misdeeds ... the lynching habits of today are largely to conditions, social and legal, growing out of Reconstruction." Ellis Paxson Oberholtzer (a northern scholar) in 1917 explained: Outrages upon the former slaves in the South there were in plenty. Their sufferings were many. But white men, too, were victims of lawless violence, and in all portions of the North and the late "rebel" states. Not a political campaign passed without the exchange of bullets, the breaking of skulls with sticks and stones, the firing of rival club-houses. Republican clubs marched the streets of Philadelphia, amid revolver shots and brickbats, to save the negroes from the "rebel" savages in Alabama ... The project to make voters out of black men was not so much for their social elevation as for the further punishment of the Southern white people—for the capture of offices for Radical scamps and the entrenchment of the Radical party in power for a long time to come in the South and in the country at large. Reaction by the angry whites included the formation of violent secret societies, especially the KKK. Violence occurred in cities with Democrats, Conservatives and other angry whites on one side and Republicans, African-Americans, federal government representatives, and Republican-organized armed Loyal Leagues on the other. The victims of this violence were overwhelmingly African American. The Klan and other such groups were careful to avoid federal legal intervention or military conflict. Their election-time tactics included violent intimidation of African American and Republican voters prior to elections while avoiding conflict with the U.S. Army or the state militias and then withdrawing completely on election day. Conservative reaction continued in both the north and south; the "white liners" movement to elect candidates dedicated to white supremacy reached as far as Ohio in 1875. As early as 1868 Supreme Court Chief Justice Salmon P. 
Chase, a leading Radical during the war, concluded that "Congress was right in not limiting, by its reconstruction acts, the right of suffrage to whites; but wrong in the exclusion from suffrage of certain classes of citizens and all unable to take its prescribed retrospective oath, and wrong also in the establishment of despotic military governments for the States and in authorizing military commissions for the trial of civilians in time of peace. There should have been as little military government as possible; no military commissions; no classes excluded from suffrage; and no oath except one of faithful obedience and support to the Constitution and laws, and of sincere attachment to the constitutional Government of the United States." By 1872, President Ulysses S. Grant had alienated large numbers of leading Republicans, including many Radicals, by the corruption of his administration and his use of federal soldiers to prop up Radical state regimes in the South. The opponents, called "Liberal Republicans", included founders of the party who expressed dismay that the party had succumbed to corruption. They were further wearied by the continued insurgent violence of whites against blacks in the South, especially around every election cycle, which demonstrated the war was not over and changes were fragile. Leaders included editors of some of the nation's most powerful newspapers. Charles Sumner, embittered by the corruption of the Grant administration, joined the new party, which nominated editor Horace Greeley. The badly organized Democratic party also supported Greeley. Grant made up for the defections by new gains among Union veterans and by strong support from the "Stalwart" faction of his party (which depended on his patronage) and from the Southern Republican parties. Grant won with 55.6% of the vote to Greeley's 43.8%. The Liberal Republican party vanished and many former supporters—even former abolitionists—abandoned the cause of Reconstruction. In the South, political–racial tensions built up inside the Republican party as it was attacked by the Democrats. In 1868, Georgia Democrats, with support from some Republicans, expelled all 28 black Republican members of the state legislature, arguing blacks were eligible to vote but not to hold office. In several states, the more conservative scalawags fought for control with the more radical carpetbaggers and usually lost. Thus, in Mississippi, the conservative faction led by scalawag James Lusk Alcorn was decisively defeated by the radical faction led by carpetbagger Adelbert Ames. The party lost support steadily as many scalawags left it; few recruits were acquired. Meanwhile, the freedmen were demanding a bigger share of the offices and patronage, thus squeezing out their carpetbagger allies. Finally, some of the more prosperous freedmen were joining the Democrats, as they were angered at the failure of the Republicans to help them acquire land. Although historians such as W. E. B. Du Bois looked for and celebrated a cross-racial coalition of poor whites and blacks, such coalitions rarely formed in these years. Writing in 1915, former Congressman Lynch, recalling his experience as a black leader in Mississippi, explained: "While the colored men did not look with favor upon a political alliance with the poor whites, it must be admitted that, with very few exceptions, that class of whites did not seek, and did not seem to desire such an alliance." Lynch reported that poor whites resented the job competition from freedmen.
Furthermore, the poor whites with a few exceptions, were less efficient, less capable, and knew less about matters of state and governmental administration than many of the former slaves.… As a rule, therefore, the whites that came into the leadership of the Republican party between 1872 and 1875 were representatives of the most substantial families of the land. By 1870, the Democratic–Conservative leadership across the South decided it had to end its opposition to Reconstruction and black suffrage to survive and move on to new issues. The Grant administration had proven by its crackdown on the Ku Klux Klan that it would use as much federal power as necessary to suppress open anti-black violence. Democrats in the North concurred with these Southern Democrats. They wanted to fight the Republican Party on economic grounds rather than race. The New Departure offered the chance for a clean slate without having to re-fight the Civil War every election. Furthermore, many wealthy Southern landowners thought they could control part of the newly enfranchised black electorate to their own advantage. Not all Democrats agreed; an insurgent element continued to resist Reconstruction no matter what. Eventually, a group called "Redeemers" took control of the party in the Southern states. They formed coalitions with conservative Republicans, including scalawags and carpetbaggers, emphasizing the need for economic modernization. Railroad building was seen as a panacea since northern capital was needed. The new tactics were a success in Virginia where William Mahone built a winning coalition. In Tennessee, the Redeemers formed a coalition with Republican governor DeWitt Senter. Across the South, some Democrats switched from the race issue to taxes and corruption, charging that Republican governments were corrupt and inefficient. With continuing decrease in cotton prices, taxes squeezed cash-poor farmers who rarely saw $20 in currency a year but had to pay taxes in currency or lose their farm. In North Carolina, Republican Governor William Woods Holden used state troops against the Klan, but the prisoners were released by federal judges. Holden became the first governor in American history to be impeached and removed from office. Republican political disputes in Georgia split the party and enabled the Redeemers to take over. In the lower South, violence continued and new insurgent groups arose. The disputed election in Louisiana in 1872 found both Republican and Democratic candidates holding inaugural balls while returns were reviewed. Both certified their own slates for local parish offices in many places, causing local tensions to rise. Finally, Federal support helped certify the Republican as governor, but the Democrat Samuel D. McEnery in March 1873 brought his own militia to bear in New Orleans, the seat of government. Slates for local offices were certified by each candidate. In rural Grant Parish in the Red River Valley, freedmen fearing a Democratic attempt to take over the parish government reinforced defenses at the Colfax courthouse in late March. White militias gathered from the area a few miles outside the settlement. Rumors and fears abounded on both sides. William Ward, an African-American Union veteran and militia captain, mustered his company in Colfax and went to the courthouse. On Easter Sunday, April 13, 1873, the whites attacked the defenders at the courthouse. There was confusion about who shot one of the white leaders after an offer by the defenders to surrender. 
It was a catalyst for mayhem. In the end, three whites died and 120–150 blacks were killed, some 50 of whom had been held as prisoners. The disproportionate number of black to white fatalities and the documentation of brutalized bodies are why contemporary historians call it the Colfax Massacre rather than the Colfax Riot, as it is known locally. It marked the beginning of heightened insurgency and attacks on Republican officeholders and freedmen in Louisiana and other Deep South states. In Louisiana, Judge T. S. Crawford and District Attorney P. H. Harris of the 12th Judicial District were shot off their horses and killed from ambush on October 8, 1873, while going to court. One widow wrote to the Department of Justice that her husband was killed because he was a Union man, and complained "... of the efforts made to screen those who committed a crime ..." In the North, a live-and-let-live attitude made elections more like a sporting contest. But in the Deep South, many white citizens had not reconciled themselves to the defeat of the war or to the granting of citizenship to freedmen. The Panic of 1873 (a depression) hit the Southern economy hard and disillusioned many Republicans who had gambled that railroads would pull the South out of its poverty. The price of cotton fell by half; many small landowners, local merchants and cotton factors (wholesalers) went bankrupt. Sharecropping for black and white farmers became more common as a way to spread the risk of owning land. The old abolitionist element in the North was aging away, or had lost interest, and was not replenished. Many carpetbaggers returned to the North or joined the Redeemers. Blacks had an increased voice in the Republican Party, but across the South the party was divided by internal bickering and was rapidly losing its cohesion. Many local black leaders started emphasizing individual economic progress in cooperation with white elites, rather than racial political progress in opposition to them, a conservative attitude that foreshadowed Booker T. Washington. Nationally, President Grant was blamed for the depression; the Republican Party lost 96 seats in all parts of the country in the 1874 elections. The Bourbon Democrats took control of the House and were confident of electing Samuel J. Tilden president in 1876. President Grant was not running for re-election and seemed to be losing interest in the South. States fell to the Redeemers, with only four (Arkansas, Louisiana, Mississippi and South Carolina) remaining in Republican hands in 1873; Arkansas then fell after the violent Brooks–Baxter War in 1874 ripped apart the Republican party there. In wide areas of the South, secret societies sprang up with the aim of preventing blacks from voting and destroying the organization of the Republican party by assassinating local leaders and public officials. The most notorious such organization was the Ku Klux Klan, which in effect served as the military arm of the Democratic party in the South. It was led by planters, merchants, and Democratic politicians, men who liked to style themselves the South's "respectable citizens" (Foner, Give Me Liberty!, 504). Political violence had been endemic in Louisiana, but in 1874 the white militias coalesced into paramilitary organizations such as the White League, first in parishes of the Red River Valley. This new organization operated openly and had political goals: the violent overthrow of Republican rule and the suppression of black voting. 
White League chapters soon rose in many rural parishes, receiving financing for advanced weaponry from wealthy men. In one example of local violence, the White League assassinated six white Republican officeholders and five to twenty black witnesses outside Coushatta, Red River Parish in 1874. Four of the white men were related to the Republican representative of the parish. Later in 1874 the White League mounted a serious attempt to unseat the Republican governor of Louisiana, in a dispute that had simmered since the 1872 election. It brought 5000 troops to New Orleans to engage and overwhelm forces of the Metropolitan Police and state militia to turn Republican Governor William P. Kellogg out of office and seat McEnery. The White League took over and held the state house and city hall, but they retreated before the arrival of reinforcing Federal troops. Kellogg had asked for reinforcements before, and Grant finally responded, sending additional troops to try to quell violence throughout plantation areas of the Red River Valley, although 2,000 troops were already in the state. Similarly, the Red Shirts, another paramilitary group, arose in 1875 in Mississippi and the Carolinas. Like the White League and White Liner rifle clubs, these groups operated as a "military arm of the Democratic Party", to restore white supremacy. Democrats and many northern Republicans agreed that Confederate nationalism and slavery were dead—the war goals were achieved—and further federal military interference was an undemocratic violation of historic Republican values. The victory of Rutherford Hayes in the hotly contested Ohio gubernatorial election of 1875 indicated his "let alone" policy toward the South would become Republican policy, as happened when he won the 1876 Republican nomination for president. An explosion of violence accompanied the campaign for the Mississippi's 1875 election, in which Red Shirts and Democratic rifle clubs, operating in the open and without disguise, threatened or shot enough Republicans to decide the election for the Democrats. Republican Governor Adelbert Ames asked Grant for federal troops to fight back; Grant initially refused, saying public opinion was "tired out" of the perpetual troubles in the South. Ames fled the state as the Democrats took over Mississippi. This was not the end of the violence, however, as the campaigns and elections of 1876 were marked by additional murders and attacks on Republicans in Louisiana, North and South Carolina, and Florida. In South Carolina the campaign season of 1876 was marked by murderous outbreaks and fraud against freedmen. Red Shirts paraded with arms behind Democratic candidates; they killed blacks in the Hamburg and Ellenton SC massacres; and one historian estimated 150 blacks were killed in the weeks before the 1876 election across South Carolina. Red Shirts prevented almost all black voting in two majority-black counties. The Red Shirts were also active in North Carolina. Reconstruction continued in South Carolina, Louisiana and Florida until 1877. The elections of 1876 were accompanied by heightened violence across the Deep South. A combination of ballot stuffing and intimidating blacks suppressed their vote even in majority black counties. The White League was active in Louisiana. After Republican Rutherford Hayes won the disputed 1876 presidential election, the national Compromise of 1877 was reached. The white Democrats in the South agreed to accept Hayes's victory if he withdrew the last Federal troops. 
By this point, the North was weary of insurgency. White Democrats controlled most of the Southern legislatures and armed militias controlled small towns and rural areas. Blacks considered Reconstruction a failure because the Federal government withdrew from enforcing their ability to exercise their rights as citizens. On January 29, 1877, President Ulysses S. Grant signed the Electoral Commission Act, which set up a 15-member commission of 8 Republicans and 7 Democrats to settle the disputed 1876 election. The Electoral Commission awarded Rutherford B. Hayes the electoral votes he needed; Congress certified he had won by one electoral vote. The Democrats had little leverage—they could not block Hayes' election, but they were mollified by the implicit, "back room" deal that federal troops would be removed on the condition that the Southern states pledged to protect the lives of African Americans. Hayes's friends also let it be known that he would promote Federal aid for internal improvements, including help for a railroad in Texas, and name a Southerner to his cabinet. With the removal of Northern troops, the President had no method to enforce Reconstruction, so this "back room" deal signaled the end of American Reconstruction. After assuming office on March 4, 1877, President Hayes removed troops from the capitals of the remaining Reconstruction states, Louisiana and South Carolina, allowing the Redeemers to have full control of these states. President Grant had already removed troops from Florida before Hayes was inaugurated, and troops from the other Reconstruction states had long since been withdrawn. Hayes appointed David M. Key from Tennessee, a Southern Democrat, to the position of Postmaster General. By 1879, thousands of African American "exodusters" packed up and headed to new opportunities in Kansas. The Democrats gained control of the Senate, and had complete control of Congress, having taken over the House in 1875. Hayes vetoed bills from the Democrats that would have repealed the Republican Force Acts; however, with the military underfunded, Hayes could not adequately enforce these laws. Blacks remained involved in Southern politics, particularly in Virginia, which was run by the biracial Readjuster Party. Numerous blacks were elected to local office through the 1880s, and in the 1890s in some states, biracial coalitions of Populists and Republicans briefly held control of state legislatures. In the last decade of the 19th century, southern states elected five black US Congressmen before disfranchising constitutions were passed throughout the former Confederacy. The interpretation of Reconstruction has been a topic of controversy. Nearly all historians hold that Reconstruction ended in failure but for different reasons. The first generation of Northern historians believed that the former Confederates were traitors and Johnson was their ally who threatened to undo the Union's constitutional achievements. By the 1880s, however, Northern historians argued that Johnson and his allies were not traitors but had blundered badly in rejecting the 14th Amendment and setting the stage for Radical Reconstruction. The black leader Booker T. Washington, who grew up in West Virginia during Reconstruction, concluded later that "the Reconstruction experiment in racial democracy failed because it began at the wrong end, emphasizing political means and civil rights acts rather than economic means and self-determination." 
His solution was to concentrate on building the economic infrastructure of the black community, in part by his leadership of Tuskegee Institute. The Dunning School of scholars, based at the history department of Columbia University analyzed Reconstruction as a failure after 1866 for different reasons. They claimed that it took freedoms and rights from qualified whites and gave them to unqualified blacks who were being duped by corrupt carpetbaggers and scalawags. As T. Harry Williams (who was not a member of the Dunning school) notes, the Dunningites portrayed the era in stark terms: "Reconstruction was a battle between two extremes: the Democrats, as the group which included the vast majority of the whites, standing for decent government and racial supremacy, versus the Republicans, the Negroes, alien carpetbaggers, and renegade scalawags, standing for dishonest government and alien ideals. These historians wrote literally in terms of white and black." In the 1930s, revisionism became popular among scholars. As disciples of Charles A. Beard, revisionists focused on economics, downplaying politics and constitutional issues. They argued that the Radical rhetoric of equal rights was mostly a smokescreen hiding the true motivation of Reconstruction's real backers. Howard K. Beale argued that Reconstruction was primarily a successful attempt by financiers, railroad builders and industrialists in the Northeast, using the Republican Party, to control the national government for their own selfish economic ends. Those ends were to continue the wartime high protective tariff, the new network of national banks and to guarantee a sound currency. To succeed, the business class had to remove the old ruling agrarian class of Southern planters and Midwestern farmers. This it did by inaugurating Reconstruction, which made the South Republican, and by selling its policies to the voters wrapped up in such attractive vote-getting packages as Northern patriotism or the bloody shirt. Historian William Hesseltine added the point that the Northeastern businessmen wanted to control the South economically, which they did through ownership of the railroads. However, historians in the 1950s and 1960s refuted Beale's economic causation by demonstrating that Northern businessmen were widely divergent on monetary or tariff policy, and seldom paid attention to Reconstruction issues. The black scholar W. E. B. Du Bois, in his Black Reconstruction in America, 1860–1880, published in 1935, compared results across the states to show achievements by the Reconstruction legislatures and to refute claims about wholesale African-American control of governments. He showed black contributions, as in the establishment of universal public education, charitable and social institutions and universal suffrage as important results, and he noted their collaboration with whites. He also pointed out that whites benefited most by the financial deals made, and he put excesses in the perspective of the war's aftermath. He noted that despite complaints, several states kept their Reconstruction constitutions for nearly a quarter of a century. Despite receiving favorable reviews, his work was largely ignored by white historians. In the 1960s neoabolitionist historians emerged, led by John Hope Franklin, Kenneth Stampp, Leon Litwack, and Eric Foner. Influenced by the Civil Rights Movement, they rejected the Dunning school and found a great deal to praise in Radical Reconstruction. 
Foner, the primary advocate of this view, argued that it was never truly completed, and that a Second Reconstruction was needed in the late 20th century to complete the goal of full equality for African Americans. The neo-abolitionists followed the revisionists in minimizing the corruption and waste created by Republican state governments, saying it was no worse than Boss Tweed's ring in New York City. Instead, they emphasized that suppression of the rights of African Americans was a worse scandal and a grave corruption of America's republican ideals. They argued that the tragedy of Reconstruction was not that it failed because blacks were incapable of governing, especially as they did not dominate any state government, but that it failed because whites raised an insurgent movement to restore white supremacy. White elite-dominated state legislatures passed disfranchising constitutions from 1890 to 1908 that effectively barred most blacks and many poor whites from voting. This disfranchisement affected millions of people for decades into the 20th century, and closed African Americans and poor whites out of the political process in the South. Re-establishment of white supremacy meant that within a decade white people forgot that blacks were creating thriving middle classes in many states of the South. African Americans' lack of representation meant that they were treated as second-class citizens, with schools and services consistently underfunded in segregated societies, no representation on juries or in law enforcement, and bias in other legislation. It was not until the Civil Rights Movement and the passage of Federal legislation that African Americans regained their suffrage and civil rights in the South, under what is sometimes referred to as the "Second Reconstruction." In 1990 Eric Foner concluded that from the black point of view, "Reconstruction must be judged a failure." Foner stated Reconstruction was "a noble if flawed experiment, the first attempt to introduce a genuine inter-racial democracy in the United States". The many factors contributing to the failure included: lack of a permanent federal agency specifically designed for the enforcement of civil rights; the Morrison R. Waite Supreme Court decisions that dismantled previous congressional civil rights legislation; and the economic reestablishment of conservative white planters in the South by 1877. Historian William McFeely explained that although the constitutional amendments and civil rights legislation on their own merit were remarkable achievements, no permanent government agency whose specific purpose was civil rights enforcement had been created. More recent work by Nina Silber, David W. Blight, Cecelia O'Leary, Laura Edwards, LeeAnn Whites and Edward J. Blum, has encouraged greater attention to race, religion and issues of gender while at the same time pushing the end of Reconstruction to the end of the 19th century, while monographs by Charles Reagan Wilson, Gaines Foster, W. Scott Poole and Bruce Baker have offered new views of the Southern "Lost Cause". While 1877 is the usual date given for the end of Reconstruction, some historians extend the era to the 1890s. Reconstruction is unanimously considered a failure, though the reason for this is a matter of controversy. Historian Donald R. Shaffer maintained that the gains during Reconstruction for African Americans were not entirely extinguished. 
The legalization of African American marriage and family and the independence of black churches from white denominations were a source of strength during the Jim Crow era. Reconstruction was never forgotten among the black community and remained a source of inspiration. The system of share-cropping allowed blacks a considerable amount of freedom compared with slavery. As a journalist writing as Joe Harris for the Atlanta Constitution, mostly after Reconstruction, Joel Chandler Harris tried to advance racial and sectional reconciliation in the late nineteenth century. He supported editor Henry Grady's vision of a New South during Grady's tenure from 1880 to 1889. Harris wrote many editorials encouraging southern acceptance of the changed conditions and some Northern influence, although he also asserted his belief that it should proceed under white supremacy. In popular literature, two early 20th-century novels by Thomas Dixon—The Clansman (1905) and The Leopard's Spots: A Romance of the White Man's Burden – 1865–1900 (1902)—romanticized white resistance to Northern/black coercion, hailing vigilante action by the KKK. Dixon's The Clansman was adapted for the screen in D.W. Griffith's anti-Republican movie The Birth of a Nation (1915), considered to have contributed to the 20th-century revival of the KKK. Many other authors romanticized the benevolence of slavery and the elite world of the antebellum plantations in memoirs and histories published in the late nineteenth and early twentieth centuries, and the United Daughters of the Confederacy promoted influential works by women in these genres. Only Georgia has a separate article about its experiences under Reconstruction. The other state names below link to a specific section in the state history article about the Reconstruction era.
Key dates in each State:
| State | Seceded from the Union | Admitted to the Confederacy | Readmitted to the Union | Conservative Democrats regained control |
|---|---|---|---|---|
| South Carolina | December 20, 1860 | February 4, 1861 | July 9, 1868 | April 11, 1877 |
| Mississippi | January 9, 1861 | February 4, 1861 | February 23, 1870 | January 4, 1876 |
| Florida | January 10, 1861 | February 4, 1861 | June 25, 1868 | January 2, 1877 |
| Alabama | January 11, 1861 | February 4, 1861 | July 14, 1868 | November 16, 1874 |
| Georgia | January 19, 1861 | February 4, 1861 | July 15, 1870 | November 1, 1871 |
| Louisiana | January 26, 1861 | February 4, 1861 | June 25, 1868 (or July 9) | January 2, 1877 |
| Texas | February 1, 1861 | March 2, 1861 | March 30, 1870 | January 14, 1873 |
| Virginia | April 17, 1861 | May 7, 1861 | January 26, 1870 | October 5, 1869 |
| Arkansas | May 6, 1861 | May 18, 1861 | June 22, 1868 | November 10, 1874 |
| North Carolina | May 21, 1861 | May 16, 1861 | July 4, 1868 | November 28, 1870 |
| Tennessee | June 8, 1861 | May 16, 1861 | July 24, 1866 | October 4, 1869 |
For much more detail see Reconstruction: Bibliography. Wikisource has the text of the 1905 New International Encyclopedia article Reconstruction.
http://www.mashpedia.com/Reconstruction_era_of_the_United_States
13
50
Most scientists believe that modern humans originated in Africa. For decades scientists have discovered incredible finds from different areas of Africa; fragments of bone and human-like fossil remains are giving us tantalizing clues to our past. Ancient rock art created by people who inhabited this land thousands of years ago provides another window into the past. In this episode, Frontiers visits Hilary Deacon and other archaeologists working in South Africa to solve the riddles of human evolution. Activity 1: Calculating Clues from Bones Activity 2:Painting Rituals Find Out More For Further Thought AFRICAN HISTORY & CULTURE geological time scale ACTIVITY 1: CALCULATING CLUES FROM BONES Much of our knowledge about early humans is based on inferences. Inferences are "best guesses" that connect an observation with an established fact or association. Behavioral and anatomical features of early humans are often inferred from partial skeletons or scattered bone fragments. Rarely is an entire skeleton ever discovered by a paleontologist. Sometimes a single bone can be used to uncover a person's or animal's complex biological and social characteristics. In the following activity, you'll infer a person's height from the length of one bone. By applying a simple calculation to the observed length, you'll develop a "best guess" for body height. Evaluate mathematical relationships. - The formulas below illustrate the relationships between bone lengths and a person's height. MALES (height in inches) - Height equals (length of radius x 3.3) plus 34 - Height equals (length of humerus x 2.9) plus 27.8 FEMALES (height in inches) - Height equals (length of radius x 3.3) plus 32 - Height equals (length of humerus x 2.8) plus 28.1 - Work with a partner. Identify the radius. It is one of the two bones found in the forearm and extends from the base of the wrist to just beneath the elbow hinge. Use a meter stick to measure the length of your partner's radius. Record this length in the table below. - Use the formulas to calculate height based on radius length. Record your calculated height in the table below. - Now identify the humerus in the upper arm. This bone extends from the shoulder socket to just above the elbow hinge. Use a meter stick to measure the length of your partner's humerus. Record this length in the table below. - Use the formulas to calculate height based on humerus length. Record your calculated height in the table below. - Use the meter stick to measure your partner's actual height. Record the measured height. - Compare your calculated and measured heights. How accurate were your inferences? (Answers will vary.) - Which was a more accurate bone length to base your inference upon? (Answers will vary; however, many students will find it easier to measure the length of the humerus.) - Measure the length of your foot. Is this length closer to the length of your radius or humerus? (In most, it will be surprisingly close to the length of the radius.) - Pool and average the class data collected above. Graph the relationship between radius length and height. Use separate curves for males and females. - Work with a partner to determine if you can find a correlation between height and the length of a person's tibia, or shinbone. Can you find a correlation between height and the length of a person's femur (thighbone)? Once you arrive at the relationship, have other student groups test out your calculation method. ACTIVITY 2: PAINTING RITUALS What are the rituals in your daily life? 
Do your evening rituals depend on the lineup of sky objects or the lineup of nightly television shows? What can someone learn about you and your society by studying rituals? Identify and communicate present-day rituals through ancient art techniques. - non-toxic finger paints - Identify three ritual activities you share with members of your family. Identify three different rituals you share with friends. - Obtain a set of non-toxic finger paints from your instructor. Fingerpaint each of the rituals identified above. Use straight lines and the fewest strokes possible to simulate schematic images or the "stick figures" typical of some ancient art. - Display your illustrations. Have other students interpret your rituals. FIND OUT MORE For more about the South African rock art, see "Rock Art in Southern Africa" in the November 1996 issue of Scientific American magazine. FOR FURTHER THOUGHT - In Activity 1, why do the formulas contain different calculations for determining male and female heights? (Skeletons of males and females have different proportions.) - Do you think archaeologists of the future might interpret the graffiti of today similarly to the way we interpret ancient rock paintings or petroglyphs? - Much rock art in Paleolithic times in various areas of the world was painted in red or yellow ochre. What is this pigment and why was it used by so many different cultures?
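For teachers who want a quick way to check students' calculations in Activity 1, here is a minimal sketch in Python of the height formulas given above. It assumes bone lengths and heights are measured in inches, as the formulas require; the function name, dictionary layout and example lengths are illustrative choices, not part of the original activity.

```python
# A small sketch of Activity 1's bone-length calculation, assuming the
# formulas given in the activity (heights and bone lengths in inches).
# Function and variable names are illustrative only.

def estimate_height(bone_length_in, bone="radius", sex="male"):
    """Estimate height (inches) from the length of a radius or humerus."""
    formulas = {
        ("radius", "male"):    lambda length: length * 3.3 + 34,
        ("humerus", "male"):   lambda length: length * 2.9 + 27.8,
        ("radius", "female"):  lambda length: length * 3.3 + 32,
        ("humerus", "female"): lambda length: length * 2.8 + 28.1,
    }
    return formulas[(bone, sex)](bone_length_in)

if __name__ == "__main__":
    # Example: a 10-inch radius measured on a male student
    print(estimate_height(10, bone="radius", sex="male"))      # 67.0 inches
    # Example: a 12-inch humerus measured on a female student
    print(estimate_height(12, bone="humerus", sex="female"))   # 61.7 inches
```

Students can compare the printed estimates with their partner's measured height, just as the table in Activity 1 asks them to do by hand.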
http://www.pbs.org/safarchive/4_class/45_pguides/pguide_702/4572_firstpeople.html
13
50
In linear algebra, real numbers are called scalars and relate to vectors in a vector space through the operation of scalar multiplication, in which a vector can be multiplied by a number to produce another vector. More generally, a vector space may be defined by using any field instead of real numbers, such as complex numbers. Then the scalars of that vector space will be the elements of the associated field. A scalar product operation (not to be confused with scalar multiplication) may be defined on a vector space, allowing two vectors to be multiplied to produce a scalar. A vector space equipped with a scalar product is called an inner product space. The real component of a quaternion is also called its scalar part. The term is also sometimes used informally to mean a vector, matrix, tensor, or other usually "compound" value that is actually reduced to a single component. Thus, for example, the product of a 1×n matrix and an n×1 matrix, which is formally a 1×1 matrix, is often said to be a scalar. The word scalar derives from the Latin word scalaris, the adjectival form of scala (Latin for "ladder"). The English word "scale" is also derived from scala. The first recorded usage of the word "scalar" in mathematics was by François Viète in Analytic Art (In artem analyticen isagoge) (1591):
- Magnitudes that ascend or descend proportionally in keeping with their nature from one kind to another are called scalar terms.
- (Latin: Magnitudines quae ex genere ad genus sua vi proportionaliter adscendunt vel descendunt, vocentur Scalares.)
W. R. Hamilton later used the word for the real part of a quaternion:
- The algebraically real part may receive, according to the question in which it occurs, all values contained on the one scale of progression of numbers from negative to positive infinity; we shall call it therefore the scalar part.
Definitions and properties
Scalars of vector spaces
A vector space is defined as a set of vectors, a set of scalars, and a scalar multiplication operation that takes a scalar k and a vector v to another vector kv. For example, in a coordinate space, the scalar multiplication of a vector (v1, v2, ..., vn) by a scalar k yields (kv1, kv2, ..., kvn). In a (linear) function space, kƒ is the function x ↦ k(ƒ(x)).
Scalars as vector components
According to a fundamental theorem of linear algebra, every vector space has a basis. It follows that every vector space over a scalar field K is isomorphic to a coordinate vector space where the coordinates are elements of K. For example, every real vector space of dimension n is isomorphic to n-dimensional real space Rn.
Scalars in normed vector spaces
Alternatively, a vector space V can be equipped with a norm function that assigns to every vector v in V a scalar ||v||. By definition, multiplying v by a scalar k also multiplies its norm by |k|. If ||v|| is interpreted as the length of v, this operation can be described as scaling the length of v by |k|. A vector space equipped with a norm is called a normed vector space (or normed linear space). The norm is usually defined to be an element of V's scalar field K, which restricts the latter to fields that support the notion of sign. Moreover, if V has dimension 2 or more, K must be closed under square root, as well as the four arithmetic operations; thus the rational numbers Q are excluded, but the surd field is acceptable. For this reason, not every scalar product space is a normed vector space.
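The coordinate-space description of scalar multiplication and the norm-scaling property ||kv|| = |k|·||v|| can be illustrated with a few lines of code. The sketch below uses plain Python lists and the Euclidean norm as one concrete choice of norm; the function names are illustrative and not part of the article.

```python
import math

# Scalar multiplication in the coordinate space R^n:
# k * (v1, ..., vn) = (k*v1, ..., k*vn)
def scalar_multiply(k, v):
    return [k * vi for vi in v]

# The Euclidean norm on R^n, one example of a norm function ||v||
def norm(v):
    return math.sqrt(sum(vi * vi for vi in v))

v = [3.0, 4.0]   # ||v|| = 5.0
k = -2.0

kv = scalar_multiply(k, v)   # [-6.0, -8.0]
print(norm(kv))              # 10.0
print(abs(k) * norm(v))      # 10.0, illustrating ||kv|| = |k| * ||v||
```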
Scalars in modules
When the requirement that the set of scalars form a field is relaxed so that it need only form a ring (so that, for example, the division of scalars need not be defined, or the scalars need not be commutative), the resulting more general algebraic structure is called a module. In this case the "scalars" may be complicated objects. For instance, if R is a ring, the vectors of the product space Rn can be made into a module with the n×n matrices with entries from R as the scalars. Another example comes from manifold theory, where the space of sections of the tangent bundle forms a module over the algebra of real functions on the manifold.
Scaling transformation
Scalar operations (computer science)
Operations that apply to a single value at a time.
See also
- Lay, David C. (2006). Linear Algebra and Its Applications (3rd ed.). Addison–Wesley. ISBN 0-321-28713-4.
- Strang, Gilbert (2006). Linear Algebra and Its Applications (4th ed.). Brooks Cole. ISBN 0-03-010567-6.
- Axler, Sheldon (2002). Linear Algebra Done Right (2nd ed.). Springer. ISBN 0-387-98258-2.
- Lincoln Collins. Biography Paper: Francois Viete. http://math.ucdenver.edu/~wcherowi/courses/m4010/s08/lcviete.pdf
- Hazewinkel, Michiel, ed. (2001), "Scalar", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4.
- Weisstein, Eric W., "Scalar", MathWorld.
- Mathwords.com – Scalar
http://en.wikipedia.org/wiki/Scalar_(mathematics)
13
57
The Divergence Operator What does the symbol in Equation 1 mean? What is an upside-down triangle (also known as the del operator) with a dot next to it do? In this section, I'll give the definition with no math: Divergence at a point (x,y,z) is the measure of the vector flow out of a surface surrounding that point. That is, imagine a vector field represents water flow. Then if the divergence is a positive number, this means water is flowing out of the point (like a water spout - this location is considered a source). If the divergence is a negative number, then water is flowing into the point (like a water drain - this location is known as a sink). I will give some examples to make this more clear. First, imagine we have a vector field (given by the vector function A) as shown in Figure 1, and we want to know what the divergence is at the point P: Figure 1. Example of a Vector Field Surrounding a Point. We also draw an imaginary surface (S) surrounding the point P. Now imagine the vector A represents water flow. Then, if you add up the amount of water flowing out of the surface, would the amount be positive? The answer, is yes: water is flowing out of the surface at every location along the surface S. Hence, we can say that the divergence at P is positive. Let's take another simple example, that of Figure 2. We have a new vector field B surrounding the point P: Figure 2. Example of a Vector Field Surrounding a Point (negative divergence). In Figure 2, if we imagine the water flowing, we would see the point P acting like a drain or water sink. In this case, the flow out of the surface is negative - hence, the divergence of the field B at P is negative. Pretty simple, eh? Here's a couple more examples. Figure 3 has a vector field C surrounding the point: Figure 3. Example of a Vector Field with no Variation around a Point. In Figure 3, if C represents the flow of water, does more water flow into or out of the surface? In the top of Figure 3, water flows out of the surface, but at the bottom it flows in. Since the field has equal flow into and out of the surface S, the divergence is zero. Seeing as you are probably having a great time, let's do two more examples. Check out Figure 4: Figure 4. A field that wraps around a Point. In Figure 4, we have a vector field D that wraps around the point P. Is the flow positive (out of the surface) or negative (into the surface)? At each point along the surface S, the field is flowing tangentially along the surface. Therefore the field is not flowing into or out of the surface at each point. Hence, again, we have the divergence of D equal to 0 at P. Let's look at a last example, the field E in Figure 5: Figure 5. A More Complicated Vector Field around a point. The vector field E has a large vector above the point P indicating a strong field there - a lot of water flowing out of the surface. The vector to the left of P is small and tangential to the surface, so there is no flow into or out of S at that point. The same is true for the vector to the right of P. And the vector below P is small, indicating a smaller amount of water flowing into the surface. Hence, we can guess the divergence is positive - more water is flowing out of the surface than into it. But how exactly is divergence quantified? To get to that, we'll have to move on to the mathematical section of the divergence page. The Math Behind the Divergence First, we must know that the divergence operator can only accept a vector function as an input. 
A vector function is simply a vector of three functions, broken into x-, y-, and z-components. To learn more, see the vector function page. Divergence is a specific measure of how fast the vector field is changing in the x, y, and z directions. If a vector function A is given by A = (Ax, Ay, Az), then the divergence of A is the sum of how fast each component is changing along its own direction:
∇ · A = ∂Ax/∂x + ∂Ay/∂y + ∂Az/∂z
The symbol ∂/∂x is the partial derivative symbol, which means the rate of change with respect to x. For more information, see the partial derivatives page. Divergence Mathematical Examples Let's recall the vector field E from Figure 5, but this time we will assign some values to the vectors, as shown in Figure 6: Figure 6. The Vector Field E with Vector Magnitudes Shown. In Figure 6, let's assume that E is not changing in the z-direction, so that we can neglect the final term (∂Ez/∂z) in the divergence formula above. We will also assume that over this region (around point P) everything is varying linearly. Then what is the divergence in Figure 6? Using the formula, we can estimate the rate of change in the x- and y-directions (assuming the rate of change in z is zero):
∇ · E ≈ [Ex(1,0,0) − Ex(−1,0,0)]/2 + [Ey(0,1,0) − Ey(0,−1,0)]/2
You should make sure this estimate makes sense. Note that when we evaluate the x-rate of change of E, we only care about the Ex component. If you look at the blue vectors in Figure 6, you can see that at locations (1,0,0) and (-1,0,0) we have Ex=0. Hence there is no change of Ex in x. However, if you look at the rate of change of E in the y-direction, we have Ey=3 at (0,1,0) and Ey=1 at (0,-1,0). Therefore, there is more flow out of the point P than into the point with respect to the y-direction. Hence the point P acts as a source, and the divergence is positive. Let us now move on to Example 2. I will assume you know a little bit of calculus, so that I can use the derivative operation. The derivative calculates the rate of change of a function with respect to a single variable. Consider a vector function A whose components are simple functions of x, y and z. Taking the partial derivative of each component with respect to its own variable and summing the results gives the divergence at every location in space. Suppose the divergence of A works out to 2 + 6z. Then, if you want to know the divergence at (x,y,z) = (3,2,1), you substitute in to see that the divergence of A is 2 + 6*1 = 8. We can find the divergence at any point in space because we knew the functions defining the vector A, and then calculated the rates of change (derivatives). I think this pretty well sums up divergence, at least as far as we will need to know for Maxwell's Equations. Remember: for any point in space, the divergence takes a vector function and produces a single number. This number evaluates whether the point acts as a source of fields (produces more fields than it takes in) or as a sink of fields (fields are diminished around the point).
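To make the "sum of rates of change" recipe concrete, here is a small sketch using the SymPy library. The field A = (2x, 6yz, 0) is simply one field whose divergence works out to 2 + 6z, matching the value quoted in Example 2 above; it is not necessarily the exact field used on the original page.

```python
# A minimal sketch of the divergence calculation using SymPy.
# A = (2x, 6yz, 0) is an illustrative field whose divergence is 2 + 6z.
import sympy as sp

x, y, z = sp.symbols('x y z')

Ax, Ay, Az = 2*x, 6*y*z, sp.Integer(0)

# Divergence = dAx/dx + dAy/dy + dAz/dz
divergence = sp.diff(Ax, x) + sp.diff(Ay, y) + sp.diff(Az, z)
print(divergence)                           # 6*z + 2
print(divergence.subs({x: 3, y: 2, z: 1}))  # 8, as in the worked example
```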
http://www.maxwells-equations.com/divergence.php
13
131
Peter's Third String Determine the maximum area of a rectangle with a given perimeter Consider how the area of a quadrilateral changes as its shape changes Interpret a relationship from a graph Note that this is the same problem as we used in Peter’s Second String, Level 5 but at this level we expect students to have a more sophisticated way to solve the problem. The extension problem here is quite difficult. If the students can make progress with any quadrilateral, then they have done well. The method shown here may not be the easiest way to solve the problem. If your class comes up with a better approach we would like to see it so that we can add it to this problem. To be able to do this problem students need to be able to measure lengths and calculate the perimeters and areas of rectangles using the formulae: perimeter = twice length plus twice width and area = length x width. In addition, it would help if they have tried Peter’s Second String, Level 5 and have seen how to use tables. Apparently in some areas of New Guinea they measure the area of land by its perimeter. When you think about it this isn’t such a good idea. A piece of land can have a relatively large perimeter and only a small area. This sequence of problems is built up from this simple bad idea. Seven problems have been spawned by the perimeter-area tangle. These come in two waves. First there is the string of Peter’s String problems. These are Peters’ String, Measurement, Level 4, Peters’ Second String, Measurement, Level 5, Peters’ Third String, Algebra, Level 6, The Old Chicken Run Problem, Algebra, Level 6 and the Polygonal String Problem, Algebra, Level 6. These follow through on the non-link between rectangles’ areas and perimeters, going as far as showing that among all quadrilaterals with a fixed perimeter, the square has the largest area. In the second last of these five problems we are able to use an idea that has been developed to look at the old problem of maximising the area of a chicken run. This is often given as an early application of calculus but doesn’t need more than an elementary knowledge of parabolas. The final problem looks at the areas of regular polygons with a fixed perimeter. We show that they are ‘bounded above’ by the circle with the same perimeter. The second string of lessons looks at the problem from the other side: does area have anything to say about perimeter? This leads to questions about the maximum and minimum perimeters for a given area. The lessons here are Karen’s Tiles, Measurement, Level 5 and Karen’s Second Tiles, Algebra, Level 6. Mathematics is more than doing calculations or following routine instructions. Thinking and creating are at the heart of the subject. Though there are some problems that have a set procedure or a formula that can be used to solve them, most worthwhile problems require the use of known mathematics (but not necessarily formulae) in a novel way. Throughout this web site we are hoping to motivate students to think about what they are doing and see connections between various aspects of what they are doing. The mathematical question asked here is what can we say about the rectangle of biggest area that has a fixed perimeter? This question is typical of a lot of mathematical ones that attempt to maximise quantities with given restrictions. There are obvious benefits for this type of maximising activity. 
The ideas in this sequence of problems further help to develop the student’s concept of mathematics, the thought structure underlying the subject, and the way the subject develops. We start off with a piece of string and use this to realise that there is no direct relation between the perimeter of a rectangle and its area. This leads us to thinking about what areas are possible. A natural consequence of this is to try to find the largest and smallest areas that a given perimeter can encompass. We end up solving both these problems. The largest area comes from a square and the smallest area is as small as we like to make it. Some of the techniques we have used to produce the largest area are then applied in a completely different situation – the chicken run. This positive offshoot of what is really a very pure piece of mathematics initially, is the kind of thing that frequently happens in maths. Somehow, sanitised bits of mathematics, produced in a pure mathematician’s head, can often be applied to real situations. The next direction that the problem takes is to turn the original question around. Don’t ask given perimeter what do we know about area, ask given area what do we know about perimeter. Again there seems to be no direct link. But having spent time with rectangles, the obvious thing to do is to look at other shapes. We actually look at polygons and their relation with circles but there is no reason why you shouldn’t look at triangles or hexagons. Here you might ask whether you can find two triangles with the same area and perimeter or what is the triangle with given area that has maximum perimeter. We have actually avoided these last two questions because of the difficulty of the maths that would be required to solve them. However, we may have got it wrong. There may be some nice answers that are relatively easy to find. If so, please let us know. Peter had kept a piece of string that had been on a parcel that had come for his birthday. It was 30 cm long. He played with it and made different shapes out of it. Then he got stuck on rectangles. He wasn’t sure but he thought that the rectangle with the biggest area that he could make was a square. His sister Veronica said that was crazy but she didn’t have a good reason for saying that. Who was right and why? - Introduce the problem to the class. Get them to consider how they would approach the problem. - Let them investigate Peter’s conjecture in groups using any approach that they want. (Although you might decide to move them in a more sophisticated direction.) At some stage though they will probably have to write down some equations. They may need some help at this point. - Move round the groups as they work to check on progress. Encourage them to set up some equations and reduce the number of dependent variables to one. - If a lot of the pairs are having problems, then you may want a brainstorming session to help them along. - Share the students’ answers. Get them to write up their work in their books. Make sure that they have carefully explained their arguments. - Encourage the more able students to try the Extension Problem. You may want to give them a few days to think about it. Extension to the problem If Peter made any quadrilateral shape with his string, would the maximum area he could get be produced by a square? We have already done this problem using a table in Peter’s Second String, Level 5. 
But at this level students should begin to see how to use algebra more effectively in such problems, so we would encourage you to move them in this direction here. What do we know? Well, if we make Peter's string into a rectangle with side lengths L and W, then 2L + 2W = 30 or L + W = 15. This gives us W = 15 – L. Then we know that LW = A, where A is the area of the rectangle. So we can substitute for W into this equation and at least arrive at an expression for A that only involves the one variable L. This expression is A = L(15 – L). Now we can graph this function because it's a parabola. And we know from past experience that (i) it has a maximum value because the coefficient of L² is negative; and (ii) it crosses the L-axis when L = 0 and L = 15. But parabolas have their axis of symmetry, and hence their maximum point, midway between the points where they cross the L-axis. So the axis of symmetry of the parabola above is at L = 7.5. This is also where its maximum point is. But if L = 7.5, then W = 15 – L = 15 – 7.5 = 7.5. So W = L and the rectangle has to be a square. (A quick numerical check of this result is given after the solution to the extension below.) Solution to the extension We do this in easy steps. First, the rectangle of perimeter 30 cm with the biggest area is a square. This has been done in the main problem. Second, a parallelogram with no internal angles of 90° has a smaller area than a rectangle with the same side lengths. To see this, consider the parallelogram and the rectangle below. Clearly the rectangle has area bc. The parallelogram has area base x height. Since its height is less than c, its area is less than bc. So the parallelogram has a smaller area than the rectangle even though they both have the same perimeter. Third, a quadrilateral with an exterior angle less than 180° has a smaller area than a corresponding one with all exterior angles bigger than 180° (in other words, a non-convex quadrilateral has a smaller area than a corresponding convex one). To see this, look at the diagram below. Clearly the quadrilateral on the right has the larger area, yet they both have the same perimeter. Fourth, the quadrilateral on the right above has smaller area than a quadrilateral whose diagonals are perpendicular. To see this, imagine that the points A and B are fixed and that A to C to B is a piece of string. Place a pencil inside the string at C. Move the pencil keeping AC and BC straight (taut). What is the biggest area of the triangle ABC? Well, as C moves, the base AB remains the same. So the biggest area is found when the height of the triangle is the largest. This occurs when the perpendicular from C to AB goes through the midpoint of AB. Call this position for C, K. (Can you see that AK = KB?) Notice that the quadrilateral formed by AKBD has the same perimeter as ACBD but AKBD has the bigger area. We can do exactly the same thing with the point D. The position of D which makes triangle ABD have the biggest area is when D is above the midpoint of AB. Call this point L. Then the quadrilateral AKBL has the same perimeter as ABCD but it has a bigger area. What's more, KL is perpendicular to AB. Fifth, we want to show that among all quadrilaterals with the same perimeter and with perpendicular diagonals, the one with the biggest area is a parallelogram. Consider the quadrilateral below. By the argument of 'Four', we can assume that AB = AD and that BC = CD. Now repeat the argument of 'Four' using B as the point to put the pencil and A and C as the fixed points. The argument we used above then shows that we can move B to a place above the midpoint of AC and in the process increase the area of triangle ABC. 
Repeating this on triangle ACD, we see that the area of this triangle is maximised when D is above the midpoint of AC. So we have a quadrilateral whose area is bigger than the original quadrilateral ABCD. In this new quadrilateral, AB = BC = CD = DA. From here it is easy to show that opposite angles are equal and so the quadrilateral is a parallelogram (with equal sides – a rhombus). Now let's recap. We can show that among all rectangles with a given perimeter, the one with the biggest area is a square. Then any other quadrilateral can be changed into another quadrilateral with the same perimeter and bigger area, using 'Three' and/or the 'pencil' approach of 'Four'. What's more, this quadrilateral has to be a parallelogram. But we know that for every parallelogram with a given perimeter there is a rectangle with the same perimeter and a larger area. Hence among all quadrilaterals with a given perimeter, the square is the one with the biggest area.
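For students who would like the numerical check of the main problem mentioned above, the short sketch below sweeps candidate lengths for a rectangle of perimeter 30 cm and confirms that the area A = L(15 – L) peaks at L = 7.5, i.e. at the square. The step size and variable names are arbitrary choices; it only checks the rectangle case, not the quadrilateral extension.

```python
# Numerical check of the main problem: for a perimeter of 30 cm, the rectangle
# area A = L * (15 - L) is largest when L = 7.5 cm, i.e. when the rectangle
# is a square. A simple sweep over candidate lengths is enough to see this.

def area(L, perimeter=30):
    W = perimeter / 2 - L   # since 2L + 2W = perimeter
    return L * W

lengths = [i / 100 for i in range(1, 1500)]   # 0.01 cm up to 14.99 cm
best_L = max(lengths, key=area)

print(best_L)         # 7.5
print(area(best_L))   # 56.25 (= 7.5 * 7.5, the square's area)
```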
http://www.nzmaths.co.nz/resource/peters-third-string
13
51
Posted by Rachel on Thursday, February 14, 2008 at 4:52pm. (1) A triangle is called an isosceles triangle if it has two sides with equal lengths. Consider an isosceles triangle ABC with AC = CB. Then side AB (i.e. the side that is not equal to the other sides) is called the base side. The angle opposite the base side is called the vertex angle and the other two angles of an isosceles triangle are called base angles. (2) The SAS Inequality Theorem (Hinge Theorem) states: If two sides of one triangle are congruent to two sides of another triangle, and the included angle of the first triangle is larger than the included angle of the second triangle, then the third side of the first triangle is longer than the third side of the second triangle. (3) The SSS Inequality Theorem (Converse of Hinge Theorem) states: If two sides of one triangle are congruent to two sides of another triangle, and the third side of the first triangle is longer than the third side of the second triangle, then the included angle of the first triangle is larger than the included angle of the second triangle. Steps in an Indirect Proof: 1-Assume that the opposite of what you are trying to prove is true. 2-From this assumption, see what conclusions can be drawn. These conclusions must be based upon the assumption and the use of valid statements. 3-Search for a conclusion that you know is false because it contradicts given or known information. Oftentimes you will be contradicting a piece of GIVEN information. 4-Since your assumption leads to a false conclusion, the assumption must be false. 5-If the assumption (which is the opposite of what you are trying to prove) is false, then you will know that what you are trying to prove must be true. Now, use these steps and form your own algebraic indirect question. Then write back and we will check over your work.
Assume: x < 5 or x = 5. Using a table with several possibilities for x, given that x < 5 or x = 5:
x | 2x - 3
1 | -1
2 | 1
3 | 3
4 | 5
5 | 7
This is a contradiction, because when x < 5 or x = 5, 2x - 3 is less than or equal to 7, which contradicts the given fact that 2x - 3 > 7. So in both cases, the assumption leads to the contradiction of a known fact. Therefore, the assumption that x is less than or equal to 5 must be false, which means that x > 5 must be true. Is that right?
Given: 2x - 3 > 7. Prove: x > 5. Let it be given that 2x - 3 > 7. Assume that x ≤ 5. Then by the Addition Property of Inequality: 2x - 3 > 7, so 2x > 10. By the Division Property of Inequality: 2x/2 > 10/2. By simplification: x > 5. But this is a contradiction of the assumption that x ≤ 5. Thus x > 5. This is a more formal proof. Using a table of values only shows the case for the values you selected. In reality, you must show the result is true for all values, not just those you selected. The paragraph proof above does just that. 
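The algebra in the proof above can also be checked mechanically. The sketch below uses the SymPy library to reduce the inequality 2x - 3 > 7 and confirm it is equivalent to x > 5; it is only a check of the algebra, not a substitute for writing out the indirect proof.

```python
# A quick algebraic check of the inequality used in the indirect proof:
# reducing 2x - 3 > 7 should give x > 5.
import sympy as sp

x = sp.symbols('x', real=True)

# Prints a condition equivalent to x > 5 (for example, 5 < x)
print(sp.solve_univariate_inequality(2*x - 3 > 7, x))

# Spot-check the contradiction step: at x = 5 (the largest value allowed by
# the assumption x <= 5), 2x - 3 only reaches 7, so 2x - 3 > 7 cannot hold.
print((2*x - 3).subs(x, 5))   # 7
```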
http://www.jiskha.com/display.cgi?id=1203025920
13
66
The lipid bilayer is a thin polar membrane made of two layers of lipid molecules. These membranes are flat sheets that form a continuous barrier around cells. The cell membrane of almost all living organisms and many viruses are made of a lipid bilayer, as are the membranes surrounding the cell nucleus and other sub-cellular structures. The lipid bilayer is the barrier that keeps ions, proteins and other molecules where they are needed and prevents them from diffusing into areas where they should not be. Lipid bilayers are ideally suited to this role because, even though they are only a few nanometers in width, they are impermeable to most water-soluble (hydrophilic) molecules. Bilayers are particularly impermeable to ions, which allows cells to regulate salt concentrations and pH by pumping ions across their membranes using proteins called ion pumps. Natural bilayers are usually composed of phospholipids, which have a hydrophilic head and two hydrophobic tails each. When phospholipids are exposed to water, they arrange themselves into a two-layered sheet (a bilayer) with all of their tails pointing toward the center of the sheet. The center of this bilayer contains almost no water and excludes molecules like sugars or salts that dissolve in water but not in oil. This assembly process is similar to the coalescing of oil droplets in water and is driven by the same force, called the hydrophobic effect. Because lipid bilayers are quite fragile and are so thin that they are invisible in a traditional microscope, bilayers are very challenging to study. Experiments on bilayers often require advanced techniques like electron microscopy and atomic force microscopy. Phospholipids with certain head groups can alter the surface chemistry of a bilayer and can, for example, mark a cell for destruction by the immune system. Lipid tails can also affect membrane properties, for instance by determining the phase of the bilayer. The bilayer can adopt a solid gel phase state at lower temperatures but undergo phase transition to a fluid state at higher temperatures. The packing of lipids within the bilayer also affects its mechanical properties, including its resistance to stretching and bending. Many of these properties have been studied with the use of artificial "model" bilayers produced in a lab. Vesicles made by model bilayers have also been used clinically to deliver drugs. Biological membranes typically include several types of lipids other than phospholipids. A particularly important example in animal cells is cholesterol, which helps strengthen the bilayer and decrease its permeability. Cholesterol also helps regulate the activity of certain integral membrane proteins. Integral membrane proteins function when incorporated into a lipid bilayer. Because bilayers define the boundaries of the cell and its compartments, these membrane proteins are involved in many intra- and inter-cellular signaling processes. Certain kinds of membrane proteins are involved in the process of fusing two bilayers together. This fusion allows the joining of two distinct structures as in the fertilization of an egg by sperm or the entry of a virus into a cell. Structure and organization A lipid bilayer, also known as the phospholipid bilayer, is a sheet of lipids two molecules thick, arranged so that the hydrophilic phosphate heads point “out” to the water on either side of the bilayer and the hydrophobic tails point “in” to the core of the bilayer. 
This arrangement results in two “leaflets” which are each a single molecular layer. Lipids self-assemble into this structure because of the hydrophobic effect which creates an energetically unfavorable interaction between the hydrophobic lipid tails and the surrounding water. Thus, a lipid bilayer is typically held together by entirely non-covalent forces that do not involve formation of chemical bonds between individual molecules. There are a few similarities between this structure and a common soap bubble, although there are also important differences. As illustrated, both structures involve two single-molecule layers of an amphiphilic substance. In the case of a soap bubble, the two soap monolayers coat an intervening water layer. The hydrophilic heads are oriented “in” toward this water core, while the hydrophobic tails point “out” to the air. In the case of a lipid bilayer, this structure is reversed with heads out and tails in. Another important difference between lipid bilayers and soap bubbles is their relative size. Soap bubbles are typically hundreds of nanometers thick, on the same order as the wavelength of light, which is why interference effects cause rainbow colors on a bubble surface. A single lipid bilayer, on the other hand, is around five nanometers thick, much smaller than the wavelength of light and is therefore invisible to the eye, even with a standard light microscope. Cross section analysis The lipid bilayer is very thin compared to its lateral dimensions. If a typical mammalian cell (diameter ~10 micrometre) were magnified to the size of a watermelon (~1 ft/30 cm), the lipid bilayer making up the plasma membrane would be about as thick as a piece of office paper. Despite being only a few nanometers thick, the bilayer is composed of several distinct chemical regions across its cross-section. These regions and their interactions with the surrounding water have been characterized over the past several decades with x-ray reflectometry, neutron scattering and nuclear magnetic resonance techniques. The first region on either side of the bilayer is the hydrophilic headgroup. This portion of the membrane is completely hydrated and is typically around 0.8-0.9 nm thick. In phospholipid bilayers the phosphate group is located within this hydrated region, approximately 0.5 nm outside the hydrophobic core. In some cases, the hydrated region can extend much further, for instance in lipids with a large protein or long sugar chain grafted to the head. One common example of such a modification in nature is the lipopolysaccharide coat on a bacterial outer membrane, which helps retain a water layer around the bacterium to prevent dehydration. Next to the hydrated region is an intermediate region which is only partially hydrated. This boundary layer is approximately 0.3 nm thick. Within this short distance, the water concentration drops from 2M on the headgroup side to nearly zero on the tail (core) side. The hydrophobic core of the bilayer is typically 3-4 nm thick, but this value varies with chain length and chemistry. Core thickness also varies significantly with temperature, particularly near a phase transition. In many naturally occurring bilayers, the compositions of the inner and outer membrane leaflets are different. In human red blood cells, the inner (cytoplasmic) leaflet is largely composed of phosphatidylethanolamine, phosphatidylserine and phosphatidylinositol and its phosphorylated derivatives. 
By contrast, the outer (extracellular) leaflet is based on phosphatidylcholine, sphingomyelin and a variety of glycolipids. In some cases, this asymmetry is based on where the lipids are made in the cell and reflects their initial orientation. The biological functions of lipid asymmetry are imperfectly understood, although it is clear that it is used in several different situations. For example, when a cell undergoes apoptosis, the phosphatidylserine — normally localised to the cytoplasmic leaflet — is transferred to the outer surface: there it is recognised by a macrophage which then actively scavenges the dying cell. Lipid asymmetry arises, at least in part, from the fact that most phospholipids are synthesised and initially inserted into the inner monolayer: those that constitute the outer monolayer are then transported from the inner monolayer by a class of enzymes called flippases. Other lipids, such as sphingomyelin, appear to be synthesised at the external leaflet. Flippases are members of a larger family of lipid transport molecules which also includes floppases, which transfer lipids in the opposite direction, and scramblases, which randomize lipid distribution across lipid bilayers (as in apoptotic cells). In any case, once lipid asymmetry is established it does not normally dissipate quickly because spontaneous flip-flop of lipids between leaflets is extremely slow. It is possible to mimic this asymmetry in the laboratory in model bilayer systems. Certain types of very small artificial vesicle will automatically make themselves slightly asymmetric, although the mechanism by which this asymmetry is generated is very different from that in cells. By utilizing two different monolayers in Langmuir-Blodgett deposition or a combination of Langmuir-Blodgett and vesicle rupture deposition it is also possible to synthesize an asymmetric planar bilayer. This asymmetry may be lost over time as lipids in supported bilayers can be prone to flip-flop.

Phases and phase transitions

At a given temperature a lipid bilayer can exist in either a liquid or a gel (solid) phase. All lipids have a characteristic temperature at which they transition (melt) from the gel to liquid phase. In both phases the lipid molecules are prevented from flip-flopping across the bilayer, but in liquid phase bilayers a given lipid will exchange locations with its neighbor millions of times a second. This random walk exchange allows lipids to diffuse and thus wander across the surface of the membrane. Unlike liquid phase bilayers, the lipids in a gel phase bilayer are locked in place. The phase behavior of lipid bilayers is largely determined by the strength of the attractive Van der Waals interactions between adjacent lipid molecules. Longer tailed lipids have more area over which to interact, increasing the strength of this interaction and consequently decreasing the lipid mobility. Thus, at a given temperature, a short-tailed lipid will be more fluid than an otherwise identical long-tailed lipid. Transition temperature can also be affected by the degree of unsaturation of the lipid tails. An unsaturated double bond can produce a kink in the alkane chain, disrupting the lipid packing. This disruption creates extra free space within the bilayer which allows additional flexibility in the adjacent chains. An example of this effect can be noted in everyday life: butter, which has a large percentage of saturated fats, is solid at room temperature, while vegetable oil, which is mostly unsaturated, is liquid.
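To make the chain-length and unsaturation trends above more concrete, here is a minimal sketch; the lipid names and transition temperatures are approximate literature values, not figures taken from this article, and the variable names are only illustrative:

# Approximate gel-to-fluid (melting) transition temperatures, in degrees C,
# for some common phosphatidylcholines; rough literature values for illustration.
transition_temps_c = {
    "DMPC (two saturated 14-carbon tails)": 24,
    "DPPC (two saturated 16-carbon tails)": 41,    # longer tails -> higher Tm
    "DSPC (two saturated 18-carbon tails)": 55,    # longer still -> higher again
    "DOPC (18-carbon tails, one double bond each)": -20,  # kinked tails -> much lower Tm
}

for lipid, tm in transition_temps_c.items():
    state = "gel (solid)" if tm > 37 else "fluid"
    print(f"{lipid}: Tm ~ {tm} C, so {state} at body temperature (37 C)")

Run as-is, the snippet simply prints which of these bilayers would be solid or fluid at 37 C, echoing the butter-versus-vegetable-oil comparison in the text.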
Most natural membranes are a complex mixture of different lipid molecules. If some of the components are liquid at a given temperature while others are in the gel phase, the two phases can coexist in spatially separated regions, rather like an iceberg floating in the ocean. This phase separation plays a critical role in biochemical phenomena because membrane components such as proteins can partition into one or the other phase and thus be locally concentrated or activated. One particularly important component of many mixed phase systems is cholesterol, which modulates bilayer permeability, mechanical strength and biochemical interactions.

Surface chemistry

While lipid tails primarily modulate bilayer phase behavior, it is the headgroup that determines the bilayer surface chemistry. Most natural bilayers are composed primarily of phospholipids, although sphingolipids such as sphingomyelin and sterols such as cholesterol are also important components. Of the phospholipids, the most common headgroup is phosphatidylcholine (PC), accounting for about half the phospholipids in most mammalian cells. PC is a zwitterionic headgroup: it has a negative charge on the phosphate group and a positive charge on the amine but, because these local charges balance, no net charge. Other headgroups are also present to varying degrees and can include phosphatidylserine (PS), phosphatidylethanolamine (PE) and phosphatidylglycerol (PG). These alternate headgroups often confer specific biological functionality that is highly context-dependent. For instance, PS presence on the extracellular membrane face of erythrocytes is a marker of cell apoptosis, whereas PS in growth plate vesicles is necessary for the nucleation of hydroxyapatite crystals and subsequent bone mineralization. Unlike PC, some of the other headgroups carry a net charge, which can alter the electrostatic interactions of small molecules with the bilayer.

Biological roles

Containment and separation

The primary role of the lipid bilayer in biology is to separate aqueous compartments from their surroundings. Without some form of barrier delineating “self” from “non-self” it is difficult to even define the concept of an organism or of life. This barrier takes the form of a lipid bilayer in all known life forms except for a few species of archaea which utilize a specially adapted lipid monolayer. It has even been proposed that the very first form of life may have been a simple lipid vesicle with virtually its sole biosynthetic capability being the production of more phospholipids. The partitioning ability of the lipid bilayer is based on the fact that hydrophilic molecules cannot easily cross the hydrophobic bilayer core, as discussed in Transport across the bilayer below. The nucleus, mitochondria and chloroplasts are surrounded by two lipid bilayers, while other structures such as the plasma membrane, endoplasmic reticula, Golgi apparatuses and lysosomes are surrounded by a single lipid bilayer (see Organelle). Prokaryotes have only one lipid bilayer: the cell membrane (also known as the plasma membrane). Many prokaryotes also have a cell wall, but the cell wall is composed of proteins or long chain carbohydrates, not lipids. In contrast, eukaryotes have a range of organelles including the nucleus, mitochondria, lysosomes and endoplasmic reticulum. All of these sub-cellular compartments are surrounded by one or more lipid bilayers and, together, typically comprise the majority of the bilayer area present in the cell.
In liver hepatocytes, for example, the plasma membrane accounts for only two percent of the total bilayer area of the cell, whereas the endoplasmic reticulum contains more than fifty percent and the mitochondria a further thirty percent.

Signaling

Probably the most familiar form of cellular signaling is synaptic transmission, whereby a nerve impulse that has reached the end of one neuron is conveyed to an adjacent neuron via the release of neurotransmitters. This transmission is made possible by the action of synaptic vesicles loaded with the neurotransmitters to be released. These vesicles fuse with the cell membrane at the pre-synaptic terminal and release their contents to the exterior of the cell. The contents then diffuse across the synapse to the post-synaptic terminal. Lipid bilayers are also involved in signal transduction through their role as the home of integral membrane proteins. This is an extremely broad and important class of biomolecule. It is estimated that up to a third of the human proteome may be membrane proteins. Some of these proteins are linked to the exterior of the cell membrane. An example of this is the CD59 protein, which identifies cells as “self” and thus inhibits their destruction by the immune system. The HIV virus evades the immune system in part by grafting these proteins from the host membrane onto its own surface. Alternatively, some membrane proteins penetrate all the way through the bilayer and serve to relay individual signal events from the outside to the inside of the cell. The most common class of this type of protein is the G protein-coupled receptor (GPCR). GPCRs are responsible for much of the cell’s ability to sense its surroundings and, because of this important role, approximately 40% of all modern drugs are targeted at GPCRs. In addition to protein- and solution-mediated processes, it is also possible for lipid bilayers to participate directly in signaling. A classic example of this is phosphatidylserine-triggered phagocytosis. Normally, phosphatidylserine is asymmetrically distributed in the cell membrane and is present only on the interior side. During programmed cell death a protein called a scramblase equilibrates this distribution, displaying phosphatidylserine on the extracellular bilayer face. The presence of phosphatidylserine then triggers phagocytosis to remove the dead or dying cell.

Characterization methods

The lipid bilayer is a very difficult structure to study because it is so thin and fragile. In spite of these limitations dozens of techniques have been developed over the last seventy years to allow investigations of its structure and function. Electrical measurements are a straightforward way to characterize an important function of a bilayer: its ability to segregate and prevent the flow of ions in solution. By applying a voltage across the bilayer and measuring the resulting current, the resistance of the bilayer is determined. This resistance is typically quite high since the hydrophobic core is impermeable to charged species. The presence of even a few nanometer-scale holes results in a dramatic increase in current. The sensitivity of this system is such that even the activity of single ion channels can be resolved. Electrical measurements do not provide an actual picture like imaging with a microscope can. Lipid bilayers cannot be seen in a traditional microscope because they are too thin. In order to see bilayers, researchers often use fluorescence microscopy.
A sample is excited with one wavelength of light and observed in a different wavelength, so that only fluorescent molecules with a matching excitation and emission profile will be seen. Natural lipid bilayers are not fluorescent, so a dye is used that attaches to the desired molecules in the bilayer. Resolution is usually limited to a few hundred nanometers, much smaller than a typical cell but much larger than the thickness of a lipid bilayer. Electron microscopy offers a higher resolution image. In an electron microscope, a beam of focused electrons interacts with the sample rather than a beam of light as in traditional microscopy. In conjunction with rapid freezing techniques, electron microscopy has also been used to study the mechanisms of inter- and intracellular transport, for instance in demonstrating that exocytotic vesicles are the means of chemical release at synapses. 31P-NMR (nuclear magnetic resonance) spectroscopy is widely used for studies of phospholipid bilayers and biological membranes in native conditions. The analysis of 31P-NMR spectra of lipids can provide a wide range of information about lipid bilayer packing, phase transitions (gel phase, physiological liquid crystal phase, ripple phases, non bilayer phases), lipid head group orientation/dynamics, and the elastic properties of pure lipid bilayers, as well as changes in these properties upon binding of proteins and other biomolecules. In addition, a specific H-N...(O)-P NMR experiment (INEPT transfer by scalar coupling 3JH-P~5 Hz) can provide direct information about the formation of hydrogen bonds between the amide protons of a protein and the phosphates of lipid headgroups, which is useful in studies of protein/membrane interactions. A newer method to study lipid bilayers is atomic force microscopy (AFM). Rather than using a beam of light or particles, a very small sharpened tip scans the surface by making physical contact with the bilayer and moving across it, like a record player needle. AFM is a promising technique because it has the potential to image with nanometer resolution at room temperature and even under water or physiological buffer, conditions necessary for natural bilayer behavior. Utilizing this capability, AFM has been used to examine dynamic bilayer behavior including the formation of transmembrane pores (holes) and phase transitions in supported bilayers. Another advantage is that AFM does not require fluorescent or isotopic labeling of the lipids, since the probe tip interacts mechanically with the bilayer surface. Because of this, the same scan can image both lipids and associated proteins, sometimes even with single-molecule resolution. AFM can also probe the mechanical nature of lipid bilayers. Lipid bilayers exhibit high levels of birefringence, where the refractive index in the plane of the bilayer differs from that perpendicular to it by as much as 0.1 refractive index units. This has been used to characterise the degree of order and disruption in bilayers using dual polarisation interferometry, in order to understand mechanisms of protein interaction. Lipid bilayers are complicated molecular systems with many degrees of freedom. Thus atomistic simulation of membranes, and in particular ab initio calculation of their properties, is difficult and computationally expensive. Quantum chemical calculations have recently been performed successfully to estimate the dipole and quadrupole moments of lipid membranes. Hydrated bilayers show rich vibrational dynamics and are good media for efficient vibrational energy transfer.
Vibrational properties of lipid monolayers and bilayers have been investigated by ultrafast spectroscopic techniques and recently developed computational methods.

Transport across the bilayer

Passive diffusion

Most polar molecules have low solubility in the hydrocarbon core of a lipid bilayer and consequently have low permeability coefficients across the bilayer. This effect is particularly pronounced for charged species, which have even lower permeability coefficients than neutral polar molecules. Anions typically have a higher rate of diffusion through bilayers than cations. Compared to ions, water molecules actually have a relatively large permeability through the bilayer, as evidenced by osmotic swelling. When a cell or vesicle with a high interior salt concentration is placed in a solution with a low salt concentration it will swell and eventually burst. Such a result would not be observed unless water was able to pass through the bilayer with relative ease. The anomalously large permeability of water through bilayers is still not completely understood and continues to be the subject of active debate. Small uncharged apolar molecules diffuse through lipid bilayers many orders of magnitude faster than ions or water. This applies both to fats and to organic solvents like chloroform and ether. Regardless of their polar character, larger molecules diffuse more slowly across lipid bilayers than small molecules.

Ion pumps and channels

Two special classes of protein deal with the ionic gradients found across cellular and sub-cellular membranes in nature: ion channels and ion pumps. Both pumps and channels are integral membrane proteins that pass through the bilayer, but their roles are quite different. Ion pumps are the proteins that build and maintain the chemical gradients by utilizing an external energy source to move ions against the concentration gradient to an area of higher chemical potential. The energy source can be ATP, as is the case for the Na+-K+ ATPase. Alternatively, the energy source can be another chemical gradient already in place, as in the Ca2+/Na+ antiporter. It is through the action of ion pumps that cells are able to regulate pH via the pumping of protons. In contrast to ion pumps, ion channels do not build chemical gradients but rather dissipate them in order to perform work or send a signal. Probably the most familiar and best studied example is the voltage-gated Na+ channel, which allows conduction of an action potential along neurons. All ion channels have some sort of trigger or “gating” mechanism. In the previous example it was electrical bias, but other channels can be activated by binding a molecular agonist or through a conformational change in another nearby protein.

Endocytosis and exocytosis

Some molecules or particles are too large or too hydrophilic to effectively pass through a lipid bilayer. Other molecules could pass through the bilayer but must be transported rapidly in such large numbers that channel-type transport is impractical. In both cases these types of cargo can be moved across the cell membrane through fusion or budding of vesicles. When a vesicle is produced inside the cell and fuses with the plasma membrane to release its contents into the extracellular space, this process is known as exocytosis. In the reverse process a region of the cell membrane will dimple inwards and eventually pinch off, enclosing a portion of the extracellular fluid to transport it into the cell.
Endocytosis and exocytosis rely on very different molecular machinery to function, but the two processes are intimately linked and could not work without each other. The primary reason for this interdependence is the sheer volume of lipid material involved. In a typical cell, an area of bilayer equivalent to the entire plasma membrane will travel through the endocytosis/exocytosis cycle in about half an hour. If these two processes were not balancing each other the cell would either balloon outward to an unmanageable size or completely deplete its plasma membrane within a matter of minutes.

Electroporation

Electroporation is the rapid increase in bilayer permeability induced by the application of a large artificial electric field across the membrane. Experimentally, electroporation is used to introduce hydrophilic molecules into cells. It is a particularly useful technique for large, highly charged molecules such as DNA, which would never passively diffuse across the hydrophobic bilayer core. Because of this, electroporation is one of the key methods of transfection as well as bacterial transformation. It has even been proposed that electroporation resulting from lightning strikes could be a mechanism of natural horizontal gene transfer. This increase in permeability primarily affects transport of ions and other hydrated species, indicating that the mechanism is the creation of nm-scale water-filled holes in the membrane. Although electroporation and dielectric breakdown both result from application of an electric field, the mechanisms involved are fundamentally different. In dielectric breakdown the barrier material is ionized, creating a conductive pathway. The material alteration is thus chemical in nature. In contrast, during electroporation the lipid molecules are not chemically altered but simply shift position, opening up a pore which acts as the conductive pathway through the bilayer as it is filled with water.

Mechanics

Lipid bilayers are large enough structures to have some of the mechanical properties of liquids or solids. The area compression modulus Ka, the bending modulus Kb, and the edge energy Λ can be used to describe them. Solid lipid bilayers also have a shear modulus, but, like any liquid, fluid bilayers have a shear modulus of zero. These mechanical properties affect how the membrane functions. Ka and Kb affect the ability of proteins and small molecules to insert into the bilayer, and bilayer mechanical properties have been shown to alter the function of mechanically activated ion channels. Bilayer mechanical properties also govern what types of stress a cell can withstand without tearing. Although lipid bilayers can easily bend, most cannot stretch more than a few percent before rupturing. As discussed in the Structure and organization section, the hydrophobic attraction of lipid tails in water is the primary force holding lipid bilayers together. Thus, the elastic modulus of the bilayer is primarily determined by how much extra area is exposed to water when the lipid molecules are stretched apart. Given this understanding of the forces involved, it is not surprising that studies have shown that Ka varies strongly with osmotic pressure but only weakly with tail length and unsaturation. Because the forces involved are so small, it is difficult to experimentally determine Ka. Most techniques require sophisticated microscopy and very sensitive measurement equipment.
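As a rough numerical illustration of what Ka means in practice, the sketch below uses the standard small-strain relation of membrane mechanics, tension = Ka * (change in area / original area); the modulus and rupture strain used here are assumed, order-of-magnitude literature values rather than numbers taken from this article:

# Tension needed to stretch a fluid bilayer by a few percent (small-strain estimate).
# Ka and the rupture strain below are assumed, order-of-magnitude literature values.
Ka = 0.25            # area compression modulus in N/m (~250 mN/m is typical for PC bilayers)
area_strain = 0.03   # ~3% areal stretch, near the rupture limit mentioned in the text

tension = Ka * area_strain   # tau = Ka * (dA/A)
print(f"Membrane tension at {area_strain:.0%} areal strain: {tension * 1000:.1f} mN/m")
# prints about 7.5 mN/m, i.e. bilayers rupture at quite modest tensions

This is only a back-of-the-envelope estimate, but it conveys why cells cannot rely on stretching alone to accommodate large changes in area.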
In contrast to Ka, which is a measure of how much energy is needed to stretch the bilayer, Kb is a measure of how much energy is needed to bend or flex the bilayer. Formally, the bending modulus is defined as the energy required to deform a membrane from its intrinsic curvature to some other curvature. Intrinsic curvature is defined by the ratio of the diameter of the head group to that of the tail group. For two-tailed PC lipids, this ratio is nearly one, so the intrinsic curvature is nearly zero. If a particular lipid has too large a deviation from zero intrinsic curvature it will not form a bilayer and will instead form other phases such as micelles or inverted micelles. Typically, Kb is not measured experimentally but rather is calculated from measurements of Ka and bilayer thickness, since the three parameters are related. The edge energy Λ is a measure of how much energy it takes to expose a bilayer edge to water by tearing the bilayer or creating a hole in it. The origin of this energy is the fact that creating such an interface exposes some of the lipid tails to water, but the exact orientation of these border lipids is unknown. There is some evidence that both hydrophobic (tails straight) and hydrophilic (heads curved around) pores can coexist.

Fusion

Fusion is the process by which two lipid bilayers merge, resulting in one connected structure. If this fusion proceeds completely through both leaflets of both bilayers, a water-filled bridge is formed and the solutions contained by the bilayers can mix. Alternatively, if only one leaflet from each bilayer is involved in the fusion process, the bilayers are said to be hemifused. Fusion is involved in many cellular processes, particularly in eukaryotes, since the eukaryotic cell is extensively sub-divided by lipid bilayer membranes. Exocytosis, fertilization of an egg by sperm and transport of waste products to the lysosome are a few of the many eukaryotic processes that rely on some form of fusion. Even the entry of pathogens can be governed by fusion, as many bilayer-coated viruses have dedicated fusion proteins to gain entry into the host cell. There are four fundamental steps in the fusion process. First, the involved membranes must aggregate, approaching each other to within several nanometers. Second, the two bilayers must come into very close contact (within a few angstroms). To achieve this close contact, the two surfaces must become at least partially dehydrated, as the bound surface water normally present causes bilayers to strongly repel. The presence of ions, particularly divalent cations like magnesium and calcium, strongly affects this step. One of the critical roles of calcium in the body is regulating membrane fusion. Third, a destabilization must form at one point between the two bilayers, locally distorting their structures. The exact nature of this distortion is not known. One theory is that a highly curved "stalk" must form between the two bilayers. Proponents of this theory believe that it explains why phosphatidylethanolamine, a highly curved lipid, promotes fusion. Finally, in the last step of fusion, this point defect grows and the components of the two bilayers mix and diffuse away from the site of contact. The situation is further complicated when considering fusion in vivo, since biological fusion is almost always regulated by the action of membrane-associated proteins.
The first of these proteins to be studied were the viral fusion proteins, which allow an enveloped virus to insert its genetic material into the host cell (enveloped viruses are those surrounded by a lipid bilayer; some others have only a protein coat). Eukaryotic cells also use fusion proteins, the best studied of which are the SNAREs. SNARE proteins are used to direct all vesicular intracellular trafficking. Despite years of study, much is still unknown about the function of this protein class. In fact, there is still an active debate regarding whether SNAREs are linked to early docking or participate later in the fusion process by facilitating hemifusion. In studies of molecular and cellular biology it is often desirable to artificially induce fusion. The addition of polyethylene glycol (PEG) causes fusion without significant aggregation or biochemical disruption. This procedure is now used extensively, for example by fusing B-cells with myeloma cells. The resulting “hybridoma” from this combination expresses a desired antibody as determined by the B-cell involved, but is immortalized due to the myeloma component. Fusion can also be artificially induced through electroporation in a process known as electrofusion. It is believed that this phenomenon results from the energetically active edges formed during electroporation, which can act as the local defect point to nucleate stalk growth between two bilayers.

Model systems

Lipid bilayers can be created artificially in the lab to allow researchers to perform experiments that cannot be done with natural bilayers. These synthetic systems are called model lipid bilayers. There are many different types of model bilayers, each having experimental advantages and disadvantages. They can be made with either synthetic or natural lipids. Among the most common model systems are:
- Black lipid membranes (BLM)
- Supported lipid bilayers (SLB)
- Tethered Bilayer Lipid Membranes (t-BLM)

Commercial applications

To date, the most successful commercial application of lipid bilayers has been the use of liposomes for drug delivery, especially for cancer treatment. (Note: the term “liposome” is essentially synonymous with “vesicle”, except that vesicle is a general term for the structure whereas liposome refers only to artificial, not natural, vesicles.) The basic idea of liposomal drug delivery is that the drug is encapsulated in solution inside the liposome and then injected into the patient. These drug-loaded liposomes travel through the system until they bind at the target site and rupture, releasing the drug. In theory, liposomes should make an ideal drug delivery system since they can isolate nearly any hydrophilic drug, can be grafted with molecules to target specific tissues and can be relatively non-toxic since the body possesses biochemical pathways for degrading lipids. The first generation of drug delivery liposomes had a simple lipid composition and suffered from several limitations. Circulation in the bloodstream was extremely limited due to both renal clearing and phagocytosis. Refinement of the lipid composition to tune fluidity, surface charge density and surface hydration resulted in vesicles that adsorb fewer proteins from serum and thus are less readily recognized by the immune system. The most significant advance in this area was the grafting of polyethylene glycol (PEG) onto the liposome surface to produce “stealth” vesicles which circulate over long times without immune or renal clearing.
The first stealth liposomes were passively targeted at tumor tissues. Because tumors induce rapid and uncontrolled angiogenesis they are especially “leaky” and allow liposomes to exit the bloodstream at a much higher rate than normal tissue would. More recently, work has been undertaken to graft antibodies or other molecular markers onto the liposome surface in the hope of actively binding them to a specific cell or tissue type. Some examples of this approach are already in clinical trials. Another potential application of lipid bilayers is the field of biosensors. Since the lipid bilayer is the barrier between the interior and exterior of the cell, it is also the site of extensive signal transduction. Researchers over the years have tried to harness this potential to develop a bilayer-based device for clinical diagnosis or bioterrorism detection. Progress has been slow in this area and, although a few companies have developed automated lipid-based detection systems, they are still targeted at the research community. These include Biacore Life Sciences, which offers a disposable chip for utilizing lipid bilayers in studies of binding kinetics, and Nanion Inc, which has developed an automated patch clamping system. Other, more exotic applications are also being pursued, such as the use of lipid bilayer membrane pores for DNA sequencing by Oxford Nanolabs. To date, this technology has not proven commercially viable. A supported lipid bilayer (SLB), as described above, has achieved commercial success as a screening technique to measure the permeability of drugs. This parallel artificial membrane permeability assay (PAMPA) technique measures permeability across specifically formulated lipid cocktails found to be highly correlated with Caco-2 cultures, the gastrointestinal tract, the blood–brain barrier and skin.

History

By the early twentieth century scientists had come to believe that cells are surrounded by a thin oil-like barrier, but the structural nature of this membrane was not known. Two experiments in 1925 laid the groundwork to fill in this gap. By measuring the capacitance of erythrocyte solutions, Hugo Fricke determined that the cell membrane was 3.3 nm thick. Although the results of this experiment were accurate, Fricke misinterpreted the data to mean that the cell membrane is a single molecular layer. Prof. Dr. Evert Gorter (1881–1954) and F. Grendel of Leiden University approached the problem from a different perspective, spreading the erythrocyte lipids as a monolayer on a Langmuir-Blodgett trough. When they compared the area of the monolayer to the surface area of the cells, they found a ratio of two to one. Later analyses showed several errors and incorrect assumptions with this experiment but, serendipitously, these errors canceled out and from this flawed data Gorter and Grendel drew the correct conclusion: that the cell membrane is a lipid bilayer. This theory was confirmed through the use of electron microscopy in the late 1950s. Although he did not publish the first electron microscopy study of lipid bilayers, J. David Robertson was the first to assert that the two dark electron-dense bands were the headgroups and associated proteins of two apposed lipid monolayers. In this body of work, Robertson put forward the concept of the “unit membrane.” This was the first time the bilayer structure had been universally assigned to all cell membranes as well as organelle membranes.
Around the same time the development of model membranes confirmed that the lipid bilayer is a stable structure that can exist independently of proteins. By “painting” a solution of lipid in organic solvent across an aperture, Mueller and Rudin were able to create an artificial bilayer and determine that this exhibited lateral fluidity, high electrical resistance and self-healing in response to puncture, all of which are properties of a natural cell membrane. A few years later, Alec Bangham showed that bilayers, in the form of lipid vesicles, could also be formed simply by exposing a dried lipid sample to water. This was an important advance since it demonstrated that lipid bilayers form spontaneously via self assembly and do not require a patterned support structure.
http://en.wikipedia.org/wiki/Lipid_bilayer
From Latin: circus - "ring, a round arena"

A circle is a line forming a closed loop, every point on which is a fixed distance from a center point. A circle is a type of line: imagine a straight line segment that is bent around until its ends join. Then arrange that loop until it is exactly circular - that is, all points along that line are the same distance from a center point. There is a difference between a circle and a disk. A circle is a line, and so, for example, has no area - just as a line has no area. A disk, however, is a round portion of a plane which has a circular outline. If you draw a circle on paper and cut it out, the round piece is a disk.

Properties of a circle

Center: A point inside the circle. All points on the circle are equidistant (the same distance) from the center point.
Radius: The distance from the center to any point on the circle. It is half the diameter. See Radius of a circle.
Diameter: The distance across the circle. The length of any chord passing through the center. It is twice the radius. See Diameter of a circle.
Circumference: The distance around the circle. See Circumference of a Circle.
Area: Strictly speaking a circle is a line, and so has no area. What is usually meant is the area of the region enclosed by the circle. See Area enclosed by a circle.
Chord: A line segment linking any two points on a circle. See Chord definition.
Tangent: A line passing a circle and touching it at just one point. See Tangent definition.
Secant: A line that intersects a circle at two points. See Secant definition.

In any circle, if you divide the circumference (the distance around the circle) by its diameter (the distance across the circle), you always get the same number. This number is called Pi and is approximately 3.142. See Definition of pi.

Relation to ellipse

A circle is actually a special case of an ellipse. In an ellipse, if you make the major and minor axis the same length, the result is a circle, with both foci at the center. See Ellipse definition.

Alternative definitions

There are several definitions of a circle that you may come across. Below are some of the alternative ones. "The set of all points equidistant from the center." This assumes that a line can be defined as an infinitely large set of points. "The locus of all points a fixed distance from a given (center) point." This definition assumes the plane is composed of an infinite number of points and we select only those that are a fixed distance from the center. It is similar to the definition above. (See locus definition.)

Equations of a circle

In coordinate geometry, a circle can be described using sets of equations. For more on this see Equations of circles and ellipses.
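As a brief supplement to the coordinate-geometry remark above, the standard equation of a circle with center (a, b) and radius r can be written as

    (x - a)^2 + (y - b)^2 = r^2

so a circle of radius r centered at the origin is simply x^2 + y^2 = r^2.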
http://www.mathopenref.com/circle.html
6.1 Correlation between Variables

In the previous section we saw how to create crosstabs tables, relating one variable with another, and we computed the Chi-Square statistic to tell us if the variables are independent or not. While this type of analysis is very useful for categorical data, for numerical data the resulting tables would (usually) be too big to be useful. Therefore we need to learn different methods for dealing with numerical variables to decide whether two such variables are related.

Example: Suppose that 5 students were asked their high school GPA and their college GPA, with the answers as follows:

Student  HS GPA (x)  College GPA (y)
A        3.8         2.8
B        3.1         2.2
C        4.0         3.5
D        2.5         1.9
E        3.3         2.5

We want to know: are high school and college GPA related according to this data, and if they are related, how can I use the high school GPA to predict the college GPA? There are two answers to give:
- first, are they related, and
- second, how are they related.
Casually looking at this data it seems clear that the college GPA is always worse than the high school one, and the smaller the high school GPA the smaller the college GPA. But how strong that relationship is, if there is one at all, seems difficult to quantify. We will first discuss how to compute and interpret the so-called correlation coefficient to help decide whether two numeric variables are related or not. In other words, it can answer our first question. We will answer the second question in later sections. First, let's define the correlation coefficient mathematically.

Definition of the Correlation Coefficient

If your data is given in (x, y) pairs, then compute the following quantities:

Sxx = Σx² - (Σx)²/n
Syy = Σy² - (Σy)²/n
Sxy = Σxy - (Σx)(Σy)/n

where the "sigma" symbol Σ indicates summation and n stands for the number of data points. With these quantities computed, the correlation coefficient is defined as:

r = Sxy / sqrt(Sxx * Syy)

These formulas are, indeed, quite a handful, but with a little effort we can manually compute the correlation coefficient just fine. To compute the correlation coefficient for our above GPA example we make a table containing both variables, with additional columns for their squares as well as their product, as follows:

Student  x (HS GPA)  y (College GPA)  x²              y²              x*y
A        3.8         2.8              3.8² = 14.44    2.8² = 7.84     3.8*2.8 = 10.64
B        3.1         2.2              3.1² = 9.61     2.2² = 4.84     3.1*2.2 = 6.82
C        4.0         3.5              4.0² = 16.00    3.5² = 12.25    4.0*3.5 = 14.00
D        2.5         1.9              2.5² = 6.25     1.9² = 3.61     2.5*1.9 = 4.75
E        3.3         2.5              3.3² = 10.89    2.5² = 6.25     3.3*2.5 = 8.25
Sum      16.7        12.9             57.19           34.79           44.46

The last row contains the sum of the x's, y's, x-squared, y-squared, and x*y, which are precisely the quantities that we need to compute Sxx, Syy, and Sxy. In this case we can compute these quantities as follows:
- Sxx = 57.19 - 16.7 * 16.7 / 5 = 1.412
- Syy = 34.79 - 12.9 * 12.9 / 5 = 1.508
- Sxy = 44.46 - 16.7 * 12.9 / 5 = 1.374
so that the correlation coefficient for this data is: 1.374 / sqrt(1.412 * 1.508) = 0.9416

Interpretation of the Correlation Coefficient

The correlation coefficient as defined above measures how strong a linear relationship exists between two numeric variables x and y. Specifically:
- The correlation coefficient is always a number between -1.0 and +1.0.
- If the correlation coefficient is close to +1.0, then there is a strong positive linear relationship between x and y. In other words, if x increases, y also increases.
- If the correlation coefficient is close to -1.0, then there is a strong negative linear relationship between x and y. In other words, if x increases, y will decrease.
- The closer to zero the correlation coefficient is, the less of a linear relationship exists between x and y.
In the above example the correlation coefficient is very close to +1. Therefore we can conclude that there indeed is a strong positive relationship between high school GPA and college GPA in this particular example.
Using Excel to compute the Correlation Coefficient
While the table above certainly helps in computing the correlation coefficient, it is still a lot of work, especially if there are lots of (x, y) data points. Even using Excel to help compute the table seems like a lot of work. However, Excel has a convenient function to quickly compute the correlation coefficient without us having to construct a complicated table. The Excel built-in function =CORREL(RANGE1, RANGE2) returns the correlation coefficient of the cells in RANGE1 and the cells in RANGE2. All arguments should be numbers, and no cell should be empty.
Example: To use this Excel function to compute the correlation coefficient for the previous GPA example, we would enter the two GPA columns into a worksheet and apply =CORREL to the two ranges.
Consider the following artificial example: some data for x and y (which have no particular meaning right now) is listed below, in a "case A", "case B", and "case C" situation.
Case A: x = 10, y = 20; x = 20, y = 40; x = 30, y = 60; x = 40, y = 80; x = 50, y = 100
Case B: x = 10, y = 200; x = 20, y = 160; x = 30, y = 120; x = 40, y = 80; x = 50, y = 40
Case C: x = 10, y = 100; x = 20, y = 20; x = 30, y = 200; x = 40, y = 50; x = 50, y = 100
Just looking at this data, it seems pretty obvious that: - in case A there should be a strong positive relationship between x and y - in case B there should be a strong negative relationship between x and y - in case C there should be no apparent relationship between x and y
Indeed, using Excel to compute each correlation coefficient (we will explain the procedure below) confirms this: - in case A, the coefficient is +1.0, i.e. strong positive correlation - in case B, the coefficient is -1.0, i.e. strong negative correlation - in case C, the coefficient is 0.069, i.e. no correlation
Note that in "real world" data, the correlation is almost never as clear-cut as in this artificial example.
Example: In a previous section we looked at an Excel data set that shows various information about employees. Here is the spreadsheet data, but the salary is left as an actual number instead of a category (as we previously had). Download this file into Excel and find out whether there is a linear relationship between the salary and the years of education of an employee.
- Download the above spreadsheet and start MS Excel with that worksheet as input.
- Find an empty cell anywhere in your spreadsheet and type =CORREL(
- Select the first input range (corresponding to the salary) by dragging the mouse across all cells containing numbers in the "Salary" column, then type a comma
- Select the second input range (corresponding to the years of education) by dragging the mouse across the years-of-education column containing numbers, type the closing parenthesis, and hit RETURN
- Excel will compute the correlation coefficient. In our example, it turns out that the correlation coefficient for this data is 0.66
Since the correlation coefficient is 0.66, it means that there is indeed some positive relation between years of schooling and salary earnings. But since the value is not that close to +1.0, the relationship is not strong.
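As a cross-check on the hand computation above and an analogue of Excel's =CORREL, here is a short Python sketch using the same five GPA pairs; numpy is used only as a convenient stand-in for Excel and is not part of the original lesson:

```python
import math
import numpy as np

hs      = [3.8, 3.1, 4.0, 2.5, 3.3]   # high school GPA (x)
college = [2.8, 2.2, 3.5, 1.9, 2.5]   # college GPA (y)
n = len(hs)

# The same Sxx, Syy, Sxy quantities computed in the table above
sxx = sum(x * x for x in hs)      - sum(hs) ** 2 / n
syy = sum(y * y for y in college) - sum(college) ** 2 / n
sxy = sum(x * y for x, y in zip(hs, college)) - sum(hs) * sum(college) / n

r = sxy / math.sqrt(sxx * syy)
print(round(r, 4))                               # 0.9416, matching the manual table
print(round(np.corrcoef(hs, college)[0, 1], 4))  # same value via a library call
```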
http://pirate.shu.edu/~wachsmut/Teaching/MATH1101/Relations/correlation.html
13
87
E=mc² is perhaps the most famous, most immediately recognizable, most widely quoted, and least understood by the public of all equations in science. Most people recognize it instantly, but when asked to explain it, might say something like "It says that matter and energy are the same/equivalent", or "It explains relativity", or "It tells where nuclear power comes from", or "It's about atomic bombs."
Mass refers to what is often called inertia (though that term is used much less now than it was 100 years ago). It is sometimes called inertial mass. Inertial mass is the resistance of an object to being accelerated when a force is applied. The m in E=mc² is the same as the m in Newton's formula of motion F=ma. Gravitational mass is what makes things heavy, that is, what is acted upon by gravity. It is the m1 and m2 in Newton's formula for gravity, F = G*m1*m2/r². The difference between inertial mass and gravitational mass used to be a burning issue among physicists, eventually being tested, and found to be the same, by the famous Eötvös experiments. (Note that general relativity does have an explanation for why this should be, but this is not relevant here.) Energy refers to kinetic energy of motion, or radiated energy.
- E=mc² is not about gravity or gravitational mass.
- E=mc² is not about light, or other electromagnetic radiation, except insofar as it is a form of energy.
- E=mc² is not about nuclear power per se. It applies everywhere.
- E=mc² is not about where the power of the atomic bomb comes from, though the physicists developing atomic power and atomic bombs were completely familiar with it.
A little calculation will show that E=mc² is dimensionally correct. So, for example, in SI units (also called MKS units), E is in Joules, m in kilograms, and c in meters per second. In CGS units or units of the US Customary System (pounds and feet) it is also dimensionally correct, as long as the units are used consistently. In any normal units, the value of c, the speed of light, is enormous, and its square even more so. This means that E=mc² equates an extremely tiny amount of mass with an extremely large amount of energy. This gives the equation some of its mystique.
E=mc² is a meaningless though working statement, almost nonsensical though often applied (e.g., in nuclear physics), that purports to relate all matter to energy and light. In fact, no theory has successfully unified the laws governing mass (i.e., gravity) with the laws governing light (i.e., electromagnetism). Simply put, E=mc² is liberal claptrap. Biblical Scientific Foreknowledge predicts that a unified theory of all the laws of physics is impossible, because light and matter were created at different times, in different ways, as described in the Book of Genesis. Mass is a measure of an object's inertia, and is directly related to the force of gravity. In contrast, the intrinsic energy of an object (such as an atom) is a function of electrostatic charge and other non-inertial forces, having nothing to do with gravity. Declaring the object's energy to be a function of inertia rather than electrostatics is an absurd and impossible attempt to unify the forces of nature, contrary to Biblical Scientific Foreknowledge. In more than a century, the claim embodied in E=mc² has never yielded anything of value. Often it seems to be used as a redefinition of "energy" for pseudo-scientific purposes, as by the lamestream media.
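As a simple illustration of the dimensional point made above, the following sketch evaluates m·c² in SI units; the specific masses chosen are for illustration only:

```python
c = 299_792_458.0          # speed of light in meters per second (SI)

def rest_energy_joules(mass_kg):
    # E = m * c^2: kilograms times (meters/second)^2 gives Joules,
    # so the formula is dimensionally consistent in SI units.
    return mass_kg * c ** 2

print(rest_energy_joules(0.001))   # 1 gram    -> about 9.0e13 Joules
print(rest_energy_joules(1.0))     # 1 kilogram -> about 9.0e16 Joules
```

The huge outputs for tiny masses are just the "enormous value of c squared" point restated numerically.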
There have been attempts to find some justification for the equation in already understood processes involved in nuclear power generation and nuclear weapons, and in the speculation about antimatter. The energy released in matter/antimatter annihilations corresponds to E=mc²; for example, the annihilation of an electron and a positron (each with rest energy of 0.511 mega electron volts according to E=mc²) results in the release of two gamma ray photons, which each have energy 0.511 mega electron volts.
The Theory of Relativity has never been able to mathematically derive E=mc² from first principles, and a physicist observed in a peer-reviewed paper published in 2011 that "Leaving aside that it continues to be affirmed experimentally, a rigorous proof of the mass-energy equivalence is probably beyond the purview of the special theory."
It has been known for a long time that radiation has a mass equivalence, which was correctly derived by Henri Poincaré in 1904, but the equation E=mc² makes a claim far beyond that limited circumstance:
"The equality of the mass equivalent of radiation to the mass lost by a radiating body is derivable from Poincaré's momentum of radiation (1900) and his principle of relativity (1904)." —Herbert Ives, 1952
Description for the layman
Ten top physicists were asked to describe E=mc² in layman's terms; three of their answers:
"Things that seem incredibly different can really be manifestations of the same underlying phenomena." —Nima Arkani-Hamed, Theoretical Physicist, Harvard University
"You can get access to parts of nature you have never been able to get access to before." —Lene Hau, Experimental Physicist, Harvard University
"It certainly is not an equation that reveals all its subtlety in the few symbols that it takes to write down." —Brian Greene, Theoretical Physicist, Columbia University
History of E=mc²
"Over time, physicists became used to multiplying an object's mass by the square of its velocity (mv²) to come up with a useful indicator of its energy. If the velocity of a ball or rock was 100 mph, then they knew that the energy it carried would be proportional to its mass times 100 squared. If the velocity is raised as high as it could go, to 670 million mph, it's almost as if the ultimate energy an object will contain should be revealed when you look at its mass times c squared, or its mc²."
The first experimental verification of the equation was performed in 1932 by an English and an Irish physicist, John Cockcroft and Ernest Walton, as a byproduct of "their pioneer work on the transmutation of atomic nuclei by artificially accelerated atomic particles", for which they were honored with the Nobel Prize in physics in 1951. The idea of the mass defect, and its calculation using E=mc², can be found on pages 169-170 of Cockcroft's Nobel lecture. The mass of the particles on the left hand side of the reaction (a lithium-7 nucleus plus a proton) is 8.0263 amu, the mass on the right hand side (two helium-4 nuclei) only 8.0077 amu. The difference between these masses is .0186 amu, which results in the following back-of-an-envelope calculation: .0186 amu × 931.5 MeV per amu ≈ 17.3 MeV released per reaction. Accurate measurements and detailed calculations allowed for verifying the theoretical values with an accuracy of ±0.5%.
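The back-of-the-envelope step above, converting a mass difference in amu into an energy in MeV, can be sketched as follows; the 931.494 MeV-per-amu conversion factor is the standard value implied by E=mc² and is assumed here rather than taken from the text:

```python
AMU_TO_MEV = 931.494   # energy equivalent of one atomic mass unit, in MeV (from E = mc^2)

def mass_defect_energy_mev(mass_before_amu, mass_after_amu):
    # Energy released = (mass lost) * c^2, expressed directly in MeV per reaction.
    return (mass_before_amu - mass_after_amu) * AMU_TO_MEV

# Cockcroft-Walton figures quoted above: 8.0263 amu in, 8.0077 amu out.
print(round(mass_defect_energy_mev(8.0263, 8.0077), 1))   # about 17.3 MeV
```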
This was the first time a nucleus was artificially split, and thereby the first transmutation of elements using accelerated particles:
- 7Li + 1H → 4He + 4He
Probably the best empirical verification of E=mc² was done in 2005 by Simon Rainville et al., as published in Nature, as the authors write in their abstract:
"Einstein's relationship is separately confirmed in two tests, which yield a combined result of 1−Δmc²/E=(−1.4±4.4)×10⁻⁷, indicating that it holds to a level of at least 0.00004%. To our knowledge, this is the most precise direct test of the famous equation yet described."
A Famous Example -- Nuclear Fission of Uranium
For most types of physical interactions, the masses of the initial reactants and of the final products match so closely that it is essentially impossible to measure any difference. But for nuclear reactions, the difference is measurable. That difference is related to the energy absorbed or released, described by the equation E=mc². (The equation applies to all interactions; the fact that nuclear interactions are the only ones for which the mass difference is measurable has led people to believe, wrongly, that E=mc² applies only to nuclear interactions.) The Theory of Relativity played no role in this work; the theory was later retrofitted to the data in order to explain the observed mass changes.
Here is the most famous example of the mass change. The decay path of Uranium that figured in the Hahn-Strassmann experiment may have been this:
- 235U → 140Xe + 91Sr + 4n
The particles involved are:
| ||235U||140Xe||91Sr||4n|
|Number of protons||92||54||38||0|
|Number of nucleons (mass number)||235||140||91||4|
|Number of electrons||92||54||38||0|
The mass of the Uranium atom is 235.04393 amu, and the sum of the masses of the products is 234.866503 amu. The difference is .177427 amu, or, using the E=mc² equation, 165 million electron volts. (The generally accepted value for the total energy released by Uranium fission, including secondary decays, is about 200 million electron volts.) The insight that the conversion from Uranium to Barium was caused by complete fission of the atom was made by Lise Meitner in December, 1938. She had the approximate "mass defect" quantities memorized, and so she worked out in her head, using the E=mc² equation, that there would be this enormous release of energy. This release was observed shortly thereafter, and the result is nuclear power and nuclear weapons.
A Topical Example: Speed of Extremely Energetic Neutrinos
Here is another example of the use of this formula in physics calculations. Recently there has been quite a controversy over whether neutrinos were observed traveling at a speed faster than light. Relativity doesn't allow that, and, since neutrinos have nonzero (but incredibly tiny) mass, they aren't even supposed to travel at the speed of light. This very issue came up on the Talk:Main_Page#Neutrinos. The speeds under discussion were calculated by the use of E=mc². The mass of a neutrino is about 0.44×10⁻³⁶ kilograms. (Normally all of these things are measured in more convenient units such as Giga-electron-Volts, but that makes implicit use of E=mc². If we don't accept that, we have to do the calculations under classical physics, using SI (meter/kilogram/second) units.) The neutrinos were accelerated to an energy of about 17 GeV, or 0.27×10⁻⁸ Joules. Using the classical formula E = ½mv², that is, v = sqrt(2E/m), we get v = 110×10¹² meters per second. This is about 370,000 times the speed of light.
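The classical speed estimate above can be reproduced in a few lines; as the next paragraph notes, the point is that the classical formula gives an absurd answer at these energies:

```python
import math

m = 0.44e-36     # neutrino mass from the text, in kilograms
E = 0.27e-8      # 17 GeV expressed in Joules, as in the text
c = 3.0e8        # speed of light, meters per second

# Classical kinetic energy: E = (1/2) m v^2, so v = sqrt(2E/m).
v = math.sqrt(2 * E / m)
print(f"{v:.2e} m/s")    # about 1.1e14 m/s
print(round(v / c))      # roughly 370,000 times the speed of light
```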
However, the classical formula breaks down at speeds close to c, and indeed, as the speed of a massive object approaches c, the object's kinetic energy approaches infinity. Several scientists have gone on record stating that the neutrinos, which have mass, travel at precisely the speed of light. If true, this disproves the Theory of Relativity and the claim that E=mc². However, it is more likely that those scientists are using language inaccurately. It is impossible to measure the speed of neutrinos precisely. What is meant is that the difference between the speed of light and the speed of the neutrinos is too small to measure.
Deducing the Equation From Empirical Observation
While the equation was historically developed on theoretical grounds as an inevitable consequence of special relativity, it is possible to deduce it purely from empirical observation. So, for the purposes of this section, imagine that one is in the era of "classical physics", prior to 1900 or so. Relativity has not been invented, but, inexplicably, nuclear physics has. Imagine that the phenomena of radioactivity and nuclear fission have been observed, without any knowledge of relativity.
A well-accepted physical law of classical physics was the law of conservation of mass. This was not easy to deduce. It required careful analysis of such phenomena as combustion, in the 1700's, to eliminate the various confounding sub-phenomena that made the law difficult to see. But, by 1900, the law was well established:
- In all interactions, mass is precisely conserved.
For example, the mass of a TNT molecule is 227.1311 Daltons, or 227.1311 g/mol, which is, for all practical purposes, the same as the mass of its constituent Carbon, Hydrogen, Nitrogen, and Oxygen atoms. It is essentially impossible to measure the difference. The principle of conservation of mass is upheld. But when nuclear phenomena are discovered, we notice something different. The mass of the resulting particles after an event (e.g. alpha decay, nuclear fission, or artificial transmutation) is measurably less than the mass of the original particle(s). With the invention of the mass spectrometer around 1920, it became possible to measure atomic weights of various isotopes with great precision.
Radium-226 decays into Radon-222 by emission of an alpha particle with an energy of 4.78 MeV. 1 kg of Radium-226 = (6.022 × 10²³ / 226.0254) × 1000 atoms. (The numerator is Avogadro's number, the denominator is the atomic weight of Radium-226, and the factor of 1000 converts grams to kilograms.) This is 2.6643648 × 10²⁴ atoms. That number of Radon-222 atoms has mass .98226836 kg. That number of alpha particles has mass .01770864 kg. The mass lost is .00002299 kg. Each emitted alpha particle has energy of 4.78 MeV, or 4.78 × 10⁶ × 1.6021765 × 10⁻¹⁹ Joules (about 7.66 × 10⁻¹³ J). The total alpha energy from the decay of 1 kg of radium is 2.040 × 10¹² Joules.
Also, Radon-222 decays into Polonium-218 by emission of an alpha particle with an energy of 5.49 MeV. 1 kg of Radon-222 = (6.022 × 10²³ / 222.0176) × 1000 atoms. This is 2.7124612 × 10²⁴ atoms. That number of Polonium-218 atoms has mass .98194455 kg. That number of alpha particles has mass .018028315 kg. The mass lost is .00002713 kg. Each emitted alpha particle has energy of 5.49 MeV. The total alpha energy from the decay of 1 kg of radon is 2.386 × 10¹² Joules.
For the Cockcroft-Walton experiment, we use a kilomole (1,000 × Avogadro's number) of particles. The mass of that many of the various atoms, in kilograms, is just their atomic mass. A kilomole of Lithium-7 atoms weighs 7.01600455 kg.
A kilomole of Hydrogen atoms weighs 1.007825032 kg, and a kilomole of Helium-4 atoms weighs 4.00260325 kg. The atoms weigh 8.023829582 kg before, and 8.0052065 kg after. The mass lost is .018623082 kg. The energy released is 17.3 MeV per reaction, or 1669 × 10¹² Joules per kilomole of reactions. It looks as though we have to rewrite the law of conservation of mass:
- In all "ordinary" interactions, mass is precisely conserved.
- In nuclear interactions, there is a small but measurable loss of mass.
- By the way, we can clearly see that atomic weights of pure isotopes are not integers, and that it has something to do with the energy released by nuclear disintegration. In retrospect, the formula E=mc² explains the non-integer character of atomic weights.
Making special cases out of nuclear interactions versus non-nuclear ones is unsatisfactory, of course. We can do the same bookkeeping for a few other interactions, including the explosion of TNT. This would include many other radioactive decays, and the Uranium fission phenomena described above. We won't bother with the details. As observational scientists, we look for patterns in the behavior of nature. We make a table:
|interaction||energy released, Joules||mass lost, kg|
|explosion of 1 kg of TNT||4.184 × 10⁶||seems to be zero|
|alpha decay of 1 kg of Ra-226||2.040 × 10¹²||.00002299|
|alpha decay of 1 kg of Rn-222||2.386 × 10¹²||.00002713|
|Cockcroft-Walton experiment, per kilomole||1669 × 10¹²||.018623082|
We plot these, and a few others, not shown, on graph paper, and find to our amazement that the relationship is linear.
For Radium decay, m/E = .1126 × 10⁻¹⁶
For Radon decay, m/E = .1137 × 10⁻¹⁶
For the Cockcroft-Walton experiment, m/E = .1116 × 10⁻¹⁶
As a linear relationship, the mass defect for 1 kg of TNT would have been .47 × 10⁻¹⁰ kg. We couldn't possibly have measured this. So we can rewrite the rule for conservation of mass in a more satisfactory way:
- In all interactions, there is a loss of mass, equal to about .112 × 10⁻¹⁶ kg per Joule of energy released.
What we thought was exact conservation is just very nearly exact, and we hadn't been able to measure it before. But maybe there's more. This constant has dimensions of kilograms per Joule. From high-school physics, we know that this is seconds squared divided by meters squared. That is, it is the reciprocal of the square of a velocity. We calculate that velocity. It is about 2.97 × 10⁸ meters per second. Very close to the speed of light! Very interesting! (The calculations above were not extremely precise. The formula has been verified with great precision, but not here.) Since we are just making empirical observations, we don't understand why this is so (that will have to wait for the invention of relativity), but we can formulate a hypothesis:
- In all interactions, there is a loss of mass, equal to 1/c² times the amount of energy released.
We don't have to give the units any more, since everything is now dimensionally correct.
- There is a very interesting analogy with the discovery of Maxwell's Equations. Maxwell found an interesting relationship involving the fundamental constants ε₀ and μ₀ appearing in his equations. Specifically, the product ε₀μ₀ has the dimensions of seconds squared divided by meters squared, and ε₀μ₀ = 1/c², where "c" was the known velocity of light. He also showed that his equations predict electromagnetic waves, propagating at that speed.
This sort of inductive approach is a common way that scientific discoveries are made.
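The empirical argument above, plotting mass lost against energy released and reading off a slope, can be repeated numerically; the figures are the ones tabulated in the text, and a simple average of the ratios stands in for the graph-paper fit:

```python
import math

# (energy released, mass lost) pairs from the table above, in Joules and kilograms
observations = [
    (2.040e12, 2.299e-5),      # alpha decay of Ra-226, per kg
    (2.386e12, 2.713e-5),      # alpha decay of Rn-222, per kg
    (1.669e15, 1.8623082e-2),  # Cockcroft-Walton experiment, per kilomole
]

# Average the ratio m/E (kilograms per Joule); its reciprocal square root is a velocity.
ratios = [m / e for e, m in observations]
mean_ratio = sum(ratios) / len(ratios)
print(mean_ratio)                     # about 1.13e-17 kg per Joule
print(math.sqrt(1 / mean_ratio))      # about 2.98e8 m/s, essentially the speed of light
```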
For example, Newton's law of motion, F=ma, is empirical and backed up by enormous amounts of confirming data. The formula E=mc² could have been discovered this way in our alternative universe, but, as stated above, that is not historically how it happened. It was formulated on theoretical grounds in order to obtain conservation of momentum in relativistic interactions, and then confirmed by experiment. - ↑ Physicists, when they are really into theory, sometimes like to use "natural units", which are calibrated so that c is dimensionless and has the value 1. That is not the approach taken here, however. - ↑ The equation claims that the total energy E of a body in all forms is equal to that body's mass m multiplied by the square of the speed of light c. - ↑ Peter Tyson The Legacy of E=mc² October 11, 2005. PBS NOVA. - ↑ Lawrence Berkeley National Laboratory The ABC of Nuclear Science - ↑ Eugene Hecht: How Einstein confirmed E0=mc², American Journal of Physics, Volume 79, Issue 6, pp. 591-600 (2011) - ↑ Herbert E. Ives Derivation of the Mass-Energy Relation, JOSA, Vol. 42, Issue 8, pp. 540-543 (1952) - ↑ Lexi Krock, David Levin (editors) E=mc² explained, June, 2005. PBS NOVA. - ↑ David Bodanis Ancestors of E=mc², NOVA, Nov 10, 2005 - ↑ Nobel Prize Organization - ↑ John D. Cockroft Experiments on the interaction of high-speed nucleons with atomic nuclei, Nobel Lecture, Dec 11, 1951 - ↑ Gerard Piel The age of science: what scientists learned in the 20th century, Basic Books, 2001, p. 144-145 - ↑ Simon Rainville, James K. Thompson, et. al World Year of Physics: A direct test of E=mc² Nature 438, 1096-1097 (22 December 2005)] doi:10.1038/4381096a; Published online 21 December 2005
http://conservapedia.com/User:HHB/emc2
13
58
Rain is liquid water in the form of droplets that have condensed from atmospheric water vapor and then precipitated—that is, become heavy enough to fall under gravity. Rain is a major component of the water cycle and is responsible for depositing most of the fresh water on the Earth. It provides suitable conditions for many types of ecosystem, as well as water for hydroelectric power plants and crop irrigation. The major cause of rain production is moisture moving along three-dimensional zones of temperature and moisture contrasts known as weather fronts. If enough moisture and upward motion is present, precipitation falls from convective clouds (those with strong upward vertical motion) such as cumulonimbus (thunder clouds) which can organize into narrow rainbands. In mountainous areas, heavy precipitation is possible where upslope flow is maximized within windward sides of the terrain at elevation which forces moist air to condense and fall out as rainfall along the sides of mountains. On the leeward side of mountains, desert climates can exist due to the dry air caused by downslope flow which causes heating and drying of the air mass. The movement of the monsoon trough, or intertropical convergence zone, brings rainy seasons to savannah climes. The urban heat island effect leads to increased rainfall, both in amounts and intensity, downwind of cities. Global warming is also causing changes in the precipitation pattern globally, including wetter conditions across eastern North America and drier conditions in the tropics. Antarctica is the driest continent. The globally averaged annual precipitation over land is 715 millimetres (28.1 in), but over the whole Earth it is much higher at 990 millimetres (39 in). Climate classification systems such as the Köppen climate classification system use average annual rainfall to help differentiate between differing climate regimes. Rainfall is measured using rain gauges. Rainfall amounts can be estimated by weather radar. Air contains water vapor and the amount of water in a given mass of dry air, known as the mixing ratio, is measured in grams of water per kilogram of dry air (g/kg). The amount of moisture in air is also commonly reported as relative humidity; which is the percentage of the total water vapor air can hold at a particular air temperature. How much water vapor a parcel of air can contain before it becomes saturated (100% relative humidity) and forms into a cloud (a group of visible and tiny water and ice particles suspended above the Earth's surface) depends on its temperature. Warmer air can contain more water vapor than cooler air before becoming saturated. Therefore, one way to saturate a parcel of air is to cool it. The dew point is the temperature to which a parcel must be cooled in order to become saturated. There are four main mechanisms for cooling the air to its dew point: adiabatic cooling, conductive cooling, radiational cooling, and evaporative cooling. Adiabatic cooling occurs when air rises and expands. The air can rise due to convection, large-scale atmospheric motions, or a physical barrier such as a mountain (orographic lift). Conductive cooling occurs when the air comes into contact with a colder surface, usually by being blown from one surface to another, for example from a liquid water surface to colder land. Radiational cooling occurs due to the emission of infrared radiation, either by the air or by the surface underneath.
Evaporative cooling occurs when moisture is added to the air through evaporation, which forces the air temperature to cool to its wet-bulb temperature, or until it reaches saturation. The main ways water vapor is added to the air are: wind convergence into areas of upward motion, precipitation or virga falling from above, daytime heating evaporating water from the surface of oceans, water bodies or wet land, transpiration from plants, cool or dry air moving over warmer water, and lifting air over mountains. Water vapor normally begins to condense on condensation nuclei such as dust, ice, and salt in order to form clouds. Elevated portions of weather fronts (which are three-dimensional in nature) force broad areas of upward motion within the Earth's atmosphere which form clouds decks such as altostratus or cirrostratus. Stratus is a stable cloud deck which tends to form when a cool, stable air mass is trapped underneath a warm air mass. It can also form due to the lifting of advection fog during breezy conditions. Coalescence occurs when water droplets fuse to create larger water droplets. Air resistance typically causes the water droplets in a cloud to remain stationary. When air turbulence occurs, water droplets collide, producing larger droplets. As these larger water droplets descend, coalescence continues, so that drops become heavy enough to overcome air resistance and fall as rain. Coalescence generally happens most often in clouds above freezing, and is also known as the warm rain process. In clouds below freezing, when ice crystals gain enough mass they begin to fall. This generally requires more mass than coalescence when occurring between the crystal and neighboring water droplets. This process is temperature dependent, as supercooled water droplets only exist in a cloud that is below freezing. In addition, because of the great temperature difference between cloud and ground level, these ice crystals may melt as they fall and become rain. Raindrops have sizes ranging from 0.1 to 9 millimetres (0.0039 to 0.35 in) mean diameter, above which they tend to break up. Smaller drops are called cloud droplets, and their shape is spherical. As a raindrop increases in size, its shape becomes more oblate, with its largest cross-section facing the oncoming airflow. Large rain drops become increasingly flattened on the bottom, like hamburger buns; very large ones are shaped like parachutes. Contrary to popular belief, their shape does not resemble a teardrop. The biggest raindrops on Earth were recorded over Brazil and the Marshall Islands in 2004 — some of them were as large as 10 millimetres (0.39 in). The large size is explained by condensation on large smoke particles or by collisions between drops in small regions with particularly high content of liquid water. Intensity and duration of rainfall are usually inversely related, i.e., high intensity storms are likely to be of short duration and low intensity storms can have a long duration. Rain drops associated with melting hail tend to be larger than other rain drops. Raindrops impact at their terminal velocity, which is greater for larger drops due to their larger mass to drag ratio. At sea level and without wind, 0.5 millimetres (0.020 in) drizzle impacts at 2 metres per second (4.5 mph) (2 m/s or 6.6 ft/s), while large 5 millimetres (0.20 in) drops impact at around 9 metres per second (20 mph) (9 m/s or 30 ft/s). The sound of raindrops hitting water is caused by bubbles of air oscillating underwater. 
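As a rough cross-check on the impact speeds quoted above, here is a minimal sketch that balances a raindrop's weight against aerodynamic drag on a rigid sphere; the drag coefficient of about 0.5 and the neglect of drop flattening are assumptions, not figures from the text, so the agreement is only approximate and is better for large drops than for drizzle:

```python
import math

RHO_WATER = 1000.0   # kg/m^3
RHO_AIR   = 1.2      # kg/m^3 near sea level
G         = 9.81     # m/s^2
CD        = 0.5      # assumed drag coefficient for a rigid sphere

def terminal_velocity(diameter_m):
    # Weight = drag at terminal velocity:
    #   rho_w * (pi/6) * d^3 * g = 0.5 * Cd * rho_air * (pi/4) * d^2 * v^2
    # Solving for v gives v = sqrt(4 * rho_w * g * d / (3 * Cd * rho_air)).
    return math.sqrt(4 * RHO_WATER * G * diameter_m / (3 * CD * RHO_AIR))

print(round(terminal_velocity(0.005), 1))    # 5 mm drop: ~10 m/s vs. ~9 m/s quoted
print(round(terminal_velocity(0.0005), 1))   # 0.5 mm drizzle: ~3.3 m/s vs. ~2 m/s quoted
```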
The METAR code for rain is RA, while the coding for rain showers is SHRA. Stratiform (a broad shield of precipitation with a relatively similar intensity) and dynamic precipitation (convective precipitation which is showery in nature with large changes in intensity over short distances) occur as a consequence of slow ascent of air in synoptic systems (on the order of cm/s), such as in the vicinity of cold fronts and near and poleward of surface warm fronts. Similar ascent is seen around tropical cyclones outside of the eyewall, and in comma-head precipitation patterns around mid-latitude cyclones. A wide variety of weather can be found along an occluded front, with thunderstorms possible, but usually their passage is associated with a drying of the air mass. Occluded fronts usually form around mature low-pressure areas. What separates rainfall from other precipitation types, such as ice pellets and snow, is the presence of a thick layer of air aloft which is above the melting point of water, which melts the frozen precipitation well before it reaches the ground. If there is a shallow near surface layer that is below freezing, freezing rain (rain which freezes on contact with surfaces in subfreezing environments) will result. Hail becomes an increasingly infrequent occurrence when the freezing level within the atmosphere exceeds 11,000 feet (3,400 m) above ground level. Convective rain, or showery precipitation, occurs from convective clouds, e.g., cumulonimbus or cumulus congestus. It falls as showers with rapidly changing intensity. Convective precipitation falls over a certain area for a relatively short time, as convective clouds have limited horizontal extent. Most precipitation in the tropics appears to be convective; however, it has been suggested that stratiform precipitation also occurs. Graupel and hail indicate convection. In mid-latitudes, convective precipitation is intermittent and often associated with baroclinic boundaries such as cold fronts, squall lines, and warm fronts. Orographic precipitation occurs on the windward side of mountains and is caused by the rising air motion of a large-scale flow of moist air across the mountain ridge, resulting in adiabatic cooling and condensation. In mountainous parts of the world subjected to relatively consistent winds (for example, the trade winds), a more moist climate usually prevails on the windward side of a mountain than on the leeward or downwind side. Moisture is removed by orographic lift, leaving drier air (see katabatic wind) on the descending and generally warming, leeward side where a rain shadow is observed. In Hawaii, Mount Waiʻaleʻale, on the island of Kauai, is notable for its extreme rainfall, as it has the second highest average annual rainfall on Earth, with 460 inches (12,000 mm). Systems known as Kona storms affect the state with heavy rains between October and April. Local climates vary considerably on each island due to their topography, divisible into windward (Koʻolau) and leeward (Kona) regions based upon location relative to the higher mountains. Windward sides face the east to northeast trade winds and receive much more rainfall; leeward sides are drier and sunnier, with less rain and less cloud cover. In South America, the Andes mountain range blocks Pacific moisture that arrives in that continent, resulting in a desertlike climate just downwind across western Argentina. The Sierra Nevada range creates the same effect in North America forming the Great Basin and Mojave Deserts. 
Within the tropics The wet, or rainy, season is the time of year, covering one or more months, when most of the average annual rainfall in a region falls. The term green season is also sometimes used as a euphemism by tourist authorities. Areas with wet seasons are dispersed across portions of the tropics and subtropics. Savanna climates and areas with monsoon regimes have wet summers and dry winters. Tropical rainforests technically do not have dry or wet seasons, since their rainfall is equally distributed through the year. Some areas with pronounced rainy seasons will see a break in rainfall mid-season when the intertropical convergence zone or monsoon trough move poleward of their location during the middle of the warm season. When the wet season occurs during the warm season, or summer, rain falls mainly during the late afternoon and early evening hours. The wet season is a time when air quality improves, freshwater quality improves, and vegetation grows significantly. Tropical cyclones, a source of very heavy rainfall, consist of large air masses several hundred miles across with low pressure at the centre and with winds blowing inward towards the centre in either a clockwise direction (southern hemisphere) or counter clockwise (northern hemisphere). Although cyclones can take an enormous toll in lives and personal property, they may be important factors in the precipitation regimes of places they impact, as they may bring much-needed precipitation to otherwise dry regions. Areas in their path can receive a year's worth of rainfall from a tropical cyclone passage. The fine particulate matter produced by car exhaust and other human sources of pollution forms cloud condensation nuclei, leads to the production of clouds and increases the likelihood of rain. As commuters and commercial traffic cause pollution to build up over the course of the week, the likelihood of rain increases: it peaks by Saturday, after five days of weekday pollution has been built up. In heavily populated areas that are near the coast, such as the United States' Eastern Seaboard, the effect can be dramatic: there is a 22% higher chance of rain on Saturdays than on Mondays. The urban heat island effect warms cities 0.6 °C (1.1 °F) to 5.6 °C (10.1 °F) above surrounding suburbs and rural areas. This extra heat leads to greater upward motion, which can induce additional shower and thunderstorm activity. Rainfall rates downwind of cities are increased between 48% and 116%. Partly as a result of this warming, monthly rainfall is about 28% greater between 20 to 40 miles (32 to 64 km) downwind of cities, compared with upwind. Some cities induce a total precipitation increase of 51%. Increasing temperatures tend to increase evaporation which can lead to more precipitation. Precipitation generally increased over land north of 30°N from 1900 through 2005 but has declined over the tropics since the 1970s. Globally there has been no statistically significant overall trend in precipitation over the past century, although trends have varied widely by region and over time. Eastern portions of North and South America, northern Europe, and northern and central Asia have become wetter. The Sahel, the Mediterranean, southern Africa and parts of southern Asia have become drier. There has been an increase in the number of heavy precipitation events over many areas during the past century, as well as an increase since the 1970s in the prevalence of droughts—especially in the tropics and subtropics. 
Changes in precipitation and evaporation over the oceans are suggested by the decreased salinity of mid- and high-latitude waters (implying more precipitation), along with increased salinity in lower latitudes (implying less precipitation and/or more evaporation). Over the contiguous United States, total annual precipitation increased at an average rate of 6.1 percent since 1900, with the greatest increases within the East North Central climate region (11.6 percent per century) and the South (11.1 percent). Hawaii was the only region to show a decrease (-9.25 percent). Rainbands are cloud and precipitation areas which are significantly elongated. Rainbands can be stratiform or convective, and are generated by differences in temperature. When noted on weather radar imagery, this precipitation elongation is referred to as banded structure. Rainbands in advance of warm occluded fronts and warm fronts are associated with weak upward motion, and tend to be wide and stratiform in nature. Rainbands spawned near and ahead of cold fronts can be squall lines which are able to produce tornadoes. Rainbands associated with cold fronts can be warped by mountain barriers perpendicular to the front's orientation due to the formation of a low-level barrier jet. Bands of thunderstorms can form with sea breeze and land breeze boundaries, if enough moisture is present. If sea breeze rainbands become active enough just ahead of a cold front, they can mask the location of the cold front itself. Once a cyclone occludes, a trough of warm air aloft, or "trowal" for short, will be caused by strong southerly winds on its eastern periphery rotating aloft around its northeast, and ultimately northwestern, periphery (also known as the warm conveyor belt), forcing a surface trough to continue into the cold sector on a similar curve to the occluded front. The trowal creates the portion of an occluded cyclone known as its comma head, due to the comma-like shape of the mid-tropospheric cloudiness that accompanies the feature. It can also be the focus of locally heavy precipitation, with thunderstorms possible if the atmosphere along the trowal is unstable enough for convection. Banding within the comma head precipitation pattern of an extratropical cyclone can yield significant amounts of rain. Behind extratropical cyclones during fall and winter, rainbands can form downwind of relative warm bodies of water such as the Great Lakes. Downwind of islands, bands of showers and thunderstorms can develop due to low level wind convergence downwind of the island edges. Offshore California, this has been noted in the wake of cold fronts. Rainbands within tropical cyclones are curved in orientation. Tropical cyclone rainbands contain showers and thunderstorms that, together with the eyewall and the eye, constitute a hurricane or tropical storm. The extent of rainbands around a tropical cyclone can help determine the cyclone's intensity. The pH of rain varies, especially due to its origin. On America's East Coast, rain that is derived from the Atlantic Ocean typically has a pH of 5.0-5.6; rain that comes across the continental from the west has a pH of 3.8-4.8; and local thunderstorms can have a pH as low as 2.0. Rain becomes acidic primarily due to the presence of two strong acids, sulfuric acid (H2SO4) and nitric acid (HNO3). Sulfuric acid is derived from natural sources such as volcanoes, and wetlands (sulfate reducing bacteria); and anthropogenic sources such as the combustion of fossil fuels, and mining where H2S is present. 
Nitric acid is produced by natural sources such as lightning, soil bacteria, and natural fires, and is also produced anthropogenically by the combustion of fossil fuels and from power plants. In the past 20 years the concentrations of nitric and sulfuric acid in rainwater have decreased, which may be due to the significant increase in ammonium (most likely as ammonia from livestock production), which acts as a buffer in acid rain and raises the pH. Köppen climate classification The Köppen classification depends on average monthly values of temperature and precipitation. The most commonly used form of the Köppen classification has five primary types labeled A through E. Specifically, the primary types are A, tropical; B, dry; C, mild mid-latitude; D, cold mid-latitude; and E, polar. The five primary classifications can be further divided into secondary classifications such as rain forest, monsoon, tropical savanna, humid subtropical, humid continental, oceanic climate, Mediterranean climate, steppe, subarctic climate, tundra, polar ice cap, and desert. Rain forests are characterized by high rainfall, with definitions setting minimum normal annual rainfall between 1,750 and 2,000 millimetres (69 and 79 in). A tropical savanna is a grassland biome located in semi-arid to semi-humid climate regions of subtropical and tropical latitudes, with rainfall between 750 and 1,270 millimetres (30 and 50 in) a year. They are widespread in Africa, and are also found in India, the northern parts of South America, Malaysia, and Australia. The humid subtropical climate zone is one where winter rainfall is associated with large storms that the westerlies steer from west to east. Most summer rainfall occurs during thunderstorms and from occasional tropical cyclones. Humid subtropical climates lie on the east side of continents, roughly between latitudes 20° and 40° away from the equator. An oceanic (or maritime) climate is typically found along the west coasts at the middle latitudes of all the world's continents, bordering cool oceans, as well as southeastern Australia, and is accompanied by plentiful precipitation year round. The Mediterranean climate regime resembles the climate of the lands in the Mediterranean Basin, parts of western North America, parts of Western and South Australia, in southwestern South Africa and in parts of central Chile. The climate is characterized by hot, dry summers and cool, wet winters. A steppe is a dry grassland. Subarctic climates are cold with continuous permafrost and little precipitation. Rain is measured in units of length per unit time, typically in millimeters per hour, or in countries where imperial units are more common, inches per hour. The "length", or more accurately, "depth" being measured is the depth of rain water that would accumulate on a flat, horizontal and impermeable surface during a given amount of time, typically an hour. One millimeter of rainfall is the equivalent of one liter of water per square meter. The standard way of measuring rainfall or snowfall is the standard rain gauge, which can be found in 100-mm (4-in) plastic and 200-mm (8-in) metal varieties. The inner cylinder is filled by 25 mm (0.98 in) of rain, with overflow flowing into the outer cylinder. Plastic gauges have markings on the inner cylinder down to 0.25 mm (0.0098 in) resolution, while metal gauges require use of a stick designed with the appropriate 0.25 mm (0.0098 in) markings.
After the inner cylinder is filled, the amount inside it is recorded and discarded; the inner cylinder is then refilled with the remaining rainfall from the outer cylinder, adding to the overall total, until the outer cylinder is empty. Other types of gauges include the popular wedge gauge (the cheapest rain gauge and most fragile), the tipping bucket rain gauge, and the weighing rain gauge. For those looking to measure rainfall the most inexpensively, a can that is cylindrical with straight sides will act as a rain gauge if left out in the open, but its accuracy will depend on what ruler is used to measure the rain with. Any of the above rain gauges can be made at home, with enough know-how. When a precipitation measurement is made, various networks exist across the United States and elsewhere where rainfall measurements can be submitted through the Internet, such as CoCoRAHS or GLOBE. If a network is not available in the area where one lives, the nearest local weather or met office will likely be interested in the measurement. One of the main uses of weather radar is to be able to assess the amount of precipitation fallen over large basins for hydrological purposes. For instance, river flood control, sewer management and dam construction are all areas where planners use rainfall accumulation data. Radar-derived rainfall estimates complement surface station data, which can be used for calibration. To produce radar accumulations, rain rates over a point are estimated by using the value of reflectivity data at individual grid points. A radar equation is then used: Z = A·R^b, where Z represents the radar reflectivity, R represents the rainfall rate, and A and b are constants. Satellite derived rainfall estimates use passive microwave instruments aboard polar orbiting as well as geostationary weather satellites to indirectly measure rainfall rates. If one wants an accumulated rainfall over a time period, one has to add up all the accumulations from each grid box within the images during that time. Rainfall intensity is classified according to the rate of precipitation:
- Light rain: when the precipitation rate is < 2.5 millimetres (0.098 in) per hour
- Moderate rain: when the precipitation rate is between 2.5 millimetres (0.098 in) and 7.6 millimetres (0.30 in), or up to 10 millimetres (0.39 in), per hour
- Heavy rain: when the precipitation rate is > 7.6 millimetres (0.30 in) per hour, or between 10 millimetres (0.39 in) and 50 millimetres (2.0 in) per hour
- Violent rain: when the precipitation rate is > 50 millimetres (2.0 in) per hour
The likelihood or probability of an event with a specified intensity and duration is called the return period or frequency. The intensity of a storm can be predicted for any return period and storm duration, from charts based on historic data for the location. The term 1 in 10 year storm describes a rainfall event which is rare and is only likely to occur once every 10 years, so it has a 10 percent likelihood in any given year. The rainfall will be greater and the flooding will be worse than the worst storm expected in any single year. The term 1 in 100 year storm describes a rainfall event which is extremely rare and which will occur with a likelihood of only once in a century, so it has a 1 percent likelihood in any given year. The rainfall will be extreme and the flooding worse than that of a 1 in 10 year event.
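The "1 in 10 year" and "1 in 100 year" language above translates into simple probability arithmetic; the sketch below treats each year as an independent trial with probability 1/T, which is the usual simplifying assumption behind these terms:

```python
def prob_at_least_one(return_period_years, horizon_years):
    # Chance that at least one event of this rarity occurs within the horizon,
    # assuming each year is an independent 1-in-T trial.
    annual_p = 1.0 / return_period_years
    return 1.0 - (1.0 - annual_p) ** horizon_years

print(round(prob_at_least_one(10, 1), 2))     # 0.1  -- 10% chance in any given year
print(round(prob_at_least_one(100, 1), 2))    # 0.01 -- 1% chance in any given year
print(round(prob_at_least_one(100, 30), 2))   # about 0.26 over a 30-year span
```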
As with all probability events, it is possible, though improbable, to have multiple "1 in 100 Year Storms" in a single year. The Quantitative Precipitation Forecast (abbreviated QPF) is the expected amount of liquid precipitation accumulated over a specified time period over a specified area. A QPF will be specified when a measurable precipitation type reaching a minimum threshold is forecast for any hour during a QPF valid period. Precipitation forecasts tend to be bound by synoptic hours such as 0000, 0600, 1200 and 1800 GMT. Terrain is considered in QPFs by use of topography or based upon climatological precipitation patterns from observations with fine detail. Starting in the mid to late 1990s, QPFs were used within hydrologic forecast models to simulate impact to rivers throughout the United States. Forecast models show significant sensitivity to humidity levels within the planetary boundary layer, or in the lowest levels of the atmosphere, which decreases with height. QPF can be generated on a quantitative, forecasting amounts, or a qualitative, forecasting the probability of a specific amount, basis. Radar imagery forecasting techniques show higher skill than model forecasts within 6 to 7 hours of the time of the radar image. The forecasts can be verified through use of rain gauge measurements, weather radar estimates, or a combination of both. Various skill scores can be determined to measure the value of the rainfall forecast. Effect on agriculture Precipitation, especially rain, has a dramatic effect on agriculture. All plants need at least some water to survive, therefore rain (being the most effective means of watering) is important to agriculture. While a regular rain pattern is usually vital to healthy plants, too much or too little rainfall can be harmful, even devastating to crops. Drought can kill crops and increase erosion, while overly wet weather can cause harmful fungus growth. Plants need varying amounts of rainfall to survive. For example, certain cacti require small amounts of water, while tropical plants may need up to hundreds of inches of rain per year to survive. In areas with wet and dry seasons, soil nutrients diminish and erosion increases during the wet season. Animals have adaptation and survival strategies for the wetter regime. The previous dry season leads to food shortages into the wet season, as the crops have yet to mature. Developing countries have noted that their populations show seasonal weight fluctuations due to food shortages seen before the first harvest, which occurs late in the wet season. Rain may be harvested through the use of rainwater tanks; treated to potable use or for non-potable use indoors or for irrigation. Excessive rain during short periods of time can cause flash floods. Cultural attitudes towards rain differ across the world. In temperate climates, people tend to be more stressed when the weather is unstable or cloudy, with its impact greater on men than women. Rain can also bring joy, as some consider it to be soothing or enjoy the aesthetic appeal of it. In dry places, such as India, or during periods of drought, rain lifts people's moods. In Botswana, the Setswana word for rain, "pula", is used as the name of the national currency, in recognition of the economic importance of rain in this desert country. Several cultures have developed means of dealing with rain and have developed numerous protection devices such as umbrellas and raincoats, and diversion devices such as gutters and storm drains that lead rains to sewers. 
Many people find the scent during and immediately after rain pleasant or distinctive. The source of this scent is petrichor, an oil produced by plants, then absorbed by rocks and soil, and later released into the air during rainfall. Approximately 505,000 cubic kilometres (121,000 cu mi) of water falls as precipitation each year across the globe with 398,000 cubic kilometres (95,000 cu mi) of it over the oceans. Given the Earth's surface area, that means the globally averaged annual precipitation is 990 millimetres (39 in). Deserts are defined as areas with an average annual precipitation of less than 250 millimetres (10 in) per year, or as areas where more water is lost by evapotranspiration than falls as precipitation. The northern half of Africa is primarily desert or arid, containing the Sahara. Across Asia, a large annual rainfall minimum, composed primarily of deserts, stretches from the Gobi desert in Mongolia west-southwest through western Pakistan (Balochistan) and Iran into the Arabian desert in Saudi Arabia. Most of Australia is semi-arid or desert, making it the world's driest inhabited continent. In South America, the Andes mountain range blocks Pacific moisture that arrives in that continent, resulting in a desertlike climate just downwind across western Argentina. The drier areas of the United States are regions where the Sonoran desert overspreads the Desert Southwest, the Great Basin and central Wyoming. Since rain only falls as liquid, in frozen temperatures, rain can not fall. As a result, very cold climates see very little rainfall and are often known as polar deserts. A common biome in this area is the tundra which has a short summer thaw and a long frozen winter. Ice caps see no rain at all, making Antarctica the world's driest continent. Rainforests are areas of the world with very high rainfall. Both tropical and temperate rainforests exist. Tropical rainforests occupy a large band of the planet mostly along the equator. Most temperate rainforests are located on mountainous west coasts between 45 and 55 degrees latitude, but they are often found in other areas. Around 40-75% of all biotic life is found in rainforests. Rainforests are also responsible for 28% of the world's oxygen turnover. The equatorial region near the Intertropical Convergence Zone (ITCZ), or monsoon trough, is the wettest portion of the world's continents. Annually, the rain belt within the tropics marches northward by August, then moves back southward into the Southern Hemisphere by February and March. Within Asia, rainfall is favored across its southern portion from India east and northeast across the Philippines and southern China into Japan due to the monsoon advecting moisture primarily from the Indian Ocean into the region. The monsoon trough can reach as far north as the 40th parallel in East Asia during August before moving southward thereafter. Its poleward progression is accelerated by the onset of the summer monsoon which is characterized by the development of lower air pressure (a thermal low) over the warmest part of Asia. Similar, but weaker, monsoon circulations are present over North America and Australia. During the summer, the Southwest monsoon combined with Gulf of California and Gulf of Mexico moisture moving around the subtropical ridge in the Atlantic ocean bring the promise of afternoon and evening thunderstorms to the southern tier of the United States as well as the Great Plains. 
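The annual-rainfall thresholds quoted in the climate discussion above (deserts below about 250 mm, tropical savannas roughly 750-1,270 mm, rain forests above roughly 1,750-2,000 mm) can be collected into a toy classifier; the function and its handling of in-between totals are illustrative only, since real Köppen classes also depend on temperature and seasonality:

```python
def rough_rainfall_class(annual_rain_mm):
    # Crude bucketing by annual rainfall alone, using the thresholds quoted in the text.
    if annual_rain_mm < 250:
        return "desert (arid)"
    if annual_rain_mm >= 1750:
        return "rain forest range"
    if 750 <= annual_rain_mm <= 1270:
        return "tropical savanna range"
    return "intermediate (depends on temperature and seasonality)"

for mm in (100, 900, 1500, 11870):   # 11,870 mm is Mawsynram's average from the table below
    print(mm, "->", rough_rainfall_class(mm))
```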
The eastern half of the contiguous United States east of the 98th meridian, the mountains of the Pacific Northwest, and the Sierra Nevada range are the wetter portions of the nation, with average rainfall exceeding 30 inches (760 mm) per year. Tropical cyclones enhance precipitation across southern sections of the United States, as well as Puerto Rico, the United States Virgin Islands, the Northern Mariana Islands, Guam, and American Samoa. Impact of the Westerlies Westerly flow from the mild north Atlantic leads to wetness across western Europe, in particular Ireland and the United Kingdom, where the western coasts can receive between 1,000 mm (39 in), at sea-level and 2,500 mm (98 in), on the mountains of rain per year. Bergen, Norway is one of the more famous European rain-cities with its yearly precipitation of 2,250 mm (89 in) on average. During the fall, winter, and spring, Pacific storm systems bring most of Hawaii and the western United States much of their precipitation. Over the top of the ridge, the jet stream brings a summer precipitation maximum to the Great Lakes. Large thunderstorm areas known as mesoscale convective complexes move through the Plains, Midwest, and Great Lakes during the warm season, contributing up to 10% of the annual precipitation to the region. The El Niño-Southern Oscillation affects the precipitation distribution, by altering rainfall patterns across the western United States, Midwest, the Southeast, and throughout the tropics. There is also evidence that global warming is leading to increased precipitation to the eastern portions of North America, while droughts are becoming more frequent in the tropics and subtropics. Wettest known locations Cherrapunji, situated on the southern slopes of the Eastern Himalaya in Shillong, India is the confirmed wettest places on Earth, with an average annual rainfall of 11,430 mm (450 in). The highest recorded rainfall in a single year was 22,987 mm (905.0 in) in 1861. The 38-year average at nearby Mawsynram, Meghalaya, India is 11,873 mm (467.4 in). The wettest spot in Australia is Mount Bellenden Ker in the north-east of the country which records an average of 8,000 millimetres (310 in) per year, with over 12,200 mm (480.3 in) of rain recorded during 2000. Mount Waialeale on the island of Kauaʻi in the Hawaiian Islands averages more than 11,680 millimetres (460 in) of rain per year over the last 32 years, with a record 17,340 millimetres (683 in) in 1982. Its summit is considered one of the rainiest spots on earth. It has been promoted in tourist literature for many years as the wettest spot in the world. Lloró, a town situated in Chocó, Colombia, is probably the place with the largest measured rainfall in the world, averaging 13,300 mm (520 in) per year. The Department of Chocó is extraordinarily humid. Tutunendo, a small town situated in the same department, is one of the wettest estimated places on Earth, averaging 11,394 mm (448.6 in) per year; in 1974 the town received 26,303 mm (86 ft 3.6 in), the largest annual rainfall measured in Colombia. Unlike Cherrapunji, which receives most of its rainfall between April and September, Tutunendo receives rain almost uniformly distributed throughout the year. Quibdó, the capital of Chocó, receives the most rain in the world among cities with over 100,000 inhabitants: 9,000 millimetres (350 in) per year. Storms in Chocó can drop 500 mm (20 in) of rainfall in a day. This amount is more than falls in many cities in a year's time. 
Highest average annual precipitation by continent:

| Continent | Highest average (in) | Highest average (mm) | Place | Elevation (ft) | Elevation (m) | Years of record |
| South America | 523.6 | 13,299 | Lloró, Colombia (estimated)[a][b] | 520 | 158[c] | 29 |
| Oceania | 460.0 | 11,684 | Mount Waiʻaleʻale, Kauai, Hawaii (USA)[a] | 5,148 | 1,569 | 30 |
| South America | 354.0 | 8,992 | Quibdo, Colombia | 120 | 36.6 | 16 |
| Australia | 340.0 | 8,636 | Mount Bellenden Ker, Queensland | 5,102 | 1,555 | 9 |
| North America | 256.0 | 6,502 | Henderson Lake, British Columbia | 12 | 3.66 | 14 |
Source (without conversions): Global Measured Extremes of Temperature and Precipitation, National Climatic Data Center. August 9, 2004.

Record rainfall amounts:

| Record | Continent | Place | Amount (in) | Amount (mm) |
| Highest average annual rainfall | Asia | Mawsynram, India | 467.4 | 11,870 |
| Highest in one year | Asia | Cherrapunji, India | 1,042 | 26,470 |
| Highest in one calendar month | Asia | Cherrapunji, India | 366 | 9,296 |
| Highest in 24 hours | Indian Ocean | Foc Foc, La Reunion Island | 71.8 | 1,820 |
| Highest in 12 hours | Indian Ocean | Foc Foc, La Reunion Island | 45.0 | 1,140 |
| Highest in one minute | North America | Unionville, Maryland, USA | 1.23 | 31.2 |

Notes:
- [a] The value given is the continent's highest and possibly the world's, depending on measurement practices, procedures and period-of-record variations.
- [b] The official greatest average annual precipitation for South America is 354 inches at Quibdo, Colombia. The 523.6 inches average at Lloro, Colombia [14 miles SE and at a higher elevation than Quibdo] is an estimated amount.
- [c] Approximate elevation.
- ^ Recognized as "The Wettest place on Earth" by the Guinness Book of World Records.

Outside of Earth

On Titan, Saturn's largest moon, infrequent methane rain is thought to carve the moon's numerous surface channels. On Venus, sulfuric acid virga evaporates about 25 kilometres (16 mi) above the surface. There is likely to be rain of various compositions in the upper atmospheres of the gas giants, as well as precipitation of liquid neon in the deep atmospheres. The extrasolar planet OGLE-TR-56b in the constellation Sagittarius is hypothesized to have iron rain.
Genes correspond to regions within DNA, a molecule composed of a chain of four different types of nucleotides—the sequence of these nucleotides is the genetic information organisms inherit. DNA naturally occurs in a double stranded form, with nucleotides on each strand complementary to each other. Each strand can act as a template for creating a new partner strand—this is the physical method for making copies of genes that can be inherited. The sequence of nucleotides in a gene is translated by cells to produce a chain of amino acids, creating proteins—the order of amino acids in a protein corresponds to the order of nucleotides in the gene. This is known as the genetic code. The amino acids in a protein determine how it folds into a three-dimensional shape; this structure is, in turn, responsible for the protein's function. Proteins carry out almost all the functions needed for cells to live. A change to the DNA in a gene can change a protein's amino acids, changing its shape and function: this can have a dramatic effect in the cell and on the organism as a whole. Although genetics plays a large role in the appearance and behavior of organisms, it is the combination of genetics with what an organism experiences that determines the ultimate outcome. For example, while genes play a role in determining a person's height, the nutrition and health that person experiences in childhood also have a large effect. Although the science of genetics began with the applied and theoretical work of Gregor Mendel in the mid-1800s, other theories of inheritance preceded Mendel. A popular theory during Mendel's time was the concept of blending inheritance: the idea that individuals inherit a smooth blend of traits from their parents. Mendel's work disproved this, showing that traits are composed of combinations of distinct genes rather than a continuous blend. Another theory that had some support at that time was the inheritance of acquired characteristics: the belief that individuals inherit traits strengthened by their parents. This theory (commonly associated with Jean-Baptiste Lamarck) is now known to be wrong—the experiences of individuals do not affect the genes they pass to their children. Other theories included the pangenesis of Charles Darwin (which had both acquired and inherited aspects) and Francis Galton's reformulation of pangenesis as both particulate and inherited. The importance of Mendel's work did not gain wide understanding until the 1890s, after his death, when other scientists working on similar problems re-discovered his research. William Bateson, a proponent of Mendel's work, coined the word genetics in 1905. (The adjective genetic, derived from the Greek word genesis - γένεσις, "origin" and that from the word genno - γεννώ, "to give birth", predates the noun and was first used in a biological sense in 1860.) Bateson popularized the usage of the word genetics to describe the study of inheritance in his inaugural address to the Third International Conference on Plant Hybridization in London, England, in 1906. After the rediscovery of Mendel's work, scientists tried to determine which molecules in the cell were responsible for inheritance. In 1910, Thomas Hunt Morgan argued that genes are on chromosomes, based on observations of a sex-linked white eye mutation in fruit flies. In 1913, his student Alfred Sturtevant used the phenomenon of genetic linkage to show that genes are arranged linearly on the chromosome. 
Although genes were known to exist on chromosomes, chromosomes are composed of both protein and DNA—scientists did not know which of these was responsible for inheritance. In 1928, Frederick Griffith discovered the phenomenon of transformation (see Griffith's experiment): dead bacteria could transfer genetic material to "transform" other still-living bacteria. Sixteen years later, in 1944, Oswald Theodore Avery, Colin MacLeod and Maclyn McCarty identified the molecule responsible for transformation as DNA. The Hershey-Chase experiment in 1952 also showed that DNA (rather than protein) was the genetic material of the viruses that infect bacteria, providing further evidence that DNA was the molecule responsible for inheritance.

James D. Watson and Francis Crick determined the structure of DNA in 1953, using the X-ray crystallography work of Rosalind Franklin that indicated DNA had a helical structure (i.e., shaped like a corkscrew). Their double-helix model had two strands of DNA with the nucleotides pointing inward, each matching a complementary nucleotide on the other strand to form what looks like rungs on a twisted ladder. This structure showed that genetic information exists in the sequence of nucleotides on each strand of DNA. The structure also suggested a simple method for duplication: if the strands are separated, new partner strands can be reconstructed for each based on the sequence of the old strand.

Although the structure of DNA showed how inheritance worked, it was still not known how DNA influenced the behavior of cells. In the following years, scientists tried to understand how DNA controls the process of protein production. It was discovered that the cell uses DNA as a template to create matching messenger RNA (a molecule with nucleotides, very similar to DNA). The nucleotide sequence of a messenger RNA is used to create an amino acid sequence in protein; this translation between nucleotide and amino acid sequences is known as the genetic code.

With this molecular understanding of inheritance, an explosion of research became possible. One important development was chain-termination DNA sequencing in 1977 by Frederick Sanger: this technology allows scientists to read the nucleotide sequence of a DNA molecule. In 1983, Kary Banks Mullis developed the polymerase chain reaction, providing a quick way to isolate and amplify a specific section of DNA from a mixture. Through the pooled efforts of the Human Genome Project and the parallel private effort by Celera Genomics, these and other techniques culminated in the sequencing of the human genome in 2003.

At its most fundamental level, inheritance in organisms occurs by means of discrete traits, called genes. This property was first observed by Gregor Mendel, who studied the segregation of heritable traits in pea plants. In his experiments studying the trait for flower color, Mendel observed that the flowers of each pea plant were either purple or white - and never an intermediate between the two colors. These different, discrete versions of the same gene are called alleles. In the case of pea plants, each organism has two alleles of each gene, and the plants inherit one allele from each parent. Many organisms, including humans, have this pattern of inheritance. Organisms with two copies of the same allele are called homozygous, while organisms with two different alleles are heterozygous. The set of alleles for a given organism is called its genotype, while the observable trait the organism has is called its phenotype.
When organisms are heterozygous, often one allele is called dominant as its qualities dominate the phenotype of the organism, while the other allele is called recessive as its qualities recede and are not observed. Some alleles do not have complete dominance and instead have incomplete dominance by expressing an intermediate phenotype, or codominance by expressing both alleles at once. When a pair of organisms reproduce sexually, their offspring randomly inherit one of the two alleles from each parent. These observations of discrete inheritance and the segregation of alleles are collectively known as Mendel's first law or the Law of Segregation.

Geneticists use diagrams and symbols to describe inheritance. A gene is represented by a letter (or letters) - the capitalized letter represents the dominant allele and the recessive is represented by lowercase. Often a "+" symbol is used to mark the usual, non-mutant allele for a gene. In fertilization and breeding experiments (and especially when discussing Mendel's laws) the parents are referred to as the "P" generation and the offspring as the "F1" (first filial) generation. When the F1 offspring mate with each other, the offspring are called the "F2" (second filial) generation. One of the common diagrams used to predict the result of cross-breeding is the Punnett square. When studying human genetic diseases, geneticists often use pedigree charts to represent the inheritance of traits. These charts map the inheritance of a trait in a family tree.

Organisms have thousands of genes, and in sexually reproducing organisms the assortment of these genes is generally independent of each other. This means that the inheritance of an allele for yellow or green pea color is unrelated to the inheritance of alleles for white or purple flowers. This phenomenon, known as "Mendel's second law" or the "Law of independent assortment", means that the alleles of different genes get shuffled between parents to form offspring with many different combinations. (Some genes do not assort independently, demonstrating genetic linkage, a topic discussed later in this article.)

Often different genes can interact in a way that influences the same trait. In the Blue-eyed Mary (Omphalodes verna), for example, there exists a gene with alleles that determine the color of flowers: blue or magenta. Another gene, however, controls whether the flowers have color at all: color or white. When a plant has two copies of this white allele, its flowers are white - regardless of whether the first gene has blue or magenta alleles. This interaction between genes is called epistasis, with the second gene epistatic to the first.

Many traits are not discrete features (e.g., purple or white flowers) but are instead continuous features (e.g., human height and skin color). These complex traits are the product of many genes. The influence of these genes is mediated, to varying degrees, by the environment an organism has experienced. The degree to which an organism's genes contribute to a complex trait is called heritability. Measurement of the heritability of a trait is relative - in a more variable environment, the environment has a bigger influence on the total variation of the trait. For example, human height is a complex trait with a heritability of 89% in the United States. In Nigeria, however, where people have more variable access to good nutrition and health care, height has a heritability of only 62%.
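To make the Punnett square mentioned above concrete, here is a minimal sketch that enumerates the offspring genotypes of a monohybrid Aa × Aa cross. The allele symbols and the assumption that each parental allele is transmitted with equal probability are illustrative choices, not details taken from the text.

```python
from collections import Counter
from itertools import product

def punnett_square(parent1: str, parent2: str) -> Counter:
    """Count offspring genotypes from two diploid parents (e.g. 'Aa' x 'Aa').

    Each parent passes on one of its two alleles, so the four cells of the
    square are the four possible allele combinations.
    """
    combos = Counter()
    for a1, a2 in product(parent1, parent2):
        genotype = "".join(sorted((a1, a2)))  # 'Aa' and 'aA' are the same genotype
        combos[genotype] += 1
    return combos

counts = punnett_square("Aa", "Aa")
total = sum(counts.values())
for genotype, n in sorted(counts.items()):
    print(f"{genotype}: {n}/{total}")
# AA: 1/4, Aa: 2/4, aa: 1/4 -- the familiar 1:2:1 genotype ratio
```

With A dominant over a, the three A-carrying genotypes (3/4 of offspring) would show the dominant phenotype, which is Mendel's classic 3:1 phenotype ratio.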
The molecular basis for genes is deoxyribonucleic acid (DNA). DNA is composed of a chain of nucleotides, of which there are four types: adenine (A), cytosine (C), guanine (G), and thymine (T). Genetic information exists in the sequence of these nucleotides, and genes exist as stretches of sequence along the DNA chain. Viruses are the only exception to this rule—sometimes viruses use the very similar molecule RNA instead of DNA as their genetic material. DNA normally exists as a double-stranded molecule, coiled into the shape of a double-helix. Each nucleotide in DNA preferentially pairs with its partner nucleotide on the opposite strand: A pairs with T, and C pairs with G. Thus, in its two-stranded form, each strand effectively contains all necessary information, redundant with its partner strand. This structure of DNA is the physical basis for inheritance: DNA replication duplicates the genetic information by splitting the strands and using each strand as a template for synthesis of a new partner strand.

Genes are arranged linearly along long chains of DNA sequence, called chromosomes. In bacteria, each cell has a single circular chromosome, while eukaryotic organisms (which include plants and animals) have their DNA arranged in multiple linear chromosomes. These DNA strands are often extremely long; the largest human chromosome, for example, is about 247 million base pairs in length. The DNA of a chromosome is associated with structural proteins that organize, compact, and control access to the DNA, forming a material called chromatin; in eukaryotes, chromatin is usually composed of nucleosomes, repeating units of DNA wound around a core of histone proteins. The full set of hereditary material in an organism (usually the combined DNA sequences of all chromosomes) is called the genome.

While haploid organisms have only one copy of each chromosome, most animals and many plants are diploid, containing two of each chromosome and thus two copies of every gene. The two alleles for a gene are located at identical loci on the two homologous chromosomes, each allele inherited from a different parent.

An exception exists in the sex chromosomes, specialized chromosomes many animals have evolved that play a role in determining the sex of an organism. In humans and other mammals, the Y chromosome has very few genes and triggers the development of male sexual characteristics, while the X chromosome is similar to the other chromosomes and contains many genes unrelated to sex determination. Females have two copies of the X chromosome, but males have one Y and only one X chromosome - this difference in X chromosome copy numbers leads to the unusual inheritance patterns of sex-linked disorders.
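Because each base pairs with a fixed partner (A with T, C with G), either strand of the double helix determines the other. The short sketch below derives the partner strand of a made-up sequence, which is essentially what the replication machinery does when it uses one strand as a template; the example sequence is invented for illustration.

```python
# Derive the complementary (partner) strand of a DNA sequence,
# using the base-pairing rules A<->T and C<->G described above.
PAIRING = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand: str) -> str:
    """Return the base-paired partner of each position in the strand."""
    return "".join(PAIRING[base] for base in strand)

def reverse_complement(strand: str) -> str:
    """Partner strand read in its own 5'-to-3' direction (i.e. reversed)."""
    return complement(strand)[::-1]

sequence = "ATGCTTACG"                      # made-up example sequence
print(complement(sequence))                 # TACGAATGC
print(reverse_complement(sequence))         # CGTAAGCAT
```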
When cells divide, their full genome is copied and each daughter cell inherits one copy. This process, called mitosis, is the simplest form of reproduction and is the basis for asexual reproduction. Asexual reproduction can also occur in multicellular organisms, producing offspring that inherit their genome from a single parent. Offspring that are genetically identical to their parents are called clones. Eukaryotic organisms often use sexual reproduction to generate offspring that contain a mixture of genetic material inherited from two different parents. The process of sexual reproduction alternates between forms that contain single copies of the genome (haploid) and double copies (diploid). Haploid cells fuse and combine genetic material to create a diploid cell with paired chromosomes. Diploid organisms form haploids by dividing, without replicating their DNA, to create daughter cells that randomly inherit one of each pair of chromosomes. Most animals and many plants are diploid for most of their lifespan, with the haploid form reduced to single cell gametes.

Although they do not use the haploid/diploid method of sexual reproduction, bacteria have many methods of acquiring new genetic information. Some bacteria can undergo conjugation, transferring a small circular piece of DNA to another bacterium. Bacteria can also take up raw DNA fragments found in the environment and integrate them into their genome, a phenomenon known as transformation. These processes result in horizontal gene transfer, transmitting fragments of genetic information between organisms that would be otherwise unrelated.

The diploid nature of chromosomes allows for genes on different chromosomes to assort independently during sexual reproduction, recombining to form new combinations of genes. Genes on the same chromosome would theoretically never recombine, however, were it not for the process of chromosomal crossover. During crossover, chromosomes exchange stretches of DNA, effectively shuffling the gene alleles between the chromosomes. This process of chromosomal crossover generally occurs during meiosis, a series of cell divisions that creates haploid germ cells that later combine with other germ cells to form child organisms.

The probability of chromosomal crossover occurring between two given points on the chromosome is related to the distance between them. For genes that are far enough apart, the probability of crossover is high enough that the inheritance of the genes is effectively uncorrelated. For genes that are closer together, however, the lower probability of crossover means that the genes demonstrate genetic linkage - alleles for the two genes tend to be inherited together. The amounts of linkage between a series of genes can be combined to form a linear linkage map that roughly describes the arrangement of the genes along the chromosome.

Genes generally express their functional effect through the production of proteins, which are complex molecules responsible for most functions in the cell. Proteins are chains of amino acids, and the DNA sequence of a gene (through an RNA intermediate) is used to produce a specific protein sequence. This process begins with the production of an RNA molecule with a sequence matching the gene's DNA sequence, a process called transcription. This messenger RNA molecule is then used to produce a corresponding amino acid sequence through a process called translation. Each group of three nucleotides in the sequence, called a codon, corresponds to one of the twenty possible amino acids in protein - this correspondence is called the genetic code. The flow of information is unidirectional: information is transferred from nucleotide sequences into the amino acid sequence of proteins, but it never transfers from protein back into the sequence of DNA—a phenomenon Francis Crick called the central dogma of molecular biology.
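As a concrete illustration of the genetic code just described, the sketch below translates a short, made-up coding sequence three bases at a time. Only a handful of codons are included in the lookup table for brevity; a real translation table covers all 64 codons.

```python
# Minimal illustration of translation: DNA codons -> amino acids.
# Only a few codons are listed here; the full genetic code has 64 entries.
CODON_TABLE = {
    "ATG": "Met",   # methionine, also the usual start codon
    "GAA": "Glu",   # glutamate
    "AAA": "Lys",   # lysine
    "TGG": "Trp",   # tryptophan
    "GGC": "Gly",   # glycine
    "TAA": "STOP",  # stop codon
}

def translate(dna: str) -> list:
    """Read the coding sequence codon by codon until a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = CODON_TABLE[dna[i:i + 3]]
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate("ATGGAAAAATGGGGCTAA"))   # ['Met', 'Glu', 'Lys', 'Trp', 'Gly']
```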
The specific sequence of amino acids results in a unique three-dimensional structure for that protein, and the three-dimensional structures of proteins are related to their function. Some are simple structural molecules, like the fibers formed by the protein collagen. Proteins can bind to other proteins and simple molecules, sometimes acting as enzymes by facilitating chemical reactions within the bound molecules (without changing the structure of the protein itself). Protein structure is dynamic; the protein hemoglobin bends into slightly different forms as it facilitates the capture, transport, and release of oxygen molecules within mammalian blood.

A single nucleotide difference within DNA can cause a single change in the amino acid sequence of a protein. Because protein structures are the result of their amino acid sequences, some changes can dramatically change the properties of a protein by destabilizing the structure or changing the surface of the protein in a way that changes its interaction with other proteins and molecules. For example, sickle-cell anemia is a human genetic disease that results from a single base difference within the coding region for the β-globin section of hemoglobin, causing a single amino acid change that changes hemoglobin's physical properties. Sickle-cell versions of hemoglobin stick to themselves, stacking to form fibers that distort the shape of red blood cells carrying the protein. These sickle-shaped cells no longer flow smoothly through blood vessels, having a tendency to clog or degrade, causing the medical problems associated with this disease.

Some genes are transcribed into RNA but are not translated into protein products - these are called non-coding RNA molecules. In some cases, these products fold into structures which are involved in critical cell functions (e.g., ribosomal RNA and transfer RNA). RNA can also have a regulatory effect through hybridization interactions with other RNA molecules (e.g., microRNA).

Although genes contain all the information an organism uses to function, the environment plays an important role in determining the ultimate phenotype—a dichotomy often referred to as "nature vs. nurture." The phenotype of an organism depends on the interaction of genetics with the environment. One example of this is the case of temperature-sensitive mutations. Often, a single amino acid change within the sequence of a protein does not change its behavior and interactions with other molecules, but it does destabilize the structure. In a high temperature environment, where molecules are moving more quickly and hitting each other, this results in the protein losing its structure and failing to function. In a low temperature environment, however, the protein's structure is stable and functions normally. This type of mutation is visible in the coat coloration of Siamese cats, where a mutation in an enzyme responsible for pigment production causes it to destabilize and lose function at high temperatures. The protein remains functional in areas of skin that are colder—legs, ears, tail, and face—and so the cat has dark fur at its extremities.

Environment also plays a dramatic role in the effects of the human genetic disease phenylketonuria. The mutation that causes phenylketonuria disrupts the ability of the body to break down the amino acid phenylalanine, causing a toxic build-up of an intermediate molecule that, in turn, causes severe symptoms of progressive mental retardation and seizures. If someone with the phenylketonuria mutation follows a strict diet that avoids this amino acid, however, they remain normal and healthy.

The genome of a given organism contains thousands of genes, but not all these genes need to be active at any given moment.
A gene is expressed when it is being transcribed into mRNA (and translated into protein), and there exist many cellular methods of controlling the expression of genes such that proteins are produced only when needed by the cell. Transcription factors are regulatory proteins that bind to the start of genes, either promoting or inhibiting the transcription of the gene. Within the genome of Escherichia coli bacteria, for example, there exists a series of genes necessary for the synthesis of the amino acid tryptophan. However, when tryptophan is already available to the cell, these genes for tryptophan synthesis are no longer needed. The presence of tryptophan directly affects the activity of the genes—tryptophan molecules bind to the tryptophan repressor (a transcription factor), changing the repressor's structure such that the repressor binds to the genes. The tryptophan repressor blocks the transcription and expression of the genes, thereby creating negative feedback regulation of the tryptophan synthesis process.

Differences in gene expression are especially clear within multicellular organisms, where cells all contain the same genome but have very different structures and behaviors due to the expression of different sets of genes. All the cells in a multicellular organism derive from a single cell, differentiating into variant cell types in response to external and intercellular signals and gradually establishing different patterns of gene expression to create different behaviors. As no single gene is responsible for the development of structures within multicellular organisms, these patterns arise from the complex interactions between many cells.

Within eukaryotes there exist structural features of chromatin that influence the transcription of genes, often in the form of modifications to DNA and chromatin that are stably inherited by daughter cells. These features are called "epigenetic" because they exist "on top" of the DNA sequence and are inherited from one cell generation to the next. Because of epigenetic features, different cell types grown within the same medium can retain very different properties. Although epigenetic features are generally dynamic over the course of development, some, like the phenomenon of paramutation, have multigenerational inheritance and exist as rare exceptions to the general rule of DNA as the basis for inheritance.

During the process of DNA replication, errors occasionally occur in the polymerization of the second strand. These errors, called mutations, can have an impact on the phenotype of an organism, especially if they occur within the protein coding sequence of a gene. Error rates are usually very low—1 error in every 10–100 million bases—due to the "proofreading" ability of DNA polymerases. (Without proofreading, error rates are a thousand-fold higher; because many viruses rely on DNA and RNA polymerases that lack proofreading ability, they experience higher mutation rates.) Processes that increase the rate of changes in DNA are called mutagenic: mutagenic chemicals promote errors in DNA replication, often by interfering with the structure of base-pairing, while UV radiation induces mutations by causing damage to the DNA structure. Chemical damage to DNA occurs naturally as well, and cells use DNA repair mechanisms to repair mismatches and breaks in DNA—nevertheless, the repair sometimes fails to return the DNA to its original sequence.
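To put the quoted error rate into perspective, here is a rough back-of-the-envelope calculation. The genome size used (about 3 billion base pairs, roughly a human haploid genome) is an assumption added for illustration and is not taken from the text.

```python
# Rough expected number of replication errors per genome copy,
# using the error rate quoted above (1 per 10-100 million bases).
GENOME_SIZE_BP = 3_000_000_000      # assumed: ~3 billion base pairs (human haploid genome)

for rate in (1 / 10_000_000, 1 / 100_000_000):
    expected_errors = GENOME_SIZE_BP * rate
    print(f"error rate {rate:.0e} per base -> ~{expected_errors:.0f} errors per genome copy")
# Roughly 30-300 errors at the quoted polymerase rate, before any
# additional DNA repair pathways act on the mismatches.
```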
In organisms that use chromosomal crossover to exchange DNA and recombine genes, errors in alignment during meiosis can also cause mutations. Errors in crossover are especially likely when similar sequences cause partner chromosomes to adopt a mistaken alignment; this makes some regions in genomes more prone to mutating in this way. These errors create large structural changes in DNA sequence—duplications, inversions or deletions of entire regions, or the accidental exchanging of whole parts between different chromosomes (called translocation).

Mutations produce organisms with different genotypes, and those differences can result in different phenotypes. Many mutations have little effect on an organism's phenotype, health, and reproductive fitness. Mutations that do have an effect are often deleterious, but occasionally mutations are beneficial. Studies in the fly Drosophila melanogaster suggest that if a mutation changes a protein produced by a gene, this will probably be harmful, with about 70 percent of these mutations having damaging effects, and the remainder being either neutral or weakly beneficial.

Population genetics research studies the distributions of these genetic differences within populations and how the distributions change over time. Changes in the frequency of an allele in a population can be influenced by natural selection, where a given allele's higher rate of survival and reproduction causes it to become more frequent in the population over time. Genetic drift can also occur, where chance events lead to random changes in allele frequency. Over many generations, the genomes of organisms can change, resulting in the phenomenon of evolution. Mutations and the selection for beneficial mutations can cause a species to evolve into forms that better survive their environment, a process called adaptation. New species are formed through the process of speciation, a process often caused by geographical separations that allow different populations to genetically diverge. The application of genetic principles to the study of population biology and evolution is referred to as the modern synthesis.

As sequences diverge and change during the process of evolution, these differences between sequences can be used as a molecular clock to calculate the evolutionary distance between them. Genetic comparisons are generally considered the most accurate method of characterizing the relatedness between species, an improvement over the sometimes deceptive comparison of phenotypic characteristics. The evolutionary distances between species can be combined to form evolutionary trees - these trees represent the common descent and divergence of species over time, although they cannot represent the transfer of genetic material between unrelated species (known as horizontal gene transfer and most common in bacteria).
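The allele-frequency changes described above are easy to see in a toy simulation. The sketch below follows a single allele in a small population under genetic drift alone, with no selection; the population size, starting frequency, and number of generations are arbitrary choices made for illustration.

```python
import random

def drift(pop_size: int, freq: float, generations: int) -> list:
    """Toy drift model: each generation, every individual's allele is drawn
    at random according to the previous generation's allele frequency."""
    history = [freq]
    for _ in range(generations):
        carriers = sum(random.random() < freq for _ in range(pop_size))
        freq = carriers / pop_size
        history.append(freq)
    return history

random.seed(1)
trajectory = drift(pop_size=50, freq=0.5, generations=30)
print(" -> ".join(f"{f:.2f}" for f in trajectory[::10]))
# In small populations the frequency wanders by chance alone and may
# eventually reach 0 (the allele is lost) or 1 (the allele is fixed).
```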
Although geneticists originally studied inheritance in a wide range of organisms, researchers began to specialize in studying the genetics of a particular subset of organisms. The fact that significant research already existed for a given organism would encourage new researchers to choose it for further study, and so eventually a few model organisms became the basis for most genetics research. Common research topics in model organism genetics include the study of gene regulation and the involvement of genes in development and cancer. Organisms were chosen, in part, for convenience—short generation times and easy genetic manipulation made some organisms popular genetics research tools. Widely used model organisms include the gut bacterium Escherichia coli, the plant Arabidopsis thaliana, baker's yeast (Saccharomyces cerevisiae), the nematode Caenorhabditis elegans, the common fruit fly (Drosophila melanogaster), and the common house mouse (Mus musculus).

Although it is not an inherited disease, cancer is also considered a genetic disease. The process of cancer development in the body is a combination of events. Mutations occasionally occur within cells in the body as they divide. While these mutations will not be inherited by any offspring, they can affect the behavior of cells, sometimes causing them to grow and divide more frequently. There are biological mechanisms that attempt to stop this process; signals are given to inappropriately dividing cells that should trigger cell death, but sometimes additional mutations occur that cause cells to ignore these messages. An internal process of natural selection occurs within the body and eventually mutations accumulate within cells to promote their own growth, creating a cancerous tumor that grows and invades various tissues of the body.

DNA can be manipulated in the laboratory. Restriction enzymes are commonly used enzymes that cut DNA at specific sequences, producing predictable fragments of DNA. The use of ligation enzymes allows these fragments to be reconnected, and by ligating fragments of DNA together from different sources, researchers can create recombinant DNA. Often associated with genetically modified organisms, recombinant DNA is commonly used in the context of plasmids - short circular DNA fragments with a few genes on them. By inserting plasmids into bacteria and growing those bacteria on plates of agar (to isolate clones of bacterial cells), researchers can clonally amplify the inserted fragment of DNA (a process known as molecular cloning). (Cloning can also refer to the creation of clonal organisms, through various techniques.)

DNA can also be amplified using a procedure called the polymerase chain reaction (PCR). By using specific short sequences of DNA, PCR can isolate and exponentially amplify a targeted region of DNA. Because it can amplify from extremely small amounts of DNA, PCR is also often used to detect the presence of specific DNA sequences.

As sequencing has become less expensive and with the aid of computational tools, researchers have sequenced the genomes of many organisms by stitching together the sequences of many different fragments (a process called genome assembly). These technologies were used to sequence the human genome, leading to the completion of the Human Genome Project in 2003. New high-throughput sequencing technologies are dramatically lowering the cost of DNA sequencing, with many researchers hoping to bring the cost of resequencing a human genome down to a thousand dollars. The large amount of sequence data available has created the field of genomics, research that uses computational tools to search for and analyze patterns in the full genomes of organisms. Genomics can also be considered a subfield of bioinformatics, which uses computational approaches to analyze large sets of biological data.
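As a small illustration of the "predictable fragments" idea and of the kind of sequence analysis genomics relies on, the sketch below finds the recognition sites of the restriction enzyme EcoRI (GAATTC, which cuts one base into the site, between the G and the first A) in a made-up sequence and reports the resulting fragment lengths. The input sequence is invented for the example.

```python
def digest(sequence: str, site: str = "GAATTC", cut_offset: int = 1) -> list:
    """Cut a linear DNA sequence at every occurrence of a restriction site.

    EcoRI recognizes GAATTC and cuts after the first base (cut_offset=1).
    """
    cut_positions = []
    start = 0
    while (hit := sequence.find(site, start)) != -1:
        cut_positions.append(hit + cut_offset)
        start = hit + 1

    fragments, previous = [], 0
    for pos in cut_positions:
        fragments.append(sequence[previous:pos])
        previous = pos
    fragments.append(sequence[previous:])
    return fragments

plasmid = "TTGAATTCAGGCCTAGAATTCCGA"   # made-up sequence containing two EcoRI sites
pieces = digest(plasmid)
print(pieces)                          # ['TTG', 'AATTCAGGCCTAG', 'AATTCCGA']
print([len(p) for p in pieces])        # predictable fragment lengths: [3, 13, 8]
```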
Have you ever stood around a cocktail party discussing statistics? I didn't think so. But we hear about them all the time, especially when we're absorbing clinical trial results -- or election year polls. Understanding the basics of statistics is not that hard to do, despite how it seemed in school! There are a handful of concepts that serve as the building blocks for learning statistics. I'm jumping ahead of mean, median, and mode; those will not be covered here. We will look first at standard deviation, standard error, and confidence interval, and how they all tie in to p-value.

The standard deviation is simply a measure of the amount of variability in your particular sample. Because we can't practically measure everyone in a population that we are interested in studying, we have to take a sample of the population. The standard deviation describes the variation in measurements from individual to individual data point within the sample. Think of the familiar bell-shaped curve of measurements: the mean, μ, which is the average, is the highest point in the center of the curve. The area between plus or minus (±) 1 standard deviation (1σ) of the mean captures about 68% of all measurements in your sample; the area between ± 2 standard deviations (2σ) captures about 95% of all measurements in your sample. In other words, not very many data points -- approximately 5% -- will lie more than 2 standard deviations from the mean.

The standard error (SE) is important in describing how well the sample mean represents the true population mean. Remember, because we can't practically measure everyone in the population, we take a random sample. Every random sample will give a slightly different estimation of the whole population. The standard error gives you a measure of how precise your sample mean is compared to the true population mean. It is calculated as the standard deviation divided by the square root of the sample size, so it depends on the size of your sample: as the sample size gets larger, the standard error gets smaller and we get a more precise estimate of the truth. If we measure every member of the population, then it is no longer a sample. There is only one value that can be computed by measuring every member of the population, thus there is no variability and the truth is known.

Since every random sample will give a slightly different estimation of the whole population, it makes sense to try to describe what the true population looks like with more than a single number. The confidence interval (CI) estimates a range of values within which we are pretty sure the truth lies. The confidence interval depends on the standard error. It is calculated as the sample mean you have measured in your sample plus or minus approximately 2 times the standard error. For example, the 95% CI gives us the range of values within which we are confident that the true population mean falls 95% of the time. Every point outside of the confidence interval is very unlikely to occur by chance alone. If we have a 95% CI, then it means that there is a 5% chance that the true mean of the population is outside of that interval. In other words, we're pretty confident of the value of our confidence interval! This area outside the confidence interval corresponds to a parameter called alpha, α. And usually we split α so that half is in the upper tail (2.5%) and half is in the lower tail (2.5%). This is what we mean by a two-tailed confidence interval. The CI tells us a lot about our sample.
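The three quantities just described are easy to compute directly. The sketch below does so for a small set of systolic blood pressure readings; the ten numbers are invented purely for illustration.

```python
import math
import statistics

# Invented systolic blood pressure readings (mmHg) for illustration.
readings = [118, 125, 131, 122, 140, 128, 135, 119, 127, 133]

n = len(readings)
mean = statistics.mean(readings)
sd = statistics.stdev(readings)        # standard deviation: spread of individual readings
se = sd / math.sqrt(n)                 # standard error: precision of the sample mean
ci_low, ci_high = mean - 1.96 * se, mean + 1.96 * se   # approximate 95% CI

print(f"mean = {mean:.1f}, SD = {sd:.1f}, SE = {se:.1f}")
print(f"approximate 95% CI for the population mean: {ci_low:.1f} to {ci_high:.1f}")
```

Because of the square root of the sample size in the denominator, the standard error shrinks as the sample grows, which is exactly why larger studies produce narrower confidence intervals.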
We can draw conclusions about statistical significance based on the location of the CI. For example, suppose we have a CI that estimates the difference between two groups: a value of zero corresponds to no difference (that is, the null value). Therefore a CI that excludes zero denotes a statistically significant finding. Beware, though, that the null value is not always zero! It depends upon your null hypothesis, which will be described below. Before moving on to new concepts, let's put the three summary statistics of standard deviation, standard error, and confidence interval together by considering an example from blood pressure measurements on 50 individuals. The scatter plot in the picture below shows each of the 50 individual measurements from this hypothetical sample. Our sample mean is represented by the large dot in the center of the 4 vertical lines beside the scatter plot. In the first green line, we can see that about 2/3 of our sample results are contained within +/- 1 standard deviation. In the second green line, 95% of our data points are covered by +/- 2 standard deviations. Once we know the standard error, we can construct the confidence interval. The 95% CI, depicted by the second blue line, gives us the range within which we are 95% confident that the true population mean lies. The first step of the experiment is to state your hypothesis. The null hypothesis, H0, is pre-defined and represents a statement which is the reverse of what we hope the experiment will show. It is named the null hypothesis because it is the statement that we want the data to reject. The alternative hypothesis, Ha, is also predefined and represents a statement of what we hope the experiment will show. Ha is the hypothesis that there is a real effect. Suppose we design a study to test if Optimized Background Treatment (OBT) + New Drug are better than just OBT alone. Our null hypothesis is that there is no difference between the groups. Our alternative hypothesis is that the two groups are not the same. (We hope that OBT + New Drug is better!) Depending on the data gathered from the study, we will either reject the null hypothesis or not. The next step is to design your experiment and select the test statistic. The test statistic defines the method that will be used to compare the two groups and help interpret the outcome at the end of the study. Sometimes the comparison may be based on the differences in means, and use continuous data analysis methods. Sometimes the comparison may be based on proportions, and use categorical data analysis methods. There are many possibilities. For our example from Step 1, the test statistic will be based on the comparison of the proportion of patients with HIV viral load (VL) less than (<) 50 copies/mL in each treatment group of our study sample. Many other important decisions go into designing the experiment besides selecting the test statistic. We also calculate the sample size, agree on the power of the study, and establish parameters like α and β (described below). After we generate a random study sample, we conduct the study and collect the data. The third step is to investigate the hypotheses stated in Step 1 and compare the groups. In our hypothetical example, we find that 75% of subjects in the OBT + New Drug group achieve VL <50 copies/mL compared to 35% of patients in the control group. This produces a p-value <0.0001. 
The p-value (the "p" stands for "probability") helps us decide whether the data from the random study sample supports the null hypothesis or the alternative hypothesis. The p-value is the probability that these results would occur if there was truly no difference between the groups -- that is, how likely the results would have been observed purely by chance. The closer the p-value is to 0, the greater the likelihood that the observed difference in viral load is real and not due to chance, and thus the more reason we have to reject the null hypothesis in favor of the alternative hypothesis. We look for a p-value of 0.05 or smaller. This represents a 5-in-100 probability -- a very small chance indeed!

The last step is to compare the p-value with α and interpret the finding. Alpha is called the significance level. As described above in the section on CI, it is the area outside of the confidence interval. It is most commonly defined as α=0.05. If the p-value is less than or equal to α, then the null hypothesis is rejected and we declare that a statistically significant finding has been observed. If the p-value is greater than α, then the null hypothesis is not rejected. Remember, our hypothetical example produced a p-value <0.0001. This is well below α=0.05, so we reject the null hypothesis and conclude that OBT + New Drug and OBT alone are different. We can even take it one step further and conclude that OBT + New Drug is better than OBT alone.

The results of a study are often described by both the p-value and the 95% CI. The p-value is a single number that guides whether or not to reject the null hypothesis. The 95% CI provides a range of plausible values for describing the underlying population.

Three more terms that we often hear or read about are called Type I error, Type II error, and power. They are inter-related and are important in the design stage and in the interpretation stage as well. One way to think about them is to consider the relationship between a smoke detector and a house fire. (Reference: Larry Gonick & Woollcott Smith; The Cartoon Guide to Statistics; 1993; pp. 151-152.)

|                     | Ho: no fire | Ha: fire |
| Accept Ho: no alarm | No error    | Type II  |
| Reject Ho: alarm    | Type I      | No error |

The purpose of the smoke detector, of course, is to warn us in case of a fire. However, it is possible to have a fire without an alarm, as well as an alarm without a fire. Those are situations or errors that we do not ideally want, but they are possible events nevertheless. So the "true state" can be either no fire (Ho) or house fire (Ha). Ideally, we want the alarm to alert us if there is a fire and we want the alarm to remain silent when there is no fire.

If we have an alarm without a fire, then a Type I error has been committed. This corresponds to α, which is the probability of claiming a difference (rejecting Ho) when no difference truly exists. Alpha is normally pre-set to 0.05. In other words, we accept a 5% chance of a "false alarm." If we have a fire but it does not cause an alarm, then a Type II error has been committed. This corresponds to beta, β, which is the probability of missing a difference (not rejecting Ho) when one truly exists. Beta is normally pre-set to 10% or 20%. In other words, we accept a 10% or 20% chance of a "failed alarm." Power is defined as 1-β, which is the probability that the alarm sounds when there really is a fire -- that is, the probability of detecting a true difference when one exists. It is normally pre-set to 80% or 90%. It controls the probability of observing a true difference, or a "true alarm."
In other words, with power=80%, we accept that eight trials out of 10 will correctly declare a true difference and that two trials out of 10 will incorrectly miss a true difference. β is a risk we want to minimize as much as possible, but minimizing it comes with a price: a larger study, plus more time to recruit subjects, measure, and report. A p-value greater than α=0.05 could be non-significant because there is truly no difference between the groups. Or it could be non-significant because the study is not large enough to detect a true underlying difference.

Determining the optimal sample size for a study requires a great deal of thought in the beginning at the planning stage. A sample size that is unnecessarily large is a waste of resources. But a sample size that is too small has a higher likelihood of not representing the underlying population and consequently missing a "true alarm." A small study has a wider confidence interval because its standard error is large -- that is, its estimate is less precise. As we said above in Building Block #2, when the sample size gets larger, the variability gets smaller and we get a more precise measurement of the truth. The optimal sample size depends on all of the various assumptions that go into its calculation. For instance, to plan a superiority study as in Step 2 above, we need to make decisions/assumptions on the following parameters: α (generally 0.05), whether the hypothesis is one-sided or two-sided (generally two-sided), power (generally 80-90%), the response rate in the test arm, and the response rate in the control arm. These assumptions are directly tied to the study being designed -- so different types of studies require different sets of information for the sample size calculation. Changing any one of the decisions/assumptions will change the sample size calculation.

Understanding the basics of statistics is helpful in evaluating the messages that arise out of research. Good research follows clearly articulated steps and serious planning. The goal of research is to answer a question. In order to do so, it comes down to establishing the right study design, the right endpoints, and the right hypotheses. In conclusion, study designs are chosen depending on the questions that are being studied. Study endpoints are selected according to the hypothesis under investigation and the study population being enrolled. And study interpretations depend on the hypotheses being tested. Statistics can help weigh the evidence and draw conclusions from the data.

The Median, the Mean, and the Mode

Before you can begin to understand statistics, there are four terms you will need to fully understand. The first term, "average," is something we have been familiar with from a very early age, when we start analyzing our marks on report cards. We add together all of our test results and then divide by the total number of marks there are. We often call it the average. However, statistically it's the mean!

The median is the "middle value" in your list. When the number of values in the list is odd, the median is the middle entry after sorting the list into increasing order. When the number of values in the list is even, the median is equal to the sum of the two middle numbers (after sorting the list into increasing order) divided by two. Thus, remember to line up your values; the middle number is the median! Be sure to remember the odd and even rule.

The mode in a list of numbers is the value that occurs most frequently.
A trick to remembering this one is to remember that mode starts with the same first two letters that most does. Most frequently -- mode. You'll never forget that one! It is important to note that there can be more than one mode. If no number occurs more than once in the set, then there is no mode for that set of numbers. Occasionally in statistics you'll be asked for the "range" in a set of numbers. The range is simply the smallest number subtracted from the largest number in your set. Thus, if your set is 9, 3, 44, 15, and 6, the range would be 44-3=41. Your range is 41. A natural progression once these terms are understood is the concept of probability. Probability is the chance of an event happening and is usually expressed as a fraction. But that's another topic!

Amy Cutrell resides in Chapel Hill, NC, and has worked at GlaxoSmithKline for twenty years. She received an MS in biostatistics from the UNC School of Public Health.
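To make the building blocks above concrete, here is a short Python sketch (standard library only) that computes the standard deviation, standard error, and 95% CI for a small sample, and a two-proportion comparison like the OBT plus New Drug example. The blood-pressure values and the group size of 100 patients per arm are made up for illustration; the article does not give the raw data.

```python
import math
import statistics

# --- Standard deviation, standard error, 95% CI for a small sample ---
sample = [118, 124, 131, 120, 115, 129, 122, 126, 119, 127]  # hypothetical mmHg values
n = len(sample)
mean = statistics.mean(sample)
sd = statistics.stdev(sample)           # sample standard deviation
se = sd / math.sqrt(n)                  # standard error = SD / sqrt(sample size)
ci_low, ci_high = mean - 1.96 * se, mean + 1.96 * se   # approximate 95% CI
print(f"mean={mean:.1f}  SD={sd:.1f}  SE={se:.2f}  95% CI=({ci_low:.1f}, {ci_high:.1f})")

# --- Two-proportion comparison like the OBT +/- New Drug example ---
def two_proportion_p_value(x1, n1, x2, n2):
    """Two-sided p-value for H0: the two response proportions are equal."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se_diff = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se_diff
    return math.erfc(abs(z) / math.sqrt(2))   # 2 * P(Z > |z|)

# Assume 100 patients per arm: 75 vs. 35 responders (VL < 50 copies/mL)
p = two_proportion_p_value(75, 100, 35, 100)
print(f"p-value = {p:.2g}")             # far below alpha = 0.05
```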
http://www.thebody.com/content/art49491.html?ts=pf
Copyright July 1, 2000

Contents:
1. Systematic versus random errors
2. Determining Random Errors (a) Instrument Limit of Error, least count (b) Estimation (c) Average Deviation (d) Conflicts (e) Standard Error in the Mean
3. What does uncertainty tell me? Range of possible values
4. Relative and Absolute error
5. Propagation of errors (a) add/subtract (b) multiply/divide (c) powers (d) mixtures of +-*/ (e) other functions
6. Rounding answers properly
7. Significant figures
8. Problems to try
9. Glossary of terms (all terms that are bold face and underlined)

In this manual there will be problems for you to try; they have answers. There are also worked examples.

1. Systematic and random errors.
2. Determining random errors.
3. What is the range of possible values?
4. Relative and Absolute Errors

5. Propagation of Errors, Basic Rules

Suppose two measured quantities x and y have uncertainties, Dx and Dy, determined by procedures described in previous sections: we would report (x ± Dx) and (y ± Dy). From the measured quantities a new quantity, z, is calculated from x and y. What is the uncertainty, Dz, in z? For the purposes of this course we will use a simplified version of the proper statistical treatment. The formulas for a full statistical treatment (using standard deviations) will also be given. The guiding principle in all cases is to consider the most pessimistic situation. Full explanations are covered in statistics courses.

The examples included in this section also show the proper rounding of answers, which is covered in more detail in Section 6. The examples use the propagation of errors using average deviations.

(a) Addition and Subtraction: z = x + y or z = x - y

Derivation: We will assume that the uncertainties combine so as to make z as far from its true value as possible. Using average deviations, Dz = |Dx| + |Dy| in both cases. With more than two numbers added or subtracted we continue to add the uncertainties.

Using simpler average errors: Dz = |Dx| + |Dy| (Eq. 1a)
Using standard deviations: Dz = sqrt((Dx)^2 + (Dy)^2) (Eq. 1b)

Example: w = (4.52 ± 0.02) cm, x = (2.0 ± 0.2) cm, y = (3.0 ± 0.6) cm. Find z = x + y - w and its uncertainty.
z = x + y - w = 2.0 + 3.0 - 4.5 = 0.5 cm
Dz = Dx + Dy + Dw = 0.2 + 0.6 + 0.02 = 0.8 cm, so z = (0.5 ± 0.8) cm.
Notice that we round the uncertainty to one significant figure and round the answer to match.

For multiplication by an exact number, multiply the uncertainty by the same exact number.

Example: The radius of a circle is x = (3.0 ± 0.2) cm. Find the circumference and its uncertainty.
C = 2πx = 18.850 cm
DC = 2π Dx = 1.257 cm (The factors of 2 and π are exact.)
C = (18.8 ± 1.3) cm
We round the uncertainty to two figures since it starts with a 1, and round the answer to match.

Example: x = (2.0 ± 0.2) cm, y = (3.0 ± 0.6) cm. Find z = x - 2y and its uncertainty.
z = x - 2y = 2.0 - 2(3.0) = -4.0 cm
Dz = Dx + 2 Dy = 0.2 + 1.2 = 1.4 cm
So z = (-4.0 ± 1.4) cm. Using Eq. 1b, Dz = sqrt((0.2)^2 + (1.2)^2) = 1.2 cm, so z = (-4.0 ± 1.2) cm. The 0 after the decimal point in 4.0 is significant and must be written in the answer. The uncertainty in this case starts with a 1 and is kept to two significant figures. (More on rounding in Section 7.)

(b) Multiplication and Division: z = x y or z = x/y

Derivation: We can derive the relation for multiplication by taking the largest values for x and y, that is,
z + Dz = (x + Dx)(y + Dy) = xy + x Dy + y Dx + Dx Dy
Usually Dx << x and Dy << y, so that the last term is much smaller than the other terms and can be neglected.
Since z = xy, Dz = y Dx + x Dy, which we write more compactly by forming the relative error, that is, the ratio Dz/z, namely Dz/z = Dx/x + Dy/y. The same rule holds for multiplication, division, or combinations, namely add all the relative errors to get the relative error in the result.

Using simpler average errors: Dz/z = |Dx/x| + |Dy/y| (Eq. 2a)
Using standard deviations: Dz/z = sqrt((Dx/x)^2 + (Dy/y)^2) (Eq. 2b)

Example: w = (4.52 ± 0.02) cm, x = (2.0 ± 0.2) cm. Find z = w x and its uncertainty.
z = w x = (4.52)(2.0) = 9.04 cm^2
So Dz = 0.1044 (9.04 cm^2) = 0.944 cm^2, which we round to 0.9, so z = (9.0 ± 0.9) cm^2. Using Eq. 2b we get Dz = 0.905 and z = (9.0 ± 0.9) cm^2. The uncertainty is rounded to one significant figure and the result is rounded to match. We write 9.0 rather than 9 since the 0 is significant.

Example: x = (2.0 ± 0.2) cm, y = (3.0 ± 0.6) sec. Find z = x/y.
z = 2.0/3.0 = 0.6667 cm/sec
So Dz = 0.3 (0.6667 cm/sec) = 0.2 cm/sec, and z = (0.7 ± 0.2) cm/sec. Using Eq. 2b we get z = (0.67 ± 0.15) cm/sec. Note that in this case we round off our answer to have no more decimal places than our uncertainty.

(c) Products of powers: z = x^m y^n ... (a product of powers of the measured quantities). The results in this case are:
Using simpler average errors: Dz/z = |m| Dx/x + |n| Dy/y + ... (Eq. 3a)
Using standard deviations: Dz/z = sqrt((m Dx/x)^2 + (n Dy/y)^2 + ...) (Eq. 3b)

Example: w = (4.52 ± 0.02) cm, A = (2.0 ± 0.2), y = (3.0 ± 0.6) cm. Find z and its uncertainty. The second relative error, (Dy/y), is multiplied by 2 because the power of y is 2. The third relative error, (DA/A), is multiplied by 0.5 since a square root is a power of one half.
So Dz = 0.49 (28.638) = 14.03, which we round to 14, so z = (29 ± 14). Using Eq. 3b, z = (29 ± 12). Because the uncertainty begins with a 1, we keep two significant figures and round the answer to match.

(d) Mixtures of multiplication, division, addition, subtraction, and powers.

If z is a function which involves several terms added or subtracted we must apply the above rules carefully. This is best explained by means of an example.

Example: w = (4.52 ± 0.02) cm, x = (2.0 ± 0.2) cm, y = (3.0 ± 0.6) cm. Find z = w x + y^2.
z = wx + y^2 = 18.0
First we compute v = wx as in the example in (b) to get v = (9.0 ± 0.9) cm^2. Next we compute D(y^2) = 2 (Dy/y) y^2 = 2 (0.6/3.0)(9.0) = 3.6. Finally, we compute Dz = Dv + D(y^2) = 0.9 + 3.6 = 4.5, rounding to 4. Hence z = (18 ± 4) cm^2.
We have v = wx = (9.0 ± 0.9) cm^2. The calculation of the uncertainty in y^2 is the same as that shown above. Then from Eq. 1b, Dz = sqrt((0.9)^2 + (3.6)^2) = 3.7, and z = (18 ± 4) cm^2.

(e) Other Functions: e.g., z = sin x. The simple approach.

For other functions of our variables such as sin(x) we will not give formulae. However, you can estimate the error in z = sin(x) as the difference between the largest possible value and the average value, D(sin x) = sin(x + Dx) - sin(x), and use similar techniques for other functions.

Example: Consider S = x cos(θ) for x = (2.0 ± 0.2) cm, θ = (53 ± 2)°. Find S and its uncertainty.
S = (2.0 cm) cos 53° = 1.204 cm
To get the largest possible value of S we would make x larger, (x + Dx) = 2.2 cm, and θ smaller, (θ - Dθ) = 51°. The largest value of S, namely (S + DS), is (S + DS) = (2.2 cm) cos 51° = 1.385 cm. The difference between these numbers is DS = 1.385 - 1.204 = 0.181 cm, which we round to 0.18 cm. Then S = (1.20 ± 0.18) cm.

(f) Other Functions: Getting formulas using partial derivatives

The general method of getting formulas for propagating errors involves the total differential of a function. Suppose that z = f(w, x, y, ...) where the variables w, x, y, etc. must be independent variables! The total differential is then
dz = (∂z/∂w) dw + (∂z/∂x) dx + (∂z/∂y) dy + ...
We treat dw = Dw as the error in w, and likewise for the other differentials, dz, dx, dy, etc.
The numerical values of the partial derivatives are evaluated by using the average values of w, x, y, etc. The general results are:
Using simpler average errors: Dz = |∂z/∂w| Dw + |∂z/∂x| Dx + |∂z/∂y| Dy + ...
Using standard deviations: Dz = sqrt((∂z/∂w · Dw)^2 + (∂z/∂x · Dx)^2 + (∂z/∂y · Dy)^2 + ...)

Example: Consider S = x cos(θ) for x = (2.0 ± 0.2) cm, θ = (53 ± 2)° = (0.9250 ± 0.0349) rad. Find S and its uncertainty. Note: the uncertainty in angle must be in radians!
S = (2.0 cm) cos 53° = 1.204 cm
Hence S = (1.20 ± 0.18) cm (using the average deviation approach) or S = (1.20 ± 0.13) cm (using the standard deviation approach).

6. Rounding off answers in regular and scientific notation.

In the above examples we were careful to round the answers to an appropriate number of significant figures. The uncertainty should be rounded off to one or two significant figures. If the leading figure in the uncertainty is a 1, we use two significant figures, otherwise we use one significant figure. Then the answer should be rounded to match.

Example: Round off z = 12.0349 cm and Dz = 0.153 cm. Since Dz begins with a 1, we round off Dz to two significant figures: Dz = 0.15 cm. Hence, round z to have the same number of decimal places: z = (12.03 ± 0.15) cm.

When the answer is given in scientific notation, the uncertainty should be given in scientific notation with the same power of ten. Thus, if z = 1.43 x 10^6 s and Dz = 2 x 10^4 s, we should write our answer as z = (1.43 ± 0.02) x 10^6 s. This notation makes the range of values most easily understood. The following is technically correct, but is hard to understand at a glance: z = (1.43 x 10^6 ± 2 x 10^4) s. Don't write like this!

Problem: Express the following in proper rounded form.
(i) m = 14.34506 grams, Dm = 0.04251 grams.
(ii) t = 0.02346 sec, Dt = 1.623 x 10^-3 sec.
(iii) M = 7.35 x 10^22 kg, DM = 2.6 x 10^20 kg.
(iv) m = 9.11 x 10^-31 kg, Dm = 2.2345 x 10^-33 kg.
Answer

7. Significant Figures

The rules for propagation of errors hold true for cases when we are in the lab, but doing propagation of errors is time consuming. The rules for significant figures allow a much quicker method to get results that are approximately correct even when we have no uncertainty values.

A significant figure is any digit 1 to 9 and any zero which is not a place holder. Thus, in 1.350 there are 4 significant figures since the zero is not needed to make sense of the number. In a number like 0.00320 there are 3 significant figures -- the first three zeros are just place holders. However the number 1350 is ambiguous. You cannot tell if there are 3 significant figures -- the 0 is only used to hold the units place -- or if there are 4 significant figures and the zero in the units place was actually measured to be zero.

How do we resolve ambiguities that arise with zeros when we need to use zero as a place holder as well as a significant figure? Suppose we measure a length to three significant figures as 8000 cm. Written this way we cannot tell if there are 1, 2, 3, or 4 significant figures. To make the number of significant figures apparent we use scientific notation, 8 x 10^3 cm (which has one significant figure), or 8.00 x 10^3 cm (which has three significant figures), or whatever is correct under the circumstances.

We start then with numbers each with their own number of significant figures and compute a new quantity. How many significant figures should be in the final answer? In doing running computations we maintain numbers to many figures, but we must report the answer only to the proper number of significant figures.

In the case of addition and subtraction we can best explain with an example. Suppose one object is measured to have a mass of 9.9 gm and a second object is measured on a different balance to have a mass of 0.3163 gm.
What is the total mass? We write the numbers with question marks at places where we lack information. Thus 9.9???? gm and 0.3163? gm. Adding them with the decimal points lined up we see 10.2???? = 10.2 gm. In the case of multiplication or division we can use the same idea of unknown digits. Thus the product of 3.413? and 2.3? can be written in long hand as 7.8????? = 7.8 The short rule for multiplication and division is that the answer will contain a number of significant figures equal to the number of significant figures in the entering number having the least number of significant figures. In the above example 2.3 had 2 significant figures while 3.413 had 4, so the answer is given to 2 significant figures. It is important to keep these concepts in mind as you use calculators with 8 or 10 digit displays if you are to avoid mistakes in your answers and to avoid the wrath of physics instructors everywhere. A good procedure to use is to use use all digits (significant or not) throughout calculations, and only round off the answers to appropriate "sig fig." Problem: How many significant figures are there in each of the following? Answer (i) 0.00042 (ii) 0.14700 (ii) 4.2 x (iv) -154.090 x 8. Problems on Uncertainties and Error Propagation. Try the following problems to see if you understand the details of this part . The answers are at the end. (a) Find the average and the average deviation of the following measurements of a mass. 4.32, 4.35, 4.31, 4.36, 4.37, 4.34 grams. (b) Express the following results in proper rounded form, x ± Dx. (i) m = 14.34506 grams, Dm = 0.04251 grams. (ii) t = 0.02346 sec, Dt = 1.623 x sec. (iii) M = 7.35 x kg DM = 2.6 x kg. (iv) m = 9.11 x kg Dm = 2.2345 x kg (c) Are the following numbers equal within the expected range of values? (i) (3.42 ± 0.04) m/s and 3.48 m/s? (ii) (13.106 ± 0.014) grams and 13.206 grams? (iii) (2.95 ± 0.03) x m/s and 3.00 x m/s (d) Calculate z and Dz for each of the following cases. (i) z = (x - 2.5 y + w) for x = (4.72 ± 0.12) m, y = (4.4 ± 0.2) m, w = (15.63 ± 0.16) m. (ii) z = (w x/y) for w = (14.42 ± 0.03) m/, x = (3.61 ± 0.18) m, y = (650 ± 20) m/s. (iii) z = for x = (3.55 ± 0.15) m. (iv) z = v (xy + w) with v = (0.644 ± 0.004) m, x = (3.42 ± 0.06) m, y = (5.00 ± 0.12) m, w = (12.13 ± 0.08). (v) z = A sin y for A = (1.602 ± 0.007) m/s, y = (0.774 ± 0.003) rad. (e) How many significant figures are there in each of the following? (i) 0.00042 (ii) 0.14700 (ii) 4.2 x (iv) -154.090 x 10-27 (f) I measure a length with a meter stick which has a least count of 1 mm I measure the length 5 times with results in mm of 123, 123, 124, 123, 123 mm. What is the average length and the uncertainty in length? Answers for Section 8: (a) (4.342 ± 0.018) grams (b) i) (14.34 ± 0.04) grams ii) (0.0235 ± 0.0016) sec or (2.35 ± 0.16) x sec iii) (7.35 ± 0.03) x kg iv) (9.11 ± 0.02) x kg (c) Yes for (i) and (iii), no for (ii) (d) i) (9.4 ± 0.8) m ii) (0.080 ± 0.007) m/s iii) (45 ± 6) iv) 18.8 ± 0.6) v) (1.120 ± 0.008 m/s (e) i) 2 ii) 5 iii) 2 iv) 6 (f) (123 ± 1) mm (I used the ILE = least count since it is larger than the average deviation.) 9. Glossary of Important Terms |Absolute error||The actual error in a quantity, having the same units as the c = (2.95 ± 0.07) m/s, the absolute error is 0.07 m/s. See Relative Error. |Accuracy||How close a measurement is to being correct. For gravitational acceleration near the earth, g = 9.7 m/s2 is more accurate than g = 9.532706 m/s2. 
See Precision.| |Average||When several measurements of a quantity are made, the sum of the measurements divided by the number of measurements.| |Average Deviation||The average of the absolute value of the differences between each measurement and the average. See Standard Deviation.| |Confidence Level||The fraction of measurements that can be expected to lie within a given range. Thus if m = (15.34 ± 0.18) g, at 67% confidence level, 67% of the measurements lie within (15.34 - 0.18) g and (15.34 + 0.18) g. If we use 2 deviations (±0.36 here) we have a 95% confidence level.| |Deviation||A measure of range of measurements from the average. Also called error oruncertainty.| |Error||A measure of range of measurements from the average. Also called deviation or uncertainty.| |Estimated Uncertainty||An uncertainty estimated by the observer based on his or her knowledge of the experiment and the equipment. This is in contrast to ILE, standard deviation or average deviation.| |Gaussian Distribution||The familiar bell-shaped distribution. Simple statistics assumes that random errors are distributed in this distribution. Also called Normal Distribution.| |Independent Variables||Changing the value of one variable has no effect on any of the other variables. Propagation of errors assumes that all variables are independent.| of Error (ILE) |The smallest reading that an observer can make from an instrument. This is generally smaller than the Least Count.| |Least Count||The size of the smallest division on a scale. Typically the ILE equals the least count or 1/2 or 1/5 of the least count.| |Normal Distribution||The familiar bell-shaped distribution. Simple statistics assumes that random errors are distributed in this distribution. Also called Gaussian Distribution.| |Precision||The number of significant figures in a measurement. For gravitational acceleration near the earth, g = 9.532706 m/s2 is more precise than g = 9.7 m/s2. Greater precision does not mean greater accuracy! See Accuracy.| |Propagation of Errors||Given independent variables each with an uncertainty, the method of determining an uncertainty in a function of these variables.| |Random Error||Deviations from the "true value" can be equally likely to be higher or lower than the true value. See Systematic Error.| |Range of Possible |Measurements give an average value, <x> and an uncertainty, Dx. At the 67% confidence level the range of possible true values is from <x> - Dx to <x> + Dx. See Confidence Level .| |Relative Error||The ratio of absolute error to the average, Dx/x. This may also be called percentage error or fractional uncertainty. See Absolute Error.| |Significant Figures||All non-zero digits plus zeros that do not just hold a place before or after a decimal point.| |Standard Deviation||The statistical measure of uncertainty. See Average Deviation.| in the Mean |An advanced statistical measure of the effect of large numbers of measurements on the range of values expected for the average (or mean).| |Systematic Error||A situation where all measurements fall above or below the "true value". Recognizing and correcting systematic errors is very difficult.| |Uncertainty||A measure of range of measurements from the average. Also called deviation or error.| Send any comments or corrections to Vern Lindberg.
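As a closing numerical cross-check of the propagation rules in Section 5, here is a short Python sketch of the simpler average-error rules (Eq. 1a and Eq. 2a). The helper names are our own, not part of this manual; the printed results reproduce the worked examples in Sections 5(a) and 5(b).

```python
def add_sub(*pairs):
    """z = x + y - ... : absolute uncertainties add (Eq. 1a)."""
    value = sum(v for v, _ in pairs)          # caller supplies signed values
    dz = sum(abs(d) for _, d in pairs)
    return value, dz

def mul_div(z_value, *pairs):
    """z = x*y or x/y : relative uncertainties add (Eq. 2a)."""
    rel = sum(abs(d / v) for v, d in pairs)
    return z_value, abs(z_value) * rel

# Example from 5(a): z = x + y - w
z, dz = add_sub((2.0, 0.2), (3.0, 0.6), (-4.52, 0.02))
print(f"z = ({z:.1f} +/- {dz:.1f}) cm")        # (0.5 +/- 0.8) cm

# Example from 5(b): z = w * x
w, dw = 4.52, 0.02
x, dx = 2.0, 0.2
z, dz = mul_div(w * x, (w, dw), (x, dx))
print(f"z = ({z:.1f} +/- {dz:.1f}) cm^2")      # (9.0 +/- 0.9) cm^2
```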
http://www.rit.edu/cos/uphysics/uncertainties/Uncertaintiespart2.html
Friction of RIGID bodies can be approximated by 3 rules:
1. The force of friction is directly proportional to the applied load. (Amontons' 1st Law)
2. The force of friction is independent of the apparent area of contact. (Amontons' 2nd Law)
3. Kinetic friction is independent of the sliding velocity. (Coulomb's Law)

Coulomb Friction
Coulomb friction (named after Charles-Augustin de Coulomb) is a model used to calculate the force of dry friction. It is an approximation to what happens in real life, according to the 3 rules in the summary box above. It is meant to be used on rigid bodies, since soft and flexible materials (like rubber tyres, for example) are more sensitive to the area of contact (see point 2 above). The definition is below:
F = μN
where F is the friction force, μ is the coefficient of friction (which depends on the materials), and N is the normal force exerted between the surfaces (which is equal to W in this diagram).
This formula is a rule of thumb giving an approximation of an extremely complicated physical interaction. In many cases, the relationship between normal force and frictional force is not exactly linear (the frictional force is not entirely independent of the contact area of the surfaces - especially so for soft materials like rubber). The Coulomb approximation works best for relatively hard, rigid materials.

Static Friction
When the two surfaces are not moving, the friction is slightly higher: F ≤ μs N, where μs is the coefficient of static friction. Motion cannot begin until the applied force is higher than the maximum static friction force, μs N.

Kinetic Friction
When the two surfaces are moving, the friction usually goes down a little: F = μk N, where μk is the coefficient of kinetic friction. The friction force always opposes the direction of motion and is considered to be constant. The force of friction is always exerted in a direction that opposes movement (for kinetic friction) or potential movement (for static friction) between the two surfaces.

Coefficients of Friction
The coefficient of friction, μ, is a dimensionless (scalar) value, so it has no units. (This is because it is a ratio between two forces.) The coefficient of friction depends on the materials used; for example, ice on steel has a low coefficient of friction, while rubber on pavement has a high coefficient of friction. Coefficients of friction range from near zero to greater than 1 – such as a soft rubber tyre on rough concrete. The coefficient of friction is measured experimentally, and cannot be found through calculations. Rougher surfaces tend to have higher effective values. Most dry materials in combination have friction coefficient values between 0.3 and 0.6. Values outside this range are rarer, but teflon, for example, can have a coefficient as low as 0.04. A value of zero would mean no friction at all, which is impossible in any practical sense – even magnetic levitation vehicles have drag. Rubber in contact with other surfaces can yield friction coefficients from 1 to 2.

The normal force
The normal force is the net force compressing two parallel surfaces together, and is always perpendicular to the surfaces. In the simple case of a mass resting on a horizontal surface, the only contribution to the normal force is the force due to gravity, so the normal force equals the weight: N = W = mg. If the object is on an inclined plane, the normal force is less, because less of the force of gravity is perpendicular to the face of the plane. Therefore, the normal force, and ultimately the frictional force, is determined using force components. Note: There may be forces other than gravity - like springs.
In a typical friction problem, we must first determine the normal force N and then multiply it by μ to give us the friction force F = μN.
Block on a ramp (top) and corresponding free body diagram of just the block (bottom).

Approximate coefficients of friction
The most slippery solid known, discovered in 1999, dubbed BAM (for the elements boron, aluminum, and magnesium), has an approximate coefficient of friction of 0.02, about half that of Teflon.

Angle of friction
Another way to define the friction is by the angle of the opposing force. The friction angle, φ, is defined by tan φ = F/N; since F = μN, this gives tan φ = μ.

Angle of repose
When a body is on an incline, there is a maximum angle that can be reached before it will begin to slide. That maximum angle is called the angle of repose. (Sometimes the angle of repose refers to the maximum slope of granular material instead. Never mind.) It is defined by tan θ = μs, where θ is the angle from horizontal and μs is the static coefficient of friction between the objects. The angle of repose occurs when tan θ = μs. The block will not slide as long as the friction force is greater than the component of the weight parallel to the incline. At the point where motion begins, we can use that angle to calculate the coefficient of friction.
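A minimal Python sketch of the model above, applied to the block-on-a-ramp problem; the mass, angle, and coefficient values are made up for illustration.

```python
import math

g = 9.81          # m/s^2

def slides(mass_kg, incline_deg, mu_static):
    """Return True if a block on an incline overcomes static friction."""
    theta = math.radians(incline_deg)
    normal = mass_kg * g * math.cos(theta)      # N, perpendicular to the plane
    parallel = mass_kg * g * math.sin(theta)    # N, down the slope
    max_friction = mu_static * normal           # Coulomb model: F <= mu_s * N
    return parallel > max_friction

mu_s = 0.5
angle_of_repose = math.degrees(math.atan(mu_s))   # tan(theta) = mu_s
print(f"angle of repose = {angle_of_repose:.1f} deg")   # about 26.6 deg

print(slides(2.0, 20, mu_s))   # False: 20 deg is below the angle of repose
print(slides(2.0, 30, mu_s))   # True:  30 deg is above it
```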
http://www.ejsong.com/mdme/memmods/MEM23041A/dynamics/friction/Friction.html
This unit is around discovering and applying the rule for area of a triangle (area equals half base times height). Students will practice multiplication and division strategies as they calculate areas. recognise that two identical right angled triangles can be joined to make a rectangle recognise that a triangle has half the area of a rectangle with the same base and height lengths apply the rule ‘area of triangle equals half base times height’ Area is a two-dimensional concept related to the geometric concept of an enclosed region. It is defined in the maths curriculum as the size of a surface expressed as a number of square units. Investigations of the size of an area should begin with comparisons between different surfaces and progress to the use of non-standard, and then standard, units. The use of formulae to calculate the areas of common polygons is the final stage of the learning sequence. When the students are able to measure efficiently and effectively using standard units, their leaning experiences can be directed to situations that encourage them to "discover" measurement formula. In area work, the students may realise as they count squares to find the area of a rectangle, that it would be quicker to find the number of squares in one row and multiply this by the number of rows. In the same way, in this unit the students find a formula for calculating the area of a triangle by seeing it as half of a rectangle. This unit is also designed to allow students to practice their multiplicative strategies as they calculate the area of triangles.In particular it reinforces the fact that the order in which they multiply and divide is not important (commutative property of multiplication). In this session students revise the rule for the area of a rectangle. This session may not be required if this unit is being taught following a unit on area of rectangles. - Draw an unlabelled rectangle on the board. - Ask students to tell you what its area is. Some discussion may be required to ensure that all students are clear about what ‘area’ means – it is likely that at least some will confuse it with perimeter. - When they tell you they can’t work out the area, ask them what they would need to know to find its area. - If they say “two sides” then label two opposite sides and see what they say. It is important that students realise that they need to know the base and the height, or two sides at right angles to each other, to be able to find the area of a rectangle. - If students can correctly work out the area of the rectangle from its base and height measurements ask them to describe their method and explain why it works. If nobody can give an explanation then a brief discussion is in order. Drawing a grid to link the concept of area with the array model of multiplication may help to clarify students’ understanding. - Students at this level should have their tables to 10 as known facts and should be able to use a variety of methods to ‘find’ the answer to 2-digit multiplication problems. Finding areas of rectangles is a good opportunity to practice and reinforce this. How did you work out the area of the rectangle? How did you work out 5x7? [should be known fact, but this question is applicable to larger rectangles.] - Ensure that students have a clear understanding that the area of a rectangle equals its base times its height. 
Ensure that correct units are used; if the lengths are labelled in centimetres then the area has to be given in square centimetres (cm2), if the lengths are not given units then the area should be given in square units (units2). In this session students divide rectangles diagonally to produce right angled triangles. They realise that the two triangles formed are equal in area. - Ask students to cut a rectangle out of grid paper, ensuring that its base and height are whole square amounts. (You may want to put limits on the size of the rectangle depending on the ability of your students – eg. No more than 10 squares along each side.) - Ask students to work out the area of their rectangle (answers to be given in units2). Students should be encouraged to work out the area mentally and explain their method to a partner. - Now ask students to rule a line from one corner of their rectangle to the diagonally opposite corner. Ensure that they can see the two right angled triangles they have created. - Ask them to find the area of each triangle. They may need to count all the squares within the triangles. What do you notice about the areas of the two triangles? Students should notice that the two triangles have the same areas and that therefore the area of each is half the area of the rectangle. If they had worked out the areas by counting, ask them to make a calculation to check their counting. What is the area of each triangle exactly? How could you work out the areas of the triangles? [Halve the area of the rectangle] What numbers would you need to multiply or divide? [Either base times height times half or base times height divided by two.] What would be the easiest strategy for you? [Initially students are likely to mistakenly think that they have to multiply the base and height before dividing - often a better strategy is to halve one of these factors before multiplying.] - This can be reinforced by having students cut along the diagonal and rotate one triangle to sit on top of the other. They will then see that not only do the triangles have the same area, but they are identical triangles. Will this work for any rectangle you can make? - Students should be given the opportunity to experiment with a few different rectangles so that they can see that the rule holds true for any rectangle. In this session students draw right angled triangles, complete the rectangle and calculate the area of the original triangle. - Draw a right angled triangle on the board; label its base 10cm and its height 5cm. - Ask students whether they can tell you its area (insist on units – cm2). - Discuss suggestions for how you could work out its area. Do we have enough information to work out the area? Could we work out the area of another shape that would help? Remember what we discovered about rectangles last maths lesson – could that help? - Ask students to draw a triangle on their grid paper so that two of its sides are along lines of the grid paper. (You may want to put limits on the size of the triangle depending on the ability of your students – eg. No more than 10 squares along each side.) - Now get them to draw the matching triangle that makes a rectangle. Draw the triangle to make a rectangle on your diagram on the board to illustrate. - Now challenge students to work out the area of their completed rectangle and then that of the triangle. - Get each student to complete several triangles to ensure they are doing it correctly. 
- Return together as a class to discuss: Does this work for every right angled triangle? Can you describe a rule for the area of a right angled triangle? Are there any clever tricks to make the maths easier? [Divide one of the sides by two before multiplying] - Get students to record a rule for the area of right angled triangles in their own words. Hopefully the students will see that the area of any right angled triangle is equal to the area of the rectangle with the same base and height divided by two. Get them to record the statement “For right angled triangles, area equals half base times height.” Ensure that they can see that this means the same thing. In this session students draw non right angled triangles, and experiment with finding their area. - Draw an unlabelled non right angled triangle on the board. Draw one of its sides horizontal. - Ask students what information they will need to be able to work out its area. It is likely that students will try to apply their learning from the previous session and tell you that they need to know the length of two sides. - Ask students to draw a triangle of their own on grid paper, so that all three corners are on grid intersections, but only one of its sides is along a line of the grid. - Now challenge them to find its area by measuring two sides. - If they apply their rule “area equals half base times height”, ask them to draw the rectangle to illustrate. They will be unable to. - Bring the class back together and discuss why it does not work. Hopefully at least one of your students will recognise that in a non-right angled triangle the height is not equal to either of the sides. Discuss this then send students to try to find a rectangle that will work for their triangle. - When most students have identified that the height of the rectangle needs to be at right angles to the base since rectangles have all right angles bring the class back together. - Draw the diagram below on the board and ask students whether the rectangle is twice the size of the triangle. - If students cannot see that the two smaller triangles join to make the larger triangle, add an extra line as illustrated below. What is the area of the left hand rectangle? What is the area of the left hand part of the right angled triangle? What is the area of the right hand rectangle? What is the area of the right hand part of the right angled triangle? - Give students time to draw some of their own triangles with only one side along a grid line, and work out their area. Ensure that they are including units in their answers. - Return together as a class to discuss: Does this work for every triangle? Can you describe a rule for the area of any triangle? - Get students to record a rule for the area of right angled triangles in their own words. Hopefully the students will see that the area of any triangle is equal to the area of the rectangle with the same base and height divided by two. Get them to record the statement “For all triangles, area equals half base times height.” Ensure that they can see that this means the same thing. In this session students state a rule for the area of a triangle and use it to find the area of some triangles. - Begin the lesson by asking students to tell you the rule for the area of a triangle. All going well they should be able to answer easily! - Have students complete the triangle area worksheet individually (Copymaster 1). This should give you a good idea of any students who are still struggling with the concepts. 
- Challenge students to find triangles in the classroom and make the measurements required to calculate their area. Provide measuring tapes and rulers as required. Insist on correct units. - Ask students to challenge each other with triangles to calculate the areas of. Compare answers and strategies for measuring and calculating and discuss differences.
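For teachers who want a quick way to check worksheet or classroom answers, here is a small Python sketch of the rule developed in this unit; the sample dimensions are arbitrary.

```python
# Check that half of base x height equals half the area of the enclosing rectangle.
def triangle_area(base, height):
    return 0.5 * base * height

for base, height in [(8, 5), (10, 7), (6, 6)]:
    rectangle = base * height
    triangle = triangle_area(base, height)
    print(f"base {base}, height {height}: rectangle {rectangle}, triangle {triangle}")
```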
http://www.nzmaths.co.nz/resource/triangles?parent_node=
Pure resistive AC circuit: resistor voltage and current are in phase. If we were to plot the current and voltage for a very simple AC circuit consisting of a source and a resistor (Figure above), it would look something like this: (Figure below) Voltage and current “in phase” for resistive circuit. Because the resistor simply and directly resists the flow of electrons at all periods of time, the waveform for the voltage drop across the resistor is exactly in phase with the waveform for the current through it. We can look at any point in time along the horizontal axis of the plot and compare those values of current and voltage with each other (any “snapshot” look at the values of a wave are referred to as instantaneous values, meaning the values at that instant in time). When the instantaneous value for current is zero, the instantaneous voltage across the resistor is also zero. Likewise, at the moment in time where the current through the resistor is at its positive peak, the voltage across the resistor is also at its positive peak, and so on. At any given point in time along the waves, Ohm's Law holds true for the instantaneous values of voltage and current. We can also calculate the power dissipated by this resistor, and plot those values on the same graph: (Figure below) Instantaneous AC power in a pure resistive circuit is always positive. Note that the power is never a negative value. When the current is positive (above the line), the voltage is also positive, resulting in a power (p=ie) of a positive value. Conversely, when the current is negative (below the line), the voltage is also negative, which results in a positive value for power (a negative number multiplied by a negative number equals a positive number). This consistent “polarity” of power tells us that the resistor is always dissipating power, taking it from the source and releasing it in the form of heat energy. Whether the current is positive or negative, a resistor still dissipates energy. Inductors do not behave the same as resistors. Whereas resistors simply oppose the flow of electrons through them (by dropping a voltage directly proportional to the current), inductors oppose changes in current through them, by dropping a voltage directly proportional to the rate of change of current. In accordance with Lenz's Law, this induced voltage is always of such a polarity as to try to maintain current at its present value. That is, if current is increasing in magnitude, the induced voltage will “push against” the electron flow; if current is decreasing, the polarity will reverse and “push with” the electron flow to oppose the decrease. This opposition to current change is called reactance, rather than resistance. Expressed mathematically, the relationship between the voltage dropped across the inductor and rate of current change through the inductor is as such: The expression di/dt is one from calculus, meaning the rate of change of instantaneous current (i) over time, in amps per second. The inductance (L) is in Henrys, and the instantaneous voltage (e), of course, is in volts. Sometimes you will find the rate of instantaneous voltage expressed as “v” instead of “e” (v = L di/dt), but it means the exact same thing. To show what happens with alternating current, let's analyze a simple inductor circuit: (Figure below) Pure inductive circuit: Inductor current lags inductor voltage by 90o. 
If we were to plot the current and voltage for this very simple circuit, it would look something like this: (Figure below) Pure inductive circuit, waveforms. Remember, the voltage dropped across an inductor is a reaction against the change in current through it. Therefore, the instantaneous voltage is zero whenever the instantaneous current is at a peak (zero change, or level slope, on the current sine wave), and the instantaneous voltage is at a peak wherever the instantaneous current is at maximum change (the points of steepest slope on the current wave, where it crosses the zero line). This results in a voltage wave that is 90o out of phase with the current wave. Looking at the graph, the voltage wave seems to have a “head start” on the current wave; the voltage “leads” the current, and the current “lags” behind the voltage. (Figure below) Current lags voltage by 90o in a pure inductive circuit. Things get even more interesting when we plot the power for this circuit: (Figure below) In a pure inductive circuit, instantaneous power may be positive or negative Because instantaneous power is the product of the instantaneous voltage and the instantaneous current (p=ie), the power equals zero whenever the instantaneous current or voltage is zero. Whenever the instantaneous current and voltage are both positive (above the line), the power is positive. As with the resistor example, the power is also positive when the instantaneous current and voltage are both negative (below the line). However, because the current and voltage waves are 90o out of phase, there are times when one is positive while the other is negative, resulting in equally frequent occurrences of negative instantaneous power. But what does negative power mean? It means that the inductor is releasing power back to the circuit, while a positive power means that it is absorbing power from the circuit. Since the positive and negative power cycles are equal in magnitude and duration over time, the inductor releases just as much power back to the circuit as it absorbs over the span of a complete cycle. What this means in a practical sense is that the reactance of an inductor dissipates a net energy of zero, quite unlike the resistance of a resistor, which dissipates energy in the form of heat. Mind you, this is for perfect inductors only, which have no wire resistance. An inductor's opposition to change in current translates to an opposition to alternating current in general, which is by definition always changing in instantaneous magnitude and direction. This opposition to alternating current is similar to resistance, but different in that it always results in a phase shift between current and voltage, and it dissipates zero power. Because of the differences, it has a different name: reactance. Reactance to AC is expressed in ohms, just like resistance is, except that its mathematical symbol is X instead of R. To be specific, reactance associate with an inductor is usually symbolized by the capital letter X with a letter L as a subscript, like this: XL. Since inductors drop voltage in proportion to the rate of current change, they will drop more voltage for faster-changing currents, and less voltage for slower-changing currents. What this means is that reactance in ohms for any inductor is directly proportional to the frequency of the alternating current. 
The exact formula for determining reactance is XL = 2πfL. If we expose a 10 mH inductor to frequencies of 60, 120, and 2500 Hz, it will manifest the reactances shown in the table below.

Reactance of a 10 mH inductor:
| Frequency (Hertz) | Reactance (Ohms) |
| 60                | 3.7699           |
| 120               | 7.5398           |
| 2500              | 157.0796         |

In the reactance equation, the term "2πf" (everything on the right-hand side except the L) has a special meaning unto itself. It is the number of radians per second that the alternating current is "rotating" at, if you imagine one cycle of AC to represent a full circle's rotation. A radian is a unit of angular measurement: there are 2π radians in one full circle, just as there are 360° in a full circle. If the alternator producing the AC is a double-pole unit, it will produce one cycle for every full turn of shaft rotation, which is every 2π radians, or 360°. If this constant of 2π is multiplied by frequency in Hertz (cycles per second), the result will be a figure in radians per second, known as the angular velocity of the AC system.

Angular velocity may be represented by the expression 2πf, or it may be represented by its own symbol, the lower-case Greek letter Omega, which appears similar to our Roman lower-case "w": ω. Thus, the reactance formula XL = 2πfL could also be written as XL = ωL.

It must be understood that this "angular velocity" is an expression of how rapidly the AC waveforms are cycling, a full cycle being equal to 2π radians. It is not necessarily representative of the actual shaft speed of the alternator producing the AC. If the alternator has more than two poles, the angular velocity will be a multiple of the shaft speed. For this reason, ω is sometimes expressed in units of electrical radians per second rather than (plain) radians per second, so as to distinguish it from mechanical motion.

Any way we express the angular velocity of the system, it is apparent that it is directly proportional to reactance in an inductor. As the frequency (or alternator shaft speed) is increased in an AC system, an inductor will offer greater opposition to the passage of current, and vice versa. Alternating current in a simple inductive circuit is equal to the voltage (in volts) divided by the inductive reactance (in ohms), just as either alternating or direct current in a simple resistive circuit is equal to the voltage (in volts) divided by the resistance (in ohms). An example circuit is shown here: (Figure below)

However, we need to keep in mind that voltage and current are not in phase here. As was shown earlier, the voltage has a phase shift of +90° with respect to the current. (Figure below) If we represent these phase angles of voltage and current mathematically in the form of complex numbers, we find that an inductor's opposition to current has a phase angle, too: it can be written as XL Ω ∠ 90° (or, in rectangular form, 0 + jXL).
Current lags voltage by 90° in an inductor.

Mathematically, we say that the phase angle of an inductor's opposition to current is 90°, meaning that an inductor's opposition to current is a positive imaginary quantity. This phase angle of reactive opposition to current becomes critically important in circuit analysis, especially for complex AC circuits where reactance and resistance interact. It will prove beneficial to represent any component's opposition to current in terms of complex numbers rather than scalar quantities of resistance and reactance.

In the previous section, we explored what would happen in simple resistor-only and inductor-only AC circuits. Now we will mix the two components together in series form and investigate the effects.
Take this circuit as an example to work with: (Figure below)
Series resistor inductor circuit: Current lags applied voltage by 0° to 90°.

The resistor will offer 5 Ω of resistance to AC current regardless of frequency, while the inductor will offer 3.7699 Ω of reactance to AC current at 60 Hz. Because the resistor's resistance is a real number (5 Ω ∠ 0°, or 5 + j0 Ω), and the inductor's reactance is an imaginary number (3.7699 Ω ∠ 90°, or 0 + j3.7699 Ω), the combined effect of the two components will be an opposition to current equal to the complex sum of the two numbers. This combined opposition will be a vector combination of resistance and reactance. In order to express this opposition succinctly, we need a more comprehensive term for opposition to current than either resistance or reactance alone. This term is called impedance, its symbol is Z, and it is also expressed in the unit of ohms, just like resistance and reactance. In the above example, the total circuit impedance is:
Ztotal = (5 + j0 Ω) + (0 + j3.7699 Ω) = 5 + j3.7699 Ω, or 6.262 Ω ∠ 37.016°

Impedance is related to voltage and current just as you might expect, in a manner similar to resistance in Ohm's Law: E = IZ (equivalently, I = E/Z and Z = E/I). In fact, this is a far more comprehensive form of Ohm's Law than what was taught in DC electronics (E=IR), just as impedance is a far more comprehensive expression of opposition to the flow of electrons than resistance is. Any resistance and any reactance, separately or in combination (series/parallel), can be and should be represented as a single impedance in an AC circuit.

To calculate current in the above circuit, we first need to give a phase angle reference for the voltage source, which is generally assumed to be zero. (The phase angles of resistive and inductive impedance are always 0° and +90°, respectively, regardless of the given phase angles for voltage or current.) Then I = E/Z = (10 V ∠ 0°) / (6.262 Ω ∠ 37.016°) = 1.597 A ∠ -37.016°. As with the purely inductive circuit, the current wave lags behind the voltage wave (of the source), although this time the lag is not as great: only 37.016° as opposed to a full 90° as was the case in the purely inductive circuit. (Figure below)
Current lags voltage in a series L-R circuit.

For the resistor and the inductor, the phase relationships between voltage and current haven't changed. Voltage across the resistor is in phase (0° shift) with the current through it; and the voltage across the inductor is +90° out of phase with the current going through it. We can verify this mathematically:
E_R = I × (5 Ω ∠ 0°) = 7.9847 V ∠ -37.016°
E_L = I × (3.7699 Ω ∠ 90°) = 6.0203 V ∠ 52.984°

The voltage across the resistor has the exact same phase angle as the current through it, telling us that E and I are in phase (for the resistor only). The voltage across the inductor has a phase angle of 52.984°, while the current through the inductor has a phase angle of -37.016°, a difference of exactly 90° between the two. This tells us that E and I are still 90° out of phase (for the inductor only).

We can also mathematically prove that these complex values add together to make the total voltage, just as Kirchhoff's Voltage Law would predict:
E_R + E_L = (6.3756 - j4.8071 V) + (3.6244 + j4.8071 V) = 10 + j0 V = 10 V ∠ 0°

Let's check the validity of our calculations with SPICE: (Figure below)
Spice circuit: R-L.

ac r-l circuit
v1 1 0 ac 10 sin
r1 1 2 5
l1 2 0 10m
.ac lin 1 60 60
.print ac v(1,2) v(2,0) i(v1)
.print ac vp(1,2) vp(2,0) ip(v1)
.end

freq          v(1,2)      v(2)        i(v1)
6.000E+01     7.985E+00   6.020E+00   1.597E+00

freq          vp(1,2)     vp(2)       ip(v1)
6.000E+01    -3.702E+01   5.298E+01   1.430E+02

Note that just as with DC circuits, SPICE outputs current figures as though they were negative (180° out of phase) with the supply voltage. Instead of a phase angle of -37.016°, we get a current phase angle of 143° (-37° + 180°).
This is merely an idiosyncrasy of SPICE and does not represent anything significant in the circuit simulation itself. Note how both the resistor and inductor voltage phase readings match our calculations (-37.02° and 52.98°, respectively), just as we expected them to.

With all these figures to keep track of for even such a simple circuit as this, it would be beneficial for us to use the “table” method. Applying a table to this simple series resistor-inductor circuit would proceed as follows. First, draw up a table for E/I/Z figures and insert all component values in these terms (in other words, don't insert actual resistance or inductance values in Ohms and Henrys, respectively, into the table; rather, convert them into complex figures of impedance and write those in):

Although it isn't necessary, I find it helpful to write both the rectangular and polar forms of each quantity in the table. If you are using a calculator that has the ability to perform complex arithmetic without the need for conversion between rectangular and polar forms, then this extra documentation is completely unnecessary. However, if you are forced to perform complex arithmetic “longhand” (addition and subtraction in rectangular form, and multiplication and division in polar form), writing each quantity in both forms will be useful indeed.

Now that our “given” figures are inserted into their respective locations in the table, we can proceed just as with DC: determine the total impedance from the individual impedances. Since this is a series circuit, we know that opposition to electron flow (resistance or impedance) adds to form the total opposition:

Now that we know total voltage and total impedance, we can apply Ohm's Law (I=E/Z) to determine total current:

Just as with DC, the total current in a series AC circuit is shared equally by all components. This is still true because in a series circuit there is only a single path for electrons to flow, therefore the rate of their flow must be uniform throughout. Consequently, we can transfer the figures for current into the columns for the resistor and inductor alike:

Now all that's left to figure is the voltage drop across the resistor and inductor, respectively. This is done through the use of Ohm's Law (E=IZ), applied vertically in each column of the table:

And with that, our table is complete. The exact same rules we applied in the analysis of DC circuits apply to AC circuits as well, with the caveat that all quantities must be represented and calculated in complex rather than scalar form. So long as phase shift is properly represented in our calculations, there is no fundamental difference in how we approach basic AC circuit analysis versus DC.

Now is a good time to review the relationship between these calculated figures and readings given by actual instrument measurements of voltage and current. The figures here that directly relate to real-life measurements are those in polar notation, not rectangular! In other words, if you were to connect a voltmeter across the resistor in this circuit, it would indicate 7.9847 volts, not 6.3756 (real rectangular) or 4.8071 (imaginary rectangular) volts. To describe this in graphical terms, measurement instruments simply tell you how long the vector is for that particular quantity (voltage or current). Rectangular notation, while convenient for arithmetical addition and subtraction, is a more abstract form of notation than polar in relation to real-world measurements.
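To make the polar-versus-rectangular point concrete, the sketch below (in Python, with illustrative variable names of my own) shows that the magnitude of each complex voltage, not its real or imaginary part, is what a meter would indicate, and that the drops still obey Kirchhoff's Voltage Law:

import cmath, math

# Series R-L figures carried over from the example above
I  = 10 / complex(5, 3.7699)              # total current, shared by both components
ER = I * 5                                # resistor voltage drop (E = IZ)
EL = I * complex(0, 3.7699)               # inductor voltage drop (E = IZ)

print("ER rectangular:", round(ER.real, 4), round(ER.imag, 4))   # approx 6.3756, -4.8071
print("ER magnitude  :", round(abs(ER), 4))                      # approx 7.9847 -> what a voltmeter reads
print("EL magnitude  :", round(abs(EL), 4))                      # approx 6.0203

# Kirchhoff's Voltage Law check: the drops sum back to the 10 V (angle 0) source
total = ER + EL
print("ER + EL =", round(abs(total), 4), "V at",
      round(math.degrees(cmath.phase(total)), 3), "degrees")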
As I stated before, I will indicate both polar and rectangular forms of each quantity in my AC circuit tables simply for convenience of mathematical calculation. This is not absolutely necessary, but may be helpful for those following along without the benefit of an advanced calculator. If we were to restrict ourselves to the use of only one form of notation, the best choice would be polar, because it is the only one that can be directly correlated to real measurements.

Impedance (Z) of a series R-L circuit may be calculated, given the resistance (R) and the inductive reactance (XL). Since E=IR, E=IXL, and E=IZ, resistance, reactance, and impedance are proportional to voltage, respectively. Thus, the voltage phasor diagram can be replaced by a similar impedance diagram. (Figure below)

Series R-L circuit impedance phasor diagram.

Given: A 40 Ω resistor in series with a 79.58 millihenry inductor. Find the impedance at 60 hertz.

XL = 2πfL
XL = 2π·60·79.58×10⁻³
XL = 30 Ω
Z = R + jXL
Z = 40 + j30
|Z| = √(40² + 30²) = 50 Ω
∠Z = arctangent(30/40) = 36.87°
Z = 40 + j30 = 50 Ω ∠ 36.87°

Let's take the same components for our series example circuit and connect them in parallel: (Figure below)

Parallel R-L circuit.

Because the power source has the same frequency as the series example circuit, and the resistor and inductor both have the same values of resistance and inductance, respectively, they must also have the same values of impedance. So, we can begin our analysis table with the same “given” values:

The only difference in our analysis technique this time is that we will apply the rules of parallel circuits instead of the rules for series circuits. The approach is fundamentally the same as for DC. We know that voltage is shared uniformly by all components in a parallel circuit, so we can transfer the figure of total voltage (10 volts ∠ 0°) to all components' columns:

Now we can apply Ohm's Law (I=E/Z) vertically to two columns of the table, calculating current through the resistor and current through the inductor:

Just as with DC circuits, branch currents in a parallel AC circuit add to form the total current (Kirchhoff's Current Law still holds true for AC as it did for DC):

Finally, total impedance can be calculated by using Ohm's Law (Z=E/I) vertically in the “Total” column. Incidentally, parallel impedance can also be calculated by using a reciprocal formula identical to that used in calculating parallel resistances. The only problem with using this formula is that it typically involves a lot of calculator keystrokes to carry out. And if you're determined to run through a formula like this “longhand,” be prepared for a very large amount of work! But, just as with DC circuits, we often have multiple options in calculating the quantities in our analysis tables, and this example is no different. No matter which way you calculate total impedance (Ohm's Law or the reciprocal formula), you will arrive at the same figure:

Ztotal = 3.0102 Ω ∠ 52.984° (1.8122 + j2.4035 Ω)
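As a numeric cross-check of both the impedance phasor example and the parallel R-L analysis above, here is a brief Python sketch (again, the variable names are illustrative and not from the original text):

import cmath, math

# Impedance of the 40-ohm / 79.58 mH series example at 60 Hz
XL = 2 * math.pi * 60 * 79.58e-3          # approx 30 ohms
Z_series = complex(40, XL)
print("series Z:", round(abs(Z_series), 2), "ohms at",
      round(math.degrees(cmath.phase(Z_series)), 2), "degrees")   # approx 50 ohms at 36.87 degrees

# Parallel R-L circuit: 10 V source, 5-ohm resistor, 3.7699 ohms of inductive reactance
E  = 10 + 0j
ZR = 5 + 0j
ZL = complex(0, 3.7699)

IR = E / ZR                               # 2 A at 0 degrees
IL = E / ZL                               # approx 2.6526 A at -90 degrees
Itotal = IR + IL                          # Kirchhoff's Current Law
Z1 = E / Itotal                           # total impedance by Ohm's Law
Z2 = 1 / (1/ZR + 1/ZL)                    # total impedance by the reciprocal formula

for name, Z in (("Z = E/I", Z1), ("Z = 1/(1/ZR + 1/ZL)", Z2)):
    print(name, "=", round(abs(Z), 4), "ohms at",
          round(math.degrees(cmath.phase(Z)), 3), "degrees")      # both approx 3.0102 ohms at 52.984 degrees

Either route through the arithmetic lands on the same total impedance, which is the point of the passage above.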
In an ideal case, an inductor acts as a purely reactive device. That is, its opposition to AC current is strictly based on inductive reaction to changes in current, and not electron friction as is the case with resistive components. However, inductors are not quite so pure in their reactive behavior. To begin with, they're made of wire, and we know that all wire possesses some measurable amount of resistance (unless it's superconducting wire). This built-in resistance acts as though it were connected in series with the perfect inductance of the coil, like this: (Figure below)

Equivalent circuit of a real inductor.

Consequently, the impedance of any real inductor will always be a complex combination of resistance and inductive reactance.

Compounding this problem is something called the skin effect, which is AC's tendency to flow through the outer areas of a conductor's cross-section rather than through the middle. When electrons flow in a single direction (DC), they use the entire cross-sectional area of the conductor to move. Electrons switching directions of flow, on the other hand, tend to avoid travel through the very middle of a conductor, limiting the effective cross-sectional area available. The skin effect becomes more pronounced as frequency increases.

Also, the alternating magnetic field of an inductor energized with AC may radiate off into space as part of an electromagnetic wave, especially if the AC is of high frequency. This radiated energy does not return to the inductor, and so it manifests itself as resistance (power dissipation) in the circuit.

Added to the resistive losses of wire and radiation, there are other effects at work in iron-core inductors which manifest themselves as additional resistance between the leads. When an inductor is energized with AC, the alternating magnetic fields produced tend to induce circulating currents within the iron core known as eddy currents. These electric currents in the iron core have to overcome the electrical resistance offered by the iron, which is not as good a conductor as copper. Eddy current losses are primarily counteracted by dividing the iron core up into many thin sheets (laminations), each one separated from the other by a thin layer of electrically insulating varnish. With the cross-section of the core divided up into many electrically isolated sections, current cannot circulate within that cross-sectional area and there will be no (or very little) resistive losses from that effect.

As you might have expected, eddy current losses in metallic inductor cores manifest themselves in the form of heat. The effect is more pronounced at higher frequencies, and can be so extreme that it is sometimes exploited in manufacturing processes to heat metal objects! In fact, this process of “inductive heating” is often used in high-purity metal foundry operations, where metallic elements and alloys must be heated in a vacuum environment to avoid contamination by air, and thus where standard combustion heating technology would be useless. It is a “non-contact” technology, the heated substance not having to touch the coil(s) producing the magnetic field.

In high-frequency service, eddy currents can even develop within the cross-section of the wire itself, contributing to additional resistive effects. To counteract this tendency, special wire made of very fine, individually insulated strands called Litz wire (short for Litzendraht) can be used. The insulation separating strands from each other prevents eddy currents from circulating through the whole wire's cross-sectional area.

Additionally, any magnetic hysteresis that needs to be overcome with every reversal of the inductor's magnetic field constitutes an expenditure of energy that manifests itself as resistance in the circuit. Some core materials (such as ferrite) are particularly notorious for their hysteretic effect.
Counteracting this effect is best done by means of proper core material selection and limits on the peak magnetic field intensity generated with each cycle.

Altogether, the stray resistive properties of a real inductor (wire resistance, radiation losses, eddy currents, and hysteresis losses) are expressed under the single term of “effective resistance:” (Figure below)

Equivalent circuit of a real inductor with skin-effect, radiation, eddy current, and hysteresis losses.

It is worthy to note that the skin effect and radiation losses apply just as well to straight lengths of wire in an AC circuit as they do to a coiled wire. Usually their combined effect is too small to notice, but at radio frequencies they can be quite large. A radio transmitter antenna, for example, is designed with the express purpose of dissipating the greatest amount of energy in the form of electromagnetic radiation.

Effective resistance in an inductor can be a serious consideration for the AC circuit designer. To help quantify the relative amount of effective resistance in an inductor, another value exists called the Q factor, or “quality factor,” which is calculated as follows:

Q = XL / R(effective)

The symbol “Q” has nothing to do with electric charge (coulombs), which tends to be confusing. For some reason, the Powers That Be decided to use the same letter of the alphabet to denote a totally different quantity.

The higher the value for “Q,” the “purer” the inductor is. Because it's so easy to add additional resistance if needed, a high-Q inductor is better than a low-Q inductor for design purposes. An ideal inductor would have a Q of infinity, with zero effective resistance.

Because inductive reactance (X) varies with frequency, so will Q. However, since the resistive effects of inductors (wire skin effect, radiation losses, eddy current, and hysteresis) also vary with frequency, Q does not vary proportionally with reactance. In order for a Q value to have precise meaning, it must be specified at a particular test frequency.

Stray resistance isn't the only inductor quirk we need to be aware of. Due to the fact that the multiple turns of wire comprising inductors are separated from each other by an insulating gap (air, varnish, or some other kind of electrical insulation), we have the potential for capacitance to develop between turns. AC capacitance will be explored in the next chapter, but it suffices to say at this point that it behaves very differently from AC inductance, and therefore further “taints” the reactive purity of real inductors.

As previously mentioned, the skin effect is where alternating current tends to avoid travel through the center of a solid conductor, limiting itself to conduction near the surface. This effectively limits the cross-sectional conductor area available to carry alternating electron flow, increasing the resistance of that conductor above what it would normally be for direct current: (Figure below)

Skin effect: skin depth decreases with increasing frequency.

The electrical resistance of the conductor with all its cross-sectional area in use is known as the “DC resistance,” the “AC resistance” of the same conductor referring to a higher figure resulting from the skin effect. As you can see, at high frequencies the AC current avoids travel through most of the conductor's cross-sectional area. For the purpose of conducting current, the wire might as well be hollow! In some radio applications (antennas, most notably) this effect is exploited.
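Before continuing with the skin effect, here is a small Python sketch illustrating the Q calculation just defined. The 100 mH inductance, 5 Ω effective resistance, and test frequencies are hypothetical values chosen only for illustration:

import math

def q_factor(inductance_henrys, effective_resistance_ohms, frequency_hz):
    """Q = XL / R(effective), evaluated at a specific test frequency."""
    xl = 2 * math.pi * frequency_hz * inductance_henrys
    return xl / effective_resistance_ohms

# Hypothetical inductor: 100 mH with 5 ohms of effective resistance
for f in (60, 1000, 10000):
    print(f"Q at {f} Hz =", round(q_factor(100e-3, 5, f), 1))

# Note: this sketch treats the effective resistance as constant, whereas in a real
# inductor it also changes with frequency -- which is exactly why a Q rating only
# has precise meaning at a stated test frequency.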
Since radio-frequency (“RF”) AC currents wouldn't travel through the middle of a conductor anyway, why not just use hollow metal rods instead of solid metal wires and save both weight and cost? (Figure below) Most antenna structures and RF power conductors are made of hollow metal tubes for this reason.

In the following photograph you can see some large inductors used in a 50 kW radio transmitting circuit. The inductors are hollow copper tubes coated with silver, for excellent conductivity at the “skin” of the tube:

High power inductors formed from hollow tubes.

The degree to which frequency affects the effective resistance of a solid wire conductor is impacted by the gauge of that wire. As a rule, large-gauge wires (physically larger, lower AWG number) exhibit a more pronounced skin effect (change in resistance from DC) than small-gauge wires at any given frequency. The equation for approximating skin effect at high frequencies (greater than 1 MHz) is as follows:

RAC = (RDC)(k)(√f)

where RAC is the AC resistance at the given frequency, RDC is the resistance at DC, k is the wire gauge factor (see table below), and f is the frequency of the AC in MHz.

Table below gives approximate values of “k” factor for various round wire sizes.

“k” factor for various AWG wire sizes:

|gage size||k factor||gage size||k factor|

For example, a length of number 10-gauge wire with a DC end-to-end resistance of 25 Ω would have an AC (effective) resistance of 2.182 kΩ at a frequency of 10 MHz:

RAC = (25 Ω)(27.6)(√10) ≈ 2.182 kΩ

Please remember that this figure is not impedance, and it does not consider any reactive effects, inductive or capacitive. This is simply an estimated figure of pure resistance for the conductor (that opposition to the AC flow of electrons which does dissipate power in the form of heat), corrected for the skin effect. Reactance, and the combined effects of reactance and resistance (impedance), are entirely different matters.

Contributors to this chapter are listed in chronological order of their contributions, from most recent to first. See Appendix 2 (Contributor List) for dates and contact information.

Jim Palmer (June 2001): Identified and offered correction for typographical error in complex number calculation.

Jason Starck (June 2000): HTML document formatting, which led to a much better-looking second edition.

Lessons In Electric Circuits copyright (C) 2000-2013 Tony R. Kuphaldt, under the terms and conditions of the Design Science License.