This year, Germany finally paid off its old bonds for World War 1 reparations, as Margaret MacMillan has noted in the New York Times. MacMillan asserts that “John Maynard Keynes, a member of the British delegation in Paris, rightly argued that the Allies should have forgotten about reparations altogether.” Actually, the truth is more complicated. A fuller understanding of Keynes’s role in the 1919 Paris peace conference after World War 1 may also offer a useful perspective on his contributions to economics.
Keynes became the most famous economist of his time, not for his 1936 General Theory, but for his Economic Consequences of the Peace (1920) and A Revision of the Treaty (1922). These were brilliant polemics against the 1919 peace conference, exposing the folly of imposing on Germany a reparation debt worth more than 3 times its prewar annual GDP, which was to be repaid over a period of decades.
Germans saw the reparations as unjust extortion, and efforts to accommodate the Allies’ demands undermined the government’s legitimacy, leading to the rise of Nazism and the coming of a second world war. Keynes seemed to foresee the whole disaster. In his 1922 book, he posed the crucial question: “Who believes that the Allies will, over a period of one or two generations, exert adequate force over the German government to extract continuing fruits on a vast scale from forced labor?”
But what Keynes actually recommended in 1922 was that Germany should be asked to pay in reparations about 3% of its prewar GDP annually for 30 years. The 1929 Young Plan offered Germany similar terms and withdrew Allied occupation forces from the German Rhineland, but the Nazis’ rise to national power began after that.
In his 1938 memoirs, Lloyd George tells us that, during World War 1, Germany also had plans to seize valuable assets and property if they won WW1, “but they had not hit on the idea of levying a tribute for 30 to 40 years on the profits and earnings of the Allied peoples. Mr. Keynes is the sole patentee and promoter of that method of extraction.”
How did Keynes get it so wrong on reparations? In 1871, after the Franco-Prussian War, Germany demanded payments from France, on a less vast scale (only a fraction of France’s annual GDP), while occupying northern France. To hasten the withdrawal of German troops, France made the payments well ahead of the required 3-year schedule, mainly by selling bonds to its own citizens. But the large capital inflow destabilized Germany’s financial system, which then led to a recession in Germany. Before 1914, some argued that such adverse consequences of indemnity payments for a victor’s economy would eliminate incentives for war and assure world peace. In response to such naive arguments, Keynes suggested in 1916 that postwar reparation payments could be extended over decades to avoid macroeconomic shock from large short-term capital flows and imports from Germany.
Nobody had ever tried to extract payments over decades from a defeated nation without occupying it, but that is what the Allies attempted after World War 1, following Keynes’s suggestion. Keynes argued about the payments’ size but not their duration.
Today economists regularly analyze the limits on a sovereign nation’s incentive to pay external debts. In our modern analytical framework, we can argue that the scenario of long-term reparation payments was not a sequential equilibrium. But such analysis uses game-theoretic models that were unknown to Keynes. As a brilliant observer, he certainly recognized the political problems of motivating long-term reparation payments over 30 years or more, but these incentive problems did not fit into the analytical framework that guided him in formulating his policy recommendations. So while condemning the Allies’ demands for Germany to make long-term reparation payments of over 7% of its GDP, Keynes considered long-term payments of 3% of GDP to be economically feasible for Germany, regardless of how politically poisonous such payments might be for its government. Considerations of macroeconomic stability could crowd out strategic incentive analysis for Keynes, given the limits of economic analysis in his time.
Reviewing this history today, we should be impressed both by Keynes’s skill as a critical observer of great policy decisions and by the severe limits of Keynes’s analytical framework for suggesting better policies. Advances in economic theory have greatly expanded the scope of economic analysis since Keynes’s day and have given us a better framework for policy analysis than Keynes ever had. | http://cheaptalk.org/2010/12/27/keynes-and-the-ww1-reparations/ |
Describes and analyses political, military, religious, social, cultural and economic features of ancient societies
Identifies factors that contribute to change and continuity in the ancient world
uses historical terms and concepts appropriately to answer historical questions.
This tutorial presents the social structures and people's occupations in the Persian society at that time, using key terms, concepts and evidence.
Darius' inscription at Naqsh-i Rustam (ll. 81-5) informs us of his: family (Mnana) as he is son of
Vishtaspa; clan (Vis) as he is of the
Haxamanisiya; tribe (Zama) as he was of the Pasargadae; people as he was of the Parsa; race as he was of
the Ariya; land (Dahyu) as it was Fars. Indeed, Wiesehoefer (1996: 34) argues that the Avesta divides
society into three functions: priest, warrior and farmer.
The family was the basic social unit in Persian society. Fathers had tyrannical authority, treating their children as slaves (Aristotle, Nicomachean Ethics, IX, 12). Marriage was a formal affair which saw grooms toasted and brides kissed (Arrian Anabasis 7. 4-5). Children were much sought after as legitimate heirs (Herodotos, III. 2), and polygamy was encouraged for this reason (Strabo 15.3.17). Children had to obey their fathers (Aelian Varia Historiae 1. 34) and could be rewarded (Fortification texts record a gift of 100 sheep by Darius I to his daughter Artystone). The death of a spouse was a time of mourning (A Babylonian Chronicle 7. iii. 22-24). Incest was against Persian customs and laws (Herodotos III. 31), but successive Persian Great Kings married sisters, cousins, nieces and daughters, and slept with the wives and daughters of their brothers (Plutarch Artaxerxes XXIII. 5-6). Divorce was almost unheard of: an adulterous wife of Xerxes' son-in-law only got a reprimand, then promised to behave! (Ctesias 39b). Only Xerxes divorced his disobedient wife Vashti (Esther 1.9-22).
Several families made up the clan. Several clans made up the tribe. The Achaemenids were one clan of the Pasargadae. Intermarriage went on between families within the same clan. The clan was the basic unit of identification, but not social function: you lived with your family, obeying your patriarchal father, but told people you met the name of your clan. (Herodotos III. 119.2).
If you told the people you met to which clan you belonged, then you told the world to which tribe you belonged. Both Darius and Xerxes make sure we know their tribe, as well as clan. Many Persians named in the Fortification tablets and Treasury tablets from Persepolis are identified by name, region and tribe. The bulk of the Persians were small farmers (Wiesehoefer, 1996: 35). We know that divisions in society were made at the tribal level. Several clans made a tribe. Entire tribes were either nomadic herders or settled farmers. Within these divisions was a clear hierarchy, attested to by Herodotos (I. 134.3), and Strabo, who refers to proskynesis. There were other tribes: the Panthialaei, the Derusiaei and the Germanii, who were farmers. The Dai, Mardi, Dropici and Sagartii were herders. These all had the status of skauthis, peasants, whose labour was the basis of agriculture. Free workers were even recruited from neighbouring satrapies at harvest time (Dandamaev and Lukonin, 1989: 157). Paid free-born labourers worked on the Babylonian canals, and free non-citizen farmers worked the land of the state, temples and the rich (Dandamaev and Lukonin, 1989: 152), and provided the corvée labour at such sites as Susa and Persepolis (Kent 1953, DSf 22-58). They could not be sold, and so were not actually slaves, and could be considered non-citizen workers.
One tribe of the Persians, the Magi, became the priest class. They interpreted the teachings of the prophet Zarathushtra through their own beliefs. In some cases, this meant that they encouraged night-time sacrifices of cattle to Mithra, drinking haoma, and worshipping the mother goddess Anahita (Olmstead, 1948: 106), all of which had been expressly forbidden by Zarathushtra. A professional and hereditary priesthood, "such as the Magi provided... may develop superstition elaborately, since scrupulousness may easily come to be counted for righteousness and so be a road to eminence" (Burn, 1984: 79-80).
There was a small artisan class within Persian or Median society. In Babylonia, an inscription says that the temples relied on the skilled labour of "carpenters, metal engravers, goldsmiths and ...all the craftsmen (of the temple)" (Dandamaev and Lukonin, 1989: 157). Great Kings used the skills of the conquered peoples. Lydian stonemasons worked on Pasargadae (Roaf, 1990: 204). The slaves who worked these sites were called Kurtash, and ration payments for them are recorded on the Persepolis Fortification tablets (Dandamaev and Lukonin, 1989: 158), and show that Darius borrowed from the architectural traditions of the Medes, the Mesopotamians, the Greeks and the Egyptians. According to the Treasury tablets, there were Egyptian and Carian stonemasons, and Ionian slaves in the quarries (Dandamaev and Lukonin, 1989: 160). The Phoenicians provided purple dye, the Egyptians manufactured rope, the Greeks built the bridges across the Danube. The social hierarchy put craftsmen between warriors and peasants. Scribes were essential for administration and for distributing the propaganda of the Great King. They ranked higher than other craftsmen in the East. Although they had their own slaves, those who worked in Persepolis were referred to as slaves in the records (Dandamaev and Lukonin, 1989: 159). Mostly, they worked in languages foreign to the Persians, mainly Elamite and Akkadian, and little in Old Persian (Kuhrt, 1995: 649).
With the land grants of fiefs from the Great King came the obligation to be constantly ready to provide troops in times of war (and this included their full kit). Each member of the military class carried a duty to pay a service (ilku) of silver, by whomever owned the fief. Xerxes says of himself that he was a good horseman, bowman and spearman (Kent 1953, Dnb). Babylonian documents attest the renting out of these lands. The "census officers made sure that a soldier matching the obligation of each land grant appeared at the call up" (Kuhrt, 1995: 695). This census was taken in 500-499, and the information was "kept by army scribes at the main mustering points of the satrapy" (Kuhrt, 1995: 695). The division of land into bow (bit qasha), horse (bit sisi) and chariot (bit narkabhi) shows the place of the military within Persian society. The peace which this system brings allows agricultural workers to produce their maximum taxable amount, which maintains the empire and its military system.
Along with domesticated animals, a vast number of people, many of them enslaved, were needed to work on projects of agriculture, warfare and monumental construction. The state owned the slaves in the mines (Olmstead, 1948: 74 ff), and they were relatively well paid (Dandamaev and Lukonin, 1989: 161-2), but they had the status of livestock, i.e. moveable property (op. cit.: 153). The household of the Great King maintained a large retinue of slaves who functioned as plowmen, millers, cow herds, shepherds, winemakers and beer brewers, cooks, bakers, wine waiters and eunuchs (Dandamaev and Lukonin, 1989: 158, 170). Of the slaves at Persepolis, 12.7% were boys, and 10% were girls (Fortification Tablets). Dandamaev and Lukonin (1989: 160-1) concluded that these slaves lived together as families but were also moved around the empire in what amounts to job lots. Documents record the movements of between 150 and 1500 slaves from one site to another. In Babylon, Egypt and the Greek cities of Lydia, the arrangements predating the Persians were kept. Slaves were usually acquired through warfare (Falcelière et al, 1970: 433), and were known as "the booty of the bow" (Dandamaev and Lukonin, 1989: 156). The peace established by the Great King would have effectively dried up this source. However, the Great Kings enslaved satrapies and cities which rebelled (Dandamaev and Lukonin, 1989: 170). Slavery was usually a hereditary state, and the children of slaves remained slaves; private individuals also maintained stocks of slaves, and household slaves could be bought (Herodotos, VIII, 105). There was a privately owned slave labour force doing menial tasks. In Babylon, debtors could sell themselves into slavery (Olmstead, 1948: 74 ff), but this quickly died out under Persian rule (Dandamaev and Lukonin, 1989: 156). Everyone from the highest nobles down was defined as bandaka (the slaves of the Great King) (Kuhrt, 1995: 687), or 'those who wear the belt of dependence' (Wiesehoefer, 1996: 31). This meant that taxation was due in money, precious metals, goods, military service and labour.
Burn, ARR. (1984) Persia and the Greeks, Duckworth, London.
Dandamaev, M.A. and Lukonin, V.G. (1989) The Culture and Social Institutions of Ancient Iran, Cambridge University Press, Cambridge.
Herodotos (1985) The Histories, trans. Rex Warner, Penguin, Harmondsworth.
Kuhrt, A. (1995) The Ancient Near East c. 3000-330 BC, Vol. 2, Routledge, London.
Lawless and Cameron (1994) Studies in Ancient Persia, Thomas Nelson, South Melbourne.
Olmstead, A.T., (1948) History of the Persian Empire, University of Chicago Press, Chicago.
Roaf, M. (1990) Cultural Atlas of Mesopotamia and the Near East, Facts on File Ltd, New York.
Wiesehoefer, J. (1996) Ancient Persia 550 BC-650 AD, trans. Azodi, A., IB Tauris Publishing, London. | http://hsc.csu.edu.au/ancient_history/societies/near_east/persian_soc/persiansociety.html |
The various ecological and economic benefits of tropical marine seascapes and their biodiversity are under threat from a variety of sources, including human development and climate change. Globally, mangroves are being cleared at a quicker rate than tropical rainforests, and tropical fisheries are significantly over-fished. Overfishing of species that graze on algae and seaweeds on the reef can disrupt the ecosystem’s overall balance and prevent new corals from growing. Overfishing also removes the adult grazing fish that are most attractive to larger predator fish, and increases predation on juvenile fish before they’ve had a chance to breed.
Coral bleaching events—when corals lose the symbiotic algae within their tissues that provide them with much of their energy through photosynthesis-- have increased in recent years due to global climate change and warming water temperatures. Scientists also acknowledge that the limestone skeletons of corals will become weakened as seas become more acidic, taking up more carbon dioxide as atmospheric concentration increases. Without healthy coral reefs, tropical marine seascapes can’t maintain their wide varieties of species—and all the benefits they confer to surrounding communities.
One of the key tools for reducing the effects of these and other threats to tropical seascapes is the establishment of marine reserves. These areas, where tourism is actively encouraged but fishing and other destructive activities are banned, can increase the health of many tropical habitats. Reefs in protected areas therefore tend to be more biodiverse and more resilient—so while such reserves cannot directly prevent the effects of climate change (or hurricanes), they can give reefs and the species that depend on them better odds for recovery, after major bleaching events or storms, for example.
For marine reserves to be successful, they must be well designed. If marine reserves are placed in areas with naturally poor-quality habitat there will be very few benefits to wildlife. A number of guidelines are available to coastal managers to help them site their marine reserves. Central among these is to try and include sea grass, mangrove, and coral reef habitats in reserves because of the importance of the interactions between them. However, most tropical seascapes include many types of mangrove and coral reef areas, with differing characteristics and qualities. Which types, and how many, should be included in a marine reserve or a network of reserves? How close to each other do they need to be? What species do they need to shelter? Answering these and related questions is critically important throughout The Bahamas, the wider Caribbean, and indeed anywhere coral reefs exist.
Meet the Scientists
Dr. Alastair Harborne
NERC Independent Research Fellow
University of Exeter, UK
Dr. Harborne is a coral reef ecologist with wide-ranging interests in fish and coral ecology and the overarching aim to use ecological insights to aid biodiversity conservation. His key research interest concerns the processes affecting the abundance of reef fishes on coral reefs, and he also studies the landscape ecology of reefs and the design and effects of marine reserves. He’s worked on coral reef ecosystems for nearly ten years, and holds a PhD from The University of Exeter (UK) and a BSc from Southampton University (UK). He is a member of the Ecology and Conservation Biology research group at Exeter, the UK coordinator for Reef Check since 1997, and a founding member of the Reef Conservation-UK committee. A frequently published author and co-author in scientific media, he has also given multiple interviews for mass media outlets like the BBC on coral conservation issues. He is a certified PADI Rescue Diver and Emergency First Responder with more than 550 logged dives in the Caribbean, South East Asia, South Pacific and the Red Sea, and has worked with volunteers for a non-governmental organization in Central America for many years.
Dr. Rod Wilson
Associate Professor in Integrative Animal Physiology
University of Exeter, UK
Dr. Wilson is a comparative physiologist and his research uses multi-disciplinary approaches to provide a broader understanding of homeostasis (the ways an organism regulates itself to maintain a fairly stable biological condition) in animals, with a particular focus upon fish. This includes studies of how anthropogenic (human-caused) and natural environmental changes affect fish physiology and behavior, as well as projects on the welfare and environmental enrichment of laboratory fish. Dr. Wilson is a member of the Ecotoxicology and Ecophysiology research group at Exeter, and holds both a BSc and a PhD from the University of Birmingham (UK). He is the Assistant Editor for the Journal of Fish Biology and Co-Editor for Serial Advances in Experimental Biology.
Dr. Andrew Gill
Senior Lecturer in Aquatic Ecology
Cranfield University, UK
Dr. Andrew Gill started his career in 1989 as a NERC-funded research assistant at Leicester University. Following his Ph.D., he worked for three years with a coral reef conservation organization on field projects, mapping reef communities and providing scientific advice and support for the development of marine protected areas in Belize and the Philippines. On returning to the UK in 1996, Andrew took up a temporary lectureship in fish and fisheries biology at Liverpool University, and in 1999 set up a new postgraduate course in restoration ecology and was appointed course director. In late 2003, Andrew moved to Cranfield to take up his current position as lecturer in applied aspects of aquatic ecology. Andrew manages the environmental water management option on the water management postgraduate program. Andrew graduated in zoology (marine and fisheries biology) from Aberdeen University in Scotland, and subsequently studied for his Ph.D. in fish behavioral ecology at Leicester University. He is a member of the Fisheries Society of the British Isles, a member of the Society for Ecological Restoration International, a member and scientific advisor to the Shark Trust, a member of British Ecological Society and a BES representative, and a member and visiting fellow of the Marine Biological Association UK. He is currently the marine and aquatic editor for the international journal Biological Conservation.
Dr. Katherine Sloman
University of the West of Scotland, UK
Dr. Katherine Sloman graduated from the University of Wales, Swansea, in 1997, and then received her Ph.D. from the University of Glasgow in 2000 following work on stress responses in salmon. She then undertook a series of postdoctoral research projects in fish physiology before becoming a lecturer and then senior lecturer at the University of Plymouth (UK). In 2010 she moved to the University of the West of Scotland to continue her work on ecotoxicology and environmental physiology. Katherine is the author of numerous research papers and book chapters, and is a member of organizations including the Society of Experimental Biology, the Fisheries Society of the British Isles, and the Association for the Study of Animal Behaviour. She has also taught extensively to undergraduates and supervised many post-graduate students. | http://www.earthwatch.org/australia/exped/harborne_research.html |
- How does Radio Echo Sounding Work?
- Frequencies and Wavelengths
- Radio Wave Propagation in Ice
- Field Work
- Data Processing
- Photos and Links
How Does Radio-Echo Sounding Work?
A radio-echo sounding system consists of two main
components: 1) the transmitter, and 2) the receiver. The
transmitter sends out a brief burst of radio waves of a
specific frequency. The receiver detects the radio waves from
the transmitter and any waves that have bounced, or reflected
off nearby surfaces. The receiver records the amount of time
between the arrival of the transmitted wave and any reflected
waves as well as the strength of the waves (measured as an AC voltage).
The radio waves travel at different speeds
through different materials. For example, radio waves travel
very close to 300,000,000 meters/second (3 x 10^8 m/s) through air, a little less than double the speed in ice at 1.69 x 10^8 m/s.
See the next three sections for a more in-depth explanation.
Frequencies & Wavelengths of Waves
Electro-Magnetic (EM) energy is made up of both particles
and waves. A single wavelength is 2π radians or 360° of the wave's angular distance. When a wave travels through a material, the wavelength is the distance travelled through the material by one full cycle (2π) of the wave.
The number of times a wave oscillates over a certain
amount of time is known as the frequency of the wave.
The units of frequencies are Hertz (Hz) which is the number
of complete wavelengths that pass a point in a single second.
Therefore, 1 Hz = 1 cycle/second or 1/s.
The wavelength of a signal passing through a material
depends on the frequency (f ) of the wave and the
signal velocity (u ) through the material (a
property of the material itself). As shown above, the units
of frequency are 1/s, and the units of velocity are m/s.
Since wavelength (λ) is measured in m, the equation to obtain wavelength is:

λ = u / f

or wavelength = velocity / frequency
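As a quick check of the numbers used later on this page, a minimal sketch (the 5 MHz centre frequency and the air and ice velocities are the values quoted in the text; the helper function itself is purely illustrative):

```python
def wavelength(velocity_m_per_s, frequency_hz):
    # wavelength = velocity / frequency, with units (m/s) / (1/s) = m
    return velocity_m_per_s / frequency_hz

V_AIR = 3.0e8   # m/s, radio-wave velocity in air (from the text)
V_ICE = 1.69e8  # m/s, radio-wave velocity in ice (from the text)
F = 5.0e6       # Hz, centre frequency of the temperate-glacier system

print(wavelength(V_ICE, F))      # ~33.8 m -- the "34 m wavelength" quoted below
print(wavelength(V_AIR, F))      # ~60 m in air
print(wavelength(V_ICE, F) / 4)  # ~8.5 m -- the quarter-wavelength trace spacing used in the field
```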
A higher amplitude wave of a
given frequency carries more energy than a low amplitude
wave. A signal can be detected only if its amplitude is
greater than that of any background noise. For example, if
you are listening to a radio in New York City, you can pick
up a station from Seattle only if its signal is stronger than
the EM noise caused by the sun, electric motors, local radio stations, and other sources.
There are numerous radio-echo sounding devices used by
various researchers throughout the world. The components
described here are those used by researchers at the
University of Wyoming, which is based on that designed by
Barry Narod and Garry Clarke at the University of British
Columbia (Narod & Clarke, J. of Glaciology, 1995). It
has been designed for use on temperate glaciers.
The transmitter emits a 10 ns (nanosecond) long pulse at a
frequency of 100 MHz. The details of the pulse-generation
circuitry can be found in Narod & Clarke, 1995. The
frequency of the pulse is modulated for use on temperate
glaciers by attaching two 10 m antennas. The resulting 5 MHz
frequency is ideal for temperate glacier radio-echo sounding.
The transmitter is powered by a 12 V battery.
The transmitter and battery are housed in a small tackle
box which is attached to a pair of old skis. The antennas
extend out the front and back of the tackle box. The forward
antenna is carried by the person pulling the transmitter
sled's tow rope, while the rear antenna drags behind. There
is no focusing of the transmitted signal, so it propagates in
all directions into the ice and air. In order to reduce
"ringing" of the signal along the antenna,
resistors are embedded every meter along the antenna. The
total resistance of each 10 m antenna is 11 ohms.
The receiver begins with an antenna identical
to that of the transmitter. As each pulse is sent out of the
transmitter, some of the transmitted energy travels through
the air and some through the ice. The velocity of radio waves
in air is almost twice that in ice, so the receiver first
detects the "Direct Wave" transmitted through the
air between the transmitter and receiver. This triggers the
oscilloscope to begin recording the signal. For the next 10
µs, the oscilloscope records the voltage of the signals that
have reflected off nearby surfaces. The scope averages 64 of
the transmitter pulses and reflected waves to generate a
single trace. By averaging, the scope reduces noise due to
signal scatter and instrument noise in order to obtain a
better trace to be recorded on the laptop computer. The
entire receiver is placed in a small sled which is pulled by
a tow rope. A third researcher monitors the signals on the
oscilloscope and records the information onto the laptop.
Both the scope and the laptop are powered by a 12 V battery
which can be charged by a solar panel for extended surveys.
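Because the receiver records two-way travel times, turning an echo into a depth means halving the recorded time and multiplying by the velocity in ice. A minimal sketch (the ice velocity and the 10 µs record window are from the text; the 3 µs example echo is an illustrative value):

```python
V_ICE = 1.69e8  # m/s, radio-wave velocity in ice

def reflector_depth(two_way_time_s, velocity=V_ICE):
    # Depth to a reflector assumed to lie directly below the antennas.
    # The transmitter-receiver offset is ignored here; it is handled later
    # as a static correction during processing.
    return velocity * two_way_time_s / 2.0

print(reflector_depth(3e-6))   # a 3 microsecond echo -> roughly 253 m of ice
print(reflector_depth(10e-6))  # the 10 microsecond record window -> roughly 845 m maximum depth
```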
Radio Wave Propagation in Temperate Ice
As most people know, both water and ice are transparent to
the visible light portion of the Electro-Magnetic (EM)
spectrum. At the much lower frequencies (and longer
wavelengths) of radio waves, liquid water is opaque while ice
is still relatively transparent. This is why radio-echo
sounding is used in the sub-freezing regions of the Arctic
and Antarctic glaciers and ice sheets. There is little water
present within these cold ice masses to scatter or block the
radio signals. The lack of water has allowed researchers to
use frequencies ranging from a few MHz for subglacial
mapping, up to 200-500 MHz for crevasse detection near the
ice surface. Frequencies in the GHz range are used for
studies of snow structure and stratigraphy.
By definition, temperate ice exists at the
pressure-melting point. This means that both ice and water
phases coexist. The presence of liquid water presents a
problem when trying to use radio waves in temperate glaciers
because the water scatters the radio signals, making it difficult to receive coherent reflections that can later be interpreted.
In the late 1960s through the mid-1970s, a number of
researchers experimented with various frequencies and
transmitter designs. Their findings concluded that
frequencies between ~2 and ~10 MHz are best for temperate
glaciers. 5 MHz pulse-transmitters are the most commonly used.
The basic reason that a 5 MHz signal works in most
temperate ice is that the resulting 34 m wavelength is far
larger than the size of the majority of the englacial water
bodies that scatter the signal. Unfortunately, the long
wavelength of the signal seriously limits the resolution of
the radio-echo sounding survey.
EM Wave Propagation Through a Dielectric Material
Radio waves travel through ice due to its dielectric
properties. The dielectric constant of a given material is a
complex number describing the comparison of the electrical
permittivity of a material and that of a vacuum. As a complex
number, the dielectric constant contains both real and
imaginary portions. The real part of the number represents the polarization of atoms in the material as the EM energy passes through it, while the imaginary part represents energy lost to the material (Feynman, 1964). The EM wave propagation velocity is determined by the material's entire complex dielectric constant.
The propagation velocity of a radio wave in ice is
determined by the dielectric properties of ice. Liquid water
and various types of bedrock have unique dielectric
constants. Since the dielectric properties of a material are
related to conductivity, concentrations of dissolved ions in
liquid water will affect the dielectric constant (more free
ions increase the conductivity of water). The dielectric
constants of some materials are listed below:
Ice (at 0°C): 3.2 ± 0.03
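The ice velocity used throughout this page follows from that value: for a low-loss material the propagation velocity is approximately the free-space speed of light divided by the square root of the (real) relative permittivity,

```latex
v \approx \frac{c}{\sqrt{\varepsilon_r}}
  = \frac{3 \times 10^{8}\ \mathrm{m/s}}{\sqrt{3.2}}
  \approx 1.68 \times 10^{8}\ \mathrm{m/s}
```

which agrees with the 1.69 x 10^8 m/s figure quoted earlier.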
Reflections of Waves
The Basic Concept
When a wave encounters an interface between materials of
different properties, the wave may be refracted, reflected,
or both. Snell's Law describes the reaction of light to a
boundary between materials of different dielectric contrasts
(or refractive index), based on the angle at which a ray
perpendicular to the wave front hits the interface. The angle
of the incoming ray (Angle of Incidence: a_i) is equal to the angle of reflection (a_r). The Angle of Refraction (a_R) is determined by the ratio of the sines of the Angle of Incidence to the Angle of Refraction and the ratio of the dielectric constants for the upper and lower layers (ε_1 and ε_2).
There is a point where the Angle of Incidence
is large enough (close to horizontal) that there is no
refraction. This is called the Angle of Critical Refraction
where all the incoming waves are either reflected or
refracted along the interface. At angles larger than the Angle of Critical Refraction, only reflection occurs.
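Written out in the usual low-loss form (angles measured from the normal to the interface, with ε_1 for the layer containing the incoming ray and ε_2 for the other layer, matching the labels above):

```latex
a_r = a_i, \qquad
\frac{\sin a_i}{\sin a_R} = \sqrt{\frac{\varepsilon_2}{\varepsilon_1}}, \qquad
\sin a_c = \sqrt{\frac{\varepsilon_2}{\varepsilon_1}} \quad (\varepsilon_2 < \varepsilon_1)
```

Here a_c is the critical angle beyond which an incoming wave is entirely reflected; for an ice-to-air interface (ε_1 ≈ 3.2, ε_2 ≈ 1) it works out to roughly 34°.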
Radio-Echo Sounding in the Field
The appropriate field methods for gathering Radio-Echo
Sounding (RES) data depend upon the objective of the survey.
If a researcher simply wants a rough estimate of the glacier
thickness, only a couple readings might suffice. If a
high-resolution map of the glacier bed is desired, a dense
grid of measurement points is necessary. Below is a
description of the field techniques used to develop a
high-resolution map of the glacier bed. It is important to
remember that even after the field work is over there are
many hours of data processing to be done. The techniques
described here were developed to minimize the processing time
and to maximize the resolution of the resulting map.
Mapping the RES Grid
When processing and interpreting the RES data after the
field season, the researcher needs to know the topography of
the glacier surface to correct for changes in the recorded
wave travel times. The glacier surface topography is mapped
using the Global Positioning System (GPS) or by traditional
optical surveying. While GPS is faster, it does not have the
vertical or horizontal resolution of optical surveying. The
horizontal positions are necessary to locate the map with
respect to other maps of the area, while the vertical
coordinates are critical for the data processing and need to
be accurate to within 0.5 m.
In order to reduce the possibility of spatial aliasing and
to maximize the resolution of the RES survey, the traces
should be recorded less than one-quarter wavelength apart.
For example, a 5 MHz RES system produces a 34 m wavelength.
Therefore the grid of RES traces should be less than 8.5 m apart.
A rectangular grid with the traces aligned at 90° to one
another greatly simplifies the data processing.
Unfortunately, field conditions do not always oblige such an
orderly system and the grid is modified by the presence of
crevasses, melt-water ponds, steep slopes, avalanche debris,
etc. In such cases, detailed notes help to recreate the grid
during the data processing.
Recording the Profiles
The transmitter and receiver occupy separate sleds. These
may be pulled in-line or side-by-side depending on the design
specifications of the instruments. The Univ. of Wyoming
system is pulled side-by-side so that the transmitter and
receiver are pulled parallel to one another. A single
researcher pulls the transmitter on its homemade sled while
another pulls the receiver sled. A third researcher walks
beside the receiver sled to monitor the incoming signals on
the oscilloscope and then record them to the laptop computer.
Some systems can continuously record traces to a computer
and do minor amounts of pre-processing such as trace stacking
(or averaging) and digital filtering to remove noise. The
Univ. of Wyoming system is much simpler requiring the
researchers to stop at each position in the RES grid and
manually tell the computer to retrieve data from the
oscilloscope. Although more time consuming, this method
allows the researchers to monitor the condition of the
incoming data and results in a smaller data set. Each trace
recorded onto the computer is an average of at least 64
received pulses from the transmitter so that the
signal-to-noise ratio is improved.
RES Field Work on the Worthington Glacier
The Worthington Glacier is a
small temperate valley glacier in the Chugach Mts. of
South-Central Alaska. Radio-echo sounding surveys have been
recorded there in support of ice-dynamics research by the Univ. of Wyoming and the Institute
of Arctic & Alpine Research at the University of Colorado, Boulder.
Processing Radio Echo-Sounding Data
Processing the Radio Echo-Sounding (RES) data transforms
the data from incoherent numbers to a data set that can be
interpreted. Our processing methods are drawn from reflection
seismology techniques. These are outlined in Welch, 1996; Welch et
al., 1998; and Yilmaz, 1987. We use a number of IDL (from
Research Systems, Inc.) scripts to organize our data and
usually create screen plots of each profile through each step
of the processing to help identify problems or mistakes. We
also use Seismic
Unix (SU), a collection of freeware seismic processing
scripts from the Colorado School of Mines. SU handles the
filtering, gain controls, RMS, and migration of the data. IDL
is used for file manipulation and plotting and provides a
general programming background for the processing.
The processing steps below are listed in the order that
they are applied. The steps should be followed in this order.
Note that the quality of the processing results is strongly
dependent on the quality of the field data.
Data Cleaning and Sorting
The first step of data processing is to organize and clean
the field data so that all the profiles are oriented in the
same direction (South to North, for example), any duplicated
traces are deleted, profiles that were recorded in multiple
files are joined together, and surface coordinates are
assigned to each trace based on survey data. These steps are
some of the most tedious, but are critical for later
migration and interpretation.
Static and Elevation Corrections
The data is plotted as though the transmitter and receiver
were a single point and the glacier surface is a horizontal
plane. Since neither is the case, the data must be adjusted
to reflect actual conditions. The transmitter-receiver
separation results in a trigger-delay equivalent to the
travel-time of the signal across the distance separating the
two. This travel-time is added to the tops of all the traces
as a Static Correction.
The data is adjusted with respect to the highest trace
elevation in the profile array. Trace elevations are taken
from the survey data, and the elevation difference between any trace and the highest trace is converted into a travel-time through ice by dividing the elevation difference by the radio-wave velocity in ice (1.69 x 10^8 m/s). The travel-time is added to the top of the trace, adjusting the recorded data downward.
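A minimal sketch of those two corrections, treating each trace as a 1-D array of voltage samples (the sample interval, the transmitter-receiver separation and the array names are illustrative assumptions rather than values from the original survey; the one-way elevation time follows the procedure described above, though some workflows use the two-way time 2*dz/v instead):

```python
import numpy as np

V_AIR = 3.0e8      # m/s, radio-wave velocity in air (the direct wave)
V_ICE = 1.69e8     # m/s, radio-wave velocity in ice
DT = 1e-8          # s, sample interval -- assumed; set by the oscilloscope
SEPARATION = 20.0  # m, transmitter-receiver offset -- illustrative value

def apply_corrections(trace, trace_elev, max_elev, dt=DT):
    """Pad the top of a trace so that all traces share a common time zero."""
    static_t = SEPARATION / V_AIR             # trigger delay: the scope triggers on the direct wave
    elev_t = (max_elev - trace_elev) / V_ICE  # travel-time equivalent of the elevation difference
    n_pad = int(round((static_t + elev_t) / dt))
    return np.concatenate([np.zeros(n_pad), trace])

# Example: a trace recorded 12 m below the highest trace in the survey grid
raw = np.random.randn(1000)  # stand-in for a recorded trace
corrected = apply_corrections(raw, trace_elev=1148.0, max_elev=1160.0)
```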
Filtering and Gain Controls
We use a bandpass filter in SU to eliminate low and high frequency noise that results from the radar instrumentation,
nearby generators, etc. Generally we accept only frequencies
within a window of 4-7 MHz as our center transmitter
frequency is 5 MHz. Depending on the data, we will adjust the
gain on the data, but generally avoid any gain as it also
increases noise amplitude. We try to properly adjust gain
controls in the field so that later adjustment is unnecessary.
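A sketch of the same 4-7 MHz bandpass using SciPy rather than Seismic Unix (the sampling rate is an assumed value and the synthetic trace is only for demonstration; the original processing used SU's filtering tools):

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 100e6  # Hz, sampling rate -- assumed; it must comfortably exceed twice the upper cutoff

def bandpass(trace, low_hz=4e6, high_hz=7e6, fs=FS, order=4):
    # Zero-phase Butterworth bandpass bracketing the 5 MHz centre frequency
    nyq = fs / 2.0
    b, a = butter(order, [low_hz / nyq, high_hz / nyq], btype="band")
    return filtfilt(b, a, trace)

# Demonstration on a synthetic 10 microsecond trace: a 5 MHz signal plus broadband noise
t = np.arange(0, 10e-6, 1.0 / FS)
noisy = np.sin(2 * np.pi * 5e6 * t) + 0.5 * np.random.randn(t.size)
filtered = bandpass(noisy)
```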
Cross-Glacier Migration (2-D)
We 2-D migrate the data in the cross-glacier direction (or across the dominant topography of the dataset) in order to remove
geometric errors introduced by the plotting method. Yilmaz
(1987) provides a good explanation for the need for migration
as well as descriptions of various migration algorithms.
Why is migration necessary?
The radar transmitter emits an omni-directional signal
that we can assume is roughly spherical in shape. As the wave
propagates outward from the transmitter, the size of the
spherical wavefront gets bigger so when it finally reflects
off a surface, that surface may be far from directly beneath
the transmitter. Since by convention, we plot the data as
though all reflections come from directly below the
transmitter, we have to adjust the data to show the
reflectors in their true positions.
We generally use a TK migration routine that is best for
single-velocity media where steep slopes are expected. As you
can see from the plot below, the shape of the bed reflector
has changed from the unmigrated plots shown in the previous section.
Down-Glacier Migration (2-D)
In order to account for the 3-dimensional topography of
the glacier bed, we now migrate the profiles again, this time
in the down-glacier direction. We use the same migration
routine and the cross-glacier migrated profiles as the input.
Although not as accurate as a true 3-dimensional migration,
this two-pass method accounts for much of the regional
topography by migrating in two orthogonal directions.
[Figure: radar profile after down-glacier migration]
Interpreting and Plotting the Bed Surface
Once the profiles have been migrated in both the
cross-glacier and down-glacier directions, we use IDL to plot
the profiles as an animation sequence. The animation shows
slices of the processed dataset in both the down-glacier and
cross-glacier direction. By animating the profiles, it is
easier to identify coherent reflection surfaces within the
dataset. Another IDL script allows the user to digitize,
grid, and plot reflection surfaces.
The resolution of an interpreted surface is a function of
the instrumentation, field techniques and processing methods.
Through modeling of synthetic radar profiles, we have shown
that under ideal circumstances, we can expect to resolve
features with a horizontal radius greater than or equal to
half the transmitter's wavelength in ice. So for a 5 MHz
system, we can expect to resolve features that are larger
than about 34 m across. Since the horizontal resolution is
far coarser than the vertical resolution of 1/4 wavelength,
we use the horizontal resolution as a smoothing window size
for the interpreted reflector surfaces. We use a
distance-weighted window to smooth the surfaces.
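A sketch of that kind of distance-weighted smoothing applied to a gridded surface (the 34 m window follows the resolution argument above; the 5 m grid spacing and the inverse-distance weighting are illustrative choices, not a description of the original IDL script):

```python
import numpy as np

def smooth_surface(z, dx=5.0, window=34.0):
    """Distance-weighted smoothing of a gridded surface z (2-D array of elevations).

    dx     -- grid spacing in metres (illustrative)
    window -- smoothing window diameter in metres (the half-wavelength resolution)
    """
    half = max(1, int(window / (2 * dx)))  # half-window in grid cells
    ny, nx = z.shape
    out = np.empty_like(z, dtype=float)
    for i in range(ny):
        for j in range(nx):
            i0, i1 = max(0, i - half), min(ny, i + half + 1)
            j0, j1 = max(0, j - half), min(nx, j + half + 1)
            ii, jj = np.mgrid[i0:i1, j0:j1]
            dist = np.hypot((ii - i) * dx, (jj - j) * dx)
            w = 1.0 / (dist + dx)  # inverse-distance weights; +dx avoids division by zero at the centre
            out[i, j] = np.sum(w * z[i0:i1, j0:j1]) / np.sum(w)
    return out

# Example: smooth an interpreted bed surface gridded at 5 m spacing
# bed_smooth = smooth_surface(bed, dx=5.0, window=34.0)
```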
The ice and bedrock surfaces of a portion of the Worthington
Glacier obtained in the 1996 radio echo sounding survey. The 1994
boreholes are also plotted. (Plot by Joel Harper, U. of Wyo.)
The ice surface and bedrock surface beneath the Worthington
Glacier, Alaska. Resolution of both surfaces is 20 x 20 m. Yellow
lines indicate the positions of boreholes used to measure ice deformation.
Pictures of the Worthington Glacier Area
Notes on Radar Profiles
Three arrays of Radio-Echo Sounding profiles have been
recorded on the Worthington Glacier. The 1994 survey was recorded
using different field methods from those used in 1996 & 1998. The same equipment was used in all three surveys as
well as the same data processing techniques.
The first profiles were recorded in 1994 and oriented
parallel to the ice flow direction. The locations of these
profiles were not measured accurately, and the profiles were
recorded a few at a time over a period of about a month. The
resulting glacier bed map was not very accurate, with a
resolution of about 40 x 40 meters.
The 1996 radar profiles were recorded in the cross-glacier
direction. The location of every fourth trace of each profile
was measured with optical surveying equipment using a local coordinate system seen in the map
below. The profiles were spaced 20 m apart and a trace
recorded every 5 m along each profile. The resulting glacier
bed map had a resolution of 20 x 20 meters.
In 1998 we used the radio-echo sounding equipment to look
for englacial conduits that transport surface meltwater
through the glacier to its bed. This study required the
maximum resolution that we could obtain from the equipment,
so the profiles and traces were spaced every 5 m. Every
fourth trace on each profile was surveyed to locate it to
within 0.25 m and the entire RES survey was recorded in two
days. The survey was repeated a month later to look for
changes in the geometry of any englacial conduits found. The
first RES survey was processed to produce a map of the
glacier bed surface with a resolution of 17.5 x 17.5 m. The
maximum resolution obtainable by an RES survey is half of the
signal wavelength. Our 5 MHz system, therefore, can obtain 17
x 17 m resolution under the best of circumstances. | http://stolaf.edu/other/cegsic/background/index.htm |
Most atmospheric ozone is concentrated in the so-called ozone layer. The atmosphere's
ozone layer plays a very important role in protecting life on earth
from potentially harmful UV rays, and it also helps shape the earth's
climate. However, gases resulting from human activity such as chlorofluorocarbons
(CFCs) are believed to deplete the ozone layer.
The discovery of a
hole in the ozone layer which occurs each spring over Antarctica
focused the world's attention on the importance of the ozone layer
and stirred the global community into action.
Governments committed themselves to protecting the ozone layer and
better understanding atmospheric processes.
Only satellites can
measure ozone on a global scale, so they are essential for ozone
studies. Satellite ozone data are mainly used for monitoring the
global and vertical distribution of ozone, and are a valuable tool
for policy makers who need to take appropriate measures to protect
the ozone layer. | http://eoedu.belspo.be/en/applications/ozon-info.asp?section=8.5.3 |
All characteristics (traits) of an animal that can be seen or measured are referred to as its phenotype. This includes height, weight, growth rate, wool colour, temperament, reproductive ability, disease resistance etc. An animal’s phenotype for each of its traits depends on both genetics and environment. At conception, genetic material from the sperm and the egg merges, and the resulting fetus will contain 50% of its genes from the dam and 50% from the sire. These genes contain information regarding how each of the animal’s traits will develop (genetic potential or genotype). The environment in which an animal develops will affect whether the full genetic potential will be achieved.
For example, a lamb may have the genetic potential to achieve a maximum adult height of 90 cm. How tall the animal actually becomes depends on the environment in which it develops (i.e. food supply, protection from the elements, health care etc.). With optimum conditions the lamb will reach 90 cm. Under natural circumstances, the lamb could never be taller than this as the genes it inherited from its parents have set the upper limit. If the lamb is raised under very poor conditions, with poor feed, heavy parasite load etc, its adult height will be much less (perhaps only 70 cm), as growth will be limited by environment.
However, even with the phenotypic adult height of 70 cm, it would still be able to pass the genetic potential for greater height to its offspring. The opposite is also true, and a favourable environment can mask poor genetics, just as good genetics may be masked by poor conditions. Sheep do not pass on their environment to their progeny – only their genes. Sheep that have been especially well fed and pampered may look exceptionally good at shows or sales. However, their genetics may not result in a similar phenotype if their offspring are raised under different conditions. Therefore, it may be worthwhile to purchase genetic stock proven to perform well under a management system similar to your own.
Selection is the process of deciding which animals will be used as breeding stock, and which will not be used (castrated, sold, or slaughtered). Producers may base their selection decisions on various economically important traits of the stock, in the hope that the offspring will be profitable. Selecting and breeding rams and ewes that grew quickly as lambs, for example, should produce lambs with a genetic potential for fast growth.
If the fastest growing lambs from the next generation are retained, the genetic potential for this trait should continue to increase. Selection over long periods of time for particular characteristics has led to the development of breeds with recognisable phenotypes.
Traits are not equally affected by the animal’s genetics. The heritability of a trait is a measure of the relative importance of genetics and environment in developing the phenotype. Table 1 lists a few examples of traits and their heritability. The genetic contribution to the phenotype of a trait with a low heritability is only ~10%, whereas with highly heritable traits genetics may account for approximately half of the final phenotypic result. Since the heritability of all these traits is greater than zero, selection will result in genetic improvement for that trait. However, improvement will be much faster (occur in fewer generations) for traits with high heritabilities.
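The point about faster progress for highly heritable traits is usually summarized by the standard breeder's equation (not stated explicitly here, but it is the usual way this idea is written down): the response to selection R equals the heritability h² multiplied by the selection differential S, the amount by which the selected parents outperform the flock average.

```latex
R = h^{2} \, S
```

For example, if the rams and ewes kept for breeding grew 4 kg faster than the flock average and the heritability of growth rate were 0.3 (both illustrative numbers), their lambs would be expected to average about 1.2 kg of extra growth.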
‘What breed should I choose?’ is one of the first questions asked by people interested in getting into sheep production. The answer to that question will be based on many factors including:
- Management system (e.g. producers interested in lambing on pasture will probably steer away from the higher maintenance prolific breeds; accelerated lambing programs will run much smoother with breeds that have long breeding seasons etc.)
- Marketing Strategy (e.g. producers who wish to sell into the heavy lamb market will probably select a breed that will not over-fatten by the time the market weight is reached (heavy mature weight). Conversely, producers who are selling the majority of their lambs at less than 80lbs may find that the heavier breeds do not adequately fatten at lighter weights.)
- Breeding Strategy (e.g. pure bred vs. commercial, see below)
Although individual breeds have unique characteristics, sheep can be grouped into several general classes (for further descriptions of individual breeds see the pamphlet ‘Canadian Sheep’):
1. Terminal or Sire Breeds: These breeds are generally characterized by rapid growth, muscularity, and good carcass traits. Reproductive performance may be somewhat lower than in the maternal breeds.
Some examples of terminal breeds are: Texel, Suffolk, and Charollais
2. Maternal Breeds: These breeds tend to have higher fertility, increased number of multiple births, higher milk production, increased longevity, and mothering ability. However, they tend not to be as large or well muscled as the terminal breeds. Some examples of maternal breeds are: Dorset, Outaouais Arcott, and Romanov
3. Dairy Breeds: These breeds have been specifically selected for high milk production. The milk from ewes is mainly used to produce cheeses, such as feta, ricotta, and Camembert. Examples of dairy breeds are: East Friesian and British Milk Sheep.
4. Wool Breeds: Different breeds have different types of wool. Although the production of various items requires the use of different types of wool, some breeds have become known as ‘wool breeds’ in light of the fact that their wool may be highly valued in specialty markets. Examples of these breeds are: Icelandic, Merino, and Shetland
In spite of the fact that these categories have been listed, please keep in mind that all breeds have lambs, will grow, have wool, produce milk, and have a carcass! There is considerable variation both within and between breeds, and paying close attention to the breeder's reputation for quality genetics, as well as the breed that they are selling, is an important part of making the correct decision when buying breeding stock.
Pure breeding (Straight cross)
This is the simplest type of breeding system, as all the sheep (rams and ewes) are the same breed. As noted above breeds have generally been selected for a specific aspect of their production (e.g. fast growth, strong maternal characteristics etc.). However, in a commercial lamb operation the number and quality of market lambs are important, so both terminal and maternal characteristics need to be incorporated into the breeding strategy. Therefore, purebred producers often supply ‘seed stock’ to commercial producers, who will use the purebreds as a foundation for a crossbreeding program.
As the name suggests, crossbreeding is the mixing of two or more breeds together. There are two main benefits to crossbreeding. Hybrid vigour or heterosis: Hybrid vigour refers to the fact that crossbred offspring often out-perform the average of their parents. Hybrid vigour decreases as the heritability of a trait increases. Therefore, it is often used to improve performance for low heritability traits. For example, two maternal breeds may be crossed to further improve reproductive performance in their offspring. This greatly benefits fertility traits, which are of low heritability and do not respond well to selection. Crossbred ewes are generally more fertile, productive, and long-lived than purebred ewes. For example, if a prolific breed (produces 3 lambs on average) is crossed with a less prolific breed (1.6 lambs on average), the cross is expected to produce (3+1.6)/2 = 2.3 lambs on average. However, the crossbred ewe might well produce 2.5 lambs on average. The extra production over the average of the parental breeds (2.5 – 2.3 = 0.2 lambs) is due to hybrid vigour or heterosis.
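A minimal sketch of that arithmetic, using the lambing figures from the example above:

```python
def heterosis(parent_a_mean, parent_b_mean, crossbred_mean):
    # Hybrid vigour: how far the crossbred offspring exceed the average of the two parent breeds
    midparent = (parent_a_mean + parent_b_mean) / 2.0
    extra = crossbred_mean - midparent
    percent = 100.0 * extra / midparent
    return midparent, extra, percent

# Lambs born per ewe: prolific breed 3.0, less prolific breed 1.6, crossbred ewes 2.5
mid, extra, pct = heterosis(3.0, 1.6, 2.5)
print(mid, extra, round(pct, 1))  # 2.3 expected, 0.2 extra lambs, roughly 8.7% heterosis
```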
Breed Complementarity: This refers to the crossing of two dissimilar breeds in order to combine the best traits of both breeds. An example of this would be crossing a well-muscled Texel ram with highly fertile Rideau Arcott ewes to produce a large crop of high quality lambs. Although the lambs may not be as heavily muscled as straight Texels, the lambing percentage will be much higher with the Arcott influence.
This type of strategy is likely to produce better results than trying to select for highly fertile, heavily muscled animals within one breed.
Two Way Cross
In this case, rams of one breed are used to breed ewes of a second breed, resulting in crossbred lambs. This strategy takes advantage of hybrid vigour and/or breed complementarity in the offspring. Breeding is relatively simple as you are only dealing with one breed of ewes and one breed of ram. However, since the offspring are crossbred, all replacements must be purchased.
Three-Way Cross

This strategy mates the two-way crossbred ewe lambs (‘F1’ lambs) to a ram of a third breed. The resulting progeny are a mix of three different breeds. This strategy takes advantage of hybrid vigour in the crossbred ewe as well as in the three-way crossed lambs. However, all replacement ewes still need to be purchased. Some producers specialize in producing crossbred ewe lambs for this type of system.
Three-Way Rotational Crosses
Similar to the three-way cross, the three-way rotational cross starts with mating a crossbred ewe to a ram of a third breed. The crossbred ewe lambs are kept as replacements rather than being sold. These three-way cross ewe lambs are then mated to one of the two breeds in the first cross, and the process continues in the same manner. The sketch below illustrates the system.
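A short sketch of how the rotation plays out: each generation the replacement ewe lambs carry half their genes from that year's ram breed and half from their dams, so the flock's breed shares cycle and eventually settle near 4/7, 2/7 and 1/7 for the most recent, second and third sire breeds. The breed names, rotation order and starting flock below are illustrative only (Rideau Arcott, Dorset and Suffolk, echoing the example in the next section):

```python
# Track the breed composition of replacement ewes in a three-breed rotational cross.
# Each ewe lamb inherits 50% of her genes from the sire breed used that generation
# and 50% from her dam's breed mix.
SIRE_ROTATION = ["Rideau Arcott", "Suffolk", "Dorset"]

def rotational_cross(generations=9):
    dam = {"Rideau Arcott": 0.0, "Suffolk": 0.0, "Dorset": 1.0}  # start from purebred Dorset ewes
    for g in range(generations):
        sire = SIRE_ROTATION[g % len(SIRE_ROTATION)]
        lamb = {breed: 0.5 * share for breed, share in dam.items()}
        lamb[sire] += 0.5
        dam = lamb  # keep the ewe lambs as next year's replacements
        mix = ", ".join(f"{breed} {share:.2f}" for breed, share in dam.items())
        print(f"Generation {g + 1}: {sire} ram -> ewe lambs are {mix}")

rotational_cross()
```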
This method of breeding helps maintain hybrid vigour and eliminates the problem of having to buy in replacements. However, the breeding season can get complex since there is a need for three separate breeding flocks each year for the three different breeds of rams. Accurate record keeping and animal ID are critical with this system.
This type of breeding program combines three-way crossing and the rotational crossing programs. In this system, a percentage of the Rideau Arcott x Dorset ewes from generation 1 (table above) would be bred in a two-breed rotational system using a Rideau Arcott or Dorset ram. All of the ewe lambs from this breeding would be kept as replacements. The remaining portion of the Rideau Arcott x Dorset ewes would be bred to a terminal sire, such as a Suffolk ram, and these lambs would be marketed.
This strategy produces replacements within the system and retains hybrid vigour in the ewe flock. However, three separate breeding groups are required each year to accommodate the three different breeds of ram, which requires reliable animal ID and record keeping systems | http://informedfarmers.com/sheep-breeding-and-genetics/ |
Gather together a variety of lengths of string, at least 3
metres long, each tied into a loop. Chalk and/or a camera would be
useful so that you can record the shapes made. A pair of scissors
might also come in handy.
Suggest to the children that they find a partner to work with.
'Big space' maths
Use everyday words to describe position
Join with another pair. Take it in turns to make one shape
inside/outside/next to/above/under another.
Explore other ways of working together. Tell me what you've done.
Count reliably up to ten everyday objects
Use language such as more or less to compare two numbers
Does your shape have corners/sides? How many? What do you notice?
Can you fit any children in your shape? If so, how many? If you
change your shape, will there be more or fewer children?
See what you can make. You could use both hands.
Tell me about what you've made.
Look at theirs - can you make one like it?
I like the shape you've made.
I wonder how you could change it?
I wonder what you could do next?
Use everyday and/or mathematical language such as
'straight', 'corner', 'bigger', 'side', 'longer', 'triangle' ... to
describe the shape and size of flat shapes
Let's look at what happens when we hold it on the ground.
What can we see?
Have you seen this sort of shape before? Where?
How is it the same as/different from others?
Would you like to change your shape in some way?
Would you like to make another shape?
Use developing mathematical ideas and methods to solve practical problems
How could you make a shape with a different number of sides?
What's the biggest object you can put your string around? What
would happen if we joined two or more pieces of string together?
Use photographs of the children's string shapes for sorting
and matching activities.
Have a go at Mrs Trimmer's String: http://nrich.maths.org/2907
In addition to these mathematical observations you will have
opportunities to observe other aspects of the EYFS Themes and Principles.
You may like to print off one of these sheets as an aide-memoire (1, 2, 3), on
which you can note down what individual children say and do as they
engage with the activity. Please do send us photos, further
suggestions and comments. | http://nrich.maths.org/7420/index?nomenu=1 |
Widely recognized for its wildlife and its rough, untamed beauty, the Arctic
region is a sensitive indicator of global change. The northern latitudes amplify
shifts in temperature, ocean circulation, precipitation and evaporation that
occur elsewhere on the planet, making the region a kind of early warning system
for global climate change.
Alaska's Muir Glacier has retreated more than 120 kilometers (75 miles) in the past 200 years. Courtesy of Bruce Molnia.
Over the past century, Arctic and sub-Arctic regions have warmed by 2 or 3 degrees Celsius, two or three times the amount of warming that has occurred elsewhere on the planet. That seemingly incremental shift upward in temperature has created noticeable changes in the region.
Last year, Jonathan Overpeck, director of the Institute for the Study of Planet Earth at the University of Arizona in Tucson, and his co-authors said that the Arctic may have reached a point where it will tip into a super interglacial state, a long warm period unlike anything experienced in the past 1 million years. The drivers behind that shift are complex but not completely unclear, Overpeck and co-authors concluded, in a paper published last August in Eos. Surprisingly, human activities, such as the release of greenhouse gases, seem to have less of a direct impact except in how they affect physical processes from far away, the scientists said.
Direct effects from ocean and air temperature changes, Overpeck and his co-workers determined, mean that sea ice in the Arctic, which generally freezes the surface of a significant portion of the North Pole's open ocean no matter what time of year, could disappear during the summer months in the next century, resulting in completely open water in the Arctic for the first time in the past 800,000 years. Other researchers found similar timing in models for the degradation of near-surface permafrost, the layer of long-frozen soil that underlies millions of acres of Alaskan and Siberian tundra.
Even the massive ice sheet that covers Greenland seems to be feeling the effects of warming, as the glaciers that ring the edge of the massive block of continental ice flow faster. Greenland's ice sheet is more than 2 million square kilometers, and 85 percent of it is extremely thick: more than 3 kilometers deep in some places. If the entire sheet were to melt, it would raise sea level by as much as 6 meters (nearly 20 feet).
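As a rough sanity check of that sea-level figure (not from the article): taking the 2 million square kilometer area quoted above, an assumed average ice thickness of about 1.2 kilometers, and standard values for ice density and world-ocean area gives a rise of the same order as the 6 meters cited.

    # Back-of-envelope only; the mean thickness, density ratio and ocean area are assumptions.
    ice_area_km2 = 2.0e6        # Greenland ice sheet area (figure from the article)
    mean_thickness_km = 1.2     # assumed average thickness (3 km is the maximum quoted)
    density_ratio = 0.9         # ice yields roughly 90% of its volume as liquid water
    ocean_area_km2 = 3.6e8      # approximate area of the world ocean
    meltwater_km3 = ice_area_km2 * mean_thickness_km * density_ratio
    rise_m = meltwater_km3 / ocean_area_km2 * 1000.0   # kilometres converted to metres
    print(round(rise_m, 1))     # roughly 6 m, the same order as the figure quoted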
From the region's glaciers and Greenland's giant ice sheet to the Arctic Ocean's sea ice and the region's permafrost, the icy landscape is shifting dramatically, fulfilling many climate scientists' predictions that the Arctic would be the first to experience climate change, and that the effects there would be most pronounced. The ultimate outcome of these changes remains uncertain for plants, animals and ice, however, due to the complexities of the physical and biological systems there. Whether or not climate changes cause the Arctic to lose its ice completely, the warming temperatures have already set off a complex web of events.
Perhaps the most obvious change in the Arctic has been to the region's glaciers. Only recently have glaciologists determined how glaciers could start to move and shrink very rapidly.
The fastest-moving glacier in the world flows out of western Greenland. The glacier, called Jakobshavn Isbrae, moved at its fastest velocity of 12.6 kilometers a year in 2003, and continues to send its ice into the Arctic Ocean while receding inland. Image courtesy of NASA/USGS.
In the early 1980s, scientists noticed that glaciers at the edge of Greenland, as well as those in Alaska, had started to melt drastically, even though the local climate had not dramatically warmed. By the 1990s, though, long-term changes had thinned glacier ice to a threshold where a subtle change could set off massive melting, according to Tad Pfeffer of the Institute of Arctic and Alpine Research at the University of Colorado in Boulder.
When it gets a little warmer, things don't necessarily get only a little worse, Pfeffer said at a press conference at last December's annual meeting of the American Geophysical Union in San Francisco. He described the retreat of the Columbia Glacier in Alaska, for example, which has uncovered plants and soils for the first time since the mid-19th century. Disappearing as quickly as 30 meters a day, Pfeffer said, Columbia's retreat is not unique or unusual in Alaska, where the termini of many so-called tidewater glaciers have retreated kilometers up their valleys.
Melting Alaskan glaciers already contributed between one-tenth and one-fifth of a millimeter to sea-level rise over the past half-century, according to Anthony Arendt of the University of Alaska in Fairbanks and his co-workers, and at increasing rates in the 1990s. Using airborne laser altimetry to measure glacier ice thickness, they extrapolated a loss of 35 cubic kilometers a year by 2002, amounting to almost 0.3 millimeters of sea-level rise, and nearly twice the annual loss of ice from Greenland's ice sheet.
Rapid melting and retreat is now also occurring on the outlet or exit glaciers that rim Greenland. Fed by the huge continental ice sheet, exit glaciers carry ice off the continent and into the ocean, where it could melt and potentially raise sea level.
Cumulative research by glaciologists around the globe recently showed how such land-to-sea glaciers speed up. Added weight on the glacier can drive the velocity of an exit or tidewater glacier. Once it hits the ocean, the ice makes a transition at what glaciologists call the grounding line, the boundary on the glacier between its floating and grounded halves.
That line shifts with changes in ocean and air temperatures. A cold ocean allows a glacier to push into the water without melting, bringing the grounding line down toward the water with it as the glacier's weight pushes into the land and then into the water. But warmer ocean waters might melt the ice tongue, thinning the glacier and buoying up the ice, moving the grounding line inland as the glacier's weight recedes. Warmer air temperatures could cause a glacier to calve, decreasing its weight at the water end, which also would make the glacier more buoyant and move the grounding line further inland.
These ice dynamics have dramatically thinned Jakobshavn Isbrae, a glacier in West Greenland, making it the fastest-moving glacier in the world (its fastest clip clocked well over 12 kilometers a year in 2003). In the past five years, calving at its terminus effectively took the brakes off the glacier, scientists from NASA and elsewhere have said, so that it both sped up and got shorter while it thinned, despite all the ice behind it from the continental sheet.
The weight and height of Greenland's ice sheet itself affects the behavior of the glaciers at its edges, even from its center, and the continental ice sheet is behaving curiously, says Waleed Abdalati, head of NASA's Cryospheric Sciences Branch at the Goddard Space Flight Center in Greenbelt, Md. Scientists have detected more snowfall inland on the ice sheet. If sea ice is retreating, that means there's more water vapor, Abdalati says, and more vapor produces more precipitation, essentially depositing more snow in the inland snow bank, which eventually could feed the rapidly shrinking glaciers at the edge of the sheet.
The warming that produces more precipitation could also eventually change ocean circulation patterns, producing a different cascade of perhaps surprising effects: It could eventually cool the Arctic, Abdalati says. Disruptions, for example, in the Atlantic's thermohaline circulation (the so-called ocean conveyor belt that brings warm water north from the tropics) could occur due to a sudden influx of cold freshwater from melting glaciers or increased discharge from large Arctic rivers. One such recent event (known as a Heinrich event) took place 16,000 years ago, when thousands of melting icebergs from North America changed ocean salinity and local air temperatures by as much as 10 degrees Celsius (see Geotimes, February 2006).
Overpeck and colleagues' study, which projected that the Arctic could be completely ice-free during summer months by the end of the century, raises a slew of questions about complex feedback systems in the region. For example, Arctic researchers point out that losing ice means a change in the reflectivity across the region for several reasons, says Terry Chapin, an ecologist at the University of Alaska in Fairbanks, including the simplest: When it gets warmer anywhere, it changes snow cover, especially in the spring and the fall, or it changes the amount of sea ice that is present.
Snow and ice reflect the sun's energy, but dark-colored areas absorb more heat, and then heat the air above. Any small amount of change in snow cover or ice from melting will dramatically change the amount of energy reflected, or the albedo, from the Arctic, Chapin says. Shifting from a white surface to dark has tremendous amplification effects on the warming that occurs. Chapin estimates that the loss of all the Arctic Ocean's sea ice would increase the amount of energy absorbed by the water from 5 percent to 70 to 90 percent.
Satellite measurements of ice covering the north polar ocean show smaller and smaller coverage of the Arctic over the past 25 years. Analyses made every September, the month of the year in the Arctic with the minimum annual ice cover, show that the Arctic Ocean has lost more than 329,000 square kilometers of sea ice per decade over the past 30 years, according to one NASA study. Last fall, researchers at the National Snow and Ice Data Center at the University of Colorado in Boulder reported that September's sea ice covering the Arctic measured 5 million square kilometers, the smallest area since satellite measurements started in 1978.
Changes to albedo from that loss promise to ripple through the rest of the planet's climate system, according to climate models. One preliminary study by David Rind of NASA's Goddard Institute for Space Studies in New York City and co-workers shows that the complete disappearance of sea ice accounted for as much as 37 percent of the global average temperature change in model runs. Another by Jacob Sewall and Lisa Sloan of the University of California in Santa Cruz shows that precipitation over the western United States could decrease dramatically, possibly through teleconnections between shifting temperatures and effects on weather in the northern latitudes and subsequent storm tracks over western North America.
But scientists working in the Arctic say they do not know exactly what will happen when sea ice disappears. So far, too few studies have been conducted, says Marika Holland, a co-author of Overpeck's at the National Center for Atmospheric Research (NCAR) in Boulder, Colo. It's really difficult to isolate the effects of the sea ice versus everything else that's changing, she says.
As a former student and now as a hydrology professor at the University of Alaska in Fairbanks, Larry Hinzman says that he has seen a variety of changes to the regional landscape, some of which have been taking place over the past century. But some changes seem to have happened overnight, Hinzman says, such as the appearance of thermokarst.
The development of such sinkholes, which occurs as permafrost degrades, is one of the most dramatic changes that Hinzman says he has seen since he first moved to Alaska in the 1980s. Recent short winters, warmer summers and other aspects of warming climate have thawed permafrost that then does not refreeze solidly during the next cold season. The melt has left the Alaskan interior pockmarked by thermokarst, sometimes creating small lakes and sinkholes underneath houses, some large enough to swallow an 18-wheeler truck.
Defined as ground that has been frozen for at least two years, permafrost can extend as deep as several kilometers in some regions, and its extent may be patchy or continuous over large regions. The layer that is most at risk is the top 3 to 5 meters, according to modeling by Dave Lawrence of NCAR and Andrew Slater of the University of Colorado's National Snow and Ice Data Center in Boulder.
Using results from a global climate model to evaluate a variety of possible future climate scenarios, Lawrence and Slater found that northern latitudes' surface layer of permafrost could nearly disappear in the next 100 years, they reported online Dec. 17 in Geophysical Research Letters. Other permafrost researchers disagree, saying it will take much longer (from hundreds of years to thousands), and the modelers themselves underscore the uncertainty of their conclusions. Nevertheless, fallout from current permafrost loss or degradation can be seen even now, says Glenn Juday, an ecologist at the University of Alaska in Fairbanks.
As thawing of permafrost continues, there are all kinds of implications, Juday says. In a landscape that is extremely cold, vegetation and other organic matter doesn't decompose, making permafrost a giant storehouse of carbon, he says. It's the perfect carbon sequestration mechanism, and we're basically unwinding it, the product of thousands of years of excess growth and decomposition, and releasing greenhouse gases (carbon dioxide and methane) to the atmosphere. The amount of carbon stored in the Arctic remains under debate, with some researchers estimating that one-third of the world's soil-stored carbon is tucked away in western Siberia alone (see Geotimes, July 2005).
Juday also cites previous work showing that permafrost thaw has led to a loss of about 40 percent of some regions' surface water. Water perched in lakes and ponds that sat atop long-frozen soil percolated away once the permafrost barrier melted.
Such thawing has led to dramatic changes in river systems and watersheds, Hinzman says. He and his colleagues have found that softened permafrost allowed rivers to change their courses: In some cases, streambeds shifted from straight parallel lines, like the tines of a comb, to a more mature network, like the branches of a tree. The resulting changes increase erosion, producing more silt and sediment, carried downstream where it impacts fish, insects and plants, as well as river and stream flows.
Shifts in permafrost may also allow darker plants to move northward into previously lighter-colored regions, with the potential to increase the amount of heat absorbed locally by Earth's surface. Although these effects on albedo are relatively small, warmer air temperatures could lead to more warming and melting.
Beyond their potential to affect the entire planet's climate system, many Arctic changes will remain particularly intense locally, as concluded in results of the Arctic Climate Impact Assessment. The coalition of scientists from Arctic-ringing nations released several reports last year detailing the effects of climate change in the region. Biodiversity there might increase, as warmer weather allows species that could not previously tolerate the cold to move in, while some native species might disappear. Anecdotal evidence shows that indigenous people have already lost some traditional hunting routes, as have polar bears, which rely on sea ice as a platform for winter feeding.
Oil exploration and drilling seasons in Alaska have shortened with shrinking winters, as thawed tundra cannot support heavy equipment in warmer temperatures. Although an ice-free Arctic Ocean may mean expanded shipping passages, new fishing grounds and access to previously ice-covered oil deposits, drawbacks include political boundary disputes as international interest increases in getting access there and the increased resources necessary to patrol Arctic waters, the Arctic Climate Impact Assessment reports say.
On the one hand, some in the Arctic region may be overjoyed by [an] Arctic ice-free sea route, says Mickey Glantz of NCAR. On the other hand, it's devastating, because the loss of Arctic sea ice will alter the northern hemisphere. Meanwhile, all interested parties (scientists, policy-makers and residents) will continue to watch the northern latitudes for changes, and how those changes might translate to the rest of the planet. | http://www.geotimes.org/mar06/feature_ArcticIce.html |
4.0625 | Computer programming (often shortened to programming or coding) is the process of writing, testing, and maintaining the source code of computer programs. The source code is written in a programming language. This code may be a modification of existing source or something completely new, the purpose being to create a program that exhibits the desired behavior. The process of writing source code requires expertise in many different subjects, including knowledge of the application domain, specialized algorithms, and formal logic.
(More at Computer programming Wiki)
A programming language is an artificial language that can be used to write programs which control the behavior of a machine, particularly a computer. Programming languages are defined by syntactic and semantic rules which describe their structure and meaning respectively. Many programming languages have some form of written specification of their syntax and semantics; some are defined by an official implementation (for example, an ISO Standard), while others have a dominant implementation (such as Perl).
Programming languages are also used to facilitate communication about the task of organizing and manipulating information, and to express algorithms precisely. Some authors restrict the term "programming language" to those languages that can express all possible algorithms; sometimes the term "computer language" is used for more limited artificial languages.
(More at Programming language Wiki) | http://www.codethispc.com/ |
4.03125 |
The megahertz myth, or less commonly the gigahertz myth, refers to the misconception of only using clock rate (for example measured in megahertz or gigahertz) to compare the performance of different microprocessors. While clock rates are a valid way of comparing the performance of different speeds of the same model and type of processor, other factors such as pipeline depth and instruction sets can greatly affect the performance when considering different processors. For example, one processor may take two clock cycles to add two numbers and another clock cycle to multiply by a third number, whereas another processor may do the same calculation in two clock cycles. Comparisons between different types of processors are difficult because performance varies depending on the type of task. A benchmark is a more thorough way of measuring and comparing computer performance.
The myth started around 1984 when comparing the Apple II with the IBM PC. The argument was that the PC was five times faster than the Apple, as its Intel 8088 processor had a clock speed roughly five times the clock speed of the MOS Technology 6502 used in the Apple. However, what really matters is not how finely divided a machine's instructions are, but how long it takes to complete a given task. Consider the LDA # (Load Accumulator Immediate) instruction. On the 6502, which runs at 1 MHz, that instruction requires 2 clock cycles, or 2 μs. Although the 4.77 MHz 8088's clock cycles are shorter, the LDA # needs 25 of them, so it takes 25/(4.77 x 10^6) s = 5.24 μs. For that instruction the Intel machine runs 2.62 times slower than the 6502-based machine.
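The cycle arithmetic can be verified directly; the snippet below adds no new information and simply recomputes the figures in the paragraph above:

    # Timing check for the LDA # example (1 MHz 6502 versus 4.77 MHz 8088).
    t_6502 = 2 / 1.0e6        # 2 cycles at 1 MHz  -> 2.0 microseconds
    t_8088 = 25 / 4.77e6      # 25 cycles at 4.77 MHz -> about 5.24 microseconds
    print(t_8088 * 1e6)       # ~5.24
    print(t_8088 / t_6502)    # ~2.62, i.e. the 8088 takes 2.62 times as long here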
The x86 CISC based CPU architecture which Intel introduced in 1978 was used as the standard for the DOS based IBM PC, and developments of it still continue to dominate the Microsoft Windows market. An IBM RISC based architecture was used for the PowerPC CPU which was released in 1992. In 1994 Apple Computer introduced Macintosh computers using these PowerPC CPUs, but IBM's intention to produce its own desktop computers using these processors was thwarted by delays in Windows NT and a falling out with Microsoft. Initially this architecture met hopes for performance, and different ranges of PowerPC CPUs were developed, often delivering different performances at the same clock rate. Similarly, at this time the Intel 80486 was selling alongside the Pentium which delivered almost twice the performance of the 80486 at the same clock rate.
Rise of the myth
The myth arose because the clock rate was commonly taken as a simple measure of processor performance, and was promoted in advertising and by enthusiasts without taking into account other factors. The term came into use in the context of comparing PowerPC-based Apple Macintosh computers with Intel-based PCs. Marketing based on the myth led to the clock rate being given higher priority than actual performance, and led to AMD introducing model numbers giving a notional clock rate based on comparative performance to overcome a perceived deficiency in their actual clock rate.
Modern adaptations of the myth
With the advent of multi-core and multi-threaded processing, the myth has stirred up more misconceptions regarding the measurement of performance in multi-core processors. Many people believe that a quad-core processor running at 3 GHz would result in an overall performance of 12 GHz. Others may say that the overall performance is in fact 3 GHz, with each core running at 750 MHz. Both of these ideas are incorrect. While micro-architecture traits such as pipeline depth play the same role in performance, the design of parallel processing brings another factor into the picture: software efficiency.
It is true that a poorly written program will run poorly on even a single-core system, but even a well-written program that was designed in a linear fashion will often (if not always) perform better on a single-core system than on a multi-core one.
Take the following instructions, for example:
x = (x + 1);
A program such as this will actually run faster on a single-core chip with a 4 GHz clock rate than on a dual-core chip that clocks at 2 GHz. Why? Because the equation x = (x + 1) depends on the previous value of x, which can only be accessed by the core that computed that previous value. Therefore, every time that instruction repeats (indefinitely, in this case) the new value of x must be derived by the same core as the previous value, effectively limiting the process to one core. On a single-core system, this is fine and dandy, as the entire 4 GHz peak performance is churned out by one lone core, but on a multi-core system, each core is running at only 2 GHz, slicing the program's speed in half (or more).
However, if the code is altered:
x = (x + 1);
y = (y + 1);
The values x and y are independent of each other, and therefore can be processed on separate cores at the same time (rather than waiting in a queue for one core). Programs written to take advantage of multi-threading, such as this one (note that setting up and writing a multi-threaded program is much more complex than depicted here; these examples have been simplified for readability), are able to approach the peak efficiency of (clock rate * number of cores). But because cores share resources (each core needs to pick up instructions from a common place), 100% peak efficiency will never be fully reached, so even a well-written multi-threaded program may still do slightly better on a single-core system with twice the advertised clock speed.
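A concrete sketch of this contrast follows. It is not from the original article (the counter size, worker count and use of Python worker processes are illustrative assumptions), but it shows the same idea: the dependent chain is forced to run serially, while the two independent counters can be handed to separate workers on separate cores.

    # Minimal sketch of serial (dependent) versus parallelizable (independent) updates.
    from concurrent.futures import ProcessPoolExecutor

    N = 10_000_000

    def bump(start):
        # Each chain reads only its own previous value, so separate chains
        # can run on separate cores at the same time.
        v = start
        for _ in range(N):
            v = v + 1
        return v

    if __name__ == "__main__":
        # Dependent chain: every x = (x + 1) needs the previous x, so the work
        # is inherently serial no matter how many cores are available.
        x = 0
        for _ in range(N):
            x = x + 1

        # Independent chains: x and y never read each other, so two worker
        # processes (two cores) can advance them concurrently.
        with ProcessPoolExecutor(max_workers=2) as pool:
            results = list(pool.map(bump, [0, 0]))
        print(x, results)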
A system's overall performance cannot be judged by simply comparing the number of processor cores and clock rates; the software running on the system is also a major factor in observed speed. The myth of the importance of clock rate has confused many people as to how they judge the speed of a computer system.
Challenges to the myth
Computer advertising emphasized processor megahertz, and by late 1997 rapidly increasing clock rates enabled the Pentium II to surpass the PowerPC in performance. Apple then introduced Macs using the PowerPC 750 (or G3) which they claimed outperformed Pentium IIs while consuming less power. Intel continued to promote their higher clock rate, and the Mac press frequently used the "megahertz myth" term to emphasise claims that Macs had the advantage in certain real world uses, particularly in laptops.
Comparisons between PowerPC and Pentium had become a staple of Apple presentations. At the New York Macworld Expo Keynote on July 18, 2001, Steve Jobs described an 867 MHz G4 as completing a task in 45 seconds while a 1.7 GHz Pentium 4 took 82 seconds for the same task, saying that "the name that we've given it is the megahertz myth". He then introduced senior hardware VP Jon Rubinstein who gave a tutorial describing how shorter pipelines gave better performance at half the clock rate. The online cartoon Joy of Tech subsequently presented a series of cartoons inspired by Rubinstein's tutorial.
Intel reaches its own speed limit
For many years, from approximately 1995 to 2005, Intel advertised its Pentium mainstream processors primarily on the basis of clock speed alone, in comparison to competitor products such as those from AMD. Press articles had predicted that computer processors might eventually run as fast as 10 to 20 gigahertz in the next several decades.
This continued up until about 2005, when the Pentium Extreme Edition was reaching thermal dissipation limits running at speeds of nearly 4 gigahertz. The processor could go no faster without requiring complex changes to the cooling design, such as microfluidic cooling channels embedded within the chip itself to remove heat rapidly.
This was followed by the introduction of the Core 2 desktop processor, which was a major change from previous Intel desktop processors, allowing nearly a 50% decrease in processor speed while retaining the same performance.
Core 2 had its beginnings in the Pentium M mobile processor, where energy efficiency was more important than raw power, and initially offered power-saving options not available in the Pentium 4 and Pentium D. | http://en.wikipedia.org/wiki/Megahertz_myth |
4.125 | The greatest component of drag, and the main difficulty for ship designers, is frictional drag created from the interaction between the hull surface and the surrounding water. The region of water affected by the passage of a ship—known as the boundary layer—is a turbulent area where the presence of the solid surface slows general water flow. Injected air lubricates the boundary layer. Because air's viscosity—its resistance to flow—is only about 1 percent that of water, the ship moves through more efficiently. "Most of the action occurs only a millimeter or two away from the surface," says Steven Ceccio, a University of Michigan mechanical engineer leading a U.S. team's research of ship-hull drag. "One bubble diameter away is enough to halt the effect." Ceccio's work is supported by the Defense Advanced Research Projects Agency (DARPA) and the Office of Naval Research.
Over the past eight years, the Michigan team has investigated a variety of techniques to cut friction drag. First it looked at injecting slippery polymers into the water at the boundary layer. "Near the injector, drag was reduced by 70 percent, but the polymer degrades in the turbulence and just diffuses away," Ceccio says, "which means it needs constant replenishment, so we turned elsewhere."
The researchers next shot bubbles—a millimeter or less in diameter—into the boundary layer. They got an 80 percent drag decrease for six feet (two meters) or so, but again, no satisfaction; the bubbles refused to cling to the hull surface long enough to have a significant effect on overall efficiency. If one injects enough gas, however, the bubbles eventually coalesce into a buoyant film that can sit (at least for awhile) between the horizontal hull and the water, which is what Ceccio's team is working on now—air layer drag reduction. In this concept, the bubbles typically would leak sternward and out from under the hull. New air would be injected forward to constantly refill the lubricating air pocket.
Scientists speculate that more effective drag-lowering systems using smaller "microbubbles" might be possible if someone could come up with a low-cost way to make the sub-millimeter bubbles. Winkler says that his company is working on a "super-microbubble generator" that would enable existing ship hull designs to be retrofitted with such technology. These systems would also require the installation of surface cavities in the hulls.
The big issue then becomes maintaining stable coverage of nearly the entire hull surface so that rough seas do not simply wash away the bubbles. Continuous, maximal coverage is the key to success; every millisecond that a section of hull contacts water directly contributes to drag. This means ships might have to be equipped with radar and laser sensors that detect oncoming waves, which could permit constant adjustment of air flow in time to compensate for rough seas.
Although the costs of this air-carpet technology have not been fully worked out, Winkler says that adding relatively simple air cavity systems into new ship construction would add 2 to 3 percent to building costs. | http://www.scientificamerican.com/article.cfm?id=air-cavity-system&page=2 |
4.125 | Information about the Swampy Cree Indians.
The Cree can trace their earliest origins to the James Bay region of northern Quebec. The arrival of European fur traders into the James Bay area in the 1600s prompted the Cree to use their trapping and river navigation skills to secure a place for themselves as animal pelt suppliers. The trading of European goods brought about a shift in the traditional way of life for the Cree, as access to European firearms gave them a sudden advantage in hunting and warfare. Traditional Cree tools were gradually replaced by European-made implements. The Cree also traded extensively with the Nakoda (Assiniboine) People and forged a close alliance with them during the 1600s. Trading partnerships for the Cree often led to kinship alliances with those with whom they traded, and intermarriage with other Aboriginal peoples was common.
The depletion of fur-bearing animals pushed the fur trade west to the woodland regions of modern day Manitoba, Saskatchewan, and Alberta, and the Cree moved and expanded their territory alongside it. By the early 1700s, the Cree had shifted south, out of the northern woodlands, living part of the year hunting bison on the plains, while spending winters in the north trapping for animal pelts. Eventually, some bands of Cree separated and became culturally distinct from the Woodland Cree. The Plains Cree opted to live a permanent life on the plains, expanding south, displacing some existing peoples in the region while establishing trade and military alliances with others. | http://www.obsidianportal.com/campaign/house-of-stuart/wikis/the-cree |
4 | for National Geographic News
NASA scientists have programmed a model airplane to seek out rising columns of hot air called thermals and use them to soar like a bird.
The airplane could help monitor forest fires, guard borders, and collect weather data, according to the team.
In the future such planes could use similar updrafts to extend flight time on Mars, giving scientists a bird's-eye view of the planet.
"There have been large dust devils detected on Mars, which indicates a lot of convection [warm updrafts]," said Michael Allen, an aerospace engineer at NASA's Dryden Flight Research Center in Edwards, California.
Allen led the team that helped design the soaring airplane. The vehicle is nicknamed Cloud Swift after a bird known to eat insects found floating in thermals.
The airplane is an off-the-shelf model glider with a 14-foot (4-meter) wingspan. It's equipped with an autopilot device that's programmed to seek out and fly in thermals.
Allen was inspired to make the plane, known as an unmanned air vehicle (UAV), after watching birds and glider pilots seek out thermals to extend their time in the air.
Once inside the thermal, birds and gliders are able to stay aloft and gain altitude without using any energy.
Since the columns of rising hot air are invisible, glider pilots and birds look for places where they're likely to form, such as a plowed field baking under the sun, or signs that thermals are nearby, such as cumulus clouds.
"We programmed the UAV to fly in a search pattern, and when the aircraft starts to rise it would then decide whether the thermal was strong enough to stop and soar," Allen said.
| http://news.nationalgeographic.com/news/2006/04/0407_060407_airplane.html |
4.03125 | President Abraham Lincoln issued the preliminary order for the Emancipation Proclamation on September 22, 1862. Although by January 1st the document was signed, it was a few years before black freedom was recognized in the South.
One of the first tools for change was education. Now that former slaves could be taught to read and write, funding was needed for the schools. In New Orleans, abolitionists sold pictures that showed very light-skinned mixed-race slave children longing to read. To the naked eye, the children appeared to be Caucasian.
The 25-cent photos were taken and distributed in the mid to late 1860’s in order to draw more money and sympathy from rich whites in the North for the black slaves of New Orleans. The children were posed in ways that would be ‘appealing’ to sympathetic whites. The National Freedman’s Association, the American Missionary Association and officers from the Union Army fostered the propaganda.
Four mixed-race children were used in the pictures, like 11-year-old Rebecca Huger, who had worked in her father's home during slavery. She was carefully seated next to patriotic symbols of freedom while the caption read "Oh, how I loved the old flag." The other children were Charles Taylor, Rosina Downs and Augusta Broujey. In a few of the photos, the children were paired with darker-skinned slaves, or former slaves, then sent on publicity tours to raise money.
The signs even sometimes read "White and Black Slaves" to build a sense of urgency among whites. The photos sometimes went into detail about the slave's life and ownership. For instance, Wilson Chinn, an older dark-skinned slave, was described as 'about 60 years old' with the initials of his former 'owner' branded on his head with a hot iron. There were stories of cuts and lashes on the bodies of the slaves in the pictures to build sympathy. There were also stories of progression and education for some of the children, highlighting their ability to learn like that of white children.
The U.S. Library of Congress currently holds many of the photos. | http://blackamericaweb.com/44692/little-known-black-history-fact-white-slave-children/comment-page-1/ |
4 | Listening Skills (Grade 9-12) The purpose of this activity is to increase the students' ability to listen and to understand what is being read and/or told to them.
Pictures: Following Oral Directions
Make Me a
Copy Please (Grade 5-6) Oftentimes students are not able to communicate clearly what they would like to say. It is the purpose of this lesson to help
students understand the need to be articulate and precise when explaining steps to another student. In addition, the student listening
will learn to be a more effective listener.
Pictures (Following Oral Directions) Many children have difficulty accurately giving or following verbal instructions. To encourage students to focus
on the importance of clear, oral communication.
In this lesson, students will hone their group communication skills by role-playing the parts of tour guides at Yellowstone National Park.
Convince Me! In this lesson students go to the Internet to learn the art of persuasive speaking in order to present a speech in a convincing manner.
Grade: 1 - 3
Aboard Grade: 2
Autobiography Grade: 7 - 12
Egypt Grade: 6 - 8
Adjectives Grade: 4 - 8
for Your Thoughts Grade: 4 - 6
in Poetry: Teaching the Imagists Grade: 9 - 12
Poem Grade: 3 - 12
Grammar Review Using Jabberwocky Grade: 7 - 12
a Logophile Grade: 4 - 8
of Virtues Grade: 5
Letter Grade: 7 - 8
as a Bee Grade: 3 - 6
Using Antonyms To Write Short Stories Grade: 2 - 3
Directory Grade: 7 - 12
an Autobiography Using a Web Grade: 1 - 4
Meaning Through Drawing Pictures Grade: 5 - 7
a Newspaper Grade: 3 - 5
a Story Grade: 6 - 8
your own grammar exercise Grade: 10 - 12
Writing - Collaborative Stories Grade: 1 - 12
Writing - Rainbow Fish Grade: 1
Writing - What Would Happen If? Grade: 6 - 8
Writing Using Comics Grade: 4 - 8
Writing with Newspaper Photos Grade: 5 - 12
Grade: 8 - 12
Tale Journey Grade: 3 - 4
(happy, sad, silly, angry, scared) Grade: 1 - 2
Poetry Grade: 6 - 12
on Summarizing Information Grade: 2 - 8
Grade: 3 - 5
Outlining Grade: 10 - 12
Grade: 1 - 3
Review - My Favorite Author Grade: 12, Adult/Continuing
Kiss Discovery Grade: 3 - 8
Books Grade: 5
Homophones Grade: 6 - 8
to Write a Biopoem Grade: 3 - 4
Trees Could Speak, What Would They Say? Grade: 3 - 6
Postman--Improved by Your Students! Grade: 1 - 4
Sandwiches Grade: 2
Ourselves and Others through Poetry Grade: 6 - 12
Tennessee - Poem Model Grade: 6 - 12
Guide Lesson Plan Grade: 6
Synonyms and Antonyms in Pairs Grade: 6
Writing/ Introduction to Autobiography or Journal Writing
Grade: 8 - 12
the Back of My Hand Grade: 9 - 12
Letter/Syllables and Punctuation Grade: kindergarten - 2
Pyramid: Preparing for a Journey Grade: 4 - 8
Year With ______(Specific author's name is written in the blank)
Grade: 4 - 12
Activities Grade: kindergarten - 1
Being a Successful Learner: Setting Goals Grade: 9 - 12
Upon A Time . . . Grade: 4 - 6
Describes Both Grade: 2 - 6
Pals Grade: 5 - 8
Touch: A Lesson in Expository Writing Grade: 4 - 12
Essay Grade: 2 - 4
Lesson Grade: 6 - 12, Adult/Continuing education
Endings Grade: 1 - 3
a trip by creating an itinerary or brochure Grade:
kindergarten - 12
Gifts Grade: 9 - 12
Me for the Horror: The Feminist Way Grade: 12
Grade: kindergarten - 6
Scrapbook Grade: 12
Lesson Plan (The Sequencing Monster) Grade: 3 - 5
Creation Magic: Character, Setting, and Plot Grade:
kindergarten - 6
Pops Grade: kindergarten - 12
Biographers Grade: 5 - 8
Paragraphs Grade: 4 - 6
Tales: A Study of Perspective Grade: 7 - 12
Mouse Country Mouse: Recognizing Story Grammar Grade: 1 -
of Sentences Grade: 3 - 5
the Sea Grade: kindergarten - 1
Some Adjectives! Grade: 1
Star Trek to Enhance Critical Thinking Skills Grade: 10 -
Poetry Using Poems by Langston Hughes Grade: 9
W Poems Grade: 4 - 5
Your Spelling Words
Grade: 1 - 3
Learning spelling words does not have to be all drill. In this activity the students will be rhyming and playing with
their own names and then doing the same things for their spelling words.
- A Spelling Game
Grade: 1 - 5
Student will play a game that challenges their spelling skills.
Go Fish Grade:
This lesson allows students to learn their spelling words for the week and enjoy it. The students play the game for enjoyment and end up learning their spelling
words without even noticing.
As American as apple pie, the weekly spelling list is a "cornerstone" of education! Admit it or not, we all use lists
(by choice or by administrative mandate!), and drilling and practicing for Friday's test can be pretty boring! The following are
three ideas I use with my third graders, though I feel each could be used with any grade.
Grade: 3 - 6
Children are encouraged to see words/learning as something fun and challenging; the good spellers are an important
part of the team rather than being looked down on as "bookworms". Natural leaders surface helping the group form the words.
Group cooperation becomes important and a reachable, seeable, profitable entity rather than some teacher's unimportant
(Grade 4) There are no manipulatives to reinforce the language arts curriculum. Creative dramatics is a method of providing practice in the
for Your Thoughts (Middle grade) Students in writing classes are given apples and are asked to examine them closely for unique characteristics that will serve as
the basis for a descriptive paragraph.
Introduction to Similes (Grade 1) A direct lesson about similes and their use to facilitate comprehension of text that uses similes.
Adjectives (Grades 4-8) Have students redesign a restaurant menu. The students will use adjectives to make the menus more appetizing.
Poem (Grade 3-12) In this lesson, the writer analyzes self to provide an introduction to the rest of the class.
Grammar Review (Grade 7-12) The purpose of this activity, used at the beginning of the year is to help students identify where they are weak in their grammar
skills (in a fun fashion). From there, the teacher can choose to emphasize the various areas of grammar that need to be
Logophile (Grade 4-8) The purpose is to provide a variety of pre-writing activities which will encourage students to manipulate, explore, discover and
fall in love with words.
Busy as a
Bee (Grade 3-6) The purpose of this activity is to expose students to similes and how they can be used in writing. This activity will allow students
to "write" their own similes without the pressure that is often found when we ask students to write for us.
Using Antonyms to Write Short Stories (Grade 2-3) Children will write short stories about themselves using antonyms and comparisons of themselves to animals. By the end of the
lesson children will understand the meaning of antonym and will have enhanced their writing abilities.
Directory (Grade 7-12) A class directory is a booklet of stories written by the students in a given class about other students in the class. By doing this
project, students become better acquainted and bond as a class. When done at the beginning of the year it not only
"breaks the ice", it serves as a diagnostic tool for the teacher. I can quickly assess where each student is in social skills, language, reading,
writing, spelling, etc. Writing skills, such as asking for complete information, following up on questions, organizing information
on a variety of topics, and making generalizations based on specific bits of information, are also developed.
An Autobiography Using A Web (Grade 1-4) Composing an
autobiography for the first time can be difficult. Through using a web layout the students will be able to pick out
the interesting and important facts about themselves.
Meaning by Drawing Pictures (N/A) 1. The learner will sketch a picture to represent their understanding of the key concepts.
2. The learner will interact with peers to construct meaning.
Newspaper (Grade 3-5) In this lesson, children will create a newspaper on the web. They can choose their own links to news sources, comics, local
events, etc. They will be able to modify the paper whenever they like. The students may add their own links and can use their
paper as a personalized homepage.
Writing (Grade 1-12) This is a creative writing time that takes a minimum of 25 minutes. During this time students are beginning their
own story, reading another's beginning and creating the middle section, reading yet another story and finally developing a
conclusion for that story.
Writing - Rainbow Fish (Grade 1) A creative writing exercise on the story The Rainbow Fish. An activity to deal with feelings.
Writing Using Comics To use comics to foster creative writing and vocabulary skills
A Class Newspaper To develop students' writing skills through production of a class newspaper
Tale Journey (grade 3-4) In this creative writing process, the student will assume the role of the main character in the fairy tale. The student will use
fantasy to change the ending of a familiar fairy tale.
Books (Grade 5) To write, illustrate, and publish a book for a specific audience.
Homonyms (Grade 6-8) When writing, students at the junior high level often confuse and misuse words that sound alike but have different meanings.
Words pairs such as your-you're, whose- who's, there-their, and past-passed are examples of these "horrid
homonyms" where mistakes are not evident in speech but are only too evident in writing! This activity is designed to remind students of the specific
meanings and correct usage of some of these often confused words.
Sandwiches (Grade 5-8) This lesson is useful as a prewriting activity. Sandwiches have likely been a dietary mainstay of your students. Likewise, most of
them have some experience with eating in a restaurant. This lesson will ask students to design sandwiches for all meals, courses,
Ourselves and Others Through Poetry (Grade 6-12) Getting to know students and getting them to know themselves through writing.
Guide Lesson Plan (Grade 6) The purpose of this learning guide is to reinforce the writing process and to teach good proofreading skills. The writing process
is information that the students have seen before. The dreaded errors are ten words that many people misuse when they are
Synonyms and Antonyms in Pairs Grades 6 In this lesson students will work cooperatively to learn about synonyms and antonyms and how to use them. They will do this
by matching word cards that have the same meaning and word cards with different meanings and using the words in sentences.
Writing (Grade 8-12) This lesson plan serves as an introduction to a study of autobiography (such as Frederick Douglass') and/or journal
writing. In addition, students will learn to distinguish between "facts" they know, sensory detail, and their imagination, and
practice applying all three to their writing.
Back of my Hand (Grade 9-12) The purpose of this exercise is to introduce students to writing for fun.
Pyramid - Preparing for a Journey (Grade 6-9)
The students will write a three paragraph paper describing the treasures they would stock in their
pyramid and explaining why their Ka would want and/or need these items on its journey.
A Time . . . (Grade Intermediate) This writing activity uses the fairy tale structure to demonstrate all of the elements of a short story.
Personal Touch: A Lesson In Expository Writing (Grade 7-12) In addition to providing an opportunity to practice clarity and thoroughness in writing, students are made
aware of some of the subtle non-verbal messages in common social situations involving hand touching.
Essay (Grade 2-4) 1. Understand the purpose of a photo essay. 2. Sequence a series of events. 3. Understand the format in creating a photo essay, which includes a caption for each picture. 4. Complete a photo essay as a creative activity by using photos, magazine pictures or drawings to illustrate a story. 5. Read and enjoy a photo essay.
Endings (Grade 1-3) This lesson plan allows students to become familar with fairy tale genre through personal writing practices and
computer software programs.
a trip by creating an itinerary or brochure (Grade - any level) The students will be planning an itinerary or brochure for a trip to a place of their choice. Through previous lessons and
research, they will choose which area they would like to visit . In their itinerary/brochure, the students will
state where they are going (including a map of the location), sites they will see there and information that would be helpful for a
of View Point of View-writing from 5 different characters point of view. Using language to show emotion and description.
The importance of good hand-writing.
(Grade k-6) This project covers many language arts concepts and skills at each learner's level
of competency. It inspires joy in reading books to a captive audience and pride in work well done. Older students discover the need to write
purposefully, descriptively and
clearly for a younger audience.
Lesson Plan (Grade 3-5) This lesson provides a visual experience in which students develop a better understanding of sequencing, while further
developing their writing skills.
Biographers To Teach students how to be a biographer. This will include what types of questions to have the
biography answer, and the kinds of problems which biographers run into when on assignment.
Paragraphs (Grade 4-6) This activity guides students through the writing process for a successful five-sentence
paragraph with varied sentence beginnings. Repeating this process frequently with many, varied topics teaches students to use
variety to create interesting paragraphs.
Mouse Country Mouse: Recognizing Story Grammar (Grade: any primary)
1. Using a Venn diagram students will verbally compare and contrast the experiences of the country mice and the town mice.
2. Students will demonstrate their knowledge of the basic parts of a story by successfully completing a story map of Town
Mouse Country Mouse by Jan Brett.
for Audiences (Grade 7-12) To write 4 letters for 4 different audiences with the appropriate language and style for each. Using correct letter conventions.
Prompt for Audience, Persuasion, and Point of View (Grade 9-12) To develop an awareness of audience, methods of persuasion, and the proper tone or mood to
achieve writer's goal as well as point of view. To practice letter writing.
Poems (Grades 4-5) This lesson is designed to give students a new or different way to write a poem. It is more structured than just telling students to
write a poem, so some students may find they like this type of poem writing.
The Art of Reading Poetry
In this lesson students go on the Internet to collect poems written by other young
people, then practice expressing the feelings in the poems by reading them aloud. Students discover that the end of a line in poetry doesn't always call for a pause. The poet's thought is the important thing and punctuation is the clue.
Fun with The Alphabet
To review and reinforce the sounds and symbols of the alphabet
Walk in Their Shoes
In this lesson students have a chance to live out this fantasy. First they investigate the lives of some intriguing personalities and make notes about biographical information. Then students write first-person memoirs for
the personalities and read them aloud to the class.
Puppets 'n' Plays
In this lesson students reinforce communication skills, create the puppets, write or improvise dialog for them, and put on a play. This procedure allows a student to put words into the mouth of a character he or she created, which in turn makes the student feel even more secure about being in the puppet play!
The Way They Are
This lesson requires students to use critical thinking and problem-solving skills as they read about the
animals, hypothesize how they might have changed, isolate animal
characteristics, and write stories about new and unusual animals.
It's News to Me!
In this lesson, students learn what the standard sections of a newspaper are. Then students go to the Internet to learn how to create their own online newspaper in the same way more than 120,000 other people have already done! Finally, students prepare a mock-up of a class newspaper, complete with original art and important sections like "What's for Lunch?"
Language - ARTS Elementary (K-5)
Successful Paragraphs (4-6)
Creative writing; multi-author story writing (1-12)
'School News' using writing, speaking and/or questioning skills (3-12)
Whole language experience using "Casey at the Bat" (3-5)
Vocabulary & language concept development (PreK-2)
Descriptive/Persuasive writing 'My Pyramid - Preparing for a Journey"
Increasing vocabulary for primary students (1-3)
Sounding-out CVC words, 'The Blending Slide' (K-1)
Using popcorn to create a reading book (K-3)
Creative Writing; turn on inventiveness with 'Potato Possibilities' (4-6)
Color Code Writing; forming letters and numbers with colors (K-3)
Integrated vocabulary, listening and creative writing exercise (K-12)
Writing - a photo essay (2-4)
Whole language story for developmental activities (K)
Writing 'Auto-Bio' poems (4-12)
Spelling; three great techniques for weekly spelling lists (3-6)
Literature; activity to understand character's personality (K-12)
Learn a topic through research & drawing - Alphabet Book (K-12)
Vocabulary & language comprehension using "Land Before Time"
Reinforcing alphabet names/sounds (K-1)
Whole Language; Oklahoma Indian History: Spiro Mounds (3-6)
Working with syllables using music patterns (2-4)
Inferring Character Traits (all grades)
Reading, Whole Language; Story Pyramid
Adverbily, practicing the use of adverbs (4)
'Busy As A Bee', working with similes (3-6)
Creative thinking, writing, reading & character analysis using
"Frog and Toad are Friends" (2)
Bibliotherapy, studying the perils of prejudice (3-6)
Following oral directions using 'Mystery Pictures' (1-6)
'Read In'; Unique story writing involving two grade levels (K-6)
'Poetry Cubes', develop an appreciation for different styles of poetry
'An Irritating Creature' - poetry lesson (3-4)
Writing activity using fairy tale structure to identify elements of a
short story (4-6)
Enigmas - 'Mysteries in...'; activity to encourage research & creative
thinking skills (4-5)
'Zoo Animal Poetry',activity involving field trip and video (K-3)
'Let Me Tell You About My State', activity involving Amateur Radio
'American Experiences Abroad -- An Interview', activity involving Amateur
Radio services (4-6)
'Just Sandwiches', creative language arts activity (4-9)
Use of literature in SDMPS (3-5)
'Parts of Speech Review', hands-on activity (3-6)
Stories That Grow on Trees", creative writing activity (4-8)
Appropriate Use of Helping Verbs, (3-12)
'Invent A Holiday' (4-6)
'Apples Are A....Peeling', activity filled lesson involving all subject
The Middle Ages and Children's Literature", (Gifted, 2-5)
Language - ARTS Intermediate
Creative writing activity using shopping mall personalities (7-9)
Basic Grammar; review with fun using "Jabberwocky" (7-12)
Writing poems with photographs (6-12)
Vocabulary - unfolding meaning (6-7)
Creative Writing; 'Becoming a Logophile' (4-8)
Activities for descriptive character analysis (K-12)
Reading; learning propaganda techniques through advertisements (5-12)
Activity to stimulate thought and verbal participation of students (4-12)
Learning nursery rhymes through many activities (4-7)
Learning vocabulary words with core curriculum (5-7)
Writing, Poetry: Knowing Ourselves and Others Through Poetry (6-12)
Vocabulary, The Dictionary Game, "Balderdash" (4-12)
Expository Writing, "The Personal Touch" (6-12)
Story Starters, introduction to story telling (all grades)
'What? You want me to read AND enjoy it?' activity to encourage reading
'Horrid Homonyms' - confusing word pairs/homonyms (6-8)
Mass Media - Magazine ads and You, the Teenager (6-12)
'What You See Isn't Always What You Get!', reading comprehension activity
'Cooperation Blocks', practice in effective communications and cooperation
'Review Basketball', learning to use reading material to find information
Password" vocabulary review activity (4-12)
'Make A Statement!', using environmental bumper stickers (6-8)
"The 'Real' Fairy Tales", fun creative writing activity (5-8)
'Paragraph Unity', writing activity (7-9)
'Book Review', pre-writing activity (8-9)
'Reading..Try It, You Might Like It!', activity to enjoy reading (6-7)
'Decimal Search', working with the Dewey Decimal System (4-8)
'Novel Partners', independent reading activity toward a structured whole
class reading activity (5-8)
'Vocabulary Stumpers', activity to increase vocabulary (6-12)
'Adjective? What's An Adjective?' (5-8)
Indexing and writing skills activity (5-12)
Create "Who Did It" mysteries with the computer (5-12)
Reference Book of the Year", fun research/library skills activity,
Language - ARTS High School
Creative writing - writing for fun (9-12)
Increase listening skill activity (9-12)
Literature Review; using knowledge, interpretation & judgement
Writing, Creating a 'Class Directory' (9-12)
'MacBeth' made easy (6-12)
'Junk Mail Explosion' - activity to increase student awareness of
persuasion tactics (7-10)
'Symbols of Language', understanding written communication (6-11)
Introduction to American Literature, creative freewriting activity (11)
Using prominent personalities with identifiable social causes to stimulate
'Map of Ship Trap Island', reading for detail (9)
'Inventions', understanding the relationships between things and
'Write? No Way!', "re-newed" writing activity (7-12)
'Olympic Shadow Boxes', learning to use reference materials (9-12)
Timelines", using research activites to discover history of city,
state and self, (9-12)
Life After the Fact", creative culmination activity after reading
"Lord of the Flies", (9-12)
Family Feud" fun format to review "Romeo and Juliet",
Teaching Shakespeare", a different approach, (9-12)
Spotting Details", creative writing activity, (9-12) | http://www.theteachersguide.com/langarts.html |
4.09375 | Seneferu was succeeded by his son Khufu, known to the Greeks as Cheops (pronounced Kee-ops), and he built the biggest pyramid of them all. It is 751 feet (229 m) at the base and originally stood 479 feet (146 m) high. Stone robbers have taken stones from the top, leaving it only 446 feet (136 m) high today. So many tourists fell to their deaths or were badly injured attempting to climb the pyramid that today climbing is forbidden.
The work of building this pyramid must have started by leveling the stone base from corner to corner. It appears that there was a natural rise or hump in the middle which was not removed and leveled. Perhaps this was left so that there would be fewer blocks to fit into place. Each of the lower blocks is roughly a cube measuring 3.28 feet (1 m) on a side and weighs approximately 2.5 tons. Had there not been the hump, the lowest layer would have required over 50,000 heavy squared blocks which came from the limestone quarry less than 0.6 miles (1 km) to the south.
How the building of the pyramid was accomplished and how many workmen were involved is still a matter of conjecture and admiration. Herodotus stated, “The work went on in three-monthly shifts, a hundred thousand men in a shift. It took ten years of this oppressive slave labour to build the track along which the blocks were hauled—a work in my opinion of hardly less magnitude than the pyramid itself, for it is five furlongs in length. . . . To build the pyramid itself took 20 years.”1
Herodotus cannot be regarded as an authority on the matter. He arrived at the scene many centuries after it was all over and was dependent on what the local priests told him, and there is no guarantee that they had it right.
More recent evidence comes from Mark Lehner's discovery of a bakery south of the pyramid, which he estimated would have been capable of producing enough bread to feed 20,000 men each day. Even that is a lot of people. The problem would have been not only finding such a large work force, but organizing it so that the workers were not all walking on each other's toes.
As far as we know, the wheel was not used in Egypt at that time, so Herodotus would have been correct in saying that the blocks were hauled from the quarry to the site. A large number of masons could have worked in the quarry chopping out the stones and roughly squaring them. Examination of the stones visible in the pyramid today reveals that the stones in each layer were carefully trimmed to the same height, but the length and breadth of each stone were rather irregular. It was up to the on-site foremen to fit them into matching places. Lime plaster was poured between many of the blocks to steady them, which also undermines the claim that the blocks themselves were cast in place from a liquid limestone mixture.
Fitting the lower courses into position would have been relatively simple and fast. They could have been dragged into position from all four sides, but once the edifice rose to a higher level the problems began. Herodotus wrote, “The method employed was to build it in steps, or as some call them, tiers or terraces. When the base was complete, the blocks for the first tier above it were lifted from ground level by contrivances made of short timbers. On this first tier there was another which raised the blocks a stage higher, then yet another which raised them higher still. Each tier or storey had its set of levers.”2
All very well, but we do not know what sort of levers could raise the larger 15-ton blocks into place. A few years ago, some Japanese engineers claimed that they had made some successful levers that could raise blocks of stone weighing two tons, but that did not solve the problem of the 15-ton blocks.
The popular theory is that a ramp was built, up which the stones were dragged. Some suggest that a ramp could have wound in an ascending spiral around the pyramid. At the Temple of Karnak there is a pylon or gateway which has some huge blocks of stone. It is apparent that these were dragged up a ramp made of sun-dried mud bricks because not all the bricks were removed after the job was completed. They are still there to verify the method used, but the length of a ramp to reach the height of the great pyramid of Khufu has been calculated to be in the order of 0.6 miles (1 km) or more. The amount of material needed for such a ramp is staggering, and the question of where all this material went is hard to answer.
The construction of the pyramid was extraordinarily precise. The base is level, and it is square to within 8 inches (20 cm) of difference in length between the sides. The sides are aligned to true north, south, east, and west, indicating an advanced knowledge of astronomy and surveying.
The dimensions and geometry of the pyramid are such that if a vertical circle is imagined whose center is the top of the pyramid and radius is the height of the pyramid, the circumference of that circle is exactly the circumference of the base of the pyramid; that is, the sum of the length of the four sides at the base. This feature suggests knowledge of the value of pi, centuries ahead of the Greeks.
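As a quick sanity check, this claim can be tested with the figures quoted earlier in the article (a 751-foot base side and an original height of 479 feet). The short Python sketch below is illustrative only; because the article's figures are rounded, the match is close rather than exact.

```python
import math

# Consistency check using the figures quoted earlier in this article
# (751 ft base side, 479 ft original height). These are rounded values,
# so expect a close rather than exact match.
base_side_ft = 751.0
height_ft = 479.0

perimeter = 4 * base_side_ft                 # sum of the four base sides
circumference = 2 * math.pi * height_ft      # circle whose radius is the height

print(f"base perimeter : {perimeter:.0f} ft")
print(f"2 * pi * height: {circumference:.0f} ft")
print(f"implied ratio perimeter / (2 * height) = {perimeter / (2 * height_ft):.4f}")
```

With these figures the base perimeter and the circle's circumference differ by well under one percent, and the implied ratio comes out near 3.14.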
The pyramid contains an estimated 2.3 million blocks of stone averaging 2.5 tons in weight each, with the biggest stone weighing a massive 15 tons. We do not know for sure how long it took to build the pyramids. If we accept Herodotus’ report that Cheops’ pyramid took 20 years to build, we can calculate the rate at which the construction stones were put in place. If we assume that the Egyptian builders worked 12 hours per day continuously for 20 years, the 2.3 million blocks would require 26.3 stones to be put in place each hour, or just over 2 minutes to place each block, averaging 2.5 tons accurately in place, many feet above the ground. This feat is truly amazing even by today’s construction standards and suggests a very highly developed knowledge of engineering. If we accept a shorter time period of just two years, in line with the dates given in the Bent Pyramid, we require that one of these huge stones was precisely placed every 13.5 seconds.
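The placement-rate arithmetic in the previous paragraph is easy to reproduce. The sketch below assumes, as the text does, 2.3 million blocks, 12-hour working days, and work on every day of the year; the function name and structure are only for illustration.

```python
# Reproducing the placement-rate arithmetic above. Assumptions, as in the text:
# 2.3 million blocks, 12-hour working days, work every day of the year.
BLOCKS = 2_300_000

def seconds_per_block(years, hours_per_day=12):
    working_seconds = years * 365 * hours_per_day * 3600
    return working_seconds / BLOCKS

for years in (20, 2):
    s = seconds_per_block(years)
    print(f"{years:>2}-year build: one block every {s:.0f} s ({s / 60:.1f} min)")
```

This gives roughly 2.3 minutes per block for a 20-year build and close to the 13.5 seconds per block quoted above for a 2-year build.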
All this has led to wild speculation about how the pyramids were built, such as the involvement of UFOs etc., but there is no inscriptional or archaeological evidence to support these speculations, which leaves us with the conclusion that we do not know for sure just how this gigantic feat was accomplished. With all our modern inventions and machines, it would still be a challenge to any civil engineer to build such a pyramid today. Instead, we are left to marvel at the ingenuity, craftsmanship, and organizing skill of this wonderful people who lived so long ago. They were certainly not primitive cave men, but rather were highly intelligent and cultured people.
The man who supervised this giant project was Khufu’s nephew, Hemiunu. His statue was found in a chamber of his tomb. It is a magnificent life-sized statue, and depicts him as a solidly built fellow with a copious bosom befitting his rank. Tomb robbers had broken into the tomb at an early date and severed the head and smashed it to retrieve the inlaid eyes. However, archaeologists carefully gathered the pieces, enabling the statue to be restored.
The entrance to this pyramid is on the north side above ground level and it is 26 feet (8 m) off center. This was obviously not due to a miscalculation by the builders. Rather, it was undoubtedly a subtle attempt to thwart the inevitable tomb robbers. They would naturally start their illicit digging from the center, and that is what they did.
The entrance used by tourists today is a devious tunnel which was cut through the stones and finally connected with the ascending passage. The man responsible for this entrance, which was constructed about 1,100 years ago, was a Turkish governor called Mamun, who was apparently hoping to find treasures in the tomb chamber. However, we do not know if he was successful or not.
As the original pyramid builders anticipated, Mamun’s men started digging through the center of the pyramid and might have gone clean through it and out the other side without finding anything, except for a piece of luck. It appears that as the workmen hammered away with their picks they dislodged the stone which sealed the entrance to the ascending passage. Its crash to the floor of the access tunnel alerted them to the presence of this passage, and they changed direction to link up with this ascending passage and thence into the body of the pyramid.
The entire structure of the pyramid was finally clad with huge blocks of shining white Tura limestone brought from the Maqqatam Quarry, 7.5 miles (12 km) across the other side of the Nile. These blocks had to be dragged to the river, floated across, and hauled to the building site. Most of these stones have been stripped off by local builders in the not-too-distant past, leaving the inner stones exposed.
From the true entrance, a passage descends into bedrock to a tomb chamber which was never completed. It was unlikely to have been intended as the final resting place of the king, because it was not even within the pyramid they took so much trouble to build. It was more likely a blind to fool tomb robbers into thinking that there was nothing of value to be stolen.
Deviating from the roof of this descending passage was an ascending passage. It was plugged with huge blocks of stone which had been slid down from above to prevent anyone entering. At the same time, it would not have been easily visible to anyone going down the descending passage. Halfway to the tomb chamber, this passage opens out into an ascending gallery which has corbeled walls. Each layer of stone was placed a little farther inward to reduce the span of stones on the ceiling of this gallery, an ingenious device.
Where the ascending passage meets the gallery, a horizontal passage branches off to the center of the pyramid to what has become known as the “Queen’s Tomb Chamber.” There is no evidence to support the idea that the queen was to have been buried here. It was more likely to have been for the installation of a statue of a god, or of the king himself. This tomb chamber also was left unfinished.
From the side walls of this chamber, two small passages penetrate the pyramid but do not reach the outside of the pyramid, and their purpose is not known. In 1993, Dr. Rudolph Gantenbrink, an expert in robots, was given permission to send a small robot up the 7.8-inch (200 mm) square left-hand passage to investigate it. The robot was fitted with a miniature camera which transmitted pictures back to the scientists. Gantenbrink claimed that this camera revealed that there was a portcullis stone door (one that slides up and down rather than swinging open) at the top of the passage, and in this door were two copper handles. In 2002, pyramid researchers were given permission to drill through this door and insert a miniature camera only to find another stone door or plug a few hundred millimeters behind it. At the time of writing, these tunnels still have not been explored or their purpose in the structure of the pyramid understood.
Also at the junction of the ascending passage and the ascending gallery there is a rough shaft that goes down to join the top of the descending gallery. Apparently, after the king had been buried in his tomb chamber, workmen slid some huge blocks of stone down the ascending passage to block any future entrance from the descending passage, but that would have left them entombed in the pyramid. This rough passage would have enabled them to make their escape.
At the top of the ascending gallery, a low passage enters the king’s tomb chamber. The huge granite blocks lining this chamber weigh up to 30 tons each and are so perfectly squared and fitted together that it has been estimated that there is only an average gap of half a millimeter between them. We can only marvel at the skill of the masons who achieved this perfection with the copper and stone tools available to them.
Above this chamber are five ceilings of granite blocks, one above the other, with cavities in between. A workman had scribbled Khufu’s name in one of these cavities. The top one has a gable roof to divert the enormous weight of the stones above it. All of the slabs of granite forming the immediate ceiling of the tomb chamber are cracked, but there seems to be no danger of collapse.
At the end of this tomb chamber is a sarcophagus which is empty. It has been broken on one corner, possibly when thieves prized off the lid, which is missing. This sarcophagus must have been installed there as the pyramid was being built because it is slightly higher than the opening from the ascending gallery into the tomb chamber.
Two small passages were also made in the sides of this tomb chamber, and they go right to the outside of the pyramid. They are too small for anyone to climb through and too insignificant to allow fresh air to enter the chamber. They most likely had ritualistic significance for allowing the king’s ba to leave the tomb chamber each morning and return at sunset.
Whatever the original idea, one of these so-called vents now serves a very useful purpose. The thousands of tourists milling through the pyramid each day used to make the air insufferable. However, now an electric exhaust fan has been installed in the south vent, pumping out the bad air and sucking fresh air into the passages and tomb chamber.
Besides these three tomb chambers already described, there seem to be other cavities. In 1986, French scientists used stone scanning equipment on the pyramid and discovered three gaps beyond the west wall of the passage leading to the “Queen’s Tomb Chamber.” They drilled three holes through the wall of the passage and broke into a cavity filled with sand. Beyond that was more stone and then the cavity their scanning equipment had found. It was about 10 feet (3 m) long, 6.5 feet (2 m) wide, and 6.5 feet (2 m) high. A TV lens was inserted and the breathless scientists waited for an image to show up. Who knew what fabulous treasure might be hidden within. However, the monitor picture finally showed that the cavity was completely empty. The mystery of the empty chambers is still puzzling scientists.
The solution may lie in the construction method for the pyramid. The builders may have saved themselves some stone by leaving gaps bridged by larger stones, or cavities filled with sand, which would be simpler to provide than stone. Who knows how many other such laborsaving devices may be scattered through this huge monument.
On the east side of Khufu’s pyramid was a mortuary temple with a causeway down to the valley. The causeway has now gone and so has most of the temple. Only the black basalt floor remains.
The only statue of Khufu that has ever been found was a small ivory statue that came to light at Abydos. Sir Flinders Petrie was excavating there when his men found the body of this statue. Never one to give up easily, Petrie set his men to work sieving for the small head he felt sure must be there somewhere. It took three weeks of arduous work until the coveted head was found. The reassembled statue is now in the Cairo Museum.
On the east side of the great pyramid are three smaller pyramids. There are no inscriptions in them to identify their owners, but it is usually assumed that the two southern ones belong to Khufu’s queens, Meritites and Henutsen. Some scholars feel that the third one may have been for his mother Hetepheres because her burial shaft is just to the north of this pyramid, but it would be rather strange for her to have a pyramid and a burial shaft at a distance from the pyramid.
There is more than one mystery connected with the burial of Hetepheres. It would be reasonable to suppose that she would have been buried with her husband, Seneferu, at Dahshur, but in 1925 George Reisner’s photographer was setting up his camera on the east side of the pyramid when he uncovered a patch of plaster under the sand. When the plaster was removed, they found steps leading down into a burial shaft. The shaft was filled with blocks of stone set in plaster, indicating that the tomb beneath must have been undisturbed. Eighty-two feet (25m) down they found stone blocks plastered together.
Under this course of masonry they found a tomb chamber filled with fabulous treasures, one of which bore the name of Hetepheres. It took many months to remove, preserve, and catalogue all these valuables, but at last, on March 3, 1927, the dramatic moment came when they opened the sarcophagus. As the lid rose, those present eagerly leaned forward for their first glimpse of the golden coffin they expected to find beneath. There was a gasp of surprise when they realized that the sarcophagus was empty.
Why had all these funeral treasures been carefully buried when there was no body? That question has never been satisfactorily answered. Reisner speculated that Hetepheres had originally been buried at Dahshur, but when grave robbers started their depredations, Khufu had given orders for his mother to be reburied near his great pyramid. Perhaps the body had already been stolen, and the officials, fearing to inform the king of the tragedy, had gone ahead with the burial anyway. Rather unlikely, but what is the alternative explanation? Mark Lehner suggested that it had been reburied in the nearby pyramid when it was built, but perhaps we will never know the answer for sure.
The Egyptian belief in the afterlife required a funeral boat to be buried with the deceased. It is not certain what function this boat was supposed to perform. Perhaps it was a solar boat to take the ba to the heavenly abode. Perhaps it was to ferry the ba in joy rides up the Nile, or perhaps to take it to the sacred city of Abydos. Most Pharaohs were content to have miniature boats, but Khufu, who always did things on a grand scale, had six huge boats associated with his pyramid.
There is a boat pit about 144 feet (44 m) in length on the southeast side of his pyramid. It is in the shape of a boat and undoubtedly there was an assembled boat buried there. It has long since disappeared, probably taken for firewood by local peasants thousands of years ago.
There are two smaller boat pits of similar shape next to the so-called queens’ pyramids. These pits also are empty, their funeral boats having suffered the same fate as Khufu’s large boat.
In 1954, a spectacular discovery was made. South of the pyramid were huge heaps of rubble 65 feet (20 m) high that had been left there by archaeologists who had been excavating the surrounding area. They thought that the flat area beside the pyramid would be a suitable place to dump the rubble. It was decided to clear the area, and so the work was begun under Kamal el-Malakh. When the workmen got down to the level of the pavement made of stone blocks 1.5 feet (0.5 m) thick, they uncovered the foundations of a wall which had originally been 6.5 feet (2m) high encircling the pyramid. But Malakh noticed that the wall on this side of the pyramid was closer to the pyramid than it was on the other three sides, and he suspected that it may have been deliberately placed there to hide something.
With a sharp stick he started probing the pavement. Sure enough, he exposed some pink lime mortar that seemed to outline the shape of a pit, and he ordered the paving blocks to be removed. This was no easy task. The blocks were securely fixed in place with mortar, and had to be chiseled apart. Knowing there might be some priceless treasure beneath, great care had to be exercised lest a heavy block collapse into the pit, destroying the contents.
On May 26, the work was begun, and when it became possible to peer into the pit, Kamal was excited to find that it contained the components of a complete funeral boat. Even the wood and the ropes were in remarkably good condition after being buried for thousands of years.
Then followed the even more exacting task of removing the ancient items. There were 651 separate pieces, and the amazing thing was that, although there were no missing members, the boat was not assembled, but stacked and tied in neat bundles. The beams of the ship were of cedars of Lebanon and were up to 75 feet (23 m) in length, and the ship when reassembled would be 148 feet (45 m) long. It was the oldest, largest, and best-preserved ancient boat ever discovered. The last item was removed from the pit in late June 1957.
The task of reassembling such an ancient ship of unknown shape and design was obviously not going to be easy. The job was assigned to Ahmed Moustafa, the Cairo Museum’s official restorer.
Moustafa took pride in his work, and was meticulous in his approach. He first studied all the known tomb paintings and reliefs for clues as to the nature of early boats, and then made scale models 1:10 of every item taken out of the pit. He then experimented with assembling the model ship until he was satisfied that he was following the original plan. Only then did he try assembling the actual boat. At last, in 1974, the boat stood proudly in its original glory.
It was a remarkable piece of workmanship by any standard. Apart from a few copper staples, the whole craft consisted of wood lashed together by rope, but so expertly that when immersed in water, the beams would swell to make the craft watertight. There were five pairs of oars up to 26 feet (8m) long, and when it is considered that all this work was done before the invention of pulleys, block and tackle, or even wheels, we are obliged to acknowledge the skill and intelligence of these ancient artisans. Actually, Herodotus had described in great detail how the Egyptians had made their boats. It was found that his account, written 2,500 years ago, corresponded very exactly with what was found in the pit.
The intriguing question that has engaged archaeologists is the original purpose of this craft. It is speculated that the Egyptians had a concept of the ba of the king being ferried across the water to the future life, or up and down the Nile; of a ship required by the sun god to traverse sky and land, but all these theories seem inadequate to explain why the ship was not assembled. Even if it only had ceremonial significance, one would think that an assembled ship would be needed to fulfill even a ceremonial concept.
Perhaps the answer is to be found in the observation by Moustafa that some of the beams display marks of ropes, suggesting that the ship had been assembled, and perhaps used just once, and then dismantled and buried. Possibly this was the craft used to ferry the king’s mummy from the palace at Memphis, 19 miles (30 km) to the south, to the site of the burial, and then the ship was buried in the area in much the same way as we may place flowers on a grave. It was known at the time of the original discovery that there was another pit next to the first pit. This other pit was opened in October 1990 and is in the process of being exhumed, but why two boats side by side?
Five boats had been accounted for, but in 1984 another came to light. Authorities were concerned at the erosion of the monuments in Egypt, and atmospheric pollution was a likely cause, so it was decided to reduce traffic near the pyramid by demolishing the road that ran between Khufu’s pyramid and the queens’ pyramids. When that was done, another large boat pit was exposed, making six altogether.
To the southeast of the big pyramid is a massive stone wall. The gateway through this wall has some huge stone slabs, 26 feet (8 m) in length, spanning overhead. Passing through this gateway is a path that leads to some recently discovered tombs. They turned out to be the graves of some of the officers who supervised the building of the pyramids at Giza. The Egyptian Archaeological Mission found some 20 tombs belonging to the men who worked on building the great pyramids of Giza. The tombs were made of sun-dried mud bricks. Inside the tombs they found a number of pottery objects and six skeletons dating back to the 4th Dynasty, in which the great pyramids were built.
Dr. Zahi Hawass, director of the Giza Antiquities, said that the tombs were of a special architectural style. The skeletons had been analyzed, and some of them had been surgically operated on. Apparently, the operations on the feet had been successful, as the bones had healed. One tomb was surmounted by a miniature pyramid. This was significant, as it was previously thought that pyramids were the sole prerogative of the pharaohs. However, this pyramid seems to have been sanctioned by the king.
There are differences of opinion about how long Khufu reigned. Some say 21 years, others 41 years. According to Herodotus, “Cheops (to continue the account which the priests gave me) brought the country into all sorts of misery. He closed the temples, then, not content with excluding his subjects from the practice of their religion, compelled them without exception to labour as slaves for his own advantage.”3
This report need not be taken too seriously. It was only what the priests told him centuries after Khufu lived, and who can say whether they were telling the truth as they believed it or whether they were deliberately trying to mislead this intruder into their country? All we can say is that Herodotus was a good journalist. He simply reported what was told to him. Whether he believed it or not is not the point. We can certainly doubt the veracity of his next statement.
He continues, “No crime was too great for Cheops. When he was short of money, he sent his daughter to a bawdyhouse with instructions to charge a certain sum—they did not tell me how much. This she actually did, adding to it a further transaction of her own; for with the intention of leaving something to be remembered by after her death, she asked each of her customers to give her a block of stone, and of these stones (the story goes) was built the middle pyramid of the three which stand in front of the great pyramid.”4
Nobody in their right mind could conceive of a king of Egypt selling off his daughter like that, no matter how unscrupulous he was. So how accurate is this story that the priests told Herodotus?
These stories surrounding the Great Pyramid of Khufu epitomize the mysteries and difficulties facing archaeologists and historians who try to piece together the history of the pyramids and their ancient builders.
| http://www.answersingenesis.org/articles/utp/khufu-built-the-big-one |
4.53125 | The remote chain of islands in the Pacific Ocean called Hawaii has a unique and fragile ecology. Formed by volcanic activity thousands of miles from other land masses, Hawaii is home to wildlife that evolved with few external influences before the arrival of humans and so has a high degree of endemism. The volcanic activity that formed the islands is the result of a different kind of "hot spot." Scientists believe that the Pacific tectonic plate is moving across a geological spot in the earth's mantle, one that is literally hot and that has created islands through the millennia as volcanic eruptions have slowly built up from the ocean floor. After moving off the hot spot, islands cool, erode, and eventually sink beneath the ocean's surface.
It is believed that Polynesians reached the islands before 1000 AD. The first confirmed western visitors to the islands came in 1775, led by Captain James Cook who named them the Sandwich Islands. There is evidence that Spanish ships had arrived earlier but did not stay.
The ecosystems of Hawaii cover a range of different terrains on each of its eight main islands, including tropical coastal vegetation, lowland wet forests, montane wet forests and bogs, montane dry forests, alpine vegetation, lowland grasslands and shrublands, and montane grasslands and shrublands. Coral reefs and other complex marine ecosystems surround the islands. Unique areas of plant and animal life developed around lava fields as species gained a foothold in this hostile environment. The Haleakala Silversword, for example, is found only in the crater and on the slopes of the Haleakala Volcano. Another terrain extreme exists on top of the volcanic mountains, some of which extend over 13,000 feet above sea level and are snow-capped.
Because the islands are remote and were formed by volcanic activity from the ocean depths, all the species found there either flew or were carried by winds and ocean currents. That is why there are no large mammals native to the islands. As species colonized the islands, they adapted to the remarkable diversity of habitats there in nearly complete isolation, giving rise to a high level of endemism. When the Polynesians arrived they brought pigs, goats, and chickens. Sometime in the 1600s rats reached the islands by hitchhiking on European ships; mongooses were brought in by Europeans in an unsuccessful attempt to control the rat population. All of these introduced species have had a detrimental effect on the plants and birds of Hawaii. One of the most serious problems faced is the damage caused by the large population of feral pigs, which crush and eat plants, and gnaw on the bark of trees, killing them. An article in Scientific American, Costly Interlopers, reports that pigs have destroyed 80 percent of the plant cover in areas where they are found. Mongooses and rats have damaged native bird populations by preying on eggs and young birds in nests. As Hawaii's human population travels and trades more, even more non-native plants and animals are introduced and some of them in turn cause significant damage. For example, introduction of Asian songbirds, which are host to avian pox and avian malaria, has resulted in the almost total elimination of native Hawaiian birds from lowland areas.
According to the United States Fish and Wildlife Service Endangered Species Program, more than 300 species in Hawaii are currently listed as endangered or threatened, more than in any other state in the U.S. Over half of the endangered animals in Hawaii are birds. There are four times as many threatened plant species as threatened animals on the islands. Animals on the endangered list include the sea turtle and the humpback whale.
The Hawaii Visitors and Convention Bureau provides a brief overview of Hawaiian culture, history and facts about the state.
The long trail of the Hawaiian hotspot
This U.S. Geological Survey map shows the trail of undersea volcanic mountains collectively known as the Hawaiian Ridge-Emperor Seamounts chain.
FOR THE CLASSROOM
How Islands Form
An Earth science lesson plan from DiscoverySchool.com with teaching tools and links to online maps and other resources. [Grades 6-8] | http://enviroliteracy.org/article.php/492.php |
4.1875 | A spacecraft called Near Earth Asteroid Rendezvous (NEAR) Shoemaker was designed and built at the Johns Hopkins University Applied Physics Laboratory (APL), Laurel, Md. The spacecraft was sent into orbit around an asteroid called 433 Eros.
The spacecraft was launched Feb. 17, 1996, from Cape Canaveral, Fla. It went into orbit around Eros on Feb. 14, 2000. At the end of the mission, it landed on Eros on Feb. 12, 2001.
The mission was to study what asteroid Eros is made of and to learn more about the many asteroids, comets and meteors that come close to Earth. Scientists also hope to learn more about how the planets were formed.
NEAR Shoemaker is the first spacecraft ever to orbit an asteroid and the first to land on one. NEAR was the first mission in NASA's Discovery Program to study the planets and other objects in the solar system.
Asteroids are small bodies without atmospheres that orbit the sun but are too small to be called planets.
Asteroid 433 Eros is the shape of a potato and measures 8 by 8 by 21 miles. Its gravity is so weak that a 100-pound person would weigh only 1 ounce. If you threw a baseball faster than 22 miles per hour from its surface, the ball would escape into space and never come back.
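The two numbers in that paragraph, the 1-ounce weight and the 22-mile-per-hour throw, are roughly consistent with each other. The sketch below checks this by treating the potato-shaped asteroid as a sphere; the spherical approximation, the geometric-mean radius, and the constants used are assumptions made here for illustration, not figures from the article.

```python
import math

# Back-of-envelope consistency check of the two claims above, treating the
# potato-shaped asteroid as a sphere. The spherical approximation, the
# geometric-mean radius, and the constants are assumptions for illustration.
g_earth = 9.81                           # m/s^2
g_eros = g_earth * (1 / 1600)            # 1 ounce felt per 100 lb (1600 oz) of Earth weight

mile = 1609.34                           # metres
half_axes_product = (8 / 2) * (8 / 2) * (21 / 2)        # half-axes in miles
mean_radius = mile * half_axes_product ** (1 / 3)        # geometric mean, in metres

v_escape = math.sqrt(2 * g_eros * mean_radius)           # escape speed for a sphere, m/s
print(f"implied surface gravity: {g_eros:.4f} m/s^2")
print(f"escape speed: {v_escape:.1f} m/s (~{v_escape * 2.237:.0f} mph; the article says about 22 mph)")
```

The implied escape speed comes out near 10 m/s, a little over 20 mph, so the baseball claim is in the right range.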
During its 5-year mission, the NEAR Shoemaker spacecraft traveled 2 billion miles and took 160,000 pictures of Eros.
NEAR Shoemaker spacecraft orbits asteroid 433 Eros. | http://www.jhuapl.edu/education/elementary/newspapercourse/storyscenarios/mission.htm |
4.28125 | Have you ever noticed that often, when someone is being interviewed, they say, "That's a good question"?
It’s usually when it’s a question they can’t answer quickly and easily.
Indeed, “good” questions are ones that generally need thinking about.
Inspectors must consider whether:
Notice that in this instance it does not say "ASSESS" learning, although that is undeniably a major purpose of questioning. Hence, this post focuses on using questions to promote learning and stimulate thinking.
Questions that are easy to answer don’t move learning on; they might indicate that learning has happened, or that at least something has been noticed, thought about or memorised, but they don’t promote learning.
How do questions promote learning?
- Good questions stimulate thinking, and often generate more questions to clarify understanding.
- Good questions generate informative responses often revealing not only misconceptions and misunderstanding, but understanding and experience beyond that expected.
- Good questions encourage learners to make links.
- Good questions push learners to the limit of their understanding.
- Good questions from pupils push teachers to the limits of their understanding too, and challenge them to find better ways of explaining.
- Good questions offer opportunities for learners to hear others' answers to questions, which helps them to reflect on their own understanding.
Questioning can fail because:
- questioning techniques are inappropriate for the material.
- there may be an unconscious gender bias.
- there may be an unconscious bias towards most able or more demanding students.
- levels of questions might be targeted to different abilities inappropriately.
- students don’t have enough thinking time.
- learners don’t have any idea as to whether they are the only ones to get it wrong/right.
- learners fear being seen by their peers to be wrong.
- questions are too difficult.
- questions are too easy.
Questioning succeeds when:
- all learners get a chance to answer.
- learners can see how others are thinking.
- teachers gain information about thinking and learning.
- learners have time to consider their answers.
- learners have time to discuss and follow up on their answers.
- the answers are not always clear-cut.
- learners feel safe to answer.
- questions stimulate more questions.
- questions stimulate thinking.
What kinds of questions do you routinely ask, and how do you ask them?
A great deal is talked about open and closed questions, and I’d be surprised to find any teacher who isn’t aware of the difference, but good questioning to promote learning has much more to it than that, and is a vital skill to keep on developing.
There are many questioning and response techniques that are employed throughout schools, many of them very effective:
- “No hands up”;
- Mini whiteboards;
- Vote or student response systems;
- Online discussions and forums, etc.
...but more important than the technique is the quality of the questions asked. (Assessment for Learning – Don't let the tools become the focus!)
In the short video below, Professor Dylan Wiliam talks about the need to get away from the IRE system (Initiation, Response, Evaluation), and to think more carefully about the way in which we ask questions and respond to pupil’s answers.
Teacher: How many sides does a hexagon have? (Initiate)
Pupil: 6? (Response)
Teacher: Well done. (Evaluate)
(Yes…I accept that this is an oversimplified example, but I’m sure you can think of others you’ve seen/used)
He gives an example of what one teacher calls “Pose, Pause, Pounce, Bounce”:
The teacher poses a question, pauses to allow pupils time to think, pounces on any pupil (keeps them on their toes) and then bounces the pupil’s response onto another pupil.
T: How might you describe a hexagon?
P: It’s a shape with 6 sides
T: (to second pupil) How far do you agree with that answer?
Depending on the answer of the second pupil – the line of questioning could continue –
Is the first answer completely right?
How could we improve the question?
How could we make the answer accurate?
In this PowerPoint presentation, Wiliam also puts forward the idea of ‘Hinge” questions.
- A hinge question is based on the important concept in a lesson that is critical for students to understand before you move on in the lesson.
- The question should fall about midway during the lesson.
- Every student must respond to the question within two minutes.
- You must be able to collect and interpret the responses from all students in 30 seconds.
E.g. Choose the best description of a rhombus.
a. a 2D shape with two pairs of parallel sides
b. a quadrilateral with two pairs of parallel sides, each side being of equal length
c. a quadrilateral where all four sides have equal length. Opposite sides are parallel and opposite angles are equal.
d. a quadrilateral where all four sides have equal length. Opposite sides are parallel and all angles are right angles.
You can collate the responses using ABCD cards, mini whiteboards etc.
These types of questions are particularly useful for using with student response systems (Like the voting system on “Who wants to be a millionaire?”), as they will record the responses too.
Whatever the response, it offers an opportunity for probing and further discussion.
(See PowerPoint presentation for more examples)
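For teachers who collect hinge-question answers electronically, the tallying step is trivial to automate. The fragment below is only a sketch: the pupil names, their answers and the 80% threshold for moving on are invented for illustration and are not part of the original post.

```python
# A sketch of tallying hinge-question responses collected electronically. The
# pupil names, answers and the 80% threshold are invented for illustration.
from collections import Counter

responses = {"Aisha": "c", "Ben": "c", "Carys": "b", "Dev": "c", "Ella": "d"}
best_answer = "c"   # the fullest description of a rhombus in the example above

tally = Counter(responses.values())
share_correct = tally[best_answer] / len(responses)

print(f"Responses: {dict(tally)}")
if share_correct >= 0.8:
    print("Most pupils chose the best description - move on.")
else:
    print("Too many pupils chose distractors - revisit the idea before moving on.")
```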
Dilemmas and discussion
Asking questions which stimulate discussion is a great way to promote learning.
Such questions lead pupils to express their thinking, reveal their understanding, and reflect on and compare their thinking with that of others.
They also enable learning and progress to be demonstrated explicitly, as shown in this comment from a recent inspection report.
“In the best lessons, teachers engage their classes with imaginative activities. In a Year 10 history class, the teacher provided a collection of interesting resources, some print based and some in electronic format. Students worked in groups to explore these resources and form a judgement as to the quality of leadership provided by Field Marshall Haig in the First World War.”
Clearly the students were working at the higher end (Evaluation) of Bloom’s taxonomy.
Lower order questions
- What did we say a noun was?
- What’s the symbol for sodium?
- What happened when we heated the wax?
- What’s the formula for working out area?
- What do we have to remember about starting a new sentence?
- Which note is higher?
- Which words tell us that the character is sad?
- What happened to the salt when we added it to the water?
- Why does the water level go down faster on a hot day?
Higher order questions – (These are the kind that will promote learning!)
- Given what you have just learned, how could you devise a better way of doing this experiment?
- How might you use this technique to solve this (another) problem?
- Use your understanding of changes of state to explain how the water cycle works.
- Why did this event in the match prove to be the turning point?
- Why is this business website more successful than this one?
- What would we need to know about geology and chemistry to understand the industrial development of Stoke-on-Trent?
- What features of the writing work to increase the tension in this chapter?
- What elements in this piece of music create the sense of anger?
- How accurate were the measurements in the experiment we have just carried out?
- How well does this piece of music create the sense of anger?
- Which material is better for this purpose?
- What are the characteristics of this material that make it worth considering for this purpose?
- Which method of calculation do you think is more efficient/accurate?
- Design a pocket guide to fair testing.
- Create a one minute video/audio to explain why we have night and day.
- Write a “Ten commandments” of good design.
- Re-present the information in the text as a diagram.
- Compose a piece of music of your own to convey one of these emotions…..
How is your questioning?
- Do you ever consciously audit your questions?
- How good are the key questions you plan for each lesson?
- How well do the questions you ask relate to the learning objectives?
- Do the questions you ask challenge thinking?
- How often do you ask further questions that really probe understanding?
- How many questions do you ask to which you don’t know the answer?
- How often do the learners ask the questions?
- How often do you ask the learners to generate probing questions?
- How do the questions you ask promote learning?
Posts in this series -
- Consistently high expectations?
- Developing skills in reading
- Developing skills in writing
- Developing skills in communication
- Developing skills in mathematics
- “Well judged” teaching strategies
- Challenging tasks matched to pupils’ learning needs
- Engaged pupils
- Pupils understanding how to improve their learning
- Questioning to promote learning
- Discussion to promote learning
- Pace and depth of learning
- Developing curiosity
- Teacher expertise and subject knowledge
- Promoting independent learning
- Homework to develop understanding
- Addressing individual needs. | http://www.fromgoodtooutstanding.com/2012/05/ofsted-2012-questioning-to-promote-learning |
4.03125 | Normal Respiratory Rate and Ideal Breathing
Definition. Respiratory rate (also known as ventilation rate, respiration rate, breathing rate, pulmonary ventilation rate, breathing frequency, and respiratory frequency or Rf) = the number of breaths a person takes during one minute. It is usually measured at rest, while sitting.
Medical research suggests that respiratory rate is the marker of pulmonary dysfunction that gets progressively worse with advance of a large number of chronic health conditions. This website has scientific references related to increased respiratory rates for adults with cancer patients, cystic fibrosis, heart disease, asthma, diabetes, COPD and many other conditions.
What is the normal respiratory rate?
Medical textbooks suggest that the normal respiratory rate for adults is only 12 breaths per minute at rest. Older textbooks often provide even smaller values (e.g., 8-10 breaths per minute). Most modern adults breathe much faster (about 15-20 breaths per minute) than their normal respiratory rate. Respiratory rates in the sick are usually higher, generally about 20 breaths/min or more. This site quotes numerous studies that testify that respiratory rates in terminally sick people with cancer, HIV-AIDS, cystic fibrosis and other conditions is usually over 30 breaths/min.
Important note. You cannot measure your own breathing rate by simply counting it. As soon as you try, your breathing becomes deeper and slower. You can ask other people to count it when you are unaware of your breathing, or you can record your breathing using sensitive microphones fixed near your nose at night or while you sit quietly, busy with some other activity. It is also possible to determine your breathing frequency by asking other people to count the number of your breathing cycles during one minute while you are sleeping. (During sleep the respiratory frequency remains about the same as during wakeful rest, but the tidal volume, or amplitude of breathing, is reduced.)
What are the effects of increased respiratory rates?
When we breathe more than the medical norm, we lose CO2 and reduce body oxygenation due to vasoconstriction and the suppressed Bohr effect caused by hypocapnia (CO2 deficiency). Hence, overbreathing leads to reduced cell oxygenation, while slower and easier breathing (with lower respiratory rates) improves cell-oxygen content.
Normal pediatric respiratory rate for infants, newborn, toddlers, and children
(the source for this pediatric table is provided in references)
| Groups of children    | Their ages          | Normal respiratory rates |
| Newborns and infants  | Up to 6 months old  | 30-60 breaths/min        |
| Infants               | 6 to 12 months old  | 24-30 breaths/min        |
| Toddlers and children | 1 to 5 years old    | 20-30 breaths/min        |
| Children              | 6 to 12 years       | 12-20 breaths/min        |
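The table above translates directly into a simple lookup. The sketch below encodes the quoted ranges; the adult figure uses the textbook norm mentioned earlier (older textbooks give 8-10, newer about 12 breaths/min), and none of this is intended as a diagnostic tool.

```python
# Simple lookup based on the table above; the ranges are those quoted on this
# page and are indicative only - not a diagnostic tool.
def normal_breathing_range(age_years):
    """Normal resting respiratory rate (breaths/min) as quoted on this page."""
    if age_years < 0.5:
        return (30, 60)   # newborns and infants, up to 6 months
    if age_years < 1:
        return (24, 30)   # infants, 6 to 12 months
    if age_years <= 5:
        return (20, 30)   # toddlers and children, 1 to 5 years
    if age_years <= 12:
        return (12, 20)   # children, 6 to 12 years
    return (8, 12)        # adults: textbook norm, though most modern adults breathe faster

age, measured_rate = 8, 24
low, high = normal_breathing_range(age)
status = "within" if low <= measured_rate <= high else "outside"
print(f"{measured_rate} breaths/min at age {age} is {status} the quoted range {low}-{high}.")
```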
More about respiratory rate and body oxygenation
From a physiological viewpoint, the body-oxygen test (stress-free breath-holding time after a usual exhalation) is a more meaningful and important DIY test than one's breathing frequency. If your result is less than 20 s in the morning (when you wake up), you are likely to have health problems.
Ideal Respiratory Rate
Ideal respiratory rate at rest for maximum possible brain- and body-oxygen levels corresponds to the automatic or unconscious breathing with only about 3-4 breaths per minute (see Buteyko Table of Health Zones for details). Bear in mind that this relates to one's basal breathing or unconscious breathing pattern at rest (e.g., during sleep, when reading, writing, etc.) The practical test for the ideal breathing pattern is to measure one's body oxygen level (see the link below). The person with ideal breathing has about 3 min for the body-oxygen test (after exhalation and without any forcing oneself). This corresponds to the maximum breath holding time of about 8 or more minutes (if breath holding is done after maximum inhalation and for as long as possible).
Resources and further info:
- Mouth Breathing in Children, Babies, Toddlers, and Infants: Its causes, effect, treatment, and prevention: This web page will help you to slow down the breathing of your children naturally
- Ideal breathing pattern
- Normal respiratory rates for children (from Healthwise - health.msn.com - this page is not available now.)
Reference pages: Breathing norms and medical facts:
- Breathing norms: Parameters, graph, and description of the normal breathing pattern
- 6 breathing myths: Myths and superstitions about breathing and body oxygenation (prevalence: over 90%)
- Hyperventilation: Definitions of hyperventilation: their advantages and weak points
- Hyperventilation syndrome: Western scientific evidence about prevalence of chronic hyperventilation in patients with chronic conditions (37 medical studies)
- Normal minute ventilation: Small and slow breathing at rest is enjoyed by healthy subjects (14 studies)
- Hyperventilation prevalence: Present in over 90% of normal people (24 medical studies)
- HV and hypoxia: How and why deep breathing reduces oxygenation of cells and tissues of all vital organs
- Body-oxygen test (CP test) : How to measure your own breathing and body oxygenation (two in one) using a simple DIY test
- Body oxygen in healthy: Results for the body-oxygen test for healthy people (27 medical studies)
- Body oxygen in sick : Results for the body-oxygen test for sick people (14 medical studies)
- Buteyko Table of Health Zones: Clinical description and ranges for breathing zones: from the critically ill (severely sick) up to super healthy people with maximum possible body oxygenation
- Morning hyperventilation: Why people feel worse and critically ill people are most likely to die during early morning hours
References: pages about CO2 effect:
- Vasodilation: CO2 expands arteries and arterioles facilitating perfusion (or blood supply) to all vital organs
- The Bohr effect: How and why oxygen is released by red blood cells in tissues
- Cell oxygen levels: How alveolar CO2 influences oxygen transport
- Oxygen transport: O2 transport is controlled by vasoconstriction-vasodilation and the Bohr effects, both of which rely on CO2
- Free radical generation: Reactive oxygen species are produced within cells due to anaerobic cell respiration caused by cell hypoxia
- Inflammatory response: Chronic inflammation in fueled by the hypoxia-inducible factor 1, while normal breathing reduces and eliminates inflammation
- Nerve stabilization: People remain calm due to calmative or sedative effects of carbon dioxide in neurons or nerve cells
- Muscle relaxation: Relaxation of muscle cells is normal at high CO2, while hypocapnia causes muscular tension, poor posture and, sometimes, aggression and violence
- Bronchodilation: Dilation of airways (bronchi and bronchioles) is caused by carbon dioxide, and their constriction by hypocapnia (low CO2)
- Blood pH: Regulation of blood pH due to breathing and regulation of other bodily fluids
- CO2: lung damage: Elevated carbon dioxide prevents lung injury and promotes healing of lung tissues
- CO2: Topical carbon dioxide can heal skin and tissues
- Synthesis of glutamine in the brain, CO2 fixation, and other chemical reactions
- Deep breathing myth: Ignorant and naive people promote the idea that deep breathing and breathing more air at rest is beneficial for health
- Breathing control: How is our breathing regulated? Why hypocapnia makes breathing uneven, irregular and erratic.
| http://www.normalbreathing.com/index-rate.php |
4.125 | A synthetic speech system is composed of two parts: the synthesizer that does the speaking, and the screen reader that tells the synthesizer what to say.
The synthesizers used with PCs are text-to-speech systems. Their programming includes all the phonemes and grammatical rules of a language. This allows them to pronounce words correctly. Names and compound words can cause problems, as they often contain unusual spellings and letter combinations.
The synthesizer can be a card that is inserted into the computer, a box attached to the computer by a cable, or software that works with the computer's sound card. Some synthetic speech sounds robotic, although some can sound almost human. Software synthesizers are routinely included with the purchase of a screen reader.
A screen reader is a program, loaded into the computer's memory, that reads the text displayed on the screen. It allows the user to tell the speech synthesizer what to say by: (1) pressing different key combinations on the computer keyboard; (2) pressing keys on a separate keypad; or (3) having speech produced automatically when changes occur on the computer screen. These commands instruct the synthesizer to read a word, line, or full screen of text. Different key combinations give the commands to spell a word, find a string of text on the screen, announce the location of the PC cursor or focused item, and so on. They can also perform more advanced functions, such as locating text that is written in a certain color, reading pre-designated parts of the screen on demand, or reading text that is highlighted, allowing the user to know which is the active choice on a menu. They also permit the user to use the spell checker in a word processor or to read the cells of a spreadsheet.
There are screen access programs available currently for use with the PC running DOS, Windows 95, Windows 98, and Windows NT, as well as MACs and UNIX. Each incorporates a different command structure and most support a variety of speech synthesizers.
How Windows-based Screen Readers Work.
The graphical and visual nature of the Windows operating environment makes it necessary for the screen reader to do more than simply lift material from the screen and send it to the synthesizer. Its functions can be divided into five categories:
- Identifying and Reading Text and Graphics
- Once text has been displayed on the screen, Windows 95 stores it in a matrix of pixels, or tiny dots. It is impossible for the screen reader to interpret this information or to determine what is text and what is a picture. Windows-based screen readers intercept all information as it is being sent by Windows applications to the screen and store it in a memory construct known as the off-screen model (OSM). The screen reader then reads from the OSM rather than from the graphical image drawn on the screen itself.
- Identifying and Announcing the Function of Windows Constructs
- Windows maintains the type, or class, of each element in an application, and most screen readers are capable of retrieving this information and delivering it to the user. In a typical Windows dialog box there may be a button that the user must select to proceed with a task. The Windows screen reader can identify the item as a button rather than simply reading the text and color of the item along with other text.
- Identifying Graphics
- Many Windows features are not labeled with text, but are simply displayed as icons or pictures on the screen. Windows screen readers label these graphics so that they can be spoken in meaningful terms. A picture of a waste basket can be labeled "Delete," for example.
- Serving as a Mouse or Pointing Device
- Some features of Windows 95 applications are available only by clicking with a mouse. To overcome the difficulty of positioning the mouse on a particular point of the screen, Windows 95 screen readers incorporate features which move the mouse pointer in straight rows and columns or by meaningful units such as words or characters, find specified text and place the mouse pointer on it, and provide keystrokes that simulate the clicking of a mouse button.
- Providing the Information Efficiently
- The screen reader must provide an alternative interface to the user that gives efficient access. A synthetic speech program that reads the entire screen from top to bottom may eventually divulge the essential information, but it may take several minutes to do so. At the same time, it must be easy for the user to determine which of the items being spoken is the "current" item and which is additional, essential information. For example, if the speech program reads an entire dialog box, which of the controls is the focused item?
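The off-screen model described in the first item of the list above can be pictured with a toy sketch. This is not how any real screen reader is implemented; it only illustrates the idea of capturing text as it is drawn so that it can later be read back by position rather than interpreted from pixels.

```python
# A toy illustration of the off-screen model (OSM) idea described above. This
# is not real screen-reader code; it only shows the concept of capturing text
# as it is drawn, so it can later be read back by position.
class OffScreenModel:
    def __init__(self):
        self.items = []                          # (x, y, text) in draw order

    def capture_draw_text(self, x, y, text):
        """Called whenever an intercepted draw call writes a string to the screen."""
        self.items.append((x, y, text))

    def line_at(self, y):
        """Return the text of the 'line' at a given vertical position, left to right."""
        parts = [text for (x_pos, y_pos, text) in sorted(self.items) if y_pos == y]
        return " ".join(parts)

osm = OffScreenModel()
osm.capture_draw_text(10, 100, "Save")           # e.g. a button label
osm.capture_draw_text(80, 100, "Cancel")
print(osm.line_at(100))                          # -> "Save Cancel"
```

A real off-screen model also has to track fonts, colors, overwrites and scrolling, which is where most of the complexity lies.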
In the process of testing and reviewing a Windows screen reader for purchase, several questions must be answered:
- What version of Windows will be used? Is the screen reader compatible with the version of Windows to be used?
- Are there standard system configurations with which the screen reader does not work (color schemes, common video cards, etc.)?
- What synthesizers are/are not supported?
- From among the applications that will likely be used, are there some with which the screen reader does not work, no matter the skill level of the user?
- How much "automatic" speech does the screen reader give when the user is performing standard Windows functions such as selecting menu items or moving through items in dialog boxes? Can the amount of speech be adjusted to suit the user's skill level and preferences?
- How difficult is it to change simple standard features such as voice rate or the choice of a reading key?
- What must the user do in order to make an unfriendly program work well enough to be usable?
- What useful and unique features does the screen reader have?
- What problems does the screen reader add to Windows use?
- Is the manual accessible and accurate?
- Is there a tutorial in a usable format?
Source: AFB Copyright © American Foundation for the Blind 2005. All rights reserved. Used with permission. | http://www.ocusource.com/main.cfm?page=shop&topic=dictionary&term=ScreenReader |
4.125 | Physics Tutorial: Newton's Law of Cooling and Coffee
Newton’s Law of cooling states that a hot object transfers heat to its surroundings (cools) at a rate proportional to the difference in temperature between the two.
Newton’s Law of cooling means that if a hot object is subjected to a very cold object, it will transfer its heat a lot faster than if the hot object that is subjected to a mildly cool object.
You are having dinner with your friend at a restaurant one evening. You place your order, and the waitress brings you your coffee much earlier than the rest of your meal. You want the coffee to stay hot until your food arrives so you can have them at the same time. You always add cream to your coffee, but know that from Newton’s Law of Cooling that a hot object transfers heat to its surroundings at a rate proportional to the difference in temperature between the two. So your choice is to either add the cream to your coffee now, or add the cream to your coffee once your meal arrives. You think about the problem for a moment and come to a conclusion.
If you add the cream right away, the temperature difference between the coffee and its surroundings is smaller than the difference between the hot, black coffee and the restaurant air. A hot object cools fastest when the difference between its temperature and that of the surrounding air and cup is greatest. Adding cool cream at the beginning therefore slows the cooling, because it reduces the temperature difference between the hot coffee and its surroundings. If you waited, the difference between the hot coffee and the restaurant air and cup would be at its greatest, so the coffee would cool more rapidly, and then adding the cream would cool it even further. So you add the cream as soon as the coffee arrives, and enjoy a nice hot cup when your meal comes, all thanks to Newton's Law of Cooling.
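The reasoning above can be checked numerically. The sketch below compares the two strategies using Newton's law of cooling, T(t) = T_air + (T0 - T_air) * exp(-k*t); the starting temperatures, the 10% cream fraction, the cooling constant k and the 10-minute wait are all invented for illustration.

```python
import math

# Rough numerical check of the "cream now vs. cream later" argument using
# Newton's law of cooling. Temperatures, the mixing ratio, and the cooling
# constant k are invented for illustration; only the comparison matters.
T_air, T_coffee, T_cream = 20.0, 90.0, 5.0   # degrees C
cream_fraction = 0.1                         # 10% of the cup is cream (assumed)
k = 0.03                                     # cooling constant per minute (assumed)
wait = 10.0                                  # minutes until the meal arrives (assumed)

def cool(T0, minutes):
    """Temperature after 'minutes' of cooling toward the room temperature."""
    return T_air + (T0 - T_air) * math.exp(-k * minutes)

def mix(T_hot, T_cold):
    """Temperature after mixing in the cream."""
    return (1 - cream_fraction) * T_hot + cream_fraction * T_cold

cream_now   = cool(mix(T_coffee, T_cream), wait)
cream_later = mix(cool(T_coffee, wait), T_cream)

print(f"add cream now  : {cream_now:.1f} C after {wait:.0f} min")
print(f"add cream later: {cream_later:.1f} C after {wait:.0f} min")
```

With these numbers the cream-first cup ends up slightly warmer (about 65.6 °C versus 65.2 °C), which is the direction the argument predicts; the size of the advantage depends on the assumed values.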
Question – What is another food that Newton’s Law of Cooling can be applied to, to make consuming it more pleasant?
Answer – Newton’s Law of Cooling can be applied to a hot bowl of soup that you enjoy eating together with a sandwich. You enjoy adding a liquid spice flavor called “Maggi” – but do not know if you should add it before your sandwich arrives or after. By applying Newton’s Law of Cooling, you should add it before, so your soup is still nice and warm when your sandwich arrives.
- Gregory Shepertycky | http://www.physics247.com/physics-tutorial/newtons-law-cooling.shtml |
4.25 | In August, the Curiosity rover landed on Mars and began gathering data on the planet’s geology and atmosphere. While NASA has not yet released the Curiosity’s data, expected among its discoveries is a controversial substance: methane. While scientists agree that trace amounts of methane should be present, the concentrations of the gas consistently exceed predicted quantities, leaving researchers to wonder what has produced it.
On earth, methane is an extremely common organic compound. It is colorless and odorless but highly combustible, making it useful as fuel. Methane is the principal component of natural gas and is produced by living organisms as diverse as cattle, termites, and anaerobic bacteria.
It is not so easy to explain how Mars got its methane. Scientists say they expect to see some traces of the gas on Mars, but not the concentrations that consistently appear to be present. Telescopes first detected Mars’ methane, but many researchers dismissed those readings as interference from Earth’s atmosphere.
Recent evidence, however, emphasizes that the methane really is there. The Thermal Emission Spectrometer on the Mars Global Surveyor, an orbiting satellite that collected data from 1996 until 2006, detected relatively high levels of methane in Mars' atmosphere. MGS revealed that Mars' methane levels vary by location and season: they are highest in summer and autumn, in regions with volcanoes or other geothermal activity. Chris McKay, a Mars specialist at NASA, told SPACE.com, "Methane on Mars should have a lifetime of 300 years and should not be variable. If it is variable, this is very hard to explain with present theory. It requires unexpected sources and unexpected sinks."
This makes it sound like the methane is produced by geology, not biology, but scientists are skeptical that geological processes can account for the quantity and variability of methane found. “Methane is really quite a rare gas in hydrothermal/volcanic exhalations,” Dirk Schulze-Makuch, an astrobiologist at Washington State University, said in an interview with SPACE.com.
While methane comprises less than 1% of Mars' atmosphere, there is nonetheless a lot of it in absolute terms. Malynda Chizek, a graduate student in astronomy at New Mexico State University, used a colorful image to describe to phys.org how much methane seems to be present: to produce the quantity of methane that MGS and other instruments have observed, about five million cows would have to be generating roughly 200,000 tons of methane per year.
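A quick back-of-the-envelope check of that comparison, using only the two figures quoted above (the per-cow emission rate is the derived quantity):

```python
# Quick arithmetic check of the cow comparison above (figures from the article).
methane_tons_per_year = 200_000
number_of_cows = 5_000_000

kg_per_cow = methane_tons_per_year * 1000 / number_of_cows
print(f"{kg_per_cow:.0f} kg of methane per cow per year")
# ~40 kg per cow per year, which is in the same ballpark as typical per-animal
# estimates for cattle (published estimates vary widely with breed and diet).
```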
Like many researchers, Chizek is eager to see what Curiosity will detect. The rover carries an advanced suite of chemical analysis equipment, the Sample Analysis at Mars (SAM). SAM can “sniff” gases in Mars’ air, and it can heat or chemically treat soil and rock samples to extract gases from them. SAM’s precision and diversity of tools will likely provide information that will help scientists identify the source of the methane.
Researchers are cautious when they hypothesize that the methane could be a clue toward life on Mars, but it is clear that many of them hope SAM will reveal that Mars once supported life. As Michael J. Mumma, a senior scientist at NASA's Goddard Space Flight Center, told The Daily Galaxy, "Based on evidence, what we do have is, unequivocally, the conditions for the emergence of life were present on Mars — period, end of story." The samples that SAM collects could reveal whether those conditions actually produced life, and perhaps hint at the nature of that life and why it disappeared. Mars' mysterious methane might be all that remains of ancient (and probably microscopic) Martians. | http://thebunsenburner.com/news/curiosity-rover-finds-clues-to-the-mystery-of-mars-methane/
4.34375 | Geology of the Adirondack Park
The Adirondack Mountains are very different in shape and content
from other mountain systems. Unlike elongated ranges like the Rockies
and the Appalachians, the Adirondacks form a circular dome, 160
miles wide and 1 mile high.
Although the Dome as we know it today is a relatively recent development, having
emerged about 5 million years ago, it is made of ancient rocks more than
1,000 million years old. Hence, the Adirondacks are "new mountains from old rocks."
Birth of a Glacier
A quarter of a million years ago, when the earth was a few degrees
cooler, the snow which fell in the winter did not melt entirely
in the cool summers. As it accumulated over millennia, its enormous
weight compressed the lower layers of snow into ice, eventually
becoming thousands of feet thick. The increased pressure softened
the lower ice, causing it to flow like thick molasses. A glacier was born.
Shaping the Landscape
As the ice advanced southward into the Adirondack region, soil
and rock were scraped from the land and embedded in the ice like
sand in sandpaper. Alternately scratching and smoothing the earth's
surface, the glacier pulverized boulders into pebbles, carrying
the debris as it moved. As it thickened, the glacier crept over
hills and, eventually, over the highest mountains, breaking and
lifting rocks as it rounded their summits. When the ice sheet melted,
these rocks, called erratics, were deposited throughout
the Adirondacks, where they can be seen today in fields, along
forest trails, and scattered on mountaintops.
Alpine Glaciers, Cirques, and Horns
As the massive continental glacier grew to the north, small alpine
glaciers were forming in the Adirondack Mountains. These alpine
glaciers carved the upper slopes of the mountains for thousands
of years. Gradually, they became buried by the advance of the continental glacier.
The distinctive summit of Whiteface Mountain owes its shape to alpine glaciers.
Bowl-shaped amphitheaters called cirques were carved from the rock on the north,
east and west sides of the mountain by three separate alpine glaciers. Where
the tops of the cirques joined, sharp ridges, called aretes, were formed. If
this process had continued, the cirques would have ended up back-to-back, leaving
a horn, and Whiteface Mountain would now look like the Matterhorn in Switzerland.
Kettle Holes & Kettle Ponds
As the glacier thawed, iceberg-sized chunks of ice broke off and were buried
beneath accumulating sand and gravel washed from the ice. When these ice blocks
melted, they left depressions - kettle holes - in the landscape. When a kettle
hole went below the water table, a kettle pond was established as the steady
supply of water remained in the basin. Many of the small, circular ponds and
wetlands in the Adirondacks were created in this fashion.
Eskers and Kames
Meltwater streams, flowing under and within the glacier through
tunnels in the ice, built their own stream beds from rock material
embedded in the glacier. After the glacier melted, these riverbed
sediments were deposited on the landscape as winding ridges called
eskers. When sediment-laden water flowed over the glacier's surface,
it filled depressions with sand and gravel.
As the glacier melted, material from circular depressions was deposited on
the landscape as mounds called kames.
Adirondack soils are young, having developed only since the glacial
retreat about 10,000 years ago. Unglaciated areas in the rest of
the United States have soils that have developed over millions
of years. Soils in the Adirondacks are generally thin, sandy, acid,
infertile, and subject to drought.
Forest soils have a layer of leaves, needles, twigs and other
plant and animal parts covering the mineral soil. This organic
debris accumulates with every season and, as it slowly decomposes,
recycles nutrients back to the growing plants. The mineral soil
provides plants with solid anchorage for their roots and a secondary
source of nutrients. Tree roots will grow in all directions within
the soil in search of water, nutrients and support. Root growth
will continue as long as the soil temperature is greater than 40
degrees F and roots are not limited by rock or compacted soil, or
by soil so water-saturated that it contains no oxygen for the roots.
The melting ice sheet created huge, sediment-laden rivers that
roared across the Adirondacks, depositing sand and gravel outwash
on giant, shifting floodplains. Coarse gravels and boulders settled
on river bottoms; lighter sand particles, silts and clays were
carried downstream. As the glacial rivers changed velocity and
direction, layers of these various outwash materials built up on
top of one another, forming the sedimentary strata normally found
in valleys and lower elevations today.
The debris that was deposited directly on the land by melting
glaciers without being carried and stratified by meltwater streams
contained unsorted rocks of all shapes and sizes. These are referred
to as till. Because they have not been smoothed by the movement
of the meltwater stream, till materials are often rough and jagged.
Four Basic Ingredients
Soil is made up of four components: mineral and rock particles,
decayed organic matter, live organisms, and space for air and water.
Minerals and Rocks
The mineral component of soil ranges from fine clays to rocks.
The upper mineral layer - topsoil - may have organic matter incorporated
into it. The lower mineral layers, or horizons, are collectively
called the subsoil.
Dead plants and animals and their waste products, in varying
stages of decomposition, provide the organic component of soil.
These horizons exist near the top of most forest soils.
From microorganisms (bacteria, fungi, and protozoa) to earthworms,
soil organisms may account for 5 tons of living tissue per acre.
These organisms aid in the essential enrichment of soil by destroying
plant residues, decomposing the dead bodies of all organisms, and
mixing and granulating soil particles.
Healthy soil promotes the recycling of nutrients from mineral
and organic material to live organisms. This transfer occurs in
the voids between materials in the soil, in the spaces for air
and water, often called, collectively, pore space. Typically, topsoil
has 50 percent pore space in its mix of organic and mineral materials.
Growing conditions are ideal when soil pore space holds equal parts
of air and water, allowing room for root expansion, diffusion of
nutrients, and movement of soil life.
Soils are formed by the action of plants and animals and the
physical breakdown of minerals, called weathering.
Plant and animal material which accumulated on the surface of the ground is
decomposed by numerous micro-organisms. The by-products of the process are
organic acids that are washed into the ground, making the soils acidic. These
acids dissolve and transport organic matter, iron, and other elements into
the soil, to a depth of one or two feet on the better drained sites.
Organic matter accumulates to form blackened layers in the soil; below, iron
accumulates to form red/rusty colored layers.
Formation of Water Systems
Melting ice, glacial debris, and changing glacial topography
contributed to the continual disruption of the meltwater drainage
system of the Adirondack region. Lakes and ponds were formed as
ice debris dammed river valleys; as dams broke, sand and gravel
were redistributed downstream. This process left glaciated regions
like northern Minnesota, Wisconsin, and the Adirondacks dotted
with thousands of beautiful, natural lakes. Yet for all this reconfiguration
of the landscape, the major drainage patterns of the Adirondack
Dome were essentially unchanged by glaciation. Taking the path
of least resistance, Adirondack waters drain from the central high
country to the region's periphery. Water flows east from the mountains
to Lake Champlain, northwest to the St. Lawrence River, west to
Lake Ontario, and southward to the Hudson and the Mohawk rivers,
as it did before the arrival of the glaciers.
Rivers and Streams
Water is both the workhorse of the sun and the lifeblood of the
living world, flowing in an endless cycle through the landscape.
Continually replenished in flakes of snow, drops of rain and dew,
or moisture condensed into clouds and fog, water links all the
plant and animal communities of the Adirondack Park. Wherever it
falls to earth, water moves downhill in response to gravity. Rivulets
and trickles join brooks, which combine to form streams and rivers.
Nearly 30,000 miles of streams and brooks that emerge from the
mountains and forests form the network from which 1,000 miles of
powerful Adirondack rivers gather their volume and strength. These
rivers and their networks are perhaps the greatest multiple-use
natural resources in the Adirondacks. They provide habitat for
fish and wildlife from the kingfisher to the salmon to the otter.
In the past, they carried pulpwood and logs bound for sawmills
and market. They also were the trade
and travel corridors that set the pattern of settlement of Adirondack hamlets
that we still see.
Riffles and Pools
In their mountainous headwater reaches, most streams fall steeply
through narrow v-shaped channels in the shallow soil and bedrock,
developing swift-running riffles that alternate with deeper, more
sluggish pools. Riffles, places of high energy where air and water
freely mix, charge stream water with oxygen. Pools are quieter
areas where organic materials tend to collect and decompose, consuming
oxygen in the process. This allows the vital recycling of nutrients
necessary for living organisms in the stream. One after another,
watershed streams join force, forming rivers that link the mountains
with the sea.
The river carries sediments eroded from the hills down to the flatland, where, as it slows and meanders, it deposits its bounty in the slack water of bends or along the floodplain.
Lakes and Ponds
Flowing and still water has always been an integral part of the Adirondack landscape. But the lakes and ponds we know today are relatively young, resulting from the retreat of the last glacier, the Wisconsin, only 10,000 years ago. Each lake and pond is a separate ecosystem composed of a community of plants, animals and microbes living together in a stillwater environment.
Ponds are typically shallow enough for sunlight to reach across their entire
bottom; lakes usually fall off into darkness, where rooted aquatic plants cannot
grow. Coldwater lakes are often deep and clear, with steep sides and rocky
or sandy bottoms. Because light does not penetrate all the way to the bottom,
relatively little plant growth takes place. Warmwater ponds are typically shallower,
with gently sloping sides and thicker, organic-rich sediments. Their shoreline
offers a fertile environment for aquatic plant growth.
The Adirondacks: A Gift of Wilderness
This information was compiled by the Adirondack Park Visitor Interpretive | http://apa.ny.gov/about_park/geology.htm |
4.1875 | South Carolina African Americans – Major Events in Reconstruction Politics
Also see African-Americans - Reconstruction - 1865-1900 Main Page
Written by Michael Trinkley of the Chicora Foundation
Free at last, but not for long
After the Civil War, white South Carolinians moved quickly to eliminate black people's newfound freedom. They wanted to return blacks, in effect, to their prewar status as slaves. Thus for most African-Americans, the days of celebration were few. Within months of the Confederacy's defeat, South Carolina had adopted a new constitution. This constitution was riddled with Black Codes, or what later became known as the laws of Jim Crow.
Freedom lost: The Constitution of 1865
In the summer of 1865 President Andrew Johnson, who had succeeded Lincoln, ordered that lands under federal control be returned to their previous white owners. Many African-Americans found themselves forcibly evicted from lands they had been told were theirs forever. They had no choice but to work as laborers on white-owned plantations. There was a deep sense of betrayal which lasted throughout Reconstruction and beyond.
In addition, South Carolina's constitution of 1865 failed to grant African-Americans the right to vote, and it retained racial qualifications for the legislature. The tone of the 1865 Constitution was set by Governor B.F. Perry:
To extend this universal suffrage to the "freedmen" in their present ignorant and degraded condition, would be little less than folly and madness ... [because] this is a white man's government, and intended for white men only.
This constitution created the climate necessary for the enactment of Black Codes in South Carolina. These laws sought to recapture the power of the white master over African-Americans, thus denying them social and political equality. For example, the Black Codes mandated that
Freedom Regained: The Constitution of 1868
At the national level, President Johnson vetoed two bills – one extending the life of the Freedmen's Bureau and one (called the Civil Rights Bill of 1866) spelling out the rights any citizen of the United States was to enjoy, without regard to race. Fortunately both bills were passed over Johnson's vetoes. Moreover, Congress approved the Fourteenth Amendment, which broadened the federal government's power to protect the rights of American citizens. While this amendment included many provisions, the most important was that it made the federal government – not the individual states – the protector of citizens' rights.
In 1867 Congress also passed, again over Johnson's veto, the Reconstruction Acts, which divided the South into five military districts and called for the creation of new governments which allowed blacks the right to vote. Only after the new governments ratified the Fourteenth Amendment would the Southern states be readmitted to the Union.
In South Carolina, the development of a new constitution in 1868 was an extraordinary departure from the past. Blacks comprised 71 to 76 of the 124 members. An observer from the New York Times commented:
The colored men in the Convention possess by long odds the largest share of mental calibre. They are all the best debaters; some of them are peculiarly apt in raising and sustaining points of order; there is a homely but strong grasp of common sense in what they say, and although the mistakes made are frequent and ludicrous, the South Carolinians are not slow to acknowledge that their destinies really appear to be safer in the hands of these unlettered Ethiopians than they would be if confided to the more unscrupulous care of the white men in the body.
The resulting 1868 South Carolina constitution
Whites Regain Control
The 1868 constitution was also different (from both earlier documents and the later constitution of 1895) in that it was submitted to the people of the state for ratification. Sadly, this constitution – and its progressive approach – were doomed by both internal and external events.
Internally, white South Carolinians could not accept the idea of former slaves voting, holding office, and enjoying equality before the law. The black legislature of South Carolina was called a menagerie and a monkey house. Planter William Gregorie commented, "I think the time will come, if we ever have a white man's civil government again, when [there] will be more slaves than [there] ever were."
James S. Pike, a journalist who came to South Carolina in the 1870s and wrote The Prostrate State, said, "It is impossible not to recognize the immense proportion of ignorance and vice that permeates this body [the legislature]."
A vast body of lies was developed concerning these blacks, and these lies continue in modern scholarship. For example, the Democratic Party issued a broadside in 1868 claiming that of the 71 black delegates to the 1868 Constitutional Convention, only 14 were on the tax list. But if you go to the manuscript census, you find that 31 of them owned more than $1,000 in property – a substantial sum for that period.
The Election of 1876
By 1873 the entire country had plunged into a severe economic depression. This distracted Congress, furthered the anger of Southerners, and caused the Northern public to retreat from Reconstruction. Violence in South Carolina increased, flaunting the belief that there was little to fear from Washington. In 1876 Wade Hampton, one of the state's most popular Confederate veterans (at least among whites), was nominated for governor. Hampton's supporters, sporting red shirts, formed "rifle" and "gun" clubs and disrupted Republican gatherings. They also drove freedmen from their homes and made it known that they intended to carry the election no matter what. One planter remarked that they would win even "if we have to wade in blood knee-deep."
Not only did Hampton win, but these events also affected the Tilden-Hayes presidential election. This election was so close that it was decided by Congress – in favor of the Republican Hayes. However, in order to ensure inauguration, the Bargain of 1877 was struck whereby Hayes would recognize Democratic control of the Southern states and would also remove the last of the federal troops.
Consequently Reconstruction ended in the South. Republicans did not even offer a candidate for governor in 1878. Moreover, the federal government stood silently by as Southern states passed laws stripping African-Americans of their rights, including their right to vote.
Freedom Lost Again: The Constitution of 1895
In 1882 South Carolina's new white legislature passed a law requiring voters to place ballots for each category of office in a separate box – eight in all – with the provision that any ballots placed in an incorrect box would be disqualified. Registration books were kept open for only a short time each month and reregistration was required every time a voter moved – even if the movement was within the same precinct.
By 1894 the law also required potential voters registering for the first time to provide detailed personal information, as well as affidavits from two reputable citizens attesting to the applicant's good character. The South Carolina Constitution of 1895 completed the disenfranchisement by requiring a literacy test, by disqualifying voters for crimes that blacks were stereotypically expected to commit, and by requiring the payment of a poll tax six months prior to the election.
The Constitution of 1895 also created state-sanctioned segregation. Article 2, Section 7, stated that "Separate schools shall be provided for children of white and colored races, and no child of either race shall ever be permitted to attend a school provided for children of the other race."
This document also banned interracial marriage, defined as "marriage of a white person with a Negro or mulatto or a person who shall have one-eighth or more of Negro blood."
The Hampton legislature also swept away laws that had raised taxes to provide benefits to blacks. Funding was cut to the state hospital and asylum. The law allowing poll taxes to be used to fund public education was repealed, virtually eliminating public education. Furthermore, laws were passed making oral contracts binding, even without witnesses, which favored the plantation owner in disputes with blacks. A law was even passed that gave planters the right to hold laborers who were indebted to them on their plantations until the laborers worked off their debt.
South Carolina White Viewpoint
In 1900 "Pitchfork" Ben Tillman, then a US Senator from South Carolina, made this speech on the floor of the United States Congress:
As white men we are not sorry for it, and we do not propose to apologize for anything we have done in connection with it. We took the government away from them in 1876 .... We did not disfranchise the negroes until 1895. Then we had a constitutional convention convened which took the matter up calmly, deliberately, and avowedly with the purpose of disfranchising as many of them as we could under the fourteenth and fifteenth amendments. We adopted the educational qualification as the only means left to us, and the negro is as contented and as prosperous and as well protected in South Carolina today as in any State of the Union south of the Potomac. He is not meddling with politics, for he found that the more he meddled with them the worse off he got. As to his "rights" – I will not discuss them now. We of the South have never recognized the right of the negro to govern the white man, and we never will. We have never believed him to be equal to the white man, and we will not submit to his gratifying his lust on our wives and daughters without lynching him. I would to God the last one of them was in Africa and that none of them had ever been brought to our shores.
| http://www.sciway.net/afam/reconstruction/majorevents.html
4.15625 | Dr. Fraser begins the new section on Radical Expressions with Simplifying Radical Expressions. After a thorough introduction to radical expressions in simplest form and principal square roots, he teaches you the product rule. Then, after learning how to deal with square roots of variables with even powers, you will dive into the quotient rule, rationalizing denominators, and conjugates. At the end of this lecture are four additional examples of how to simplify expressions.
- A radical expression contains a square root. The expression inside the square root is called a radicand.
- To simplify a radical expression, extract all perfect squares from the radicand.
- Use the product and quotient properties of square roots to help you simplify radical expressions.
- If the exponent of the variable inside the radical is even and the resulting simplified expression has an odd exponent, take the absolute value of the expression to guarantee that the simplified expression is nonnegative.
- To be in simplified form, there can be no radicals in the denominator. Removing such radicals is called rationalizing the denominator.
- To rationalize a monomial denominator, simply multiply the numerator and denominator by the radical in the denominator.
- To rationalize a binomial denominator, multiply the numerator and denominator by the conjugate of the denominator. The conjugate is the same as the original binomial but with the sign between the first term and the second term reversed (see the sketch after this list).
- To be in simplified form, there must be no perfect squares or fractions in the radicand, and there must be no radicals in the denominator.
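As a rough illustration of these rules, here is a short sketch using the SymPy library. The specific numbers are arbitrary examples rather than values taken from the lecture, and the exact printed form may differ slightly between SymPy versions.

```python
from sympy import sqrt, radsimp, symbols

# Extracting perfect squares from the radicand: sqrt(72) -> 6*sqrt(2)
print(sqrt(72))

# The absolute-value rule: for real x, sqrt(x**2) simplifies to Abs(x)
x = symbols("x", real=True)
print(sqrt(x**2))

# Rationalizing a monomial denominator: 5/sqrt(3) -> 5*sqrt(3)/3
print(radsimp(5 / sqrt(3)))

# Rationalizing a binomial denominator with its conjugate:
# 1/(3 + sqrt(2)) -> (3 - sqrt(2))/7
print(radsimp(1 / (3 + sqrt(2))))
```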
Simplifying Radical Expressions
Lecture Slides are screen-captured images of important points in the lecture. Students can download and print out these lecture slide images to do practice problems as well as take notes while watching the lecture. | http://www.educator.com/mathematics/algebra-1/fraser/simplifying-radical-expressions.php |
4.125 | Designing an effective reading program involves defining what one means by the word literacy and developing a conceptual framework based on that definition. Another important step in designing a reading program concerns identifying the strategies that good readers use to understand the text and designing instructional strategies based on these reading strategies.
A Conceptual Model of Adolescent Literacy
Carnahan and Cobb (2004) present a data-driven model for adolescent literacy.
Defining Reading Instruction for Adolescents
This brief synopsis provides research-based modules of instruction.
Classroom Observation Tools
Observation and reflection tools to assist literacy coaches in their work with secondary content-area teachers.
| http://www.learningpt.org/literacy/adolescent/components.php
4.375 | How to Take Better Notes
Rule number one: There are no rules! Kids' minds work in different ways, and their notes should reflect those differences.
You can help your child become an independent learner -- and a great note taker -- by encouraging her to think about how she thinks. Then she can figure out the note-taking methods and tricks that will work best for her.
Set her on the right course with these four guiding principles:
Principle 1: Your Language and Attitude Matters
The language you use to talk about taking notes with your child really matters. Make sure your child knows that it's okay to do things differently.
Take an approach that acknowledges the frustration that many kids feel about school and note taking. Try saying, "You know what? Taking notes is hard sometimes. And class can be boring sometimes. Let's figure out a way to play this game." Then let him know that all kids take notes differently. Tell him that you'd like to help him find out how to take notes in a way that is right for him.
Principle 2: Embrace Coping Mechanisms
Often, positive strategies that children develop are discounted as "coping mechanisms." This can be true for note taking. Take the time to identify your child's note-taking coping mechanism and figure out what unique skills it represents.
Ask your child how she "gets by in class." Ask her what tricks she uses to get the information down. Does she just listen, look at other people's notes, draw to help pay attention, or daydream? Try to figure out what these tricks say about the type of learner your child is. If she's drawing on her notes, often that means she's a visual learner. Daydreaming while still absorbing the information means that she's an auditory learner. Help her to see that these coping mechanisms can be viewed positively as unique skills. Keep this information in mind as you look at different note-taking strategies and read "Note Taking: Finding the Method that Works."
Principle 3: Talk About Form, Content, and Notations
This is the most important discussion you will have with your child and it's one that you should continue to have throughout your child's school career. There are five different note-taking structures, but your child might already have one that is all his own. Talk with your child about whether or not the Roman numeral system works for him. Ask whether or not he has a special notation system like abbreviation or color-coding. And lastly, ask if he has a specific focus on details, themes, stories, or connections that is helpful for remembering information.
It's okay and totally normal if your child gives you the usual, "I don't know" or "Nothing." What matters is that you've taken the first step in making your child's thoughts and ideas central to the process of individualizing his notes.
Principle 4: Understand Process
Empowering your child to individualize her notes won't happen overnight. Let your child know that better note taking will develop over time. Have her take a guess at what might work and try it out. Tell her that it's okay if something doesn't work -- she'll learn from the experience and can always try something new. If you stick to that process of trial and error, over the course of a few weeks or a month you will see improvement.
| http://school.familyeducation.com/learning-disabilities/treatments/37784.html
4.09375 | How is natural gas formed? Before providing a proper answer, it is worth understanding what natural gas actually does. Natural gas is the driving force behind steam and gas turbines, and it contributes to industrial production. Since it burns relatively cleanly, natural gas is undoubtedly one of the world's most precious commodities. Natural gas develops slowly, and it also takes a long time to gather it and deliver it to homes, but, given all that it can do, this form of energy is worth the wait.
The first steps in the process of natural gas formation began several million years ago. Plants and animals that lived during these ancient times began to pile up after their deaths, creating a glut of organic material. Eventually, these piles of dead plants and animals were smothered beneath hardened rock, and a slow, natural process of heat and pressure changed some of the trapped material into odorless natural gas.
Finding And Acquiring The Gas
Many are surprised to learn that the natural gas we use today actually began to form millions of years ago. While the gas itself is old, the methods used for finding and acquiring the gas are relatively new. Geologists begin the work by evaluating land for areas of promising rock. As only certain types of rock conceal ancient decaying material, specialists must use seismic surveys to assess vibrations. If the rock appears to contain natural gas, drilling will commence. This drilling is made simpler through the digging of large wells, which enables the trapped gas to flow upwards for collection.
Transporting The Gas
While acquiring gas is an impressive process in its own right, gas is relatively useless if unavailable for public consumption. To this end, large pipelines transport the gas from the drilling fields to homes and factories. Once the gas reaches the general location of the communities that rely on it, it flows to its destination via much smaller pipes called veins. These veins transport the gas directly to where it is needed. | http://www.life123.com/career-money/commodities-2/natural-gas/how-is-natural-gas-formed.shtml |
4.03125 | The Cambodian Hindu temple of Angkor Wat was built using 5-10 million enormous sandstone blocks, some weighing nearly two tons, but no one has ever definitively explained how the blocks were moved from the nearby mountain to the location of the temple. Thanks to Google Earth, we may finally have the answer.
As explained in the Huffington Post:
Researchers report in a paper in press at the Journal of Archaeological Science that when they examined Google Earth maps of the area, they saw lines that looked like a transportation network. Field surveys revealed that the lines are a series of canals, connected by short stretches of road and river, that lead from the quarries straight to Angkor. The roads and canals–some of which still hold water–would’ve carried blocks from the 9th century to the 13th century on a total journey of 37 kilometers or so. The researchers don’t know whether the blocks would’ve floated down the canals on rafts or via some other method.
It seems like a solid theory, and might very well be the answer. For a bit more on Angkor Wat, go check out the various 3D models that are available, or simply fly there using this KML file.
Check out the full article and then share your thoughts below. Does this really explain how the blocks were moved? | http://www.gearthblog.com/blog/archives/2012/10/google_earth_reveals_the_secret_of.html |
4.28125 | Students need a great curriculum, delivered within an environment that eliminates barriers to success. There’s no way that our students can become the thinkers, innovators and leaders of tomorrow if they have been taught only the subjects tested. All students need rich, well-rounded curricula that ground them in areas ranging from foreign languages to physical education, civics to the sciences, history to health, as well as literature, mathematics and the arts. Curricula do not work in isolation and must be a part of the entire system—from instruction to professional development to assessment. Curricula must be aligned with the academic standards and standards-based assessments that students are expected to master, including the Common Core standards for reading and math. And teachers must have access to high-quality, ongoing professional development to help them use the curricula to differentiate their instruction to ensure all students succeed. Right now, such curricula aren’t routinely in place, and a lot of teachers are forced to make it up every single day.
A curriculum does what academic content standards can’t do. It provides teachers with a detailed road map for helping students reach the standards. It is the how-to guide for teachers. It conveys the “what” of the standards, and it clarifies how much of the “what” is good enough. The curriculum provides information to teachers about the content, instructional strategies and complexity of student performance levels necessary to meet standards. Curriculum must be comprehensive without being restrictive; it must provide examples and allow for flexibility; and, it must establish the broad parameters within which teachers apply their professional knowledge and judgment. The AFT believes that it is necessary to develop a shared understanding of what a curriculum must contain in a standards-based system. We do not condone the use of an intractable, scripted curriculum that provides no flexibility for teachers. We also do not believe that teachers should have to go it alone. A curriculum should provide enough examples to allow a teacher, in collaboration with other teachers, to develop a common understanding of the standards. Elements of a high-quality curriculum include:
- learning continuums that show the progression and development from grade to grade and within each grade;
- instructional resources—reading materials, textbooks, software and so forth—that are aligned to the standards;
- information on instructional strategies or techniques to help teach the standards in a variety of ways;
- performance indicators to clarify the quality of student work needed for mastery of the standards, including performance indicators, rubrics or scoring guides, sample student work and quality feedback; and
- a clearinghouse of high-quality lesson plans and units based on the standards and developed by teachers. | http://www.aft.org/issues/standards/curriculum/index.cfm |
4.46875 | David Fahey of the National Oceanic and Atmospheric Administration's (NOAA) Aeronomy Laboratory in Boulder, Colo., explains.
The severe depletion of stratospheric ozone during the winter in Antarctica is known as the "ozone hole." A significant decrease appeared first over Antarctica because atmospheric conditions there increase the effectiveness of reactive halogen gases containing chlorine and bromine that destroy ozone. Formation of the Antarctic ozone hole requires not only these ozone-depleting chemicals but also air temperatures low enough to form polar stratospheric clouds (PSCs).
Ozone-depleting chemicals are produced in the stratosphere from halogen source gases emitted at the earth's surface. These source gases, such as chlorofluorocarbons (CFC-11 and CFC-12, for example), are manufactured and released in the troposphere by human activities. Source gases exist in comparable abundances throughout the stratosphere in both hemispheres even though most of the emissions occur in the Northern Hemisphere. The amounts are comparable because most source gases have no important natural removal processes in the lower atmosphere, allowing winds and warm-air convection to redistribute and mix the gases efficiently throughout the troposphere. These well-mixed source gases enter the stratosphere primarily from the upper tropical troposphere. Atmospheric air motions then transport the gases farther upward and toward the poles in both hemispheres. Once in the stratosphere, the source gases--which alone do not destroy ozone--chemically degrade to form ozone-depleting chemicals, such as chlorine and chlorine monoxide.
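Once reactive chlorine is present, ozone destruction proceeds catalytically. A simplified version of one well-known gas-phase cycle (there are several such cycles operating in the stratosphere) is:

Cl + O3 -> ClO + O2
ClO + O -> Cl + O2
net: O3 + O -> 2 O2

Because the chlorine atom is regenerated at the end of the cycle, a single chlorine atom can destroy many thousands of ozone molecules before it is converted back into a less reactive reservoir gas.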
The severe ozone destruction represented by the ozone hole requires that low temperatures be present in polar regions over a range of stratospheric altitudes, over large geographical regions, and for extended periods. Low temperatures are important because they allow polar stratospheric clouds (PSCs) to form. Reactions on the surfaces of the cloud particles initiate a remarkable increase in the concentrations of the most reactive ozone-depleting chemicals, which in turn elevates the rate of ozone depletion. Temperatures are lowest in the stratosphere over both polar regions during the winter months. The temperatures are low enough for PSCs to form for nearly the entire Antarctic winter but usually only for part of every Arctic winter. Thus, the chemical depletion of ozone in the Arctic is usually less than that in the Antarctic.
PSC particles grow large enough and are numerous enough that cloudlike features can be observed from the ground under certain conditions, particularly when the sun is near the horizon (see image). PSCs often occur near mountain ranges in polar regions because the motion of air over the mountains can cause local cooling of stratospheric air. The formation of PSCs has been recognized for many years from ground-based observations, but scientists did not realize the full geographical and altitude extent of PSCs in both polar regions until PSCs were observed by a satellite instrument in the late 1970s. In addition, the role of PSCs in forming ozone-depleting gases was not known until after the discovery of the Antarctic ozone hole in 1985. Our understanding of the role of PSCs developed from laboratory studies, computer modeling, and direct sampling of PSC particles and reactive chlorine gases (such as chlorine monoxide) in the polar stratospheric regions.
A general thinning of the ozone layer has indeed occurred over the past two decades. The most reactive ozone-depleting substances are also present in the stratosphere at lower latitudes outside of winter polar regions but in much smaller quantities. Most of the chlorine and bromine from the source gases remains in so-called reservoir substances that are much less reactive towards ozone. These smaller quantities of reactive substances also deplete ozone. The stratospheric ozone layer has been diminishing gradually since 1980 and now is about 3 percent lower on average around the globe. | http://www.scientificamerican.com/article.cfm?id=why-do-ozone-depleting-ch |
4.21875 | The American redstart is a small songbird that nests in North America and winters in the tropics. The adult male is black with bright orange-red patches in its tail and on its flanks.
So what function might these bright red patches serve? They are certainly not useful for camouflaging the bird from predators.
Scientists studying this bird found that each patch serves a different function.
The birds with the brightest red tail patches were more likely to have two territories (each with a female and a nest). They frequently fan their tails to show off the bright patch to other males to keep them away.
You see, redstarts, like most migratory birds, are not particularly faithful. It is not unusual at all for a nest to contain young that are sired by several males.
However, birds with bright red flank patches are more likely to be the fathers of the young in their own nest. Females are less likely to stray from a male with a bright flank patch.
So these bright red patches serve as signals, one for males and one for females. The significance of the red color is likely due to the fact that the birds cannot make red pigment themselves; they must get the red coloration from their diet, from the food that they eat. It is "expensive" for the male redstarts to make the bright red patches, so having them signals that a bird is healthy and fit.
This article summarizes the information in this publication:
Reudink, M. W., Marra, Peter P., Boag, P. T. and Ratcliffe, L. M. 2009. Plumage colouration predicts paternity and polygyny in the American redstart. Animal Behaviour, 77: 495-501.
Many animals display multiple signals that can be used by conspecifics to gather information about the condition or quality of potential mates or competitors. Different signals can indicate different aspects of individual quality or function in spatially or temporally separated periods. However, for long-distance migratory birds, it is unclear if signals, such as plumage traits, function in different phases of the annual cycle. We investigated the potential role of carotenoid-based tail and flank plumage, and bib size, in relation to extrapair paternity and polygyny in the American redstart, Setophaga ruticilla. This work complements our previous research suggesting tail feather brightness acts as a status signal, mediating territory acquisition during the nonbreeding season in Jamaica. Here, we show that tail feather brightness also serves as an important signal during the breeding season. Specifically, our results indicate that polygyny, a behaviour highly dependent on obtaining and defending multiple territories, is significantly predicted by tail brightness. Interestingly, flank redness best predicted whether individuals secured paternity at their nest and the proportion of within-pair offspring sired. We suggest that by expanding the study of plumage function in long-distance migrants to events occurring throughout the annual cycle, we gain a critical perspective on the function and evolution of ornamental traits.
Teachers, Standards of Learning, as they apply to these articles, are available for each state. | http://nationalzoo.si.edu/scbi/migratorybirds/science_article/default.cfm?id=102 |
4 | The Anglo-Irish Treaty did not end the violence. The treaty effectively confirmed the partition of Ireland, setting up the Irish Free State in the south while Ulster remained part of the United Kingdom. Eamon de Valera had not been party to the Treaty and did not support it. When the Dail approved the treaty in January 1922, making way for provisional government under Michael Collins and Arthur Griffith, de Valera resigned and the nationalist movement split.
Many IRA officers were also against the treaty and established the Army Executive as the 'real' government. In April 1922 anti-treaty members of the IRA occupied the Four Courts in Dublin. The provisional government (in the process of building the National Army) was largely dependent on the IRA for policing and was unable to deal effectively with the escalating violence. In the same month the Cabinet decided to provide the provisional government with military assistance.
Winston Churchill, as Colonial Secretary, was increasingly angry about Collins' willingness to negotiate with de Valera. Collins made a pact with de Valera to form a joint government of republicans and pro-treaty members. At Cabinet meetings during May, Churchill argued strongly that the provisional government should be forced to take a stand against republicanism. This caused a rift in the Cabinet as the Prime Minister, Lloyd George, advocated a more liberal stance.
On June 16, 1922, Ireland went to the polls. The pro-treaty representatives took 58 seats and the anti-treaty candidates took 35. However, shortly afterwards republicans killed the Ulster MP Sir Henry Wilson, a prominent opponent of an independent Ireland, and kidnapped a general of the Free State Army. Collins responded by attacking the republican-occupied Four Courts in Dublin.
The civil war progressed with increasing bitterness, but the anti-treaty faction did not have widespread support and the size of the National Army was increasing. The war ended in May 1923, but not before Eamon de Valera had been arrested and Michael Collins had been assassinated. | http://www.nationalarchives.gov.uk/cabinetpapers/themes/irish-civil-war.htm |
4.28125 | September 13, 2007
What is Ion Propulsion
Ion propulsion involves the ionization of a gas to propel a craft. Instead of a spacecraft being propelled with standard chemicals, xenon gas (which is four times heavier than air) is given an electrical charge, or ionized. It is then electrically accelerated to a speed of about 25 miles per second. When xenon ions are emitted at such high speed as exhaust from a spacecraft, they push the spacecraft in the opposite direction.
For more detailed info on ion propulsion:
In an ion thruster, xenon ions are accelerated by electrostatic forces. The electric fields used for acceleration are shaped by electrodes positioned at the downstream end of the thruster. These electrodes contain thousands of coaxial apertures. Each pair of apertures acts as a lens that electrically extracts, focuses and accelerates ions out of the thruster. NASA's ion thrusters use a two-electrode system, where the upstream electrode (called the screen grid) is charged highly positive, and the downstream electrode (called the accelerator grid) is charged highly negative. The ions are generated in a region of high positive voltage and the accelerator grid's voltage is negative, and since opposite charges attract, the positive xenon ions are attracted toward the accelerator grid and are focused out of the thruster through the apertures, creating thousands of ion jets. The stream of all the ion jets together is called the ion beam. The exhaust velocity of the ions in the beam is determined by the voltage applied to the electrodes. While a chemical rocket's top exhaust speed is limited by the thermal capability of the rocket nozzle, the ion thruster's top speed is limited by the applied voltage (which is theoretically unlimited).
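For a singly charged ion, the ideal exhaust speed follows from energy conservation, v = sqrt(2qV/m). The sketch below assumes a net accelerating voltage of about 1,100 volts, an illustrative figure roughly on the scale of NASA's NSTAR-class thrusters rather than a value taken from this article; it reproduces the "about 25 miles per second" quoted above.

```python
import math

e = 1.602e-19                 # elementary charge, coulombs
m_xe = 131.3 * 1.661e-27      # mass of a xenon atom (singly charged ion), kg

def exhaust_velocity(net_voltage):
    """Ideal exhaust speed of a singly charged ion accelerated through net_voltage volts."""
    return math.sqrt(2 * e * net_voltage / m_xe)

v = exhaust_velocity(1100)    # illustrative net accelerating voltage, volts
print(f"{v/1000:.1f} km/s  (~{v/1609:.0f} miles per second)")
```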
Because the ion thruster expels a large amount of positive ions, an equal amount of negative charge must be expelled to keep the spacecraft from charging up to a large negative voltage. An electron emitter called the neutralizer is located on the downstream perimeter of the thruster and expels exactly the same number of negatively-charged electrons as there are ions leaving the thruster. | http://www.jpl.nasa.gov/news/profiles.php?profile=1469 |
4.09375 | One of the great mysteries of the ancient world was the nature of the stars. As people watched these twinkling lights wheel across the sky night after night, year after year, they thought of them as the eyes or souls of the dead, as candles flickering against a tall background, or as holes in the dome of the sky that allowed the light of heaven to shine through.
Modern science tells us that stars are gigantic balls of hot gas that shine because they produce nuclear reactions in their cores. Yet the scientific explanation does little to detract from the feeling of awe that a night of stargazing inspires.
How are stars born?
Stars are born from vast clouds of gas and dust. Perhaps nudged by the shockwave of an exploding star or some other event, a cloud collapses under its own gravity. When the cloud reaches a critical density, it gets hot enough to trigger nuclear fusion -- a process that combines lightweight atoms to create heavier ones, releasing vast amounts of energy. In most stars, such as the Sun, hydrogen atoms combine to make helium.
Late in its life, though, the star can create other chemical elements, including oxygen, nitrogen, and carbon, which are essential for life. In fact, almost all the elements in the universe other than hydrogen and helium were forged in the hearts of stars or in the violent processes that can end a star's life. When a star dies, it expels these materials into space, where they can form new stars, planets, and people. So the elements on Earth -- from the oxygen we breathe to the iron in our blood -- came from the stars.
A star's life history is regulated by its mass.
The smallest stars are less than one-tenth as massive as the Sun. The nuclear reactions in their cores take place at a slower rate, so the stars will live for hundreds of billions of years or longer. These stars are cool and faint, though, so they shine as dull red cosmic embers.
Medium-mass stars like the Sun shine for several billion years. These stars are relatively hot, so their surfaces shine yellow or white. At the end, such a star puffs up like a beachball, then casts its outer layers into space to form a colorful "bubble" called a planetary nebula. When the nebula dissipates, it leaves behind a hot, dense core.
The most-massive stars live short, spectacular lives. Their great mass squeezes their interiors, making them extremely hot, so they consume their nuclear fuel in a hurry. They can be millions of times brighter than the Sun, and their diameters can be dozens of times larger, too. These stars will die in titanic explosions called supernovae. Such a blast leaves behind either an ultra-dense neutron star or an even denser black hole.
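A rough way to see how strongly mass controls lifetime is the approximate main-sequence scaling: luminosity rises steeply with mass (roughly as mass to the 3.5 power), while the fuel supply rises only in proportion to mass, so lifetime falls off sharply as mass grows. The sketch below uses that approximation; the exponent and the 10-billion-year baseline for a Sun-like star are rounded, illustrative values, not exact figures.

```python
# Approximate main-sequence lifetime scaling: lifetime ~ (fuel / luminosity) ~ M**-2.5
def lifetime_gyr(mass_in_suns):
    return 10.0 * mass_in_suns ** -2.5   # ~10 billion years for a Sun-like star

for m in (0.1, 1.0, 20.0):
    print(f"{m:>5} solar masses -> ~{lifetime_gyr(m):,.1f} billion years")
# A 0.1-solar-mass red dwarf lasts trillions of years; a 20-solar-mass star
# only a few million, consistent with the descriptions above.
```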
Even with our modern understanding of stars, though, many mysteries remain. Astronomers continue to study this dazzling array of cosmic lights for answers.
What makes stars shine?
Stars produce their energy through nuclear fusion. For most stars, this process is dominated by a sequence of reactions called the "proton-proton chain," which transforms four hydrogen atoms into one helium atom. The proton-proton chain reaction fuels most stars and provides them with the energy required to support their enormous masses for most of their lifetimes; indeed, it is the source of our own Sun's power.
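A rough energy estimate for the proton-proton chain, using rounded atomic masses; the bookkeeping below is a simplification (for instance, a small share of the released energy escapes as neutrinos).

```python
# Mass-to-energy bookkeeping for four hydrogen atoms fusing into one helium atom.
m_H = 1.007825        # hydrogen-1 atomic mass, unified mass units (rounded)
m_He = 4.002602       # helium-4 atomic mass, unified mass units (rounded)
u_to_MeV = 931.494    # energy equivalent of one mass unit

mass_lost = 4 * m_H - m_He            # ~0.0287 u, about 0.7% of the input mass
energy_MeV = mass_lost * u_to_MeV     # ~26.7 MeV released per helium nucleus formed
print(f"{mass_lost:.4f} u lost -> {energy_MeV:.1f} MeV per fusion")
```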
Larger stars, whose crushing weight generates even higher temperatures at their cores, utilize a more complex fusion process called the "CNO cycle." In this reaction, trace amounts of carbon, nitrogen, and oxygen serve as catalysts in fusing four hydrogen atoms into one helium. While this method yields more energy, the higher temperatures required can only be achieved by stars more massive than the Sun, and such stars are doomed by their prolific output to short lives.
What is the difference between luminosity and brightness?
Luminosity is not the same as brightness. A 100-watt light bulb has a constant luminosity but can appear brighter or fainter depending on how far away it is. A dim star may be dim because it is small, cool, far away, or all three. | http://stardate.org/print/6688 |
4 | Q&A: Black Holes
What does it look like when black holes are made, how long does it take to make them, and how do they mess with the stream of time?
Black holes, because of their very intense gravity, will cause light to bend around them, which distorts the appearance of background objects (like stars). It isn't known how long it takes for a supermassive black hole to form (this is the type of black hole found at the center of galaxies, weighing millions or billions of times the mass of the Sun). We do know it can take less than a billion years for one to reach a very large size.
A stellar-mass black hole, on the other hand, can likely form in seconds, after the collapse of a massive star. Finally, if you were to approach a black hole, the powerful gravity would cause time to slow down. | http://www.chandra.harvard.edu/resources/faq/black_hole/bhole-91.html |
4.0625 | On this day, the bulk of the Army of the Potomac begins moving towards Petersburg, Virginia, precipitating a siege that lasted for more than nine months.
From early May, the Union army hounded Robert E. Lee's Army of Northern Virginia as it tried to destroy the Confederates in the eastern theater. Commanded officially by George Meade but effectively directed by Ulysses S. Grant, the Army of the Potomac sustained enormous casualties as it fought through the Wilderness, Spotsylvania, and Cold Harbor.
After the disaster at Cold Harbor, where Union troops suffered horrendous losses when they attacked fortified Rebels just east of Richmond, Grant paused for more than a week before ordering another move. The army began to pull out of camp on June 12, and on June 13 the bulk of Grant's force was on the move south to the James River. As they had done for six weeks, the Confederates stayed between Richmond and the Yankees. Lee blocked the road to Richmond, but Grant was after a different target now. After the experience of Cold Harbor, Grant decided to take the rail center at Petersburg, 23 miles south of Richmond.
By late afternoon, Union General Winfield Hancock's Second Corps arrived at the James. Northern engineers were still constructing a pontoon bridge, but a fleet of small boats began to ferry the soldiers across.
By the next day, skirmishing flared around Petersburg and the last great battle of the war in Virginia began. This phase of the war would be much different, as the two great armies settled into trenches for a war of attrition. | http://www.history.com/this-day-in-history/grant-swings-toward-petersburg
4.03125 | From The Art and Popular Culture Encyclopedia
A tragic hero is a character in a work of fiction (often the protagonist) who commits an action or makes a mistake which eventually leads to his or her defeat. The idea of the tragic hero was created in ancient Greek tragedy and defined by Aristotle (and others). Usually, this includes the realization of the error (anagnorisis), which results in catharsis or epiphany.
The modern use of the term usually involves the notion that such a hero makes an error in his or her actions, a flaw that leads to his or her downfall. The idea that this be a balance of crime and punishment is incorrectly ascribed to Aristotle, who is quite clear in his pronouncement that the hero's misfortune is not brought about "by vice and depravity but by some error of judgment." In fact, in Aristotle's Poetics it is imperative that the tragic hero be noble. Later tragedians deviated from this tradition: the more prone the tragic hero was to vice, the less noble and the less tragic, in the Aristotelian sense of the word, the tragic hero happened to be.
Tragic heroes appear in the dramatic works of Aeschylus, Sophocles, Euripides, Seneca, Marlowe, Shakespeare, Webster, Marston, Corneille, Racine, Goethe, Schiller, Kleist, Strindberg, and many other writers.
Some common traits characteristic of a tragic hero:
- The defining characteristic of a tragic hero, esp. in the Hellenic dramas, is Hamartia.
- The hero discovers that his downfall is the inevitable result of his own actions, not by things happening to him.
- The hero's downfall is understood by Aristotle in his Poetics to arouse pity and fear that leads to an epiphany and a catharsis (for hero and audience). It is not necessary by the Aristotelian standard that the downfall or suffering be death/total ruin, as in the myth of Herakles, who ultimately ascends to Mount Olympus and immortality. Since at least the time of William Shakespeare, however, the flaw of a tragic hero has generally been regarded to necessarily result in his death, or a fate worse than death. The Shakespearean tragic hero dies at some point in the story; one example is the eponymous protagonist of the play Macbeth. Shakespeare's characters show that tragic heroes are neither fully good nor fully evil.
- A tragic hero is often of noble birth, or rises to noble standing (King Arthur; Okonkwo, the main character in Chinua Achebe's novel Things Fall Apart).
- The suffering of the hero is meaningful, because although the suffering is a result of the hero's own volition, it is not wholly deserved and may be cruelly disproportionate; John Proctor, a major character in "The Crucible", is one example.
- There may sometimes be supernatural involvement (in Shakespeare's Julius Caesar, Caesar is warned of his death via Calpurnia's vision and Brutus is warned of his impending death by the ghost of Caesar).
- The hero's misfortune is not wholly deserved.
Famous tragic heroes
- King Lear
- Romeo and Juliet
- Richard III
- Doctor Faustus
- Willy Loman
Modern tragic heroes
In the modernist era, a new kind of tragic hero was synthesized as a reaction to the English Renaissance, the Age of Enlightenment, and Romanticism. The modern hero, rather than falling calamitously from a high position, begins the story appearing to be an ordinary, average person; for example, Truman Capote's fictionalized version of Perry Smith in In Cold Blood. Also, Arthur Miller's Joe Keller in All My Sons (1947) is an average man, which serves to illustrate Miller's belief that all people, not just the nobility, are affected by materialistic and capitalist values. The modern hero's story does not require the protagonist to have the traditional catharsis to bring the story to a close. He may die without an epiphany of his destiny and he may suffer without the ability to change events that are happening to him. The story may end without closure and even without the death of the hero. This new hero of modernism is the antihero and may not be considered by everyone to even be a tragic hero. | http://www.artandpopularculture.com/Tragic_hero |
4.40625 | There are two types of geologic accretion. The first kind of accretion, plate accretion, involves the addition of material to a tectonic plate. When two tectonic plates collide, one of the plates may slide under the other, a process known as subduction. The plate which is being subducted (the plate going under) floats on the asthenosphere and is pushed up against the other plate. Sediment on the ocean floor is often scraped off the subducting plate as it descends. This scraped-off sediment forms a mass of material called the accretionary wedge, which attaches itself to the overriding plate (the top plate). Volcanic island arcs or seamounts may collide with the continent, and as they are of relatively light material (i.e. low density) they will often not be subducted, but are thrust into the side of the continent, thereby adding to it.
The second form of accretion is landmass accretion. This involves the addition of sediment to a coastline or riverbank, increasing land area. The most noteworthy landmass accretion is the deposition of alluvium, often containing precious metals, on riverbanks and in river deltas.
Plate accretion
Continental plates are formed of rocks that are very noticeably different from the rocks that form the ocean floor. The ocean floor is usually composed of basaltic rocks that make it denser than continental plates. In places where plate accretion has occurred, land masses may contain the dense, basaltic rocks that are usually indicative of oceanic lithosphere. In addition, a mountain range that is distant from a plate boundary suggests that the rock between the mountain range and the plate boundary is part of an accretionary wedge.
This process occurs in many places, but especially around the Pacific Rim, including the western coast of North America, the eastern coast of Australia, and New Zealand. New Zealand consists of areas of accreted rocks which were added on to the Gondwana continental margin over a period of many millions of years. The western coast of North America is made of accreted island arcs. The accreted area stretches from the Rocky Mountains to the Pacific coast. The island of Barbados is being actively formed by a similar process in the Atlantic Ocean.
- Robert, Ballard D. Exploring Our Living Planet. Washington D.C.: The National Geographic Society, 1983.
- Sattler, Helen Roney. Our Patchwork Planet. New York: Lee & Shepard, 1995.
- Watson, John. "This Dynamic Planet." US Geological Survey. 6 December. 2004 | http://en.wikipedia.org/wiki/Accretion_(geology) |
4.09375 | DNA
DNA, abbreviation of deoxyribonucleic acid, organic chemical of complex molecular structure that is found in all prokaryotic and eukaryotic cells and in many viruses. DNA codes genetic information for the transmission of inherited traits.
A brief treatment of DNA follows. For full treatment, see genetics: DNA and the genetic code.
The chemical DNA was first discovered in 1869, but its role in genetic inheritance was not demonstrated until 1943. In 1953 James Watson and Francis Crick determined that the structure of DNA is a double-helix polymer, a spiral consisting of two DNA strands wound around each other. Each strand is composed of a long chain of monomer nucleotides. The nucleotide of DNA consists of a deoxyribose sugar molecule to which is attached a phosphate group and one of four nitrogenous bases: two purines (adenine and guanine) and two pyrimidines (cytosine and thymine). The nucleotides are joined together by covalent bonds between the phosphate of one nucleotide and the sugar of the next, forming a phosphate-sugar backbone from which the nitrogenous bases protrude. One strand is held to another by hydrogen bonds between the bases; the sequencing of this bonding is specific—i.e., adenine bonds only with thymine, and cytosine only with guanine.
The configuration of the DNA molecule is highly stable, allowing it to act as a template for the replication of new DNA molecules, as well as for the production (transcription) of the related RNA (ribonucleic acid) molecule. A segment of DNA that codes for the cell’s synthesis of a specific protein is called a gene.
DNA replicates by separating into two single strands, each of which serves as a template for a new strand. The new strands are copied by the same principle of hydrogen-bond pairing between bases that exists in the double helix. Two new double-stranded molecules of DNA are produced, each containing one of the original strands and one new strand. This “semiconservative” replication is the key to the stable inheritance of genetic traits.
Within a cell, DNA is organized into dense protein-DNA complexes called chromosomes. In eukaryotes, the chromosomes are located in the nucleus, although DNA also is found in mitochondria and chloroplasts. In prokaryotes, which do not have a membrane-bound nucleus, the DNA is found as a single circular chromosome in the cytoplasm. Some prokaryotes, such as bacteria, and a few eukaryotes have extrachromosomal DNA known as plasmids, which are autonomous, self-replicating genetic material. Plasmids have been used extensively in recombinant DNA technology to study gene expression.
The genetic material of viruses may be single- or double-stranded DNA or RNA. Retroviruses carry their genetic material as single-stranded RNA and produce the enzyme reverse transcriptase, which can generate DNA from the RNA strand. Four-stranded DNA complexes known as G-quadruplexes have been observed in guanine-rich areas of the human genome.
| http://www.britannica.com/EBchecked/topic/167063/DNA |
4 | Further research revealed that the amygdala—an almond-shaped cluster of neurons deep within the brain—plays a pivotal role in the fear-association response in rats, and also in humans. The sight of a loaded gun, for example, triggers activity in this part of the brain. People with an injured amygdala have dampened emotional responses and so do not learn to fear new things through association.
In the 1980s, Caroline and Robert Blanchard, working together at the University of Hawaii, carried out a pioneering study on the natural history of fear. They put wild rats in cages and then brought cats gradually closer to them. At each stage, they carefully observed how the rats reacted. They found that the rats responded to each kind of threat with three distinct sets of behaviors.
The first kind of behavior is a reaction to a potential threat, in which a predator, such as a cat, is not visible but there is good reason to worry that it might be nearby, such as the scent of fresh cat urine. In such a case, a rat will proceed cautiously, assessing the risk. The second behavior occurs when the rat sees the cat. The rat will freeze and then make a choice about what to do next: either remain immobile or run away. In the third behavior, the cat notices something and walks toward the rat to investigate. At this point, the rat will flee if it has an escape route. If the cat gets close, the rat will choose either to fight or to run for its life. | http://tkdtutor.com/articles/topics/protect-and-defend/116-combat-mental-aspects/526-predator-defense-response?showall=&start=1 |
4.03125 | At the Crossroads of Freedom and Equality
Black History Month
The year 2013 marks two important anniversaries in the history of African Americans and the United States. On January 1, 1863, the Emancipation Proclamation set the United States on the path of ending slavery. A century later, on August 28, 1963, hundreds of thousands of Americans marched to the memorial of Abraham Lincoln in pursuit of the ideal of equality of citizenship.
Answers to the puzzle include names, places, and historical events.
Use the "Printable HTML" button to get a clean page,
in either HTML or PDF,
that you can use your browser's print button to print.
This page won't have buttons or ads, just your puzzle.
The PDF format allows the web site to know how large a
printer page is, and the fonts are scaled to fill the page.
The PDF takes awhile to generate. Don't panic! | http://www.armoredpenguin.com/crossword/Data/2013.01/3110/31103612.227.html |
4.0625 | Homonyms, in the strict sense, are groups of words with the same spelling and pronunciation but different meanings and different origins. The term is also used more broadly to cover both homophones (words that sound the same, but may or may not be spelled the same) and homographs (words that are spelled the same, but may or may not sound the same).
These technical distinctions and labels are not important for young, beginning readers and writers to learn. However, because meaning is key in reading and writing, children need to learn how to spell, read, and understand these groups of related words correctly.
Beginning writers learn that when they use the wrong sun/son, bear/bare, or one/won in a sentence, they have difficulty communicating with the reader. Spelling can completely change the meaning of a sentence. Learning about homophones and homographs is a fun challenge.
Homophones are groups of words that sound alike, but have different meanings. They can be spelled differently, such as do - due - dew, or spelled the same. Some examples of homophones that are commonly used in beginning readers are:
son - sun
tale - tail
their - there - they’re
to - two - too
ant - aunt
see - sea
one - won
four - for - fore
ate - eight
Homophones are often called Sound-Alike Words. Since the difference between the words in each group of homophones is visual, not auditory, students need learning activities that include sorting, editing, and choosing the correct word based on meaning. Here are some examples:
- Play Concentration Make cards with pairs of homophones. Mix them up and lay them out in a grid face down. Students turn over two cards at a time, trying to find matching pairs of homophones.
- Group Editing Write two sentences on the board using a pair of homophones. Draw a blank where the homophone would go. Ask the students to choose which spelling of the homophone pair goes with each sentence.
- Homophones Picture Boxes Provide students with pairs of blank boxes. Students will write one spelling of a homophone pair in the top of each box. Discuss the meaning of each word. Draw a picture demonstrating the meaning of each word in the box.
Homographs are words with different meanings that share the same spelling. They may have similar or different pronunciation. If they are pronounced the same they are also homophones.
Examples of homographs are common in early readers, and can be very confusing for students who are English Language Learners.
Anna hit the ball with her bat.
The bat is a flying animal.
We ate at the lunch counter.
Billy was given the job as counter for his math team.
Your argument runs counter to the prevailing opinion.
I like to chew gum.
I need to floss to keep my gums healthy.
One way to help students learn the different meanings of a homograph is to use a visual image. Write the word on the board. Discuss the different meanings of the word, and use each meaning in a sentence. Draw lines coming out of the word, like sun rays, and write the different sentences at the end of each ray. Write or color code the homograph word so it stands out in each sentence. Students can do this independently or in small groups after practicing it with the class.
Homophones and homographs are often taught using the humorous Amelia Bedelia books by Peggy Parish and her nephew Herman Parish. In these stories, Amelia Bedelia has one adventure -and disaster- after another because of her confusion with words that sound the same, but have different meanings. There are dozens of Amelia Bedelia stories available at Amazon.com.
Aunt Ant Leaves Through the Leaves is another humorous picture book available at Amazon.com that plays around with homophones and homonyms. | http://www.bellaonline.com/ArticlesP/art4624.asp |
4.03125 | A transit of Venus occurs when the planet crosses the face of the sun, as seen from Earth. In this gallery, we look at the Venus transit as recorded throughout history. The ancient Babylonian Venus Tables of Ammizaduga (shown here) contain information about the movements of Venus, but mention no transit, though the Babylonians had opportunity to see ones in 1512, 1520, and 1641 B.C. [See our Transit of Venus 2012: Complete Coverage Special Report.]
Could Montezuma, the great Aztec leader, have seen the Venus transit in 1518 AD? It would have been visible to him at sunset. In the British Museum, a jade figure of the god Quetzalcoatl, related to Venus, wears a sun as a neck ornament, possibly marking the rare event.
Around 1610, Galileo discovered that Venus has phases similar to the moon's. (His drawing is shown here.) These phases are only possible if Venus orbits the sun, thus the planet helped confirm the heliocentric model of the solar system developed by Copernicus.
Johannes Kepler analyzed the astronomical data of Tycho Brahe and formulated three important laws of planetary motion. He predicted the Venus transit of 1631, though he was not able to witness it, having passed away in 1630.
English astronomer Jeremiah Horrocks is considered the first human to have witnessed a Venus transit. He concluded that existing information about planetary positions was incorrect, so he gathered his own data, allowing him to correctly predict a transit of Venus in 1639 (which Kepler had not foreseen).
On June 6, 1761, the transit of Venus was observed by 176 scientists positioned all over the world. Russian astronomer Mikhail Lomonosov noticed a halo of light that surrounded the disk of Venus as it crossed the edge of the sun, and deduced that Venus must possess an atmosphere. Shown here are drawings of Venus and the "black drop effect" by Torben Bergman, later discovered to be caused by image blurring and solar limb darkening.
The atmosphere of Venus seen during the 1761 transit was sketched by Russian astronomer Lomonosov. Here the atmosphere is sketched as a ring in figs. 6 and 7.
James Ferguson's sketch of the path of Venus across the sun disk on June 6, 1761 emphasized the dryly technical aspects of the event.
The transit of Venus on June 3, 1769, led to the publication of 400 sightings. Benjamin Franklin observed it in the United States, as did explorers Mason and Dixon at the Cape of Good Hope. Many international expeditions were launched to observe the event.
Benjamin Franklin of the U.S. Continental Congress sponsored the publication of the 1769 Venus transit measurements taken by Biddle and Bayley. This image shows the first page of the article in the British publication, Philosophical Transactions of the Royal Society.
Captain Cook undertook perhaps the most famous expedition to observe the transit of Venus on June 3, 1769. Aboard the H.M.S. Endeavor, he and his crew reached Tahiti, where an observatory was set up on a high point still known as "Point Venus" today. The expedition astronomers made many measurements successfully, and the Black Drop Effect was studied carefully.
By the Venus transit of December 8, 1874, photography had been invented, and hundreds of photographs were taken of the event, though few were useful enough for scientists. Over $1 million was spent worldwide on observations. The sketch shown here is of the transit as observed in London.
The United States Naval Observatory expedition practices for the event on the USNO grounds. Professor Simon Newcomb, director of the USNO, sits in the foreground. Newcomb's calculation of the Earth-sun distance using the transit data edged out William Harkness' calculation for international adoption, though perhaps Newcomb's popularity had an effect on the decision.
Composer John Philip Sousa took a great interest in the Venus transit of 1882. During 1882-3, he created the "Transit of Venus March." The Smithsonian Institution commissioned Sousa to compose the piece in honor of American physicist Prof. Joseph Henry, so the march was not specifically produced in commemoration of the transit.
The December 6, 1882 transit of Venus generated enormous public interest. Smoked glass and amateur telescopes were put into use abundantly. One of the first photographs of the transit of Venus 1882 is shown here.
Astronomer William Harkness labored mightily using data from the 1882 Venus transit to determine the distance to the sun. His value was 92,797,000 miles, with a probable error of 59,700 miles. However, his calculations were not adopted by the international astronomical community, who instead took up Simon Newcomb's figure. Harkness, though, taking the long view, is quoted as having said, "There will be no other transit of Venus till the twenty-first century of our era has dawned upon the Earth, and the June flowers are blooming in 2004. When the last transit occurred the intellectual world was awakening from the slumber of ages, and that wondrous scientific activity which has led to our present knowledge was just beginning. What will be the state of science when the next transit season arrives, God only knows." [See our Transit of Venus 2012: Complete Coverage Special Report.] | http://www.space.com/15816-venus-transits-sun-history-images.html |
4.375 | Measurement
- Use the
- Know the SI base units.
- State rough equivalents for the SI base units in the English system.
- Read and write the symbols for SI units.
- Recognize unit prefixes and their abbreviations.
- Build derived units from the basic units for mass, length, temperature, and time.
- Convert measurements from SI units to English, and from one prefixed unit to another.
- Use derived units like density and speed as conversion factors.
- Use percentages, parts per thousand, and parts per million as conversion factors.
- Use and report measurements carefully.
- Consider the reliability of a measurement in decisions based on measurements.
- Clearly distinguish between accuracy and precision.
- Count the number of significant figures in a recorded measurement.
- Record measurements to the correct number of digits.
- Estimate the number of significant digits in a calculated result.
- Estimate the precision of a measurement by computing a standard deviation.
Measurement is the collection of quantitative data. The proper handling and reporting of measurements are essential in chemistry - and in any scientific endeavour. To use measurements correctly, you must recognize that measurements are not numbers. They always contain a unit and some inherent error.

The second lecture focuses on an international system of units (the SI system) and introduces unit conversion. In the third lecture, we'll discuss ways to recognize, estimate and report the errors that are always present in measurements.
- quantitative observations
- include 3 pieces of information
- measurements are not numbers
- numbers are obtained by counting or by definition; measurements are obtained by comparing an object with a standard "unit"
- numbers are exact; measurements are inexact
- mathematics is based on numbers; science is based on measurement
The National Institute of Standards and Technology (NIST) has published several online guides for users of the SI system.
The SI System
- Le Système International (SI) is a set of units and notations that are standard in science.
Four important SI base units (there are others):
- length: meter (m); 1 m = 39.36 in
- mass: kilogram (kg); 1 kg = 2.2 lbs
- temperature: kelvin (K); K = °C + 273.15 (and °F = 1.8(°C) + 32)
- time: second (s)
- derived units are built from base units
Some SI derived units:
- area (square meter, m^2): length × length
- force (newton, N): mass × acceleration
- work, energy, heat (joule, J): force × distance
Prefixes are used to adjust the size of base units
Commonly used SI prefixes (there are others).
- several non-SI units are encountered in chemistry
- liter (L): 1 L = 1000 cm^3 (1 quart = 0.946 L)
- angstrom (Å): 1 Å = 1×10^-10 m; a typical radius of an atom
- atomic mass unit (u): 1 u = 1.66054×10^-27 kg; about the mass of a proton or neutron; also known as a 'dalton' or 'amu'
Arithmetic with units
- addition and subtraction: units don't change
2 kg + 3 kg = 5 kg
412 m - 12 m = 400 m
- consequence: units must be the same before adding or subtracting!
3.001 kg + 112 g = 3.001 kg + 0.112 kg = 3.113 kg
4.314 Gm - 2 Mm = 4.314 Gm - 0.002 Gm = 4.312 Gm
- multiplication and division: units multiply & divide too
3 m × 3 m = 9 m^2
10 kg × 9.8 m/s^2 = 98 kg m/s^2
- consequence: units may cancel
5 g / 10 g = 0.5 (no units!)
10.00 m/s × 39.37 in/m = 393.7 in/s
- 5 step plan for converting units
- identify the unknown, including units
- choose a starting point
- list the connecting conversion factors
- multiply starting measurement by conversion factors
- check the result: does the answer make sense?
- Common variations
- series of conversions
Americium (Am) is extremely toxic; 0.02 micrograms is the allowable body burden in bone. How many ounces of Am is this? (A short worked sketch of this conversion appears after this list.)
- converting powers of units
- converting compound units
- starting point must be constructed
- using derived units as conversion factors
- mass fractions (percent, ppt, ppm) convert mass of sample into mass of component
- density converts mass of a substance to volume
- velocity converts distance traveled to time required
- concentration converts volume of solution to mass of solute
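A minimal Python sketch of the americium problem above, and of using a derived unit (density) as a conversion factor; the gram-per-ounce and mercury-density figures are assumed handbook values, not part of the original notes:

```python
# Series of conversions for the americium example: micrograms -> grams -> ounces.
micrograms_am = 0.02
grams_per_microgram = 1e-6
grams_per_ounce = 28.3495            # 1 oz (avoirdupois) = 28.3495 g, assumed handbook value

ounces_am = micrograms_am * grams_per_microgram / grams_per_ounce
print(f"{micrograms_am} micrograms of Am = {ounces_am:.2e} oz")     # about 7.1e-10 oz

# Using a derived unit (density) as a conversion factor: mass of 5.00 mL of mercury.
density_hg = 13.534                  # g/mL, assumed handbook value
print(f"5.00 mL Hg = {5.00 * density_hg:.1f} g")                    # about 67.7 g
```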
Uncertainty in Measurements
- making a measurement usually involves comparison with a unit or a scale of units
- always read between the lines!
- the digit read between the lines is always uncertain
- convention: read to 1/10 of the distance between the smallest scale divisions
- significant digits
- definition: all digits up to and including the first uncertain digit.
- the more significant digits, the more reproducible the measurement is.
- counts and defined numbers are exact - they have no uncertain digits!
- counting significant digits in a series of measurements
- compute the average
- identify the first uncertain digit
- round the average so the last digit is the first uncertain digit
- counting significant digits in a single measurement (a short sketch follows this list)
- convert to exponential notation
- disappearing zeros just hold the decimal point - they aren't significant.
- exception: zeros at the end of a whole number might be significant
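A rough Python sketch of these counting rules (my own illustration, not from the original notes); the ambiguous trailing zeros of a whole number are treated here as not significant:

```python
def count_sig_figs(meas: str) -> int:
    """Rough sketch: count significant digits in a measurement written in plain
    or exponential notation, e.g. '0.00530' or '4.00e2'. Trailing zeros of a whole
    number written without a decimal point (e.g. '400') are ambiguous and are
    treated here as NOT significant."""
    mantissa = meas.lower().replace(" ", "").split("e")[0].lstrip("+-")
    has_point = "." in mantissa
    digits = mantissa.replace(".", "").lstrip("0")   # leading zeros only place the decimal point
    if not has_point:
        digits = digits.rstrip("0")                  # ambiguous trailing zeros of a whole number
    return len(digits)

print(count_sig_figs("0.00530"))   # 3 - the trailing zero after the decimal point is significant
print(count_sig_figs("400"))       # 1 - trailing zeros of a whole number treated as not significant
print(count_sig_figs("4.00e2"))    # 3
```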
- Precision of Calculated Results
- calculated results are never more reliable than the measurements they are built from
- multistep calculations: never round intermediate results!
- sums and differences: round result to the same number of fraction digits as the poorest measurement
- products and quotients: round result to the same number of significant digits as the poorest measurement.
Using Significant Figures
- Precision vs. Accuracy
|good precision & good accuracy
poor accuracy but good precision
||good accuracy but poor precision|
poor precision & poor accuracy
|check by repeating measurements
||check by using a different method
|poor precision results from poor technique
||poor accuracy results from procedural or equipment flaws
|poor precision is associated with 'random errors' - error has random sign and varying magnitude. Small errors more likely than large errors.
||poor accuracy is associated with 'systematic errors' - error has a reproducible sign and magnitude.|
- Estimating Precision
- Consider these two methods for computing scores in archery competitions. Which is fairer?
(Two scoring methods are illustrated in the original: score by distance from bullseye, and score by area of target.)
- The standard deviation, s, is a precision estimate based on the area score: s = sqrt( Σ (x_i - x̄)^2 / (N - 1) ), where
x_i is the i-th measurement,
x̄ is the average measurement, and
N is the number of measurements.
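A short numerical sketch of this estimate (made-up readings), using the sample standard deviation:

```python
import statistics

readings = [3.65, 3.67, 3.64, 3.66, 3.65]    # repeated measurements of the same length (made-up values, in cm)

mean = statistics.mean(readings)
s = statistics.stdev(readings)                # sample standard deviation, the formula above with N - 1
print(f"average = {mean:.3f} cm, s = {s:.3f} cm")
```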
| http://antoine.frostburg.edu/chem/senese/101/measurement/ |
4.125 | The genomes of species are plastic and continuously changing, and this change produces genetic variation. If the genetic variation is expressed as physical traits compatible with the selection pressure, the organism carrying the new genetic traits may be able to reproduce and, hence, transfer the new traits to its descendants. Without genetic variation, species would not be able to adapt to a changing environment.
Genetic variation can be analysed at various levels
Genetic variation between:
- individuals is called polymorphisms
- populations is called gene frequency differences
- species or higher taxa is called divergence
Events that cause genetic variation within a single prokaryotic individual are known as genetic variation mechanisms. | http://wiki.biomine.skelleftea.se/wiki/index.php/Genetic_variation |
4.09375 | Remote Sensing Glossary
Reference Information for Virtual Nebraska
Terms, Definitions and Concepts
- KSC (Kennedy Space Center)
See NASA Centers.
- Keplerian elements (aka satellite orbital elements)
The set of six independent constants which define an orbit, named for Johannes Kepler [1571-1630]. The constants define the shape of an ellipse or hyperbola, orient it around its central body, and define the position of a satellite on the orbit. The classical orbital elements are commonly given as the semi-major axis, the eccentricity, the inclination, the right ascension of the ascending node, the argument of perigee, and an anomaly (or epoch) that fixes the satellite's position along the orbit.
- Kepler's three laws of motion
Any spacecraft launched into orbit obeys the same laws that govern the motions of the planets around our sun, and the moon around the Earth. Johannes Kepler formulated three laws that describe these motions:
- Each planet revolves around the sun in an orbit that is an ellipse with the sun as its focus or primary body. Kepler postulated the lack of circular orbits--only elliptical ones--determined by gravitational perturbations and other factors. Gravitational pulls, according to Newton, extend to infinity, although their forces weaken with distance and eventually become impossible to detect. (See Newton's law of universal gravitation.) Spacecraft orbiting the Earth are primarily influenced by the Earth's gravity and anomalies in its composition, but they also are influenced by the moon and sun and possibly other planets.
- The radius vector--such as the line from the center of the sun to the center of a planet, from the center of Earth to the center of the moon, or from the center of Earth to the center of gravity of a satellite--sweeps out equal areas in equal periods of time.
- The square of a planet's orbital period is equal to the cube of its mean distance from the sun times a constant. As extended and generalized, this means that a satellite's orbital period increases with its mean distance from the planet. See Newton's law of universal gravitation and laws of motion.
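As a rough numerical illustration of the third law in its Newtonian form (the constant and orbit below are assumed example values, not part of the glossary):

```python
import math

# Period of a near-circular Earth orbit from T = 2*pi*sqrt(a^3 / mu).
MU_EARTH = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2 (assumed standard value)
a = 6.778e6                    # semi-major axis in meters (~400 km altitude orbit, assumed)

T = 2 * math.pi * math.sqrt(a**3 / MU_EARTH)
print(f"orbital period ≈ {T/60:.1f} minutes")   # roughly 92-93 minutes
```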
- kilometer (km)
Metric unit of distance equal to 3,280.8 feet or .621 statute miles.
Unit of speed of one nautical mile (6,076.1 feet) an hour. | http://www.casde.unl.edu/glossary/k.php |
4.0625 | Single-wall carbon nanotubes have a number of revolutionary uses, including being spun into fibers or yarns that are more than 10 times stronger than any current structural material. In addition to uses in lightweight, high-strength applications, these new long metallic nanotubes also will enable new types of nanoscale electro-mechanical systems such as micro-electric motors, nanoscale diodes, and nanoconducting cable for wiring micro-electronic devices.
In research reported in the current online issue of the journal Nature Materials, Yuntian Zhu and his colleagues discuss how they created a single-wall carbon nanotube using a process called catalytic chemical vapor deposition from ethanol (alcohol) vapor. Discovered in 1991 by Japanese scientist Sumio Iijima, carbon nanotubes are cylindrical carbon molecules that are very similar in structure to a fullerene, or buckyball, but instead of being a sphere, the nanotube is tubular in shape. Before the Los Alamos/Duke discovery, the length of carbon nanotubes had been limited to a few millimeters.
Zhu, a scientist in the Materials Science and Technology Division, said, "although this discovery is really only a beginning, the continued development of longer length carbon nanotubes could result in nearly endless applications. Actually, the potential uses for long carbon nanotubes are probably limited only by our imagination."
Long metallic carbon nanotubes can be used to create a bio/chemical sensor in one segment while the rest of the nanotube can act as a conductor to transmit the signal. Other uses include applications in nanoscale electronics, where the nanotubes can be used as conducting or insulating material.
DOE/Los Alamos National Laboratory | http://news.bio-medicine.org/biology-news-2/Laboratory-grows-world-record-length-carbon-nanotube-220-1/ |
4.25 | History of geodesy
Geodesy (/dʒiːˈɒdɨsi/), also named geodetics, is the scientific discipline that deals with the measurement and representation of the Earth.
Humanity has always been interested in the Earth. During very early times this interest was limited, naturally, to the immediate vicinity of home and residency, and the fact that we live on a near spherical globe may or may not have been apparent. As humanity developed, so did its interest in understanding and mapping the size, shape, and composition of the Earth.
Early ideas about the figure of the Earth held the Earth to be flat (see flat earth), and the heavens a physical dome spanning over it. Two early arguments for a spherical earth were that lunar eclipses were seen as circular shadows which could only be caused by a spherical Earth, and that Polaris is seen lower in the sky as one travels South.
The early Greeks, in their speculation and theorizing, ranged from the flat disc advocated by Homer to the spherical body postulated by Pythagoras — an idea supported later by Aristotle. Pythagoras was a mathematician and to him the most perfect figure was a sphere. He reasoned that the gods would create a perfect figure and therefore the earth was created to be spherical in shape. Anaximenes, an early Greek scientist, believed strongly that the earth was rectangular in shape.
Since the spherical shape was the most widely supported during the Greek Era, efforts to determine its size followed. Plato determined the circumference of the earth to be 400,000 stadia (between 62,800 km/39,250 mi and 74,000 km/46,250 mi ) while Archimedes estimated 300,000 stadia ( 55,500 kilometres/34,687 miles ), using the Hellenic stadion which scholars generally take to be 185 meters or 1/10 of a geographical mile. Plato's figure was a guess and Archimedes' a more conservative approximation.
In Egypt, a Greek scholar and philosopher, Eratosthenes (276 BC– 195 BC), is said to have made more explicit measurements. He had heard that on the longest day of the summer solstice, the midday sun shone to the bottom of a well in the town of Syene (Aswan). At the same time, he observed the sun was not directly overhead at Alexandria; instead, it cast a shadow with the vertical equal to 1/50th of a circle (7° 12'). To these observations, Eratosthenes applied certain "known" facts (1) that on the day of the summer solstice, the midday sun was directly over the Tropic of Cancer; (2) Syene was on this tropic; (3) Alexandria and Syene lay on a direct north-south line; (4) The sun was a relatively long way away (Astronomical unit). Legend has it that he had someone walk from Alexandria to Syene to measure the distance: that came out to be equal to 5000 stadia or (at the usual Hellenic 185 meters per stadion) about 925 kilometres.
From these observations, measurements, and/or "known" facts, Eratosthenes concluded that, since the angular deviation of the sun from the vertical direction at Alexandria was also the angle of the subtended arc (see illustration), the linear distance between Alexandria and Syene was 1/50 of the circumference of the Earth which thus must be 50×5000 = 250,000 stadia or probably 25,000 geographical miles. The circumference of the Earth is 24,902 miles (40,075.16 km). Over the poles it is more precisely 40,008 km or 24,860 statute miles. The actual unit of measure used by Eratosthenes was the stadion. No one knows for sure what his stadion equals in modern units, but some say that it was the Hellenic 185-meter stadion.
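A one-line check of the arithmetic, under the common assumption of a 185-meter stadion:

```python
# Eratosthenes' estimate: the 7° 12' shadow angle is 1/50 of a circle,
# so the circumference is 50 times the Alexandria-Syene distance.
distance_stadia = 5000
stadion_m = 185                      # assumed Hellenic stadion, meters

circumference_stadia = 50 * distance_stadia
circumference_km = circumference_stadia * stadion_m / 1000
print(f"{circumference_stadia} stadia ≈ {circumference_km:.0f} km")   # 250000 stadia ≈ 46250 km
```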
Had the experiment been carried out as described, it would not be remarkable if it agreed with actuality. What is remarkable is that the result was probably about one sixth too high. His measurements were subject to several inaccuracies: (1) though at the summer solstice the noon sun is overhead at the Tropic of Cancer, Syene was not exactly on the tropic (which was at 23° 43' latitude in that day) but about 22 geographical miles to the north; (2) the difference of latitude between Alexandria (31.2 degrees north latitude) and Syene (24.1 degrees) is really 7.1 degrees rather than the perhaps rounded (1/50 of a circle) value of 7° 12' that Eratosthenes used; (3) the actual solstice zenith distance of the noon sun at Alexandria was 31° 12' − 23° 43' = 7° 29' or about 1/48 of a circle not 1/50 = 7° 12', an error closely consistent with use of a vertical gnomon which fixes not the sun's center but the solar upper limb 16' higher; (4) the most seriously flawed element, whether he measured or adopted it, was the latitudinal distance from Alexandria to Syene (or the true Tropic somewhat further south), which he appears to have overestimated by a factor that relates to most of the error in his resulting circumference of the earth.
A parallel later ancient measurement of the size of the earth was made by another Greek scholar, Posidonius. He is said to have noted that the star Canopus was hidden from view in most parts of Greece but that it just grazed the horizon at Rhodes. Posidonius is supposed to have measured the elevation of Canopus at Alexandria and determined that the angle was 1/48th of circle. He assumed the distance from Alexandria to Rhodes to be 5000 stadia, and so he computed the Earth's circumference in stadia as 48 times 5000 = 240,000. Some scholars see these results as luckily semi-accurate due to cancellation of errors. But since the Canopus observations are both mistaken by over a degree, the "experiment" may be not much more than a recycling of Eratosthenes's numbers, while altering 1/50 to the correct 1/48 of a circle. Later either he or a follower appears to have altered the base distance to agree with Eratosthenes's Alexandria-to-Rhodes figure of 3750 stadia since Posidonius's final circumference was 180,000 stadia, which equals 48×3750 stadia. The 180,000 stadia circumference of Posidonius is suspiciously close to that which results from another method of measuring the earth, by timing ocean sun-sets from different heights, a method which produces a size of the earth too low by a factor of 5/6, due to horizontal refraction.
The abovementioned larger and smaller sizes of the earth were those used by Claudius Ptolemy at different times, 252,000 stadia in the Almagest and 180,000 stadia in the later Geographical Directory. His midcareer conversion resulted in the latter work's systematic exaggeration of degree longitudes in the Mediterranean by a factor close to the ratio of the two seriously differing sizes discussed here, which indicates that the conventional size of the earth was what changed, not the stadion.
The Indian mathematician Aryabhata (AD 476 - 550) was a pioneer of mathematical astronomy. He describes the earth as being spherical and that it rotates on its axis, among other things in his work Āryabhaṭīya. Aryabhatiya is divided into four sections. Gitika, Ganitha (mathematics), Kalakriya (reckoning of time) and Gola (celestial sphere). The discovery that the earth rotates on its own axis from west to east is described in Aryabhatiya ( Gitika 3,6; Kalakriya 5; Gola 9,10;). For example he explained the apparent motion of heavenly bodies is only an illusion (Gola 9), with the following simile;
- Just as a passenger in a boat moving downstream sees the stationary (trees on the river banks) as traversing upstream, so does an observer on earth see the fixed stars as moving towards the west at exactly the same speed (at which the earth moves from west to east.)
Aryabhatiya also estimates the circumference of Earth with an accuracy of 1%, which is remarkable. Aryabhata gives the radii of the orbits of the planets in terms of the Earth-Sun distance as essentially their periods of revolution around the Sun. He also gave the correct explanation of lunar and solar eclipses and stated that the Moon shines by reflecting sunlight.
The Muslim scholars, who held to the spherical Earth theory, used it to calculate the distance and direction from any given point on the earth to Mecca. This determined the Qibla, or Muslim direction of prayer. Muslim mathematicians developed spherical trigonometry which was used in these calculations.
Around AD 830 Caliph al-Ma'mun commissioned a group of astronomers to measure the distance from Tadmur (Palmyra) to al-Raqqah, in modern Syria. They found the cities to be separated by one degree of latitude and the distance between them to be 66⅔ miles and thus calculated the Earth's circumference to be 24,000 miles. Another estimate given was 56⅔ Arabic miles per degree, which corresponds to 111.8 km per degree and a circumference of 40,248 km, very close to the modern values of 111.3 km per degree and 40,068 km circumference, respectively.
Muslim astronomers and geographers were aware of magnetic declination by the 15th century, when the Egyptian Muslim astronomer 'Abd al-'Aziz al-Wafa'i (d. 1469/1471) measured it as 7 degrees from Cairo.
Of the medieval Persian Abu Rayhan Biruni (973-1048) it is said:
"Important contributions to geodesy and geography were also made by Biruni. He introduced techniques to measure the earth and distances on it using triangulation. He found the radius of the earth to be 6339.6 km, a value not obtained in the West until the 16th century. His Masudic canon contains a table giving the coordinates of six hundred places, almost all of which he had direct knowledge."
At the age of 17, Biruni calculated the latitude of Kath, Khwarazm, using the maximum altitude of the Sun. Biruni also solved a complex geodesic equation in order to accurately compute the Earth's circumference, with a result close to the modern value. His estimate of 6,339.9 km for the Earth radius was only 16.8 km less than the modern value of 6,356.7 km. In contrast to his predecessors who measured the Earth's circumference by sighting the Sun simultaneously from two different locations, Biruni developed a new method of using trigonometric calculations based on the angle between a plain and a mountain top, which yielded more accurate measurements of the Earth's circumference and made it possible for it to be measured by a single person from a single location. Abu Rayhan Biruni's method was intended to avoid "walking across hot, dusty deserts" and the idea came to him when he was on top of a tall mountain in India (present day Pind Dadan Khan, Pakistan). From the top of the mountain, he sighted the dip angle which, along with the mountain's height (which he calculated beforehand), he applied to the law of sines formula. This was the earliest known use of dip angle and the earliest practical use of the law of sines. He also made use of algebra to formulate trigonometric equations and used the astrolabe to measure angles. His method can be summarized as follows:
He first calculated the height of the mountain by going to two points at sea level a known distance apart and then measuring the angle between the plain and the top of the mountain from both points. He made both measurements using an astrolabe. He then used the following trigonometric formula, relating the distance (d) between the two points to the tangents of their elevation angles, to determine the height (h) of the mountain: h = d · tan θ1 · tan θ2 / (tan θ2 - tan θ1), where θ1 and θ2 are the angles measured at the farther and nearer points respectively (this follows from h = (x + d) tan θ1 = x tan θ2, with x the horizontal distance from the nearer point to the mountain).
He then stood at the highest point of the mountain, where he measured the dip angle using an astrolabe. He applied the values he obtained for the dip angle and the mountain's height to the following trigonometric formula in order to calculate the Earth's radius: R = h · cos θ / (1 - cos θ), which follows from cos θ = R / (R + h), where
- R = Earth radius
- h = height of mountain
- θ = dip angle
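A small numerical sketch of the two steps, with illustrative assumed readings rather than Biruni's originals:

```python
import math

# Step 1: height of the mountain from two sightings a known distance apart on the plain.
d = 600.0                                          # meters between the two sighting points (assumed)
t1, t2 = math.radians(15.0), math.radians(20.0)    # elevation angles at the farther / nearer point (assumed)
h = d * math.tan(t1) * math.tan(t2) / (math.tan(t2) - math.tan(t1))

# Step 2: Earth's radius from the dip of the horizon seen from the summit.
dip = math.radians(0.794)                          # measured dip angle in degrees -> radians (assumed)
R = h * math.cos(dip) / (1 - math.cos(dip))        # from cos(dip) = R / (R + h)

print(f"mountain height ≈ {h:.0f} m, Earth radius ≈ {R/1000:.0f} km")   # on the order of the true ~6,400 km
```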
Biruni had also, by the age of 22, written a study of map projections, Cartography, which included a method for projecting a hemisphere on a plane. Around 1025, Biruni was the first to describe a polar equi-azimuthal equidistant projection of the celestial sphere. He was also regarded as the most skilled when it came to mapping cities and measuring the distances between them, which he did for many cities in the Middle East and western Indian subcontinent. He often combined astronomical readings and mathematical equations, in order to develop methods of pin-pointing locations by recording degrees of latitude and longitude. He also developed similar techniques when it came to measuring the heights of mountains, depths of valleys, and expanse of the horizon, in The Chronology of the Ancient Nations. He also discussed human geography and the planetary habitability of the Earth. He hypothesized that roughly a quarter of the Earth's surface is habitable by humans, and also argued that the shores of Asia and Europe were "separated by a vast sea, too dark and dense to navigate and too risky to try".
Revising the figures attributed to Posidonius, another Greek philosopher determined 18,000 miles as the Earth's circumference. This last figure was promulgated by Ptolemy through his world maps. The maps of Ptolemy strongly influenced the cartographers of the Middle Ages. It is probable that Christopher Columbus, using such maps, was led to believe that Asia was only 3 or 4 thousand miles west of Europe.
Ptolemy's view was not universal, however, and chapter 20 of Mandeville's Travels (c. 1357) supports Eratosthenes' calculation.
It was not until the 16th century that his concept of the Earth's size was revised. During that period the Flemish cartographer Mercator made successive reductions in the size of the Mediterranean Sea and all of Europe, which had the effect of increasing the size of the earth.
Early modern period
Jean Picard performed the first modern meridian arc measurement in 1669–70. He measured a base line with the aid of wooden rods, used a telescope in his angle measurements, and computed with logarithms. Jacques Cassini later continued Picard's arc northward to Dunkirk and southward to the Spanish boundary. Cassini divided the measured arc into two parts, one northward from Paris, another southward. When he computed the length of a degree from both chains, he found that the length of one degree in the northern part of the chain was shorter than that in the southern part.
This result, if correct, meant that the earth was not a sphere, but an oblong (egg-shaped) ellipsoid—which contradicted the computations by Isaac Newton and Christiaan Huygens. Newton's theory of gravitation predicted the Earth to be an oblate spheroid with a flattening of 1:230.
The issue could be settled by measuring, for a number of points on earth, the relationship between their distance (in north-south direction) and the angles between their astronomical verticals (the projection of the vertical direction on the sky). On an oblate Earth the meridional distance corresponding to one degree would grow toward the poles.
The French Academy of Sciences dispatched two expeditions – see French Geodesic Mission. One expedition under Pierre Louis Maupertuis (1736–37) was sent to Torne Valley (as far North as possible). The second mission under Pierre Bouguer was sent to what is modern-day Ecuador, near the equator (1735–44).
The measurements conclusively showed that the earth was oblate, with a flattening of 1:210. Thus the next approximation to the true figure of the Earth after the sphere became the oblate ellipsoid of revolution.
Asia and Americas
In South America Bouguer noticed, as did George Everest in the 19th century Great Trigonometric Survey of India, that the astronomical vertical tended to be pulled in the direction of large mountain ranges, due to the gravitational attraction of these huge piles of rock. As this vertical is everywhere perpendicular to the idealized surface of mean sea level, or the geoid, this means that the figure of the Earth is even more irregular than an ellipsoid of revolution. Thus the study of the "undulation of the geoid" became the next great undertaking in the science of studying the figure of the Earth.
In the late 19th century the Zentralbüro für die Internationale Erdmessung (that is, Central Bureau for International Geodesy) was established by Austria-Hungary and Germany. One of its most important goals was the derivation of an international ellipsoid and a gravity formula which should be optimal not only for Europe but also for the whole world. The Zentralbüro was an early predecessor of the International Association of Geodesy (IAG) and the International Union of Geodesy and Geophysics (IUGG) which was founded in 1919.
Most of the relevant theories were derived by the German geodesist Friedrich Robert Helmert in his famous books Die mathematischen und physikalischen Theorieen der höheren Geodäsie, Einleitung und 1. Teil (1880) and 2. Teil (1884); English translation: Mathematical and Physical Theories of Higher Geodesy, Vol. 1 and Vol. 2. Helmert also derived the first global ellipsoid in 1906 with an accuracy of 100 meters (0.002 percent of the Earth's radii). The US geodesist Hayford derived a global ellipsoid in ~1910, based on intercontinental isostasy and an accuracy of 200 m. It was adopted by the IUGG as "international ellipsoid 1924".
- Cleomedes 1.10
- Strabo 2.2.2, 2.5.24; D.Rawlins, Contributions
- D.Rawlins (2007). "Investigations of the Geographical Directory 1979–2007 "; DIO, volume 6, number 1, page 11, note 47, 1996.
- David A. King, Astronomy in the Service of Islam, (Aldershot (U.K.): Variorum), 1993.
- Gharā'ib al-funūn wa-mulah al-`uyūn (The Book of Curiosities of the Sciences and Marvels for the Eyes), 2.1 "On the mensuration of the Earth and its division into seven climes, as related by Ptolemy and others," (ff. 22b-23a)
- Edward S. Kennedy, Mathematical Geography, pp. 187–8, in (Rashed & Morelon 1996, pp. 185–201)
- Barmore, Frank E. (April 1985), "Turkish Mosque Orientation and the Secular Variation of the Magnetic Declination", Journal of Near Eastern Studies (University of Chicago Press) 44 (2): 81–98 , doi:10.1086/373112
- John J. O'Connor, Edmund F. Robertson (1999). Abu Arrayhan Muhammad ibn Ahmad al-Biruni, MacTutor History of Mathematics archive.
- "Khwarizm". Foundation for Science Technology and Civilisation. Retrieved 2008-01-22.
- James S. Aber (2003). Alberuni calculated the Earth's circumference at a small town of Pind Dadan Khan, District Jhelum, Punjab, Pakistan.Abu Rayhan al-Biruni, Emporia State University.
- Lenn Evan Goodman (1992), Avicenna, p. 31, Routledge, ISBN 0-415-01929-X.
- Behnaz Savizi (2007), "Applicable Problems in History of Mathematics: Practical Examples for the Classroom", Teaching Mathematics and Its Applications (Oxford University Press) 26 (1): 45–50, doi:10.1093/teamat/hrl009 (cf. Behnaz Savizi. "Applicable Problems in History of Mathematics; Practical Examples for the Classroom". University of Exeter. Retrieved 2010-02-21.)
- Beatrice Lumpkin (1997), Geometry Activities from Many Cultures, Walch Publishing, pp. 60 & 112–3, ISBN 0-8251-3285-1
- Jim Al-Khalili, The Empire of Reason 2/6 (Science and Islam - Episode 2 of 3) on YouTube, BBC
- Jim Al-Khalili, The Empire of Reason 3/6 (Science and Islam - Episode 2 of 3) on YouTube, BBC
- David A. King (1996), "Astronomy and Islamic society: Qibla, gnomics and timekeeping", in Roshdi Rashed, ed., Encyclopedia of the History of Arabic Science, Vol. 1, p. 128-184 . Routledge, London and New York.
- An early version of this article was taken from the public domain source at http://www.ngs.noaa.gov/PUBS_LIB/Geodesy4Layman/TR80003A.HTM#ZZ4.
- J.L. Greenberg: The problem of the Earth's shape from Newton to Clairaut: the rise of mathematical science in eighteenth-century Paris and the fall of "normal" science. Cambridge : Cambridge University Press, 1995 ISBN 0-521-38541-5
- M.R. Hoare: Quest for the true figure of the Earth: ideas and expeditions in four centuries of geodesy. Burlington, VT: Ashgate, 2004 ISBN 0-7546-5020-0
- D.Rawlins: "Ancient Geodesy: Achievement and Corruption" 1984 (Greenwich Meridian Centenary, published in Vistas in Astronomy, v.28, 255-268, 1985)
- D.Rawlins: "Methods for Measuring the Earth's Size by Determining the Curvature of the Sea" and "Racking the Stade for Eratosthenes", appendices to "The Eratosthenes-Strabo Nile Map. Is It the Earliest Surviving Instance of Spherical Cartography? Did It Supply the 5000 Stades Arc for Eratosthenes' Experiment?", Archive for History of Exact Sciences, v.26, 211-219, 1982
- C.Taisbak: "Posidonius vindicated at all costs? Modern scholarship versus the stoic earth measurer". Centaurus v.18, 253-269, 1974 | http://en.wikipedia.org/wiki/History_of_geodesy |
4.15625 | Acceleration Down an Inclined Plane
This is a very nice lab in that the students simulate an example problem from their textbook using an airtrack, glider and motion detector.
The problem is that of students on spring break tobogganing down a hill. This is modelled as a mass moving down a frictionless inclined plane. In lab the students tilt the airtrack by propping up one end of the airtrack to an angle of less than five degrees. The students measure the angle of the incline. A glider is placed at the top and released. An ultrasonic motion detector records the glider's velocity versus time as the glider moves down the airtrack. The slope of this line is the glider's acceleration. The students measure this 5 times and then determine the mean and standard deviation of the mean of the experimental acceleration.
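A minimal sketch of the analysis, with made-up readings; the frictionless prediction a = g·sin θ is the result the class derives from the free-body diagram (described below):

```python
import math
import statistics

# Illustrative (made-up) data: five measured accelerations in m/s^2 and the incline angle.
a_measured = [0.418, 0.422, 0.415, 0.420, 0.417]
angle_deg = 2.45                      # measured tilt of the airtrack, degrees (assumed)
g = 9.81                              # m/s^2

mean_a = statistics.mean(a_measured)
sdom = statistics.stdev(a_measured) / math.sqrt(len(a_measured))   # standard deviation of the mean
a_theory = g * math.sin(math.radians(angle_deg))                   # frictionless prediction

print(f"experimental: {mean_a:.3f} ± {sdom:.3f} m/s^2")
print(f"theoretical : {a_theory:.3f} m/s^2")
```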
Before lab, in the classroom, the lab section as a whole derives what the theoretical acceleration of the glider should be from Newton's laws and a free-body diagram. The students compare their experimental acceleration to the theoretical acceleration and discuss the possible reasons why the two may be different. | http://www.centenary.edu/physics/labs/physics1/lab5 |
4.28125 | Muddying the Waters – Background
Sediment, small particles of soil and other materials, is one of the most damaging pollutants in the Bay. Sediment enters the water through runoff from eroding soil. When suspended in the water, sediment prevents sunlight from reaching underwater grasses. This suspended sediment can also clog the gills of fish. As sediment settles to the bottom, it can smother the organisms that live there. In the worst cases, high levels of sediment in the water can actually change the physical structure of a waterway.
Sediment can best be reduced by erosion-control measures. Hay bales and filter cloths can be used to keep exposed soil in place. Vegetated buffer zones around properties that line waterways filter sediment from runoff before it reaches the water.
This activity helps students to investigate the properties of sediment in water. It provides students with an understanding of sediment's impact on the Bay and illustrates the connection between human activity on land and water quality in the Bay. | http://www.doe.virginia.gov/instruction/science/elementary/lessons_bay/lesson_plans/muddying_waters/background.shtml |
4.03125 | Slavery, trade, the shipping industries and war brought Africans to London in increasing numbers over the course of the seventeenth and eighteenth centuries. By the end of the American War in 1783 there was a population of between 5,000 and 10,000 black men and women living in the capital, a central ingredient to the ragout of cultures and lives that made this a world city. From its high-point at the end of the eighteenth century the West Indian and African communities of London went into relative decline. Following the abolition of the slave trade and a generation later, of slavery itself within the British Empire, there remained fewer new recruits to this population. Nevertheless, London remained the centre of a worldwide empire that both attracted black men and women from the colonies and ensured the city would form the nexus for an evolving anti-imperialist politics.
Contents of this Article
- Patterns of Migration
- Housing and Communities
- Legal Contexts
- Search Strategies
- Introductory Reading
By the third quarter of the seventeenth century the British slave trade was fully established, forcibly transporting black Africans to the recently established colonies of the West Indies and North America. In the process huge fortunes were amassed by both traders and plantation owners. By the beginning of the eighteenth century, many wealthy and successful plantation owners began to return to London with their fortunes and frequently with their personal slaves. Young and "exotic" black servants dressed in a metal collar and extravagant Oriental costume became a fashion accessory for London's powerful elites. As a result, by mid-century black men and women were a relatively familiar sight on the streets of London.
Large numbers of black Londoners also arrived as a result of their involvement as sailors in the merchant navy and as soldiers and sailors in Britain's military. Following the cessation of hostilities at the conclusions of the Seven Years War in 1763 and the American War in 1783 many black men, among them a large group of Loyalists from North America, were discharged onto the streets of Britain's ports forming the country's first coherent black communities. This community probably reached its greatest size in London in the mid-1780s, but was then reduced by the creation of the ill-conceived Sierra Leone Settlement in 1786-7 and the abolition of the slave trade in 1807.
During the last half of the eighteenth century a small number of East Indians could also be found in London. The first use of the term "Lascar" (an Indian sailor) in the Proceedings was in 1765. By the early nineteenth century a more substantial community, along with a community of Chinese emigrants, was established. These communities were predominantly located in the poorer neighbourhoods of the East End and around the docks. During the nineteenth century new forms of labour contract increasingly racialised the wages and labour conditions of non-white sailors, and in 1823 the East India Company brought in a new "Asiatic" contract for seamen, ensuring that Indian and African sailors both experienced harsher conditions than their white contemporaries and were largely prevented from settling in London. This contributed to the gradual decline of the black community from the second quarter of the nineteenth century.
Throughout this period black men far outnumbered black women.
During the eighteenth century domestic service and the pauper professions of the capital formed the main areas of employment for black Londoners. Slaves brought to London as servants were in a particularly ambiguous position, as the law neither clearly recognised the legality of slavery, nor granted them freedom from it. As a result many black domestic servants were left to the limited mercies of their employers. It was only towards the end of the century with a series of landmark, but much misunderstood, legal judgements that the situation began to change.
But, if black domestic servants were in a difficult legal position, this was equally true of Loyalists and discharged soldiers and sailors. These did not have access to the comprehensive system of poor relief established under the Old Poor Law, but unlike other migrants they could not be removed back to their place of birth either. As a result, a highly visible group of black men were forced into beggary, and the "black poor" became a much-discussed social phenomenon in the final quarter of the eighteenth century.
Nevertheless, black men and women could be found working in the whole range of urban occupations, and in particular among the city's porters, watermen, basket women, hawkers, and chairmen. Some, the author Ignatius Sancho for instance, were able to establish themselves more firmly in the London economy, but opportunities like this came to a very small number.
In the nineteenth century, as the overall size of the black community declined, a higher proportion came to be associated with the port and employment as seamen, though a small number of black men and women continued to be found in other trades. In the latter half of the century, figures such as the author and nurse Mary Seacole made their homes in London, while the lecturer and editor Celestine Edwards, originally from Dominica, used London as a base for his contributions to the creation of a newly anti-imperialist, global and pan-African politics. The location of the first Pan-African Conference at Westminster Hall in 1900 reflects the extent to which London was both the centre of the British Empire and the natural place to organise opposition to it.
Most blacks in London lived in the relatively poor parishes of the East End, while several writers have associated black beggars with the parish of St Giles in the Fields. It is reasonably clear, however, that during the eighteenth century black men and women were found throughout the plebeian and working-class communities of London. There is some evidence of alehouses with a predominantly black clientele, and of the existence of black social events. It is also clear that blacks participated fully in the plebeian culture of the capital. In part because of the gender imbalance in the black community, but also because of its social and geographical diffusion, many black men married local women and in the process entered more fully into the pre-existing plebeian world.
In the nineteenth century the geographical concentration of the black community grew even more centred on the East End and riverside parishes. A small, well defined community continued to exist at Canning Town just north of the docks, and the establishment of institutions such as "The Strangers’ Home for Asiatics, Africans and South Sea Islanders" in West India Dock Road in 1856 simply reinforced this settlement pattern.
The legal context, both in relation to the Old Poor Law and to the status of slavery in Britain, was ambiguous throughout the eighteenth century. In the 1772 case of James Somerset, it was determined that slaves could not be removed from England against their will, while in 1796, when a merchant was denied financial compensation for "his" slaves who perished on their journey to the West Indies, the legal fiction that black men and women represented "property" was overturned. The role of Britain in both the slave trade and in its abolition is reflected in trials for kidnapping brought against slave traders after the abolition of the trade in 1807. Throughout the first half of the nineteenth century, in particular, regular trials of those involved in the now illegal slave trade provide some of the best evidence we have for the organisation of the trade, and conditions in the slaving ports of the West Coast of Africa and on the ships during the "Middle Passage". See, for example, the 1843 trial of Pedro De Zulueta.
Within the Proceedings black men and women can be found in reasonably large numbers for the eighteenth century, and to a lesser extent in the nineteenth. There are numerous trials involving black defendants, often cases of grand larceny or housebreaking. Black men and women also appear as victims and witnesses in many trials, and do not seem to have been treated differently from other participants, although typically their skin colour is specifically mentioned. However, the eighteenth century did witness the rise of new types of scientific racism, which were popularised through newspaper reports and exhibitions, and this racism became ever more entrenched in the popular imagination over the course of the nineteenth century (reinforced by highly stereotyped depictions in both literature and on the stage in the form of "minstrels"). There is also evidence that black men and women were occasionally disadvantaged in their dealings with the law by their skin colour. When in 1737 George Scipio was accused of stealing Anne Godfrey's washing, the case rested entirely on whether or not Scipio was the only black man in Hackney at the time. [Ruth Paley, ed., Justice in Eighteenth-Century Hackney: The Justicing Notebook of Henry Norris and the Hackney Petty Sessions Book (London Record Society, vol. 28, 1991), item 218.]
Identifying black men and women in the Proceedings is difficult and does not lend itself to searches on specific categories of information. Black people can be found in a wide range of criminal trials as victims, witnesses and perpetrators. They were also subject to the whole range of punishments.
Keyword searches are more productive. In most instances black people are identified with a descriptive phrase. The most common of these include black man, black woman, blackamoor and blackmoor, black boy, and black girl. The term Negro also produces a number of trials and becomes more commonplace in the second quarter of the nineteenth century, while mulatto and swarthy can also be used to locate relevant cases. Coloured man and coloured woman also produce a fair number of results. The first use of the word coloured in this context was in , and the phrase becomes commonplace from the 1820s. Please remember to enclose all words in double quotation marks if you are searching for a phrase like black man. All of these search strategies are more or less frustrated by the use of these same terms in commonplace names such as Black Boy Alley, alehouse and shop names such as The Blackamoor's Head, and descriptive brand names such as Negro-Head Tobacco. Keyword search phrases such as the West Indies also produce some results, although the term West Indian is used predominantly to refer to returned white settlers. For the nineteenth century, searches on the names of institutions such as the Strangers’ Home produce good results.
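For readers who download trial texts and want to apply these keyword strategies programmatically, the short Python sketch below illustrates one possible approach; the phrase list and the false-positive filters (such as Black Boy Alley) come from the discussion above, while the function name and structure are simply illustrative assumptions.

```python
import re

# Descriptive phrases discussed above, searched as whole phrases.
SEARCH_PHRASES = [
    "black man", "black woman", "blackamoor", "blackmoor", "black boy",
    "black girl", "negro", "mulatto", "coloured man", "coloured woman",
]

# Common false positives noted above: street, alehouse and brand names.
FALSE_POSITIVES = ["black boy alley", "blackamoor's head", "negro-head tobacco"]

def find_candidate_passages(text, window=60):
    """Yield (phrase, snippet) pairs for each match, skipping known false positives."""
    lowered = text.lower()
    for phrase in SEARCH_PHRASES:
        for match in re.finditer(r"\b" + re.escape(phrase) + r"\b", lowered):
            start, end = max(0, match.start() - window), match.end() + window
            snippet = lowered[start:end]
            if any(fp in snippet for fp in FALSE_POSITIVES):
                continue  # likely a place, alehouse or brand name, not a person
            yield phrase, text[start:end]
```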
For sailors, East Indian, African and West Indian seamen can be found by searching on terms such as Lascar and East Indian. Muslims can frequently be located by searching on words such as Alcoran and Mahometan, with appropriate variations in spelling to account for the regularisation of these words over the course of the nineteenth century. The first instance of the term Lascar in these records dates from 1765 in the . In this instance the prosecutor James Morgan, who was born in Bengal, was allowed to swear on the Koran, named in this and several other trials as the Alcoran. The term Lascar, however, only becomes commonplace from the 1820s.
Smaller communities from around the world can also be located in the Proceedings. Searching for Malay, Mauritius and Chinese all produce a small number of relevant trials. See The Chinese Community in London.
The heavily racialised material and public cultures of nineteenth-century London can also be traced through the Proceedings. Theatrical representations of black men and women in the form of blackface minstrelsy, which were particularly popular from the 1860s, are reflected in the Proceedings.
Searching on the word interpreter or translator will bring up large numbers of trials involving foreign witnesses and defendants whose first language was not English.
- Braidwood, Stephen J., Black Poor and White Philanthropists: London's Blacks and the Foundation of the Sierra Leone Settlement, 1786-1791 (Liverpool, 1994)
- Gerzina, Gretchen, Black London: Life Before Emancipation (New Jersey, 1995)
- Myers, Norma, 'The Black Presence Through Criminal Records, 1780-1830', Immigrants and Minorities, 7 (1988), 292-307
- Gerzina, Gretchen, (ed.), Black Victorians/Black Victoriana (New Brunswick, NJ, 2003).
For more secondary literature on this subject see the Bibliography. | http://www.oldbaileyonline.org/static/Black.jsp |
4.0625 | A person has a food allergy when they cannot tolerate one or more foods and their immune system is involved in creating the symptoms.
Our immune system protects our bodies from infections. We produce molecules, called antibodies, which recognise germs that cause infections. Our immune system makes a number of different types of antibody, which have different roles. The one that plays a role in an allergic reaction is called IgE. We produce IgE molecules to fight infections caused by parasites, like worms or those that cause malaria. We do not understand why, but the immune system of some people makes IgE by mistake to harmless things like pollen or dust mites, giving rise to hay fever and asthma, and to some foods, giving rise to food allergies.
Food allergens (the parts or molecules in food responsible for an allergic reaction) are usually proteins. There are generally several different kinds of allergen in each food. It is not yet clear what makes some proteins food allergens, and not others.
When a person eats a food, the food may trigger immune cells to produce large amounts of IgE that recognises that food. Sometimes the immune cells can be triggered to produce IgE when a person breathes in tiny parts of a food e.g. sunflower seeds when they are used to feed birds. The IgE circulates in the blood and some of it attaches to the surface of specialized inflammatory cells called mast cells. These cells occur in all body tissues but are especially common in areas of the body that are typical sites of allergic reactions. The person is then sensitized to the food and primed to produce an allergic reaction.
On any subsequent occasion when the person eats the same food, the food allergens interact with the specific IgE on the surface of the mast cells. In response, the activated mast cells rapidly release chemicals such as histamine. Depending upon the tissue in which they are released, these chemicals will cause the various symptoms of food allergy. The amount of allergen needed to cause symptoms varies: in some instances even very small amounts of an allergen may trigger severe symptoms, and for some people merely breathing in a tiny amount of the food is enough to cause an allergic reaction.
Some people have allergic reactions where IgE is not involved. Gluten hypersensitivity (coeliac disease) is an example of non-IgE-mediated food allergy.
Individuals with pollen or latex allergy often experience allergic symptoms when they eat certain fruits, vegetables or nuts. This “cross-reactivity” occurs because the body cannot distinguish between the allergens in pollen or latex and related proteins in food and may react to both. | http://www.foodallergens.info/Facts/Reactions/Allergic.html |
4.03125 | |Eq. 1 (Phi = B*A): The variable on the left is the Greek letter Phi. It represents magnetic flux, measured in Webers (Wb), or Tesla*meter^2. B represents the magnetic field's magnitude. A is the area of the coil; for a circular loop of radius r, the area is pi*r^2.|
|Eq. 2 (emf = -N*dPhi/dt): This is known as Faraday's law of induction. Here N denotes the number of loops. The negative sign is used to remind us in which direction the induced emf acts (Lenz's law).|
|Eq. 3 (emf = B*l*v*sin(theta)): This is a useful equation for determining the emf induced in a wire of length l moving with speed v relative to a magnetic field B.|
|Eq. 4 (emf = N*A*B*w*sin(w*t)): This is the equation used when a coil of wire is rotated at a constant angular velocity. N is the number of loops, A is the area, and B is the magnetic field strength. Lowercase omega, "w", is the angular velocity and t is time. Notice that w*t = theta (in radians). Refer back to the rotational dynamics section for more info.|
|Eq. 5 (E = v*B): This is a rather cool one; it shows the relationship between the electric field, the velocity of a moving conductor, and the magnetic field strength.|
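For readers who like to experiment, the equations described above translate directly into code. The short Python sketch below is an illustration added here (not part of the original page); the function names are arbitrary, and the symbols follow the captions above.

```python
import math

def magnetic_flux(B, area, theta=0.0):
    """Eq. 1: magnetic flux (Wb) through a loop; theta is the angle between
    B and the normal to the loop (0 when the field is perpendicular to the plane)."""
    return B * area * math.cos(theta)

def faraday_emf(n_turns, delta_flux, delta_t):
    """Eq. 2: average induced emf (V) for N turns and a change in flux over delta_t."""
    return -n_turns * delta_flux / delta_t

def motional_emf(B, length, speed, theta=math.pi / 2):
    """Eq. 3: emf induced in a wire of the given length moving at `speed` through B."""
    return B * length * speed * math.sin(theta)

def rotating_coil_emf(n_turns, area, B, omega, t):
    """Eq. 4: instantaneous emf of a coil rotating at constant angular velocity omega."""
    return n_turns * area * B * omega * math.sin(omega * t)
```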
Induced emf is produced by a CHANGING magnetic field.
Lenz's law states that an induced emf always gives rise to a current whose magnetic field opposes the original change in flux. This is important when considering the direction of the induced current. If you have a coil and begin to slide a magnet in, the current induced will cause a magnetic field which opposes the motion of the bar magnet(which may also be said relative to the coil.) This is because if you insert the N pole of a bar magnet into the coil, an N pole is formed by the induced current on the side of the coil that the bar magnet is entering (use right hand rule). Upon having the magnet inserted in the coil, removing the magnet from the coil will cause a field that opposes the motion of the magnet leaving the coil.
You may encounter problems such as: a wire drawn vertically in the plane of the page carries a current I directed upward, but the current is decreasing. A coil of wire is placed to the left of the current-carrying wire. In what direction is the induced current in the coil? This can be solved with the right hand rule and some intuition. First, use the right hand rule on the current-carrying wire: at the coil's location, to the left of the wire, the wire's magnetic field points out of the page. Since the current is decreasing, the field strength (and therefore the flux out of the page through the coil) is also decreasing. By Lenz's law the induced current tries to maintain that flux, so its own field inside the coil must also point out of the page. Using the right hand rule again shows that the induced current in the coil is counterclockwise (in the plane of the page).
When the current I is increasing in a scenario like the one above, the opposite occurs: the induced current will be clockwise, so that its magnetic field opposes the increase in flux through the coil.
A .08 m radius circular loop of wire is in a 1.10-T magnetic field. It is removed from the field in 0.15 s. What is the average induced emf?
Using Eq. 1 solve for the change in magnetic flux first.
Change in magnetic flux = change in magnetic field * area.
M. flux = (0 - 1.10)*pi*.08^2 = -0.0221126 Wb
Now, solve for emf using Eq 2.
emf = -N * M.flux/ change in time.
emf = -1 * -0.0221126/.15 = 0.147V
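For readers who want to check the arithmetic, here is a short Python sketch of the same calculation (an addition to the page, not part of the original solution):

```python
import math

# Loop of radius 0.08 m, field drops from 1.10 T to 0 in 0.15 s, single turn (N = 1).
radius, B_initial, B_final, dt = 0.08, 1.10, 0.0, 0.15
area = math.pi * radius ** 2
delta_flux = (B_final - B_initial) * area     # about -0.0221 Wb
emf = -1 * delta_flux / dt                    # about 0.147 V
print(f"delta flux = {delta_flux:.4f} Wb, average emf = {emf:.3f} V")
```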
The magnetic field perpendicular to a circular loop of wire 0.2 m in diameter is changed from +0.52T to -0.45T in 180ms, where + means the field points away from an observer and - toward the observer. a) Calculate the induced emf. b) In what direction does the induced current flow?
a) Use equation 1 to solve for the magnetic flux.
M. flux = (-.45 - .52)*pi*(0.2/2)^2 = -0.0304677
Now solve for the emf using equation 2.
Multiply by N, -1, and divide by the change in time, 180ms, to arrive at 0.169V
b) The flux is changing toward the observer, so by Lenz's law the induced current creates a magnetic field that points away from the observer to oppose that change; using the right hand rule, the induced current flows clockwise (as seen by the observer).
A rod moves with a speed of 1.9m/s, is .3 m long, and has a resistance of 2.5 ohms. The magnetic field is 0.75 T, and the resistance of the U-shaped conductor is 25.0 ohms at a given instant. Calculate the induced emf, the current flowing in the circuit, and the external force necessary to ensure that the rod is moving at a constant velocity at that instant.
Use eq. 3. emf = Blv sin theta. emf = .75 * 1.9 * .3 = 0.43V
I = V/R. I = 0.43/(2.5+25.0) = .016 A
F = IlB sin theta. = .016 * .3 * .75 = 0.0035N.
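The same numbers can be checked quickly in code; this sketch simply reproduces the three steps above and is not part of the original solution:

```python
# Sliding-rod example: emf = B*l*v, then Ohm's law, then F = I*l*B.
B, length, v = 0.75, 0.30, 1.9
R_total = 2.5 + 25.0             # rod resistance plus the U-shaped conductor
emf = B * length * v             # about 0.43 V
current = emf / R_total          # about 0.016 A
force = current * length * B     # about 0.0035 N to keep the rod at constant velocity
print(f"emf = {emf:.2f} V, I = {current:.3f} A, F = {force:.4f} N")
```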
A 0.31m diameter coil consists of 20 turns of circular copper wire .0026m in diameter. A uniform magnetic field, perpendicular to the plane of the coil, changes at a rate of 8.65x10^-3T/s. Determine the current in the loop and the rate at which thermal energy is produced.
Rate of change of flux = (rate of change of B) * A = 8.65x10^-3 T/s * pi*(0.31/2)^2 = 6.53x10^-4 Wb/s. Place this
into the equation emf = -N*(change in flux)/time; with N = 20 loops, the magnitude of the emf is 20 * 6.53x10^-4 = 0.013V.
Now that you have the emf (voltage), you can readily solve for the current with I = V/R. Recall that p of copper is 1.68x10^-8 and R = pL/A, where L is the total length of wire (pi*0.31*20 = 19.5 m) and A is the wire's cross-sectional area (pi*(.0026/2)^2 = 5.3x10^-6 m^2); this gives R = 0.062 ohms. Plug in and solve for the current, which is 0.21A.
P = I^2R, Using the data from above, you should come up with 0.0027W
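Because this problem chains several formulas together (flux change, wire resistance, Ohm's law, power), a short script is a convenient way to verify it; the sketch below reproduces the steps above and is not part of the original solution:

```python
import math

# 20-turn coil, 0.31 m diameter, copper wire 0.0026 m in diameter, dB/dt = 8.65e-3 T/s.
n_turns, coil_diam, wire_diam, dB_dt = 20, 0.31, 0.0026, 8.65e-3
rho_copper = 1.68e-8                               # resistivity of copper (ohm*m)

coil_area = math.pi * (coil_diam / 2) ** 2
emf = n_turns * dB_dt * coil_area                  # about 0.013 V

wire_length = n_turns * math.pi * coil_diam        # about 19.5 m of wire in the coil
wire_area = math.pi * (wire_diam / 2) ** 2         # cross-sectional area of the wire
resistance = rho_copper * wire_length / wire_area  # about 0.062 ohm

current = emf / resistance                         # about 0.21 A
power = current ** 2 * resistance                  # about 2.8e-3 W of thermal energy
print(f"emf={emf:.4f} V  R={resistance:.4f} ohm  I={current:.2f} A  P={power:.1e} W")
```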
The magnetic field perpendicular to a single .132 m diameter circular loop of copper wire decreases uniformly from .75T to 0. If the copper wire is .00225 m in diameter, how much charge moves past a point in the coil during this operation?
Eq. 1: m.flux = BA, (0-.75)pi(.132/2)^2 = -.010261647.
Eq. 2: emf = -N * m.flux/time. Substitute in to find that emf is .010261647/t
emf = IR(Ohm's law). find the resistance.
R = pL/A, recall that the p of copper is 1.68x10^-8. So, 1.68x10^-8 * (pi*.132)/(pi*(.00225/2)^2) = 1.752x10^-3
I = emf/R = (.010261647/t)/(1.752x10^-3). Since I = Q/t, the t cancels and Q = .010261647/1.752x10^-3.
5.857C = Q.
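The neat point of this problem is that the elapsed time cancels, so the charge depends only on the flux change and the resistance; the sketch below checks the numbers (an illustration, not part of the original solution):

```python
import math

# Loop diameter 0.132 m, copper wire diameter 0.00225 m, field falls from 0.75 T to 0.
loop_diam, wire_diam, dB = 0.132, 0.00225, 0.75
rho_copper = 1.68e-8

delta_flux = dB * math.pi * (loop_diam / 2) ** 2       # about 0.01026 Wb
wire_length = math.pi * loop_diam                      # one turn of wire
wire_area = math.pi * (wire_diam / 2) ** 2
resistance = rho_copper * wire_length / wire_area      # about 1.75e-3 ohm
charge = delta_flux / resistance                       # Q = I*t = (emf/R)*t = dPhi/R
print(f"R = {resistance:.3e} ohm, Q = {charge:.2f} C") # about 5.86 C
```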
Design a DC transmission line that can transmit 300 MW of electricity 200 km with only a 2 percent loss. The wires are to be made of aluminum and the voltage is 600kV.
p of aluminum is 2.65x10^-8
I = P/V.
I = 300x10^6W/600x10^3V = 500A.
P_Loss = I^2R.
(300x10^6*.02*1.02 ) = 500^2*R (PAY CLOSE ATTENTION TO .02*1.02*P_input)
24.48 = R
R = pL/A, so 24.48 = 2.65x10^-8 * (2*200x10^3)/(pi*r^2)
*NOTE* in a dc line there is a to and fro, so two times the distance.
2r = d.
d = 2.348cm
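The design steps above are easy to re-run for other power levels or voltages; the sketch below reproduces the same calculation, including the 0.02*1.02 loss factor and the doubled (round-trip) length used above:

```python
import math

# 300 MW delivered over 200 km at 600 kV with about 2% loss, aluminum wire.
P, V, one_way_km = 300e6, 600e3, 200.0
rho_aluminum = 2.65e-8

current = P / V                               # 500 A
P_loss = 0.02 * 1.02 * P                      # allowed loss, as in the solution above
R_max = P_loss / current ** 2                 # about 24.5 ohm
wire_length = 2 * one_way_km * 1e3            # to-and-fro, so twice the distance
area = rho_aluminum * wire_length / R_max     # required cross-sectional area
diameter_cm = 2 * math.sqrt(area / math.pi) * 100
print(f"I = {current:.0f} A, R = {R_max:.2f} ohm, wire diameter = {diameter_cm:.2f} cm")
```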
The magnetic field perpendicular to a circular loop of wire 0.20m in diameter is changed from +0.52T to -0.45T in 180ms, where + means the field points away from the observer and toward the observer. a) Calculate the induced emf. b) In what direction does the induced current flow?
a) Use Faraday's law: emf = -N*dPhi/dt. The loop area A is 0.10^2*pi. dB is -0.45 - 0.52 T => -0.97T, so dPhi = dB*A = -0.0305 Wb. With dt = 0.18s, emf = 0.169V.
b) Because the magnetic field is changing towards the observer, the magnetic field due to the induced current tries to maintain the flux pointing away from the observer. The induced current therefore flows clockwise.
A 0.31m diameter coil consists of 20 turns of circular copper wire 2.6mm in diameter. A uniform magnetic field, perpendicular to the plane of the coil, changes at a rate of 8.65*10^(-3)T/s. Determine a) the current in the loop, and b) the rate at which thermal energy is produced.
a) The resistance in this wire can be found by the equation R = pL/A. The constant p for copper is 1.68*10^(-8). L is the length of wire, 0.31*3.141*20 = 19.47420m. A is found by pi*r^2, 0.0000053m^2. R is then 0.0616ohms. The induced emf in the coil is found by Faraday's law, emf = -Ndphi/dt, 0.013V. Using Ohm's law, V = IR, I = 0.21A.
b) To find rate of thermal energy produced, we use P = I^2R. P = 2.8*10^(-3)W.
A square loop 0.24m on each side has a resistance of 6.50ohms. It is initially in a 0.755-T magnetic field with its plane perpendicular to B, but is removed from the field in 40.0*10^(-3)s. Calculate the electric energy dissipated in this process.
The electric power dissipated in this process is given by P = V^2/R. The energy dissipated is then U = V^2t/R. Solving for the induced emf by use of Faraday's law, emf = -NdPhi/dt, emf = 1.087V. Substituting this into the expression for power, P = 0.182W. Multiplying by the time, 40.0*10^(-3)s results in 7.3*10^(-3)J. | http://physics.hivepc.com/eminduct.html |
4.15625 | Links: NHPS Science Overview
**Sixth Grade Science in NHPS uses kits that rotate among schools.
Check with each school for Rotation Details.
|INQUIRY STANDARDS ACROSS ALL UNITS
C INQ.1 Identify questions that can be answered through scientific investigation.
C INQ.2 Read, interpret and examine the credibility of scientific claims in different sources of information.
C INQ.3 Design and conduct appropriate types of scientific investigations to answer different questions.
C INQ.4 Identify independent and dependent variables, and those variables that are kept constant, when designing an experiment.
C INQ.5 Use appropriate tools and techniques to make observations and gather data.
C INQ.6 Use mathematical operations to analyze and interpret data.
C INQ.7 Identify and present relationships between variables in appropriate graphs.
C INQ.8 Draw conclusions and identify sources of error.
C INQ.9 Provide explanations to investigated problems or questions.
C INQ.10 Communicate about science in different formats, using relevant science vocabulary, supporting evidence and clear logic.
|C 4. Describe how abiotic factors, such as temperature, water and sunlight, affect the ability of plants to create their own food through photosynthesis.
C 5. Explain how populations are affected by predator-prey relationships.
C 6. Describe common food webs in different Connecticut ecosystems.
|C 7. Describe the effect of heating on the movement of molecules in solids, liquids and gases.
C 8. Explain how local weather conditions are related to the temperature, pressure and water content of the atmosphere and the proximity to a large body of water.
C 9. Explain how the uneven heating of the Earth’s surface causes winds.
|C 10. Explain the role of septic and sewage systems on the quality of surface and ground water.
C 11. Explain how human activity may impact water resources in Connecticut, such as ponds, rivers and the Long Island Sound ecosystem.
|C 12. Explain the relationship among force, distance and work, and use the relationship (W=F x D) to calculate work done in lifting heavy objects.
C 13. Explain how simple machines, such as inclined planes, pulleys and levers, are used to create mechanical advantage.
C 14. Describe how different types of stored (potential) energy can be used to make objects move.
|Significant Task||Chesapeake Bay Ecosystem||Weather Forecast||Watershed Study
* CT Embedded Task: Dig In
|STC KIT ECOSYSTEMS
Prentice Hall Explorer: Ecosystems
|Prentice Hall Explorer: Weather
NeoSci Kit: Weather
|Urban Resources Initiative Kit: Watersheds||Delta Science Module Kit: Simple Machines or NeoSci Kit: Simple Machines| | http://nhps.net/sciencegrade6 |
4.03125 | [Image credit: NOAA-NASA GOES Project] Sandy started as an ordinary hurricane, feeding on the warm surface waters of the Atlantic Ocean for fuel. The warm moist air spirals into the storm, and as moisture rains out, it provides the heat needed to drive the storm clouds. By the time Sandy made landfall on Monday evening, it had become an extratropical cyclone with some tropical storm characteristics: a lot of active thunderstorms but no eye. This transformation came about as a winter storm that had dumped snow in Colorado late last week merged with Sandy to form a hybrid storm that was also able to feed on the mid-latitude temperature contrasts. The resulting storm—double the size of a normal hurricane—spread hurricane force winds over a huge area of the United States as it made landfall. Meanwhile an extensive easterly wind fetch had already resulted in piled up sea waters along the Atlantic coast. This, in addition to the high tide, a favorable moon phase, and exceedingly low pressure, brought a record-setting storm surge that reached over 13 feet in lower Manhattan and coastal New Jersey. This perfect combination led to coastal erosion, massive flooding, and extensive wind damage that caused billions of dollars in damage.
In many ways, Sandy resulted from the chance alignment of several factors associated with the weather. A human influence was also present, however. Storms typically reach out and grab available moisture from a region 3 to 5 times the rainfall radius of the storm itself, allowing it to make such prodigious amounts of rain. The sea surface temperatures just before the storm were some 5°F above the 30-year average, or “normal,” for this time of year over a 500 mile swath off the coastline from the Carolinas to Canada, and 1°F of this is very likely a direct result of global warming. With every degree F rise in temperatures, the atmosphere can hold 4 percent more moisture. Thus, Sandy was able to pull in more moisture, fueling a stronger storm and magnifying the amount of rainfall by as much as 5 to 10 percent compared with conditions more than 40 years ago. Heavy rainfall and widespread flooding are a consequence. Climate change has also led to the continual rise in sea levels—currently at a rate of just over a foot per century—as a result of melting land ice (especially glaciers and Greenland) and the expanding warming ocean, providing a higher base level from which the storm surge operates.
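A rough way to see the size of the moisture effect described above (about 4 percent more water vapor per degree Fahrenheit of ocean warming) is to compound that rate over the observed temperature anomaly; the short sketch below is only an illustration of that back-of-the-envelope scaling, not an analysis by the article's author.

```python
# Back-of-the-envelope moisture scaling: ~4% more water-vapor capacity per 1 F
# of sea-surface warming (the figure quoted in the text above).
RATE_PER_DEG_F = 0.04

def extra_moisture(delta_t_f, rate=RATE_PER_DEG_F):
    """Fractional increase in water-vapor capacity for a warming of delta_t_f (F)."""
    return (1.0 + rate) ** delta_t_f - 1.0

print(f"1 F (human-attributed part): ~{extra_moisture(1):.0%} more moisture capacity")
print(f"5 F (total pre-Sandy anomaly): ~{extra_moisture(5):.0%} more moisture capacity")
```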
These physical factors associated with human influences on climate likely contribute to more intense and possibly slightly bigger storms with heavier rainfalls. But this is very hard to prove because of the naturally large variability among storms. This variability also makes it impossible to prove there is no human influence. Instead, it is important to recognize that we have a “new normal,” whereby the environment in which all storms form is simply different than it was just a few decades ago. Global climate change has contributed to the higher sea surface and sub-surface ocean temperatures, a warmer and moister atmosphere above the ocean, higher water levels around the globe, and perhaps more precipitation in storms.
The super storm Sandy follows on the heels of Isaac earlier this year and Irene last year, both of which also produced widespread flooding as further evidence of the increased water vapor in the atmosphere associated with warmer oceans. Active hurricane seasons in the North Atlantic since 1994 have so far peaked with three category 5 hurricanes in the record breaking 2005 season, one of which was Katrina. As human-induced effects through increases in heat-trapping gases in the atmosphere continue, still warmer oceans and higher sea levels are guaranteed. As Mark Twain said in the late 19th century, “Everybody talks about the weather, but nobody does anything about it.” Now humans are changing the weather, and nobody does anything about it! As we have seen this year, whether from drought, heat waves and wild fires, or super storms, there is a cost to not taking action to slow climate change, and we are experiencing this now.
From New Zealand, Kevin Trenberth is a distinguished senior scientist at the National Center for Atmospheric Research (NCAR). He has been heavily engaged in the World Climate Research Programme (WCRP), where he currently chairs the Global Energy and Water Exchanges (GEWEX) program, as well as the Intergovernmental Panel on Climate Change, for which he shared the Nobel Peace Prize in 2007. | http://www.the-scientist.com/?articles.view/articleNo/33084/title/Opinion--Super-Storm-Sandy/flagPost/70653/ |
4.3125 | The student will be able to:
- list the characteristics that all insects have in common.
- distinguish between the orders of insects using characteristics unique to each group.
- understand how specific body structures help insects survive.
- understand the life cycle of insects, including complete and incomplete metamorphosis.
- utilize a variety of methods to collect insects.
Insects are the most numerous group of animals on this planet,
making up about 80% of all animals. In fact, there are more
species of insects than all other species of living things.
On one tree in the Amazon rainforest scientists identified
over 2,000 different species of insects. They play essential
roles in the balance of nature as predators, food for other
animals and scavengers.
During today's lab, students will explore the fascinating
world of insects. Students will start out collecting aquatic
insects from a local stream. As they sort their specimens,
they will encounter a variety of stages in the life cycle
of some common insects. They will observe the unique adaptations
that allow some insects to survive in the water during the
initial stages of life and then on land as an adult. Students
will learn how to collect live insects for study at home.
What is an insect?
Insects belong to the phylum Arthropoda. Like all arthropods,
they possess a hard exoskeleton on the outside of their bodies
for protection and support. They have jointed legs and segmented bodies.
Insects are divided into three segments: the head, thorax
and abdomen. Insects have three pairs of jointed legs attached
to the thorax. One pair of antennae is attached to the head
region for feeling, smelling and communication. Most insects
have two pairs of wings, but some do not.
Insects respire through tiny openings called spiracles. Air passes
in and out of the spiracles to and from a network of tubes
in the insect's abdomen. A Madagascar Hissing Cockroach
can push air through these spiracles quickly, making a hissing
noise to scare off potential predators.
You can collect insects from road kills or catch live insects to put in a killing jar.
This can be made using a wide-mouthed jar with a lid and
adding a paper towel soaked in fingernail polish remover
containing acetone. More information about spreading and
mounting insects can be found in the books referenced in
the Further Explorations section.
Always label the insects with the collector's name,
location and date. Identify the specimen using a field guide.
To learn more about the specimens you've collected,
do some research. You might be surprised to learn about
their amazing abilities. List the adaptations each insect
has for survival, including protection and feeding mechanisms.
An insect's mouthparts are a clue to what it eats. How?
The mouthpart is made especially for the type of food the
insect eats. Butterflies have mouthparts formed like a straw
for sucking nectar from flowers. Flies have mouthparts formed
like a sponge to soak up liquids. Mosquitoes have needle-like
mouthparts for piercing skin before sucking blood. Grasshoppers
have mouthparts similar to a human's mouth to help them chew.
Look at Those Legs
Take a look at an insect's legs to learn about its lifestyle.
Often you can use the legs to learn where an insect lives,
how it moves about or how it defends itself. Mole crickets
have muscular legs with claw-like structures at the end for
digging underground. Grasshoppers have legs that are elongated
and muscular for jumping. Spikes cover a cockroach's
legs to enable it to climb and to protect it from predators.
Just Flying Around
Wings are very important to insects as a method
of locomotion. They have helped to make insects the most successful
animal on earth. Very few other animals can fly. Flying helps
insects escape predators and move from place to place, thus
broadening the space they can inhabit. However, not all insects can fly.
Survival of the Fittest
Insects can be considered the most dominant life form on earth.
Special body parts enable insects to survive in environments
normally considered too harsh for survival.
Size is a great asset to an insect. Its small size allows
it to move about without being noticed by many larger animals.
Flight enables many insects to move quickly away from predators.
Insects also avoid danger by hiding. Insects' bodies are camouflaged using colors and patterns to help them blend in with their surroundings. Some insects have body shapes
that make them look like leaves or twigs. Some go so far
as to sway gently to mimic a leaf blowing in the wind.
Many insects are brightly-colored. Some insects are advertising
an unpleasant experience to potential predators: "Don't eat me; I taste bad, have sharp spines or will sting you." Other insects mimic the coloration of a distasteful
insect hoping to discourage potential predators. Still others
are trying to look pretty to attract a mate.
An insect's sense of smell is keener than anything we
can imagine. They smell primarily with their antennae. Segmented,
flexible and covered with many tiny hairs, antennae can come
in many shapes and sizes. They all function to detect chemical
cues in the air.
Insects have two large compound eyes that consist of thousands
of tiny lenses shaped like honeycomb cells. These eyes are
usually large and located on the sides of the insect's head. They are excellent motion detectors; try sneaking
up on a fly!
Many insects can emit sounds by rubbing their appendages together
or vibrating a membrane. Have you ever heard a cricket chirp
or a cicada sing on a summer night? Insects hear
these sounds as membranes vibrate in response to sound waves,
however these membranes are not located in ears, but on the
abdomen or forelegs.
The life cycles of insects usually involve a process called
metamorphosis, a change from one life form to another. Frogs,
toads and salamanders also undergo this big change. It is
widely believed that metamorphosis benefits the insect because
the juvenile is not competing with the adult for a food source:
they eat different types of food.
Some insects undergo incomplete metamorphosis. As the insect matures,
it increases in size and develops reproductive organs and
wings. For most species, the immature insect, or nymph, looks
similar to the adult. However, some nymphs are aquatic and
do not resemble their terrestrial adult forms. Examples of
these aquatic insects are dragonflies and mayflies.
Other insects undergo a major change in their forms as they
grow in a four-stage change called complete metamorphosis.
From the egg hatches the larva, whose primary functions are
to eat and grow. The larvae are worm-like in appearance and
do not resemble the adults. Larvae molt several times as they
grow. The last molt results in a pupal form. The pupa does
not eat, and its movement is restricted to no more than a
wiggle. During this stage, a great transformation is occurring
as tissues are broken down and reassembled. The pupal stage
can last from four days to several months. The adult emerges
from the pupa, completely different from the larval form.
Its wings are crumpled and body soft. Within hours, the wings
become stronger and the body hardens. The adult stage can
last from a few hours to several years. An adult's primary
mission is mating and egg-laying.
Try This at Home!
Catching insects to observe is easy if you just look closely!
Always watch the insect you've discovered for a few moments
before catching it. You will learn more by combining an observation
of its natural behavior with up-close observations.
Remember, be careful when catching insects; some may
sting or bite. After observing them in your jar, release them
in the spot where you found them.
Below are some experiments that you can do in your own backyard.
Always check with an adult so they are aware of your experiments
and can monitor your safety.
The following science books are available for use in the
ELL. Each has excellent, easy to understand information about
insects and suggested activities. Books may not be checked
out, however you are welcome to make copies.
The Insect Appreciation Digest by Tom Turpin, 1992.
This book tells you everything you ought to know about
insects (that your parents didn't teach you). Written
in a delightful manner, this book gives the basics of insect
characteristics and is filled with fascinating stories about
the lives of insects. You can't resist reading about
insects that multiply, sticks that walk and insect antifreeze!
The Practical Entomologist by Rick Imes, 1992. Beginning
with the basics, this guide describes what characterizes an
insect, including anatomy and the life cycle. It takes an
order-by-order look at insects, explaining how each group
differs from the others.
The Insect Almanac by Monica Russo, 1991. Meet the
tiny creatures who share your backyard and learn how they
live, how they breed and what they eat. Learn how to hunt
for insects and which ones you will find during all seasons
of the year. Try the many activities suggested throughout the book.
Place a small jar into the ground so the mouth is even with
the ground. Bait the jar with a small piece of raw meat at
the end of the day and check your jar in the morning, since
most beetles are active at night. Add new bait every few days
and empty it if it rains.
The following web sites would be helpful in any
study of insects. They are packed with interesting
information and activities to try!
Take a tour of some of the creepy crawlies you love to
hate on the Yuckiest Site on the Internet.
Wendell, ace worm reporter, will tell you all about
his amazing buggy friends, especially his frequently
squished pal, the roach.
Sonoran Arthropod Studies Institute web page
includes a virtual arthropod zoo with photos and information.
visit the Arthropod Zoo and take a look at how to
To protect your trap from other animals, cover the mouth with
a board, propped up slightly with a small stone. Choose areas
near piles of wood, rotten logs, or shrubs or choose wet wooded
spots. Areas near buildings also work well.
The Fallen Log
It's amazing how many things live in and on rotting logs.
Learn about insects and their roles in decomposition as you
explore life in a rotten log. Before searching for your log,
research the process of decomposition.
Next, locate a rotten log and carefully begin inspecting it.
Note any markings or holes found on the log's exterior.
Many beetles can create beautiful patterns as they bore into
the wood. Pull apart the log. You should encounter many insects
and other arthropods such as spiders. Be sure to make drawings
and to note characteristics of each organism you see. Use
these notes to identify the animals later using a field guide.
Hula Hoop Population Count
How many insects make their homes in your backyard? If you
really want to know, you can do a simple population count.
Throw a hula hoop randomly in your yard and count the number
of insects within the hoop's boundary. Remember to look
closely - many of the insects are tiny. When finished, graph your results.
If you found these results interesting, you might want to
do comparison studies. Toss the hula hoop into a garden area
or an area with leaf litter. Compare areas under or near trees
to areas in the full sun. An endless number of comparisons
can be made. Graph the results. Are they similar to what you
expected to find? | http://www.tnaqua.org/KidsTeachers/insect_field_module.asp |
4.09375 | A Balanced Literacy Approach
The Reading Program at Legacy Elementary is a balanced literacy program. This approach to instruction involves teaching children to use the three cueing systems: meaning, sentence structure, and graphophonics in learning to read. Instruction integrates reading with thinking, writing, listening, and speaking.
On behalf of each student the following objectives are set:
· To develop basic skills that enable them to get meaning from print.
· To think critically about what is read.
· To use reading across the curriculum as a tool for learning.
· To experience the rewards and pleasures of reading.
· To become independent writers who write for a variety of purposes. | http://lcps.org/domain/7498 |
4.09375 | Programs & Services
An x-ray (radiograph) is a noninvasive medical test that helps physicians diagnose and treat medical conditions. Imaging with x-rays involves exposing a part of the body to a small dose of ionizing radiation to produce pictures of the inside of the body. X-rays are the oldest and most frequently used form of medical imaging.
How does the procedure work?
X-rays are a form of radiation like light or radio waves. X-rays pass through most objects, including the body. Once it is carefully aimed at the part of the body being examined, an x-ray machine produces a small burst of radiation that passes through the body, recording an image on photographic film or a special digital image recording plate.
Different parts of the body absorb the x-rays in varying degrees. Dense bone absorbs much of the radiation while soft tissue, such as muscle, fat and organs, allow more of the x-rays to pass through them. As a result, bones appear white on the x-ray, soft tissue shows up in shades of gray and air appears black.
For example, on a chest x-ray, the ribs and spine will absorb much of the radiation and appear white or light gray on the image. Lung tissue absorbs little radiation and will appear dark on the image.
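A simple way to see why bone appears white and air-filled lung appears dark is the exponential attenuation relation I = I0*exp(-mu*x). The sketch below uses rough, illustrative attenuation coefficients; they are assumptions chosen for demonstration, not values taken from this page.

```python
import math

# Illustrative linear attenuation coefficients (per cm) at diagnostic x-ray energies.
# These are rough ballpark values chosen for demonstration only.
MU_PER_CM = {"bone": 0.55, "soft tissue": 0.20, "air-filled lung": 0.05}

def transmitted_fraction(mu, thickness_cm):
    """Fraction of the incident beam that passes through the material (Beer-Lambert)."""
    return math.exp(-mu * thickness_cm)

for tissue, mu in MU_PER_CM.items():
    frac = transmitted_fraction(mu, thickness_cm=3.0)
    print(f"{tissue:16s}: {frac:5.1%} of the beam reaches the detector")
```

The less radiation that reaches the detector behind a structure, the brighter (whiter) that structure appears on the final image.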
Until recently, x-ray images were maintained as hard film copy (much like a photographic negative). Today, most images are digital files that are stored electronically. These stored images are easily accessible and are sometimes compared to current x-ray images for diagnosis and disease management.
Before the Exam
- You may be asked to remove some or all of your clothes and to wear a gown during the exam.
- You may also be asked to remove jewelry, eye glasses and any metal objects or clothing that might interfere with the x-ray images.
- Women should always inform their physician or x-ray technologist if there is any possibility that they are pregnant. Many imaging tests are not performed during pregnancy so as not to expose the fetus to radiation. If an x-ray is necessary, precautions will be taken to minimize radiation exposure to the baby.
During the Exam
- Depending on the area to be examined you may be asked to lie on an examination table, sit on a stool or stand.
- The technologist, an individual specially trained to perform radiology examinations, will position the patient to best demonstrate the area to be examined.
- You must hold very still and may be asked to keep from breathing for a few seconds while the x-ray picture is taken to reduce the possibility of a blurred image.
- The technologist will walk behind a wall or into the next room to activate the x-ray machine.
After the Exam
When the examination is complete, you will be asked to wait until the radiologist determines that all the necessary images have been obtained.
The examination is usually completed within 15 - 30 minutes.
What will I experience during and after the procedure?
An x-ray examination itself is a painless procedure. You may experience discomfort from the cool temperature in the examination room and the coldness of the recording plate. The technologist will assist you in finding the most comfortable position possible that still ensures diagnostic image quality. | http://www.trilliumhealthcentre.org/programs_services/diagnostic_imaging/generalradiography.php |
4.75 | While it is difficult for students to observe asteroids directly, students of all ages can compare them to planets and to comets. Young students can compare the scale of asteroids to that of the planets, and older students can compare composition, orbits and more!
There are also other activities that can be tied to this topic.
For activities related to impacts, visit the Collisions and Craters in the Solar System: Impacts! topic's Classroom section. For activities related to the formation of planets and asteroids, please visit the Birth of Worlds topic's Classroom section.
Be sure to submit photographs, artwork, music, or words of students enjoying these activities to Share Your Stories.
The National Science Standards and Benchmarks present asteroids in grades older than K-4, but young students can make models of asteroids and compare their sizes to planets, or compare meteorites (pieces of asteroids) to rocks on Earth. If you discuss asteroid impacts in grades K-4, be alert to anxieties that younger children may have about potential asteroid impacts on Earth. (Science Education Standards
| Modeling Asteroid Vesta in 3-D || Students create a 3D model of Vesta using images, clay and other materials. |
| Vesta Flipbook || Animators build cartoons by flipping through a series of images over time. Make a flipbook using Vesta images to help you picture the asteroid spinning on its axis in orbit through space! |
| Meteorite Investigators || Children examine several rock samples to determine which are meteorites and which are not. |
| The Aster's Hoity Toity Belt || "The Aster's Hoity-Toity Belt," a compelling tale set in the Great Carousel of the Skies, tells of two friends, the gentle giant Ceres and feisty Vesta, as they find their place in the skyberhood. In addition to a supplemental activity, the Aster's story is available as a booklet handout, a story with space for imaginative illustrations and a version with learning notations. |
There are a variety of activities about asteroids and meteorites for this age group, which support different skills ranging from literacy to scientific inquiry. (Science Education Standards
| The History and Discovery of Asteroids || Learners will explore scientific discoveries and the technologies as a sequence of events that led eventually to the Dawn mission. This is a series of modules which incorporate strong literacy and mathematics components. |
| Exploring Meteorite Mysteries || Meteorites are pieces of asteroids that have fallen to Earth; they hold clues to the formation of our solar system. This set of activities investigates meteorite features, characteristics, their connection to asteroids, and the keys they hold to the formation of the planets. These activities are primarily hands-on modeling activities. |
| Space Math: So..How big is it? -- Asteroid Eros surface || Students calculate the scale of an image of the surface of the asteroid Eros from the NEAR mission, and determine how big rocks and boulders are on its surface. |
| The Hunt for Micrometeorites || Students collect and examine particles from the air using a microscope, and attempt to identify micrometeorites. |
These students can begin to analyze the data from Earth satellites to study Earth systems, and from planetary missions to deduce water's presence or absence on various bodies. They can explore water's role as a solvent to its necessity for life. (Science Education Standards
| Vegetable Light Curves || In the activity, "Vegetable Light Curves," students will observe the surface of rotating potatoes to help them understand how astronomers can sometimes determine the shape of asteroids from variations in reflective brightness. |
| Virtual Microscope || The Virtual Microscope is a free software download, providing access to a variety of advanced microscopes and specimens (including meteorites) requested by teachers. Virtual Lab completely emulates a scanning electron microscope and allows any user to zoom and focus into a variety of built-in microscopic samples. |
| Space Math: Close Encounters of the Asteroid Kind! || On September 8, 2010 two small asteroids came within 80,000 km of Earth. Their small size of only 15 m made them very hard to see without telescopes pointed in exactly the right direction at the right time. In this problem, based on a NASA press release, students use a simple formula to calculate the brightness of these asteroids from their distance and size. |
| Space Math: Meteorite Compositions: A matter of density || Astronomers collect meteorites to study the formation of the solar system 4.5 billion years ago. In this problem, students study the composition of a meteorite in terms of its density and mass, and the percentage of iron and olivine to determine the volumes occupied by each ingredient. |
| Summer Science Program || If you have high school students interested in research experiences, you can share the Summer Science Program (SSP) with them. SSP is a residential enrichment program in which gifted high school students complete a challenging, hands-on research project in celestial mechanics. By day, students learn college-level astronomy, physics, calculus, and programming. By night, working in teams of three, they take a series of telescopic observations of a near-earth asteroid, and write software to convert those observations into a prediction of the asteroid's orbit around the sun. Stimulating guest speakers and field trips round out the curriculum. |
| DPS Slide Set: Asteroid Detected Before Impact || This four-slide powerpoint by the Division of Planetary Science includes basic information for college-level introductory courses. | | http://solarsystem.nasa.gov/yss/display.cfm?Year=2011&Month=7&Tab=Classrooms |
4.25 | Rise of Slave Trade: Black History in Colonial America
In this lesson, you'll learn a little about the slave trade, the growth and characteristics of slavery in the colonial period - including laws regulating the institution and the population of free blacks in the English colonies.
Slavery in Africa
In 1619, a Dutch trading ship brought several Africans to Jamestown, Virginia - England's first American colony. They were sold as indentured servants. One of those original African servants, a man named Anthony Johnson, completed his indenture, bought land and prospered. Soon, he imported several of his own servants, including another African man named John Casor. Rather than freeing him after seven years like most indentured servants, Johnson claimed that Casor was his slave. The case went to trial, and Johnson won. So, in 1655, an African man became America's first owner of a permanent slave!
Slavery was not a new concept for Africans, but the nature of slavery in Africa at that time was completely different. Slaves were generally criminals, debtors or prisoners of war. They played an important role in society, they could hold jobs with authority and were often seen as members of the extended family. Their children could not be bought or sold. Plantation slavery was non-existent.
Most of the Africans who participated in the American slave trade - including the captives - had no clue that new world slavery had evolved into something very different.
Triangular Trade and the Middle Passage
Slavery was just one piece of England's triangular trade. English manufactured goods were sent to Africa, where they were traded for slaves. The slaves were then taken to the Americas, where they were traded for raw materials. The materials went to England to be used in the manufacture of more goods. The part of the journey from Africa to America was called the Middle Passage.
On tightly packed ships, slaves were chained together below deck. They sat down or laid down, side-by-side, sometimes with their heads between the feet of the next row. A slave who died lay chained to his neighbor until the following morning. With no windows below the water line, the heat and odor from body waste, blood and decay soon became suffocating. Disease spread quickly. After inspection on deck every morning, the dead and diseased were thrown overboard.
The crew took extreme measures to minimize revolts and suicides, which became more common as the journey progressed. A slave who refused to eat might be beaten to death or thrown overboard. The sharks that commonly followed slave ships were used as a terror weapon against the captives. Africans who spoke the same language were often separated to prevent them from plotting a mutiny. Others were muzzled.
After two to four months in these conditions, about half of the human cargo died. The other 10 to 50 million Africans were ready for auction in America.
The Growth of Slavery in the English Colonies
During the 17th century slavery was not as widespread in the thirteen colonies as it was in Spanish territory. English colonies generally depended on indentured servants.
That began to change near the end of the 1600s, especially in the south. First, conditions in England improved, and fewer people were willing to indenture themselves. Planters also began to realize that slaves were a better investment, since the workers didn't leave every seven years. Then, when a band of former servants burned down Jamestown during Bacon's Rebellion in 1676, Virginia's leadership began to worry about the growing class of poor freemen. Importation of servants declined, but slaves were increasing.
In 1655, there was one slave-for-life in Virginia. But within fifty years, more than a thousand new slaves entered the colony every year, and 4,000 more went to the other twelve colonies. By 1750, nearly 45,000 new slaves came to British America every year.
Slave Life and Culture
As with all colonists, slave life varied, depending on where a person lived and what his job was. In the north, slaves might work as cooks, maids, farm hands, gardeners, drivers or skilled laborers. These workers were generally healthier, received better treatment and were more highly valued than their counterparts in the fields. However, they had less privacy, worked seven days a week and were often ostracized by field hands.
America's first published black writer was a northern slave named Phillis Wheatley. Imported from Gambia when she was a child, Wheatley was sold to a Boston family who taught her to read, and encouraged her to write poetry. Her poems were often religious, written in classical style. Wheatley published her first poem in 1767, when she was just sixteen years old, and later became one of the most famous poets of her time.
Urban slavery also existed in the middle and southern colonies. But it was far more likely that a slave would end up on a plantation in the south.
Slaves on Chesapeake tobacco plantations typically worked together from sun-up to sun-down, six days a week. Their lives were guarded, but slaves were often worked to their physical limit and could be brutally punished. Physical relationships between slaves were encouraged - or even forced - in order to increase the population. But plantation slaves were more likely to be sold off, so marriages and families were often severed. Still, plantation slaves did have two advantages: they generally did not work on Sundays, and because plantations could have hundreds of slaves, they enjoyed a greater sense of community.
Slaves on rice plantations in South Carolina and Georgia might be part of the task system. Each worker was assigned a task acre to be completed each day. When the task was finished, so was the slave. This division of labor evolved because rice planters imported slaves from certain locations in Africa where rice was farmed. But, the new arrivals brought diseases that the white population had no resistance to. Owners moved their houses away from the fields, and sometimes left the plantation completely during a rainy season. Overseers managed the plantation in the owner's absence. The task system kept the plantation running with less effort on the overseer's part.
These slaves might work for months without ever seeing their owners, and an efficient slave had a lot of free time. Since imported slaves continually refreshed native traditions, these plantations developed vibrant slave cultures with distinctive forms of music, dancing, religion and even language.
The farther south you went in the colonies, the greater the number of slaves, the more distinctive their culture, the more fearful the whites and the more repressive the slave codes.
Beginning in 1662, African American children automatically took on their mother's legal status, whether slave or free. Soon, slaves were declared real estate. Using the logic that no one would purposely destroy his own valuable property, a slave's death at the hands of his owner was considered accidental. Slaves could not be used to work as clerks or in any position handling money. Laws forbid educating slaves or paying them any wages for extra work. They could not leave their owner's property without a pass, drink alcohol, own weapons or livestock, grow certain crops in their own gardens or wear nice clothing. There were mandatory punishments for violations, and whites who refused to comply could be fined, publicly beaten, have their property confiscated or even be exiled from the colony.
In time, 'black' became socially equal with 'slave,' so even though there were free blacks throughout the colonies, they too became legally inferior to whites. Besides being racist, whites knew that freemen were the most likely to help runaway slaves, so they did whatever they could to discourage a free black population. If a Virginia plantation owner wanted to free a slave, he had to pay for the person's passage out of the colony. In South Carolina, the legislature had to approve emancipation. Freemen could not work in stores, own horses or hogs, they could not own slaves or hire white servants and they could not marry a white person.
Free African Americans
Back in 1655, when Anthony Johnson became America's first legally recognized slave owner, there were at least twenty other free black men and women in Jamestown. Many of them were landowners. But that became increasingly uncommon as laws and attitudes changed throughout the south. Johnson's death prompted another landmark decision, that blacks were not citizens. The court concluded that he was 'a negro and by consequence, an alien.' A white man seized Johnson's land from his heirs.
There were many more free blacks in the north. When New Netherland was taken over by the English in 1664, the Dutch emancipated all of their slaves, creating a significant free black population. Northern slave owners were much more likely to free slaves in their wills, or to allow slaves to purchase their own freedom with money they made working in their spare time. However, free blacks were not considered social equals and were often accused of unsolved crimes.
Slavery legally existed in the north until the Civil War, but most northern slaves were freed during the American Revolution, either by British troops, or by colonial governments who exchanged army service for freedom. When the United States took its first census in 1790, eight percent of the African American population was free.
Let's review: though slavery has existed throughout time, slavery in the American colonies was much harsher than it had been practiced in Africa. England brought slaves to America as part of the triangular trade network. The portion of the journey between Africa and America was called the Middle Passage. Half of the captives onboard died because of the horrible conditions, but increasing numbers of enslaved Africans still reached the American shores throughout the 17th and 18th centuries.
Most northern slaves - and some in other colonies - worked in an urban setting. Phillis Wheatley was a literate urban slave and became the first African American to publish her poetry. Most captives, however, ended up on southern plantations. Rice plantations developed the task system in which slaves enjoyed more free time and a distinct culture. But, whites became fearful of the growing slave population and passed slave codes in their favor. Even the limited number of free black colonists found their rights being eroded.
| http://education-portal.com/academy/lesson/rise-of-slave-trade-black-history-in-colonial-america.html |
4.1875 | Noble Gases are the group of six gaseous chemical elements constituting the group 18 (or VIIIa) of the periodic table. They are, in order of increasing atomic weight, helium, neon, argon, krypton, xenon, and radon.
For many years chemists believed that these gases, because their outermost shells were completely filled with electrons, were inert - that is, that they would not enter into chemical combinations with other elements or compounds. This is now known not to be true, at least for the three heaviest inert gases - krypton, xenon, and radon. In 1962, Neil Bartlett, a British chemist working in Canada, succeeded in making the first complex xenon compound. His work was confirmed by scientists at Argonne National Laboratory in Illinois, who made the first simple compound of xenon and fluorine (xenon tetrafluoride) and later made radon and krypton compounds. Although krypton compounds were made with considerable difficulty, both xenon and radon reacted readily with fluorine, and additional reactions to produce other compounds of xenon and radon could be accomplished.
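The Argonne synthesis can be summarized by a single textbook equation (given here only as an illustration; reaction conditions such as heating are omitted):

Xe + 2 F2 -> XeF4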
The forces between the outermost electrons of these three elements and their nuclei are diluted by distance and the interference of other electrons. The energy gained in creating a xenon or radon fluoride is greater than the energy required for promotion of the reaction, and the compounds are chemically stable, although xenon fluorides and oxides are powerful oxidizing agents. The usefulness of radon compounds is limited because radon itself is radioactive and has a half-life of 3.82 days. The energy gain is also greater in the case of krypton, but only slightly so. Compounds of helium, neon, or argon, the electrons of which are more closely bound to their nuclei, are unlikely to be created.
Liquefied noble gases under pressure, particularly xenon, are employed as solvents in infrared spectroscopy. They are useful for this because they are transparent to infrared radiation and therefore do not obscure the spectra of the dissolved substances. | http://everything2.com/title/noble+gas |
4.28125 | Concept of soil quality
The concept of soil quality has been developed to help quantify factors that affect the ability of soil to function effectively in a variety of roles. The primary measures of this effectiveness are enhanced biological productivity, environmental quality, and human and animal health. Rapid population growth has demanded an increased emphasis on enhancing biological productivity, but if soil quality is to be improved, we must simultaneously achieve the other two goals as well.
The ongoing degradation of natural resources (erosion, salinization, contamination of ground and surface waters) is closely associated with a loss of soil quality. The concept of soil quality is defined as “the capacity of a specific kind of soil to function within natural or managed ecosystem boundaries to sustain plant and animal productivity, maintain or enhance water and air quality, and support human health and habitation” (Karlen et al. 1997). This definition provides a focal point for assessing the intensity of soil degradation. Soils have various levels of quality that are defined by stable features related to soil forming factors and dynamic changes induced by soil management. | http://soilweb.landfood.ubc.ca/luitool/soil-quality |
4.5625 | Genetics: Punnett Squares
Learn about the Punnett square chart and how it is used for successful breeding and predictions of certain traits. Learn about incomplete dominance and codominance, dominant and recessive alleles, genotype and phenotype. Includes interactive Punnett square charts, quizzes, problems, matching, and concentration games. There are links to eThemes resources on the basics of genetics, DNA, and blood types.
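For readers who want to see the mechanics behind these charts, a Punnett square is simply every pairing of one allele from each parent. Here is a minimal sketch in Python; the function name and the Bb x Bb monohybrid cross are only illustrative examples and are not taken from any of the linked activities.

from collections import Counter
from itertools import product

def punnett_square(parent1, parent2):
    """Cross two genotypes (e.g. 'Bb' and 'Bb') and return offspring genotype fractions."""
    counts = Counter()
    for allele1, allele2 in product(parent1, parent2):
        # Sort the pair so 'bB' and 'Bb' count as the same genotype.
        counts[''.join(sorted(allele1 + allele2))] += 1
    total = sum(counts.values())
    return {genotype: n / total for genotype, n in counts.items()}

# Classic monohybrid cross: both parents carry one dominant (B) and one recessive (b) allele.
print(punnett_square('Bb', 'Bb'))   # {'BB': 0.25, 'Bb': 0.5, 'bb': 0.25}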
World Builders: Punnett Squares
This page offers simple and easy to understand explanations of how Punnett squares are set up. Select the "Chromosome Kindergarten" link for fun practice. Includes illustrations.
Punnett Squares: Breeding Albino Lemming
This is an interactive Flash lesson on genetics and Punnett squares. Use your knowledge of dominant and recessive alleles to breed albino lemmings online, using first one and then two traits.
Punnett Square Examples
Use this interactive Punnett squares to work with examples of probability in genetics: parents and children. Select the highlighted "Punnett Squares" link to learn more on how genes combine.
Principles of Genetics: Lesson 12
Students can study this lesson on sex determination, chromosomes, and mosaicism.
Quia: Genetics Vocabulary
Test your knowledge of genetics vocabulary. Includes flashcards, matching, concentration, and word search games.
Print out the worksheet and match terms with their definitions.
Basic Principles of Genetics
Find out about the principles of genetics. Select the "Probability of Inheritance" link to learn how to set up and work with Punnett squares. Scroll down the page and click on the "Practice Quiz" link for a short online quiz. Select the "Topic 3" link under the Flashcard section on the main page to check knowledge of concepts and definitions. Includes a glossary of terms with audio files for correct word pronunciation. NOTE: The site has a link to sites with ads.
Quia: Genetics Terms
Here are more flash cards with more genetics terms.
Artificial Life and Virtual Pets
Learn more about dominant and recessive genes. NOTE: Teachers or students need to register to play the games.
Practicing Punnett Squares
Here are practice exercises for Punnett squares.
Practice Quiz for Probability of Inheritance
Students can take this self-grading quiz.
eThemes Resource: Genetics: Genes and DNA
Learn about DeoxyriboNucleic Acid, commonly known as DNA. Discover the importance of the DNA strand, chromosomes, and genes. Read about the people whose past discoveries made possible what scientists are able to achieve today. Find out how many of the same chromosomes people share with each other and with the rest of the living creatures. Learn about the Human Genome Project and surrounding issues. Includes online quizzes, interactive tours, 3D animations, and video clips.
eThemes Resource: Genetics: Inherited Blood Diseases
Learn about genetics of inherited blood diseases. Find out how genes can produce errors and pass on affected genes from parents to their offspring. The following web sites discuss sickle cell anemia, hemophilia, Huntington's disease, cystic fibrosis, and phenylketonuria (PKU). Learn how people can live and treat these blood disorders.
eThemes Resource: Genetics: Basics
Learn about the basics of genetics and the father of genetics, Gregor Mendel. Find out what heredity is and what genotype and phenotype are. Learn about different types of crossing and breeding, traits, genes, and chromosomes. Includes an animated movie, matching and concentration games, quizzes, and flashcards.
Request State Standards | http://ethemes.missouri.edu/themes/1015 |
4.125 | Science Fair Project Encyclopedia
Ultramafic rocks are igneous rocks with very low silica content (less than 45%) and are composed of usually greater than 90% mafic minerals (dark colored, high magnesium and iron content). Ultramafic rocks are typical of the Earth's mantle.
Rock types include intrusive dunite and peridotite and rare volcanic komatiite and picrite. Most surface exposures of ultramafic rocks occur in ophiolite complexes where deep mantle-derived rocks have been obducted into continental crust along and above subduction zones.
Where ultramafic rocks are exposed on the surface, the high metal content of the rocks creates unique vegetation. Examples are the Ultramafic woodlands and Ultramafic barrens of the Appalachian mountains and piedmont, the "wet maquis" of the New Caledonia rain forests, and the Ultramafic forests of Mount Kinabalu and other peaks in Sabah, Malaysia. Vegetation is typically stunted, and is sometimes home to endemic species adapted to the metallic soils.
| http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Ultramafic |
One of the greatest challenges faced in today's classroom is how to deal with stress. Most course work is geared toward the basics and does not provide for varying learning levels, and must be approached as such; it also does not provide for the emotional well-being of the student or teacher. Dealing with stress in the classroom is very important for the teacher and the student. Stress level is a major factor in a student's cognitive learning ability. It also can be the cause of a student becoming a discipline problem that disrupts an entire class. The emotional well-being of the student is directly intertwined with stress levels. So, students' learning potential and manageability are often correlated with a student's stress level.
For simplicity, a universal approach must be taken in designing an educational structure to help this situation. This unit will provide an easy way to begin to deal with some of the stress that occupies the adolescentís mind.
This unit will borrow from all the current findings, combine them with drama, and take a form that can be easily used in the classroom to help students cope with stress. It will supply a light in the darkest part of the tunnel. It will produce laughter, tears, and hope. The unit will only need one hour of classroom time a week.
(Recommended for any classroom setting, grades 7-10)
Stress Adolescence Drama Exercises Performances Psychology Self-Awareness | http://www.yale.edu/ynhti/curriculum/guides/1991/5/91.05.02.x.html |
4.125 | Source: Bigelow Laboratory for Ocean Sciences
This interactive game adapted from the Bigelow Laboratory for Ocean Sciences challenges players to build a food web, a complex model that shows how various food chains in an ecosystem are connected. Players must position the names of producers and consumers in the correct places in a diagram. The completed diagram reveals how energy flows through an Antarctic ecosystem and the relationships between predators and prey.
Ecosystems are composed of living things and the physical environment with which they interact. Different species within an ecosystem have different functions that help cycle energy from the Sun—the source of nearly all energy that is critical to life on Earth—through it. Some species are producers, which convert sunlight into chemical energy through photosynthesis. Others are consumers, which feed on producers (and other consumers). These species can be further organized into groups or levels called trophic levels. Organisms at higher trophic levels feed on those at lower levels. For example, in a marsh ecosystem, grasses produce food directly from sunlight, grasshoppers feed on marsh grasses, and shrews eat grasshoppers.
Simple models called food chains depict one possible path along which energy moves through an ecosystem—as from producer A to consumer B to consumer C. Most consumers have more than one food source, however. Therefore, a more complex model called a food web is used to show how the various food chains in an ecosystem are connected. In the Antarctic ecosystem, both algae and small, shrimp-like crustaceans called krill are connected by arrows to fish, which consume them. Because krill are also consumed by birds and blue whales, arrows connect to each of those animals as well as to fish. In all cases, arrows point in the direction in which energy moves to the consumer. Because killer whales eat blue whales, fish, birds, and seals, and because killer whales have no natural predators in the Antarctic ecosystem, killer whales are said to be the Antarctic food web's top predator.
Two other essential members of the food web are decomposers and scavengers. These are an ecosystem's primary recyclers, which feed on dead plant and animal life, breaking down organic waste material and returning essential elements, including nitrogen and phosphorous, to the ecosystem.
Organisms within an ecosystem rely on others as food sources, so any disturbance in population can have broad and lasting effects. For example, in the Antarctic food web, if krill were to vanish from Antarctic waters, blue and other baleen whales would follow. These species feed exclusively on krill. Penguins and seals feed on krill in part, but also on other animals that depend exclusively on krill, so they would be affected as well. Without krill, other primary consumers, including zooplankton, would be consumed in greater volume. With too much competition and not enough food, many different animals would ultimately disappear.
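One way to make the structure of a food web concrete is to store it as a directed graph, with an edge from each food source to each of its consumers. The Python sketch below encodes only the Antarctic relationships named in the passage above; the dictionary and helper-function names are made up for illustration, and this is a toy model, not a complete ecological one.

# Each key maps to the set of species that eat it (edges point from prey to consumer).
antarctic_food_web = {
    "algae":      {"fish"},
    "krill":      {"fish", "birds", "blue whale", "penguins", "seals"},
    "fish":       {"killer whale"},
    "birds":      {"killer whale"},
    "seals":      {"killer whale"},
    "blue whale": {"killer whale"},
}

def consumers_of(species):
    """Return the species that feed directly on the given one."""
    return antarctic_food_web.get(species, set())

def is_top_predator(species):
    """A top predator has no consumers anywhere in the web."""
    return not consumers_of(species)

print(sorted(consumers_of("krill")))      # everything that eats krill
print(is_top_predator("killer whale"))    # True: no natural predators in this web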
| http://www.teachersdomain.org/resource/lsps07.sci.life.eco.oceanfoodweb/ |
4.03125 | Taking their cue from fish, scientists have built a navigational aid that will help robots and remote sensors find their way around the world's vast oceans.
The team describes its research today in the UK Institute of Physics publication Journal of Micromechanics and Microengineering.
Fish and many amphibian animals find their way through even the murkiest of waters, navigate raging torrents and spot obstacles, predators and prey using a sensory organ known as the lateral line system.
Sometimes known as the fish's sixth sense, the lateral line is a system of thousands of tiny hair cells that run the length of the fish's body. The lateral line responds to fluid flow around the fish and allows it to detect obstacles and sense the movement of water even in complete darkness.
Now, electrical engineer Chang Liu, entomologist Fred Delcomyn and their colleagues at the University of Illinois at Urbana-Champaign have developed an artificial lateral line that could give underwater vehicles and robots a sixth sense.
Robots equipped with the lateral line system will be able to navigate and feel in water.
The artificial lateral line was built by micromachining a sliver of silicon so that three-dimensional hairlike structures are formed on its surface.
The hair cells in a fish's lateral line are each connected to a nerve cell and, by analogy, Liu and Delcomyn have connected each of their silicon hairs via a micro-hinge to an electronic sensor.
When the artificial lateral line comes into contact with moving water, the silicon hairs are bent slightly depending on the rate of flow and the sensors detect the degree and direction of bending. A computer then interprets this movement to build up a picture of the flowing water, much as it does in the fish's brain.
The artificial lateral line the researchers are developing has 100 silicon hairs per square millimeter.
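To get a feel for how such an array might be read out, here is a toy sketch: each hair reports a deflection vector, and averaging the vectors gives a crude estimate of local flow direction and strength. The readings and function names below are hypothetical; the article does not describe the actual signal processing used by the Illinois group.

import math

# Hypothetical readings: each hair reports a deflection in the x and y directions,
# roughly proportional to the local flow past it.
hair_deflections = [(0.8, 0.1), (0.9, 0.2), (0.7, 0.0), (0.85, 0.15)]

def estimate_flow(deflections):
    """Average the hair deflections to estimate flow direction (degrees) and magnitude."""
    n = len(deflections)
    mean_x = sum(d[0] for d in deflections) / n
    mean_y = sum(d[1] for d in deflections) / n
    magnitude = math.hypot(mean_x, mean_y)
    direction = math.degrees(math.atan2(mean_y, mean_x))
    return direction, magnitude

direction, magnitude = estimate_flow(hair_deflections)
print(f"flow direction ~{direction:.1f} degrees, relative strength {magnitude:.2f}")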
"This arrayed sensor will provide a unique fluid mechanics measurement tool," says Liu. "We are collaborating with marine researchers at Massachusetts Institute of Technology to apply the sensors to autonomous underwater vehicles." He adds that "The lateral line sensor might also help marine biologists to understand better the functions of biological lateral line sensors."
(Reference: Journal of Micromechanics and Microengineering, Vol 12, Issue 5.)
[Contact: Chang Liu, Dianne Stilwell] | http://www.unisci.com/stories/20022/0624024.htm |
Polar Stratospheric Clouds (PSC) form in the stratosphere at altitudes between 20 and 30 km. The formation of PSC requires very low temperatures. That is why they only appear in winter and mainly over Scandinavia, Scotland, Alaska and Antarctica.
In general there are two types of Polar Stratospheric Clouds:
A mostly pastel-coloured iridescence on small ice crystals of lenticular clouds at altitudes between 20 and 30 km is called mother-of-pearl clouds. They are best visible shortly before sunset or after sunrise, at a distance of 10° to 20° from the sun. But they can also be observed up to 2 hours after sunset, which shows that they are at very high altitudes. They form when a current of air passes over an obstacle, for example a mountain range. This sets the air current oscillating, and when the atmosphere is stably stratified, steady waves form in the lee of the mountains. In these lee waves the air streams up and down several times. In the areas where it moves up, the air expands and cools down, so vapour can condense and form clouds. Because the stratification of the atmosphere in northern latitudes is extremely stable, the wave motion reaches up to the uppermost layers of the atmosphere. As temperatures only seldom drop far enough, mother-of-pearl clouds form only from time to time. According to recent observations, however, they appear more often over the Arctic and Antarctica than was previously thought.
It is supposed that dust in the stratosphere favours the formation of mother-of-pearl clouds because small dust particles are very good sublimation nuclei for water molecules. In Scandinavia, mother-of-pearl clouds can be observed almost every winter. Thanks to the Scandinavian Mountains blocking the westerly winds, Finnish observers have recorded more than 50 appearances of these clouds within 12 years. There has been speculation about the possibility of mother-of-pearl clouds over Germany as well. Theoretically, conditions are at times suitable in winter in our latitudes too, especially in the north of Germany, where the climate of the upper layers of the air is still influenced by the Scandinavian Mountains. But observation reports from Germany are very few. From the literature we know of only one such case: the German "Astronomy News" from 1910 reported that on May 19, 1910, shortly after Comet Halley had passed the earth, such clouds had been observed. We would be very grateful for further bibliographic references to observations of mother-of-pearl clouds over Germany.
On December 1, 1999, shortly after sunrise, Heino Bardenhagen watched a sky over Helvesiek that gave him the impression of a wavy water surface that faintly reflected the sunlight, like a pond turned upside down. What he observed was probably another kind of stratospheric cloud. Only at temperatures around -75°C can nitric acid (HNO3), which the atmosphere contains in small amounts, condense. It forms very thin, fibrous-looking fields of clouds which often extend over thousands of kilometres. Since similar fields of clouds had been observed that night over a large area reaching from the centre of Scandinavia down to northern Germany, he might have observed these so-called Nitric Acid Trihydrate clouds (NAT). The Institutes for Environmental Physics of the universities in Heidelberg and Bremen and the Norwegian Institute for Air Research gave the following information in their ozone bulletins for the corresponding period: while during the previous winter the stratosphere was relatively warm and only a very low activation of chlorine could be measured, the stratosphere cooled down rapidly by the end of 1999, so that in mid December the formation of widespread polar stratospheric clouds became possible. Many ground stations have already observed polar stratospheric clouds. Meteorological temperature analyses from January 2000 show that the areas with temperatures below 195 K (-78°C) at an altitude of 20 km in the northern hemisphere have never been so large as they are this year.
Those low temperatures are connected with the extreme conditions in the polar areas, because in winter the mass of air above the poles is completely isolated from the other global streams of air. As soon as the sun disappears below the horizon for several months in late autumn, an intense westerly flow of air forms around the pole, called the polar vortex. This polar vortex forms an annular stream of air obstructing the exchange of air with the rest of the atmosphere. Only because of this isolation can the stratospheric temperatures in this area drop so much. The polar vortex is especially well defined in Antarctica because of the great mass of land around the South Pole. The vortex over the Arctic, and the processes connected with it, are in general not so well defined.
It had been predicted that there would be a record loss of ozone over the poles in the following months because, according to the latest scientific research, stratospheric clouds play an important role in ozone decomposition. Under normal conditions the chlorine coming from the CFCs released by man is bound in the so-called chlorine reservoirs. These are chemicals which contain chlorine atoms but do not contribute to ozone decomposition. The most important chlorine reservoirs are hydrochloric acid (HCl) and chlorine nitrate (ClONO2). Hydrochloric acid is formed by the reaction of chlorine (Cl) with methane (CH4). Chlorine nitrate is formed from chlorine monoxide (ClO) and nitrogen dioxide (NO2). Without these two chemicals, which bind almost all of the chlorine in the atmosphere, a lot more ozone would be decomposed in the atmosphere than really is. So, according to current knowledge, the ozone hole forms because, under the special conditions of the polar areas in winter, chlorine is set free from the reservoirs. The chemical reactions happening on the surfaces of the ice crystals of the clouds are very different from those in the air. Here the two reservoir substances hydrochloric acid and chlorine nitrate can react with each other and set free chlorine and nitric acid. In winter, the chlorine molecules stay in the air without any modification and still do not contribute to ozone decomposition. The nitric acid is bound in the ice crystals of the clouds and in that way forms the NAT clouds described above.
As long as the chlorine exists as molecules, there is no ozone decomposition. But as soon as the sun rises in the Arctic spring, the chlorine molecules are dissociated by ultraviolet radiation (wavelengths below 450 nm); that is, they are split into highly reactive chlorine atoms. This sets free large amounts of chlorine atoms within a short time and starts an avalanche-like decomposition of ozone which finally leads to the formation of the ozone hole.
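The sequence described in the last two paragraphs can be written as a short, simplified reaction scheme (a standard textbook summary, not taken from the original article):

HCl + ClONO2 -> Cl2 + HNO3   (on the ice surfaces of polar stratospheric cloud particles)
Cl2 + ultraviolet sunlight -> 2 Cl   (chlorine set free in spring)
Cl + O3 -> ClO + O2   (the start of catalytic ozone destruction)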
So the observation of NAT clouds allows many conclusions about the chemical reactions in the upper atmosphere. In the case of the present observation, the conditions of the stratosphere are documented rather well, and there are also some other observations of similar cloud formations from that morning and the night before from Scandinavia. So it is not improbable that this might have been the first photographic documentation of NAT clouds in our latitudes.
Author: Claudia Hinz
Links about "PSC": | http://www.meteoros.de/psc/psce.htm |
4.34375 | Activities and Suggestions
Alphabet > Letter I (long I) > I is for Island
Beach | Seashore
Seasons > Summer
Here are printable materials and some suggestions to present letter I (long i sound). The presentation ideally should be part of island themed activities and/or crafts.
Social studies > Geography > What is an island
Ask children what is an island. An island is an area of land smaller than a continent and surrounded by water on all sides. Another word for island is isle. An islet is a tiny island.
Look at an atlas or globe and point out to some islands. Choose an island to explore online, this can be an island that is a territory of a country, for example Hawaii (a U.S.A. state) or another country, such as a Caribbean island. The island topic is also appropriate within a pirate theme.
Alphabet Activity: Alphabet Letter I is for Island
Present and display your option of printable materials listed in the materials column.
Explain that the word island exhibits the long i sound. The long i sound is the same as the name of the letter I. This is also true of all long vowel sounds.
* Finger and Pencil Tracing:
Trace letter I's in upper and lower case with your finger as you also sound out the letter. Invite the children to do the same on their coloring page (first presentation) or handwriting practice worksheet.
Encourage the children to trace the dotted letters with your choice of sharpened crayon, fine tip marker, coloring or regular pencil and demonstrate the direction of the arrows and numbers that help them trace the letter correctly. During the demonstration, you may want to count out loud as you trace so children become aware of how the number order aids them in the writing process.
*Find the letter I's: Have the children find all the letter I's in upper and lower case on the page and encourage them to circle or trace/shade them first. Visit each child to make sure they have identified the letter I's and then discuss the locations with the poster.
*Coloring Activity: Encourage the children to color the image in the coloring page or worksheet.
Advanced independent handwriting practice:
Print your choice of lined paper, or drawing and writing paper to draw and color an island scene and practice handwriting.
Letter I words (long i sound): Letter I Activity Worksheet and Mini-Book This page and matching mini-book can be used as part of Letter I program of activities to reinforce letter practice and to identify related letter I words. Read suggested instructions for using the worksheet and mini-book.
Discuss other letter I words and images: First 'brainstorm' and ask the children about other words that have that beginning sound and write them on a board (dry erase board) as the children come up with example. You can print letter I in a different bright color to make it stand out. If you have illustrated alphabet books you can also use images in them. You can also display other I posters and coloring pages or even make a letter I classroom book using coloring images or color posters. Visit Letter I Printable Materials to make your choice.
*coloring and writing materials
To view updates to these activities visit: http://www.first-school.ws/activities/alpha/i/island.htm | http://www.first-school.ws/activities/alpha/i/island.htm |
4.03125 | Hurricanes Katrina and Rita may have been unlikely saviours for the coral reefs under their paths, say researchers. They have found evidence that the cooling effect hurricanes have on sea temperatures may help corals recover from the bleaching caused by warming oceans.
Coral reefs get their colour from tiny algae called zooxanthellae that live within them.
The corals and the algae live in symbiosis, but if the corals become stressed they can expel the algae - which results in coral bleaching. One source of stress to corals is high sea temperatures, which is why global warming is predicted to bring about widespread coral bleaching.
In the North Atlantic, warmer temperatures at the ocean surface also help hurricanes form. Now, Derek Manzello, at the US National Oceanic and Atmospheric Administration, and colleagues have shown that hurricanes cool temperatures and may assist coral recovery.
Manzello documented the relationship between hurricanes and sea surface temperatures in the Florida Keys archipelago since 1988 and found that, on average, a hurricane will cool sea temperatures by 1.5°C for 10 days. "In relation to coral bleaching, that amount of cooling is pretty significant," he told New Scientist.
To see exactly how significant, Manzello compared his data for 2005 with data collected in the US Virgin Islands by his colleague Tyler Smith.
The Florida Keys suffered from an unusual number of hurricanes in 2005, including hurricanes Katrina and Rita. But the US Virgin Islands were untouched - the closest storm was a tropical depression that passed more than 400 kilometres offshore in October.
Manzello and Smith found that in September, coral bleaching levels were similar in both places. But while the coral reefs in the Virgin Islands did not recover until 2006, those in Florida began to regain colour in October.
"Cooling due to Hurricane Rita, between 21 September and 27 September, appears to have facilitated the recovery of corals in Florida," say the researchers. "Then, in late October the passage of Hurricane Wilma caused a large decrease in temperature - about 2.6°C - followed by rapid bleaching recovery."
By November 2005, coral bleaching was peaking in the Virgin Islands, but had returned to June levels in Florida.
"It is a controversial debate at the moment, but if the frequency of hurricanes increases with global warming, then the negative effects that are expected for coral bleaching [due to ocean warming] could be mitigated by the cooling that the hurricanes bring about," says Manzello.
Researchers are not sure how climate change will affect hurricanes in the Atlantic (see Global warming link to hurricanes likely but unproven). Some say it will increase the number of hurricanes, others say it will increase the intensity but not the frequency, while others still say it is simply too early to say.
A recent study suggested that the busy 2004 and 2005 hurricane seasons were just a return to the norm of previous centuries (see Coral reveals increased hurricanes may be the norm).
Manzello says that if climate change does bring more intense hurricanes, it is likely to be a double-edged sword: stronger hurricanes tend to be larger and so would cool a larger area. But their sheer force tends to destroy coral reefs.
Journal reference: Proceedings of the National Academy of Sciences (DOI: 10.1073/pnas.0701194104)
| http://www.newscientist.com/article/dn12177-hurricanes-may-be-unlikely-saviours-of-coral-reefs.html |
History records numerous attempts to determine the age of the Earth. Archbishop Ussher of Ireland in 1664 argued that the Earth was "born" on October 26, 4004 BC at 9:00 in the morning.
Lord Kelvin (1866) assumed that the Earth was originally molten and that it had taken from 100 million to 20 million years for the Earth to cool to its present temperature distribution. Kelvin's arguments struck a blow at the prevalent geological theories of the time that argued for a much older Earth. Kelvin assumed that there was no source of "new" heat within the Earth.
With the discovery of radioactive isotopes at the end of the previous century Kelvin's hypothesis was abandoned as it was shown that the decay of unstable radioactive isotopes was accompanied by the release of heat energy. Today, most geoscientists believe that the Earth formed about 4.6 billion years ago. It is important to note that the oldest continental crust is approximately 4.0 b.y. and, in general, the volume of rock varies inversely with the age of the rock. That is, with increasing age, the amount of rock of that age decreases. This is supported by the fact that the Earth is a dynamic planet and that the processes introduced in the section on Plate Tectonics have resulted in a recycling of crustal rocks.
Age of the Earth
Hutton (1790) coined the concept of uniformitarianism - "the present is the key to the past". Present processes (such as wind, water, glaciers, etc.) have operated throughout geologic time and that the physical and chemical laws which govern these processes have been operative throughout Earth history. The rates of these processes, however, have not remained constant. For example, land plants first appeared on Earth some 400 million years ago. Think about the effect of moving water and wind across a land surface on which there was no vegetation. Vegetation acts to reduce the effectiveness of these agents of erosion and one would predict that these agents may have been more effective prior to 400 million years ago.
Putting events in their proper order is a common occurrence. I am (probably) older than you and even if we don't tell our actual ages this can be a helpful piece of information. Following Hutton in the late 1700s and into the early 1800s, several important principles were established which made it possible to arrange geologic events into a relative scale.
Application of the Use of the Principles of Relative Age Determination
In a previous chapter it was noted that a great deal of time is most likely missing from a stack of sedimentary rocks. When major (millions of years) amounts of time are missing across a boundary these breaks are called unconformities. From your readings you should be able to define or sketch the following types of unconformities:
- disconformity
- non-conformity
- angular unconformity
An animation to explain the three main types of unconformities - disconformities, non-conformities, and angular unconformities.
When you press the disconformity button, the sequence starts with a period of deposition of horizontal strata. There follows a period of erosion, and then more sedimentation. After choosing non-conformity, the sequence again starts with sedimentation but is followed by uplift and complete erosion of the cover, creating a basement topography upon which additional strata are deposited. In the third case, there is a tectonic tilting of early strata before deposition of the later sequence.
Be able to locate each of these three unconformities on the cross section shown above.
Radioactive decay is the process whereby an unstable parent atomic nucleus is spontaneously transformed into an atomic nucleus of another element.
Heat is also emitted when a parent nucleus undergoes one of these types of decay. Lord Kelvin underestimated the age of the Earth because he was not aware of this internal heat source.
- alpha decay - two protons and two neutrons are emitted from the nucleus. This reduces the atomic number of the parent by 2 and the mass number of the parent by 4.
- Beta decay - an electron is emitted from a neutron in the nucleus changing the neutron to a proton. This increases the atomic number of the parent element by 1 but does not change the atomic mass number.
- Electron capture - occurs when a proton captures an electron and changes into a neutron. The atomic number of the parent element is decreased by 1 but the mass number is unchanged.
There are several radioactive decay schemes which have been useful in determining the absolute age of geologic events. Each decay scheme can be characterized by the half-life - the length of time it takes for half of the parent atoms to decay to daughter atoms.
For example, if the half life of the decay scheme A goes to B is 100,000 years the changes in amounts of A and B as a function of time are given below, assuming that we start with 1,000,000 atoms of A:
- 1 -100,000 years - 500,000 A and 500,000 B
- 2 -200,000 years - 250,000 A and 750,000 B
- 3 -300,000 years - 125,000 A and 875,000 B
- 4 -400,000 years - 62,500 A and 937,500 B
- 5 -500,000 years - 31,250 A and 968,750 B
- 6 -600,000 years - 15,625 A and 984,375 B
Sketch a plot of time units (number of half-lives) versus the percentage of A atoms remaining. Note that the relationship is not linear. When a decay scheme has gone through 6 or 7 half-lives there is very little of the parent atom left. Elements with relatively short half-lives that were present when the Earth formed have literally decayed themselves out of existence (at least out of the limits of detectability). If these elements decayed to a daughter element that could have been produced by no other decay scheme then we can infer the former existence of the parent from the presence of the daughter.
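The same table can be generated directly from the exponential decay law N(t) = N0 * (1/2)^(t / half-life). Here is a short Python sketch using the numbers from the example above; the function name is only illustrative.

def atoms_remaining(initial, half_life, elapsed):
    """Number of parent atoms left after `elapsed` time for a given half-life."""
    return initial * 0.5 ** (elapsed / half_life)

initial = 1_000_000      # starting parent atoms (A)
half_life = 100_000      # years, as in the example above

for n in range(1, 7):
    t = n * half_life
    parent = atoms_remaining(initial, half_life, t)
    daughter = initial - parent
    print(f"{t:>7} years: {parent:>9,.0f} A and {daughter:>9,.0f} B")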
Animation of the Half-Life Concept explains the concepts of radioactive decay and half life using a simple cubic array of atoms. The parent element is shown in blue and the daughter in purple. Note how the time for half the atoms to decay remains constant.
Run Eryn Klusko's Simulation of Radioactive Decay and try the "pennies" experiment.
The following parent/daughter pairs have been used to "date" different types of geologic events. In all cases it is necessary to assume that none of the parent was removed in any way other than by radioactive decay. Similarly, one must assume that none of the daughter was removed.
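Given those assumptions, the age follows directly from the measured parent/daughter ratio, since every daughter atom present today was once a parent atom. A small sketch follows; the 100-million-year half-life is a made-up number for illustration only and does not correspond to any particular dating scheme.

import math

def age_from_ratio(parent_atoms, daughter_atoms, half_life):
    """Age of a sample, assuming all daughter atoms come from decay of the parent."""
    original_parent = parent_atoms + daughter_atoms
    return half_life * math.log2(original_parent / parent_atoms)

# Example: equal amounts of parent and daughter means exactly one half-life has passed.
print(age_from_ratio(500, 500, half_life=100e6))   # one half-life = 100,000,000 years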
The Geologic Time Scale
The work of many geologists culminated in the development of the Geologic Column. You should "memorize" the column in the text along with several of the important events discussed in lecture. As originally proposed, the Geologic Column allowed assignment of relative ages only. With the discovery of radioactivity, several methods for determining absolute ages evolved. Now, ages can be assigned to the boundaries of the Column.
The figure at the top of this page shows a gabbro sill that has intruded a section of limestone. Relative dating techniques indicate that the sill is younger than the limestone. What we don't know is how much younger the sill is. It could be nearly the same age as the limestone or millions and millions of years younger. If we obtained the absolute age of 100 million years for the gabbro then we would know that the limestone was older than 100 million years.
You should learn the geologic time scale -- at least down to the periods. You should know about when the major mountains of the Earth were finally uplifted and when different organisms appeared and disappeared. | http://www.uh.edu/~jbutler/physical/chapter9notes.html |
4.03125 | ED 620, Individual Differences: Learner Characteristics
Systematic study of the conceptual framework of inclusive education which consists of special education, gifted and talented education and compensatory programs. Emphasis will be placed upon individual student characteristics and strategies for effective instruction.
- To investigate and identify the individual differences of students in our schools and classrooms.
- To pose significant questions about the role of the school and teacher in accommodating individual differences in the school and classroom.
- To design and test appropriate, research-based strategies for meeting individual needs of the students in our classrooms.
- To successfully use technology in meeting the information needs of ourselves and our students.
- Video Case Reflections
- Study Questions
Core Concept (H.E.A.R.T.) | http://hilo.hawaii.edu/depts/education/ED620.php |
4.09375 | What is an Intervener?
An Intervener is a person who:
- Works consistently one-to-one with an individual who is deafblind
- Has training and specialized skills in deafblindness
An intervener provides a bridge to the world for the student who is deafblind. The intervener helps the student gather information, learn concepts and skills, develop communication and language, and establish relationships that lead to greater independence. The intervener is a support person who does with, not for the student. Specialized training is needed to become an effective intervener. Training should address a wide range of topics necessary to understanding the nature and impact of deafblindness, the role of the intervener, and appropriate educational strategies to work with students with combined vision and hearing loss (Alsop, Killoran, Robinson, Durkel, & Prouty, 2004; McGinnes, 1986; Robinson et al., 2000).
Role of the Intervener:
Facilitates access to the environmental information that is usually gained through vision and hearing, but which is unavailable or incomplete to the child who is deafblind. | http://intervener.org/ |
4.0625 | This image shows the grooved terrain of Ganymede.
Grooves of Ganymede
Instead of icy-volcanism, the surface of Ganymede reveals a gradual surface deformation reminiscent of the crustal deformation of the Earth. In this case, crustal extension of the surface of Ganymede resulted in big blocks of the crust being pulled apart, as shown in this image. The pulling apart of blocks of crust is similar to the terrestrial process of rifting.
The lack of icy-volcanism, such as that found on Europa, probably stems from a lack of the kind of heating undergone by Europa. The existence of surface extension and deformation suggests that there has been some heating of Ganymede, nonetheless.
| http://www.windows2universe.org/jupiter/moons/ganymede_grooves_2.html&edu=high |
4.1875 | Science Fair Project Encyclopedia
Middle-earth is the name for the lands on J.R.R. Tolkien's fictional ancient Earth where most of the tales of his legendarium take place. "Middle-earth" is a literal translation of the Old Norse mythological term Midgard, referring to this world, the realm of humans. The term may be applied informally to the entire world depicted in The Hobbit, The Lord of the Rings and The Silmarillion, or more properly in specific reference to its main continent (called Endor or Ennor in the Elvish languages Quenya and Sindarin, respectively).
Although Middle-earth's setting is often thought to be another world, it is actually a fictional period in our Earth's own past 6,000 to 7,000 years ago. Tolkien insisted that Middle-earth is our Earth in several of his letters. The action of the books is largely confined to the north-west of the continent, corresponding to modern-day Europe, and little is known about the lands in the east and south of Middle-earth.
The history of Middle-earth is divided into several Ages: The Hobbit and The Lord of The Rings deal exclusively with events towards the end of the Third Age and conclude at the dawn of the Fourth Age, while The Silmarillion deals mainly with the First Age. Its world (Arda) was originally flat but was made round near the end of the Second Age by Eru Ilúvatar, the Creator.
Much of our knowledge of Middle-earth is based on writings that Tolkien did not finish for publication during his lifetime. In these cases, this article is based on the version of the Middle-earth legendarium that is considered canonical by most Tolkien fans, as discussed under Middle-earth canon.
The term "Middle-earth" was not invented by Tolkien, rather it existed in Old English as middanġeard, in Middle English as midden-erd or middel-erd; in Old Norse it was called Midgard. It is English for what the Greeks called the οικουμένη (oikoumenē) or "the abiding place of men", the physical world as opposed to the unseen worlds (The Letters of J. R. R. Tolkien, 151). The word Mediterranean comes from two Latin stems, medi, middle, and terra, earth.
Middangeard occurs half-a-dozen times in Beowulf, which Tolkien translated and on which he was arguably the world's foremost authority. (See also J. R. R. Tolkien for discussion of his inspirations and sources). See Midgard and Norse mythology for the older use.
Tolkien was also inspired by this fragment:
- Eala earendel engla beorhtast / ofer middangeard monnum sended.
- Hail Earendel, brightest of angels / above the middle-earth sent unto men.
The term Middle-earth can be interpreted in several ways:
- as the oikoumenē,
- as the "middle" land between the unreachable Aman in the west and the unknown Sun-lands in the east,
- as in the "middle" area between Over-heaven (Aman) and hell (Angband, a geographic location in the same way Tartarus was), and
- as the fixed land above the seas of Vaiya, but below the upper skies where Sun, Moon, and stars reside.
Some hollow earth enthusiasts interpret the term their way, believing that Tolkien referred to the hollow earth theory, but nothing in Tolkien's writings or beliefs supports this.
The name "Middle-earth" is often spelled "Middle Earth" or "Middle-Earth" by the popular media.
Main article: Arda
Although 'Middle-earth' strictly refers to a specific continent (called Endor in the Elvish language Quenya and Ennor in Sindarin, both meaning "middle land"), representing what we would know as Eurasia and Africa, the term is often used to refer to this entire 'earth' (properly called Arda), or even to the entire universe (properly called Eä) in which the stories take place.
If the map of Middle-earth is projected onto our real Earth (a rough approximation at best), and some of the most obvious climatological, botanical, and zoological similarities are aligned, we get the Hobbits' Shire in temperate England, Gondor in Mediterranean Italy and Greece, Mordor in the arid Turkey and Middle East, South Gondor and Near Harad in the deserts of Northern Africa, Rhovanion in the forests of Eastern Europe and the steppes of Western and Southern Russia, and the Ice Bay of Forochel in the fjords of Norway. According to Tolkien, the Shire is supposed to reside at the approximate location of England's Midlands area (specifically Warwickshire), whereas Minas Tirith in Gondor is comparable to Venice, and Pelargir to Byzantium (Constantinople) and Troy.
The Hobbit and The Lord of the Rings are presented as the life work of Bilbo Baggins, Frodo Baggins and other Hobbits, and purport to be a translation of the Red Book of Westmarch. Like Shakespeare's King Lear or Robert E. Howard's Conan the Barbarian stories, the tales occupy a historical period that could not have actually existed. Dates for the length of the year and the phases of the moon, along with descriptions of constellations, firmly fix the world as Earth, no longer than several thousand years ago. Years after publication, Tolkien 'postulated' in a letter that the action of the books takes place roughly 6,000 years ago, though he was not certain.
Tolkien wrote extensively about the linguistics, mythology and history of the land, which form the back-story for these stories. Most of these writings, with the exception of The Hobbit and The Lord of the Rings, were edited and published posthumously by his son Christopher. Notable among them is The Silmarillion, which describes a larger cosmology which includes Middle-earth as well as Valinor, Númenor, and other lands. Also notable are Unfinished Tales and the multiple volumes of The History of Middle-earth, which include incomplete stories and essays as well as detailing the development of Tolkien's writings from early drafts through the last writings of his life.
Main article: Ainulindalë
The supreme deity of Tolkien's universe is called Eru Ilúvatar. In the beginning, Ilúvatar created spirits named the Ainur, and he led them in divine music. The Ainu Melkor, Tolkien's equivalent of Satan, broke the harmony, and in response Ilúvatar introduced new themes that enhanced the music beyond the comprehension of the Ainur. The essence of their song established the history of the as yet unmade universe and the people who were to dwell therein.
Then Ilúvatar created Eä, the universe itself, and the Ainur formed within it Arda, the Earth, "globed within the void": the world together with the airs is set apart from Kuma, the "void" without. The fifteen most powerful Ainur, who came into the world first to shape and govern Arda, are called the Valar; one of them was Melkor, the most powerful.
The earth before the second half of the Second Age was radically different than the world of the Third and later Ages: Arda was created as a flat world, represented as a ship or an island floating on the surrounding ocean (Vaiya), which forms water below Arda and air above. The Sun and Moon, as well as some stars (including the stars of the Big Dipper) followed paths within Vaiya, and as such are a part of Arda, set apart from the Void.
In the cosmic upheaval after the Downfall of Númenor late in the Second Age, the cosmology is radically changed, as Arda is turned into a globed world much like the actual Earth. The continent of Aman is removed from the world, and new lands are created "below" the old lands.
Main article: List of Middle-earth peoples
Middle-earth is home to several distinct intelligent species. First are the Ainur, angelic beings created by Ilúvatar. The Ainur helped Ilúvatar to create Arda in the cosmological myth called the "Ainulindalë", or "Music of the Ainur". Some of the Ainur later entered Arda, and the greatest of these are called the Valar. Melkor (later called "Morgoth"), the representation of evil in Middle-earth, was initially one of them.
The lesser Ainur who entered Arda are called the Maiar. In the First Age the chief example is Melian, wife of the Elven King Thingol; in the Third Age the Maiar are represented by the Istari (called Wizards by Men), including Gandalf and Saruman. There were also evil Maiar, including the Balrogs and the Dark Lord Sauron.
Later came the Children of Ilúvatar: Elves and Men, intelligent beings created by Ilúvatar alone. In The Silmarillion, set in the First Age and before, was told much of the Elves, the Elder children, although Men did appear towards the end.
The descendants of those Men who were faithful to the Eldar and the Three Houses of Edain were dealt in the tale of the Downfall of Númenor, set in the Second Age. Their children in the Third Age are the Men of Arnor and Gondor who appear in The Lord of the Rings. Hobbits are described as an offshoot of Men.
The Dwarves have a special position in the legendarium, in that they were not created by Ilúvatar, but rather by one of the Valar named Aulë. However, Aulë offers his creations to Ilúvatar, who adopts the Dwarves and gives them life and free will. The Ents, shepherds of the trees, are created by Ilúvatar at the Vala Yavanna's request to balance the Dwarves.
Orcs and Trolls are evil creatures bred by Morgoth; they are not original creations but rather "mockeries" of Elves and Ents. Their ultimate origin is uncertain, but at least some of them were bred from corrupted Elves and Men.
Seemingly sapient animals also appear, such as the Eagles, Huan the Great Hound from Valinor, and the Wargs. The Eagles are created by Ilúvatar along with the Ents, but in general these animals' origins and nature are unclear. Some of them might be Maiar in animal form, or perhaps even the offspring of Maiar and normal animals.
Main article: Languages of Middle-earth
Tolkien originally started writing the Silmarillion as a spin-off from his constructed language projects. He devised two main Elven languages, which would later become known to us as Quenya, spoken by the Noldor and some Teleri, and Sindarin, spoken by the Elves who stayed in Beleriand (see below). These languages were related, and a Common Eldarin form ancestral to them both is postulated.
Other languages of the world include
- Adűnaic - spoken by the Númenoreans
- Black Speech - devised by Sauron for his slaves to speak
- Khuzdűl - spoken by the Dwarves
- Rohirric - spoken by the Rohirrim - represented in the Lord of the Rings by Old English
- Westron - the 'Common Speech' - represented by English
- Valarin - The language of the Ainur.
History of Middle-earth
Main article: History of Arda
The history of Middle-earth is divided into three time periods, known as the Years of the Lamps, Years of the Trees and Years of the Sun. The Years of the Sun are further subdivided into Ages. Most Middle-earth stories take place in the first three Ages of the Sun .
The Years of the Lamps began shortly after the creation of Arda by the Valar. The Valar created two lamps to illuminate the world, and the Vala Aulë forged great towers, one in the furthest north, and another in the deepest south. The Valar lived in the middle, at the island of Almaren. Melkor's destruction of the two Lamps marked the end of the Years of the Lamps.
Then, Yavanna made the Two Trees, named Telperion and Laurelin in the land of Aman. The Trees illuminated Aman, leaving Middle-earth in twilight. The Elves awoke beside Lake Cuiviénen in the east of Middle-earth, and were soon approached by the Valar. Many of the Elves were persuaded to go on the Great Journey westwards towards Aman, but not all of them completed the journey (see Sundering of the Elves). The Valar had captured Melkor but he appeared to repent and was released. He sowed great discord among the Elves, and stirred up rivalry between the Elven princes Fëanor and Fingolfin. He then slew their father, king Finwë and stole the Silmarils, three gems crafted by Fëanor that contained light of the Two Trees, from his vault, and destroyed the Trees themselves.
Fëanor and his house left to pursue Melkor to Beleriand, cursing him with the name 'Morgoth' (Black Enemy). A larger host led by Fingolfin followed. They reached the Teleri's port-city, Alqualondë, but the Teleri refused to give them ships to get to Middle-earth. The first Kinslaying thus ensued. Fëanor's host sailed on the stolen ships, leaving Fingolfin's host behind to cross over to Middle-earth through the deadly Helcaraxë (or Grinding Ice) in the far north. Subsequently Fëanor was slain, but most of his sons survived and founded realms, as did Fingolfin and his heirs.
The First Age of the Years of the Sun began when the Valar made the Sun and the Moon out of the final fruit and flower of the dying Trees. After several great battles, the Long Peace lasted hundreds of years, during which time Men arrived over the Blue Mountains. But one by one the Elven kingdoms fell, even the hidden city of Gondolin. By the end of the age, all that remained of the free Elves and Men in Beleriand was a settlement at the mouth of the River Sirion. Among them was Eärendil, whose wife Elwing held a Silmaril that her grandparents Beren and Lúthien had recovered from Morgoth's crown. But the Fëanorians tried to press their claim to the Silmaril by force, leading to another Kinslaying. Eärendil and Elwing took the Silmaril across the Great Sea, to beg the Valar for pardon and aid. The Valar responded. Melkor was exiled into the Void and most of his works destroyed. This came at a terrible cost, as Beleriand itself was broken and began to sink under the sea.
Thus began the Second Age in Middle-earth. The Men who had remained faithful were given the island of Númenor toward the west of the Great Sea as their home, while the Elves were allowed to return to the West. The Númenoreans became great seafarers, but also increasingly jealous of the Elves for their immortality. Meanwhile, in Middle-earth it became apparent that Sauron, Morgoth's chief servant, was again active. He worked with Elven smiths in Eregion on the craft of rings, and forged the One Ring to dominate them all. The Elves were aware of him, and ceased using their own.
The last Númenorean king Ar-Pharazôn, by the strength of his army, humbled even Sauron and brought him to Númenor as a hostage. But with the help of the One Ring, Sauron deceived Ar-Pharazôn and convinced the king to invade Aman, promising immortality for all those who set foot on the Undying Lands. Amandil, chief of those still faithful to the Valar, tried to sail west to seek their aid. His son Elendil and grandsons Isildur and Anárion prepared to flee east to Middle-earth. When the King's forces landed on Aman, the Valar called for Ilúvatar to intervene. The world was changed, and the Straight Road from Middle-earth to Aman was broken, impassable to all but the Elves. Númenor was utterly destroyed, and with it the fair body of Sauron, but his spirit endured and fled back to Middle-earth. Elendil and his sons escaped to Middle-earth and founded the realms of Gondor and Arnor. Sauron soon rose again, but the Elves allied with the Men to form the Last Alliance and defeated him. His One Ring was taken from him by Isildur, but not destroyed.
The Third Age saw the rise in power of the realms of Arnor and Gondor, and their decline. By the time of The Lord of the Rings, Sauron had recovered much of his former strength, and was seeking the One Ring. He discovered that it was in the possession of a Hobbit and sent out the nine Ringwraiths to retrieve it. The Ring-bearer, Frodo Baggins, traveled to Rivendell, where it was decided that the Ring had to be destroyed in the only way possible: casting it into the fires of Mount Doom. Frodo set out on the quest with eight companions—the Fellowship of the Ring. At the last moment he failed, but with the intervention of the creature Gollum—who was saved by the pity of Frodo and Bilbo Baggins—the Ring was nevertheless destroyed. Frodo and his companion Sam Gamgee were hailed as heroes. Sauron was destroyed forever and his spirit dissipated.
The end of the Third Age marked the end of the time of the Firstborn and the beginning of the dominion of Men. As the Fourth Age began, most of the Elves who had lingered in Middle-earth left for Valinor, never to return; those few who remained behind would "fade" and diminish. The Dwarves eventually dwindled away as well. The creatures of the Enemy were all but destroyed, and peace was restored between Gondor and the lands to the south and east. Eventually, the tales of the earlier Ages became legends, the truth behind them forgotten.
Works by Tolkien
- 1937 The Hobbit
- 1954 The Fellowship of the Ring, part 1 of The Lord of the Rings
- 1954 The Two Towers, part 2 of The Lord of the Rings
- 1955 The Return of the King, part 3 of The Lord of the Rings
- 1962 The Adventures of Tom Bombadil and Other Verses from the Red Book
- An assortment of poems, only loosely related to The Lord of the Rings
- 1967 The Road Goes Ever On
- 1977 The Silmarillion
- The history of the Elder Days, before the Lord of the Rings, including the Downfall of Númenor
- 1980 Unfinished Tales of Númenor and Middle-earth
- Stories and essays left out of the Silmarillion and Lord of the Rings because they were never completed.
The History of Middle-earth series:
- 1983 The Book of Lost Tales 1
- 1984 The Book of Lost Tales 2
- The original versions of the legendarium, introducing many ideas which were later heavily revised and rewritten
- 1985 The Lays of Beleriand
- 1986 The Shaping of Middle-earth
- The first steps towards the later Silmarillion
- 1987 The Lost Road and Other Writings
- The first appearance of Númenor and its downfall
- 1988 The Return of the Shadow (The History of The Lord of the Rings v.1)
- 1989 The Treason of Isengard (The History of The Lord of the Rings v.2)
- 1990 The War of the Ring (The History of The Lord of the Rings v.3)
- 1992 Sauron Defeated (The History of The Lord of the Rings v.4)
- The development of The Lord of the Rings, from a sequel to The Hobbit into what became more a sequel to The Silmarillion. Sauron Defeated also includes further development of the Númenor legend.
- 1993 Morgoth's Ring (The Later Silmarillion, part one)
- 1994 The War of the Jewels (The Later Silmarillion, part two)
- The rewriting of the Silmarillion after The Lord of the Rings was published. These volumes show signs of an immense upheaval, as the entire cosmological myth was questioned.
- 1996 The Peoples of Middle-earth
- Various late writings, providing detailed information on various peoples, as well as linguistic essays
Works by others
A small selection of the dozens of books about Tolkien and his worlds:
- 1978 The Complete Guide to Middle-earth (ISBN 0345449762, Robert Foster; generally recognised as an excellent reference book. This guide does not include information from Unfinished Tales or the History of Middle-earth series, which leads to some errors relative to the choice of "canon" above.)
- 1981 The Atlas of Middle-earth (Karen Wynn Fonstad - an atlas of The Lord of the Rings, The Hobbit, The Silmarillion, and The Unfinished Tales; revised 1991)
- 1981 Journeys of Frodo (Barbara Strachey - an atlas of The Lord of the Rings)
- 1983 The Road to Middle-earth (Tom Shippey - literary analysis of Tolkien's stories from the perspective of a fellow philologist; last revised 2003)
- 2002 The Complete Tolkien Companion (ISBN 0330411659, J. E. A. Tyler - a reference, covers The Lord of the Rings, The Hobbit, The Silmarillion, and Unfinished Tales; substantially improved over the two earlier editions.)
In letter #202 to Christopher Tolkien, J. R. R. Tolkien set out his policy regarding film adaptations of his works: "Art or Cash". He sold the film rights for The Hobbit and The Lord of the Rings to United Artists in 1969 after being faced with a sudden tax bill. They are currently in the hands of Tolkien Enterprises, which has no relation to the Tolkien Estate; the Estate retains film rights to The Silmarillion and other works.
In 1978, a movie entitled The Lord of the Rings was released, directed by Ralph Bakshi; it was an adaptation of the first half of the story, using rotoscoped animation. Although relatively faithful to the story, it was neither a commercial nor a critical success.
In 1980, Rankin-Bass produced a TV special covering roughly the last half of The Lord of the Rings, called The Return of the King. However, this did not follow on directly from the end of the Bakshi film.
Peter Jackson later directed a live-action film trilogy:
- The Lord of the Rings: The Fellowship of the Ring (2001)
- The Lord of the Rings: The Two Towers (2002)
- The Lord of the Rings: The Return of the King (2003)
The films were a huge box office and critical success and together won seventeen Oscars (at least one in each applicable category for a fictional, English language, live-action feature film, except in the acting categories). However, in adapting the works to film, changes in the storyline and characters offended some fans of the books (for a detailed discussion of the changes, see the "Movie-Goer's Guides" at The Encyclopedia of Arda).
The works of Tolkien have been a major influence on role-playing games along with others such as Robert E. Howard, Fritz Leiber, H. P. Lovecraft, and Michael Moorcock. Although the most famous game to be inspired partially by the setting was Dungeons & Dragons, there have been two specifically Middle-earth based and licensed games. These are the Lord of the Rings Roleplaying Game from Decipher Inc. and the Middle Earth Role Play game (MERP) from Iron Crown Enterprises.
Simulations Publications created three war games based on Tolkien's work. War of the Ring covered most of the events in the Lord of the Rings trilogy. Gondor focused on the battle of Pelennor Fields, and Sauron covered the Second Age battle before the gates of Mordor. A war game based on the Lord of the Rings movies is currently being produced by Games Workshop.
The computer game Angband is a free roguelike D&D-style game that features many characters from Tolkien's works. The most complete list of Tolkien-inspired computer games can be found at http://www.lysator.liu.se/tolkien-games/
EA Games has released games for gaming consoles and the PC platform. These include The Two Towers, The Return of the King, The Battle for Middle-earth, and The Third Age. Vivendi released The Fellowship of the Ring while Sierra created The War of the Ring, both of which proved highly unsuccessful.
Apart from these, many other commercial computer games have been released. Some of these derived their rights from the Estate, such as The Hobbit; others from the movie and merchandising rights.
- Encyclopedia of Arda - the best online source for the names and facts of Tolkien's imaginary history. It has been used as a source.
- Ardalambion - A detailed site for anyone who wants to delve into the languages of Middle-earth; recommended for those who want to learn Quenya.
- The Tolkien Meta-FAQ - Summaries of common discussions about Tolkien and Middle-earth, from basic questions to expert debates.
- The Tolkien Wiki - The first wikiweb dedicated to the literary works of J. R. R. Tolkien. Contains a compendium, book descriptions, essays, FAQ, etc.
- TheOneRing.net - A site with many examples of Tolkien fan art and fan writing, and some factual material.
- The Lord of the Rings official movie site - contains information on the movies and the books.
- Ted Nasmith - Tolkien Illustrations - The website of Tolkien illustrator Ted Nasmith, which includes galleries of illustrations for several books.
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License. | http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Middle-earth
4.0625 | Genre Lesson: Short Story
- Learning Goal
- Examine the depth and breadth of a short story.
- Approximately 2 Days (40 minutes for each class)
- Necessary Materials
- Provided: Short Story 1: “Captain Dang Tames the Alhambra Beast”, Examining a Short Story Example Chart 1, Short Story 2: “The Future is Ours”, Examining a Short Story Worksheet (Student Packet, page 2)
Not Provided: Chart paper, markers, America Street edited by Anne Mazer
Activation & Motivation
Engage your students in a comparison and contrast discussion. Ask, "How are movies and television shows different?" (Their answers should include that a movie is long and a TV show is usually short.) Next, ask students what movies and TV shows have in common. For example, both have characters, a setting, a plot conflict, and a resolution. Recap that a sitcom only has time to show a slice of life, while a movie takes a deeper, longer look. Both, however, have the essential story elements—characters, setting, and plot.
I will explain to students that just like a story can be told on screen in either a half hour television show or in a full-length blockbuster movie, the fiction that we read comes in different lengths as well. I will define a short story as a brief, fictional story that can usually be read in one or two sittings. It typically follows only one or two characters and focuses on one plot problem that is resolved by the end of the story. The setting of a short story is very focused. It will only take place in one or two locations. The purpose of a short story is to give the reader a glimpse of a few characters’ lives as they confront a problem.
A novel, on the other hand, is a long work of fiction in which a character typically changes or grows in several ways. There is often more than one plot problem in a novel, and sometimes even plot twists and surprises. There is also often more than one important character. In a novel, we learn a lot about many places in a story, characters’ backgrounds, and what happened before the story begins. The difference between a short story and a novel, then, is that a short story shows a slice of life, whereas a novel or book will give a longer, deeper look. Short stories differ from longer fiction in the amount of detail they offer to readers. A good reader will want to examine how a short story includes the most essential details (the story elements), but leaves many of the other details to our imagination.
I will identify the essential story elements in fiction—characters, setting, and plot (problem and solution) in a short story called “Captain Dang Tames the Alhambra Beast.” Then, I will ask questions about details that the author chose to leave out to focus the short story. First I will identify the characters mentioned in “Captain Dang.” For example, Henry talks about his mother in the story, so I will write her name on the chart. I will continue identifying characters in the story, including those mentioned by the main character, Henry. Note: Additional examples can be found on the Examining a Short Story Example Chart 1.
Next, I will identify the setting in “Captain Dang” and record the details on the chart. I know from details in the story that the action takes place in Southern California in a place called Alhambra. I also know that Henry is Vietnamese or Vietnamese American, and he mentions Vietnam in his comic book, which means he has a connection to that place as well.
Finally, I will identify the plot problem and solution and record the details on the chart. The story opens with the problem—Henry is teased for being quiet and nerdy by the other students at Alhambra, especially Craig Nale. I can tell that Henry feels lonely because he sits alone in the cafeteria and lets the other students tease him. The solution is that Craig and Henry bond over their love of comic books, and Craig calls Henry by his real name, rather than the nickname that he usually teases him with.
Now that I’ve identified the characters, setting, and plot problem and solution, I will think about what the author decided not to include in this story. To do this, I will ask questions about memories or background information, descriptions about multiple places in a given setting, or additional details in the plot. I will ask questions about these unknown “backstory” details on my chart. For example, I will ask questions about the mom’s backstory that the author chose not to include. Why is Henry’s mother sick? What is wrong with her? Is she still alive? Note: Additional examples can be found on the Examining a Short Story Example Chart 1.
Ask: "How can I examine the depth and breadth of a short story?" Students should answer that you should identify the story elements in a short story by looking for textual details about the characters, setting, plot problem, and resolution. Then, you think about what “backstory” information is not included in the story and ask questions about the story elements you identified.
We will read “The Future is Ours” and identify story elements (character, setting, and plot) in the story. As we read, we will record the main story elements for character, plot, and setting on chart paper. One setting detail, for example, is that the story takes place in Queens, NY near the site for the World’s Fair. Another is that the story takes place during the late 1930s. The characters in the story include Vanessa and her mother.
Next, we will think about what information is not included in the story, and we will record questions about the “backstory” information on chart paper. For example, we never find out who stole the pennies. We will write, “What is the identity of the thief?” We will discuss why this information isn’t essential to the short story because it wasn’t about catching a thief, it was about getting to the World’s Fair. We will continue looking at the story elements we identified and asking questions about what information the author chose not to include in the story on our chart. Note: See Examining a Short Story Example Chart 2 for additional details and information.
Finally, we will discuss why the author chose not to include some backstory details in this short story. How are the details that are included important to the story?
You will read “The Wrong Lunch Line” from America Street. You will record details about the story elements—the story’s character, plot (problem and solution), and setting in the circles in the center of the page. Then, you will think about what information has not been included in the story, and you will ask questions about that information on the Examining a Short Story Worksheet. (See page 2 in the Student Packet.) You will write those questions on your worksheet surrounding the appropriate story elements.
We will share the story elements that we have identified, and we will discuss the information that is not included in the story. Ask: "What do you notice about what was included in a short story? How was this short story a slice of life? What might be beyond the text, and why did the author not include those details?" We will also discuss why an author might write a short story rather than a long novel.
(To see all of the ReadWorks lessons aligned to your standards, click here.)
|Tier 2 Word: epic|
|Contextualize the word as it is used in the story||Henry “preferred to work on his comic book epic, a story featuring Captain Dang, a Vietnamese superhero who dropkicked gerbils to save villagers in Vietnam from the evil Agent Orange.”|
|Explain the meaning (student-friendly definition)||An epic is a long poem, novel, or film that tells the story of a hero or heroine. When Henry preferred to work on his comic book epic, he meant that he wanted to work on creating his long comic book about a superhero rather than speaking to the other kids at school.|
|Students repeat the word||Say the word epic with me: epic.|
|Teacher gives examples of the word in other contexts||The teacher read an epic poem about a Greek hero. The epic movie featured a superhero that saved the city from aliens.|
|Students provide examples||If you wrote an epic, what would it be about? Start by saying, “If I wrote an epic, it would be about __________________________.”|
|Students repeat the word again.||What word are we talking about? epic|
|Additional Vocabulary Words||vow, pronounced| | http://www.readworks.org/lessons/grade6/america-street/genre-lesson |
4.03125 | Connections, Relationships, and Applications
Students identify similarities and differences between drama/theatre and other art forms. Students recognize the relationship between concepts and skills learned through drama/theatre with knowledge learned in other curricular subjects, life experiences and potential careers in and outside the arts. Students recognize the benefits of lifelong learning in drama/theatre.
Benchmark A: Discover the interdependence of theatre and other art forms.
1. Compare and contrast various art forms and their creative processes to those of drama/theatre.
1. Analyze the effectiveness of a given art form to communicate an idea or concept.
1. Use drama/theatre to transform an idea/concept/story expressed through dance, visual art or music.
Benchmark B: Explain the relationship between concepts and skills used in drama/theatre with other curricular subjects.
2. Use dramatic/theatrical skills to communicate concepts or ideas from other academic content areas.
2. Use problem-solving and cooperative skills to dramatize a social issue and its potential impact and/or solution.
2. Explain how dramatic/theatrical skills are used in other disciplines.
Benchmark C: Identify recurring drama/theatre ideas and concepts that occur across time periods and/or cultures.
3. Identify examples of how drama/theatre, broadcast media and film/video can influence or be influenced by politics and culture.
3. Explain how cultural influences affect the content or meaning of dramatic/theatrical works.
3. Compare and contrast how dramatic/theatrical works from different cultures and time periods convey the same or similar ideas and concepts.
Benchmark D: Discuss drama/theatre skills as a foundation for lifelong learning and potential careers.
4. Collaborate in a dramatic/theatrical activity to achieve a common goal.
5. Describe what a director does.
4. Describe an individual's role in a collaborative effort.
5. Describe the roles and responsibilities of performing and technical artists in drama/theatre, film/video and broadcast media.
4. Identify the drama/theatre knowledge, skills and discipline needed to pursue a chosen career.
5. Identify specific factors to consider in choosing a career in drama/theatre, film/video or broadcast media. | http://www.locklandschools.org/olc/page.aspx?id=7999&s=1025 |
4 | Figure 46. Evolution of Hard-Bottom Plains. The upper panel depicts the seafloor shortly after glaciers have left, and the ocean level is 80 m deeper than today. Glacial-marine mud mantles most of the seafloor. In the middle panel, the seafloor has isostatically rebounded following melting of the ice, but this deep-water setting remains submerged and continues to collect fine sediment. During the past few thousand years, as the tides in the Bay of Fundy area have increased dramatically (Gehrels and others, 1995, 1996), tidal currents have removed all fine-grained sediment and left a coarse-grained lag deposit.
Last updated on October 6, 2005 | http://www.state.me.us/doc/nrimc/mgs/explore/marine/seafloor/images/fig46.htm |
4.59375 | Gerardus Mercator (1512-1594)
Frontispiece to Mercator's Atlas sive Cosmographicae, 1585-1595.
Courtesy of the Library of Congress, Rare Book Division, Lessing J. Rosenwald Collection.
A map projection is used to portray all or part of the round Earth on a flat surface. This cannot be done without some distortion. For example, the basic Mercator projection is unique; it yields the only map on which any straight line drawn within its bounds represents a line of constant compass bearing (a rhumb line), but distances and areas are grossly distorted near the map's polar regions. It was introduced by Gerardus Mercator in 1569.
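That constant-bearing behaviour and the polar distortion can both be seen from the standard spherical form of the projection, in which east-west distances are stretched by 1/cos(latitude). The snippet below is a minimal illustration of mine, not material from this page, and it assumes a spherical Earth of radius 6371 km.

```python
import math

EARTH_RADIUS_KM = 6371.0  # assumed mean radius for a spherical Earth

def mercator_y(lat_deg: float) -> float:
    """Northing (km) of a latitude under the spherical Mercator projection."""
    lat = math.radians(lat_deg)
    return EARTH_RADIUS_KM * math.log(math.tan(math.pi / 4.0 + lat / 2.0))

def scale_factor(lat_deg: float) -> float:
    """Distances are stretched by this factor at a given latitude; areas by its square."""
    return 1.0 / math.cos(math.radians(lat_deg))

for lat in (0, 30, 60, 80):
    print(f"lat {lat:2d} deg  y = {mercator_y(lat):8.1f} km  scale = {scale_factor(lat):5.2f}x")
```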
The Mercator projection was developed in 1569 by Gerardus Mercator as a navigation tool.
He first used the map projection that bears his name in 1569, and he was also the first to use the term 'atlas' for a collection of maps.
Gerhard Kremer, better known as Gerardus Mercator, was the leading mapmaker of the 16th century. | http://inventors.about.com/library/inventors/blmercator.htm
4.09375 | U.S. Department of Energy - Energy Efficiency and Renewable Energy
Fuel Cell Technologies Office – Fuel Cells
Parts of a Fuel Cell
Polymer electrolyte membrane (PEM) fuel cells are the current focus of research for fuel cell vehicle applications. PEM fuel cells are made from several layers of different materials, as shown in the diagram. The three key layers in a PEM fuel cell, described below, are the anode, the cathode, and the polymer electrolyte membrane itself, which together form the membrane electrode assembly.
Other layers of materials are designed to help draw fuel and air into the cell and to conduct electrical current through the cell.
Membrane Electrode Assembly
The electrodes (anode and cathode), catalyst, and polymer electrolyte membrane together form the membrane electrode assembly (MEA) of a PEM fuel cell.
Anode. The anode, the negative side of the fuel cell, has several jobs. It conducts the electrons that are freed from the hydrogen molecules so they can be used in an external circuit. Channels etched into the anode disperse the hydrogen gas equally over the surface of the catalyst.
Cathode. The cathode, the positive side of the fuel cell, also contains channels that distribute the oxygen to the surface of the catalyst. It conducts the electrons back from the external circuit to the catalyst, where they can recombine with the hydrogen ions and oxygen to form water.
Polymer electrolyte membrane. The polymer electrolyte membrane (PEM)—a specially treated material that looks something like ordinary kitchen plastic wrap—conducts only positively charged ions and blocks the electrons. The PEM is the key to the fuel cell technology; it must permit only the necessary ions to pass between the anode and cathode. Other substances passing through the electrolyte would disrupt the chemical reaction.
The thickness of the membrane in a membrane electrode assembly can vary with the type of membrane. The thickness of the catalyst layers depends upon how much platinum (Pt) is used in each electrode. For catalyst layers containing about 0.15 milligrams (mg) Pt/cm2, the thickness of the catalyst layer is close to 10 micrometers (μm)—less than half the thickness of a sheet of paper. This membrane/electrode assembly, with a total thickness of about 200 μm (or 0.2 mm), can generate more than half an ampere of current for every square centimeter of assembly area at a voltage of 0.7 volts, but only when encased in well-engineered components—backing layers, flow fields, and current collectors.
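The current and voltage figures quoted above translate directly into an areal power density; the short sketch below simply multiplies them out. The 100 cm² cell area and 80-cell stack used for scaling are illustrative assumptions of mine, not values from this page.

```python
# Rough power estimate from the figures quoted above (illustrative only).
current_density = 0.5   # A/cm^2 ("more than half an ampere ... per square centimeter")
cell_voltage = 0.7      # V

power_density = current_density * cell_voltage        # W/cm^2
print(f"power density ~ {power_density:.2f} W/cm^2")  # ~0.35 W/cm^2

# Assumed for illustration: 100 cm^2 active area, 80 cells in a stack.
cell_area_cm2 = 100.0
n_cells = 80
stack_power_kw = power_density * cell_area_cm2 * n_cells / 1000.0
print(f"hypothetical stack output ~ {stack_power_kw:.1f} kW")  # ~2.8 kW
```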
All electrochemical reactions in a fuel cell consist of two separate reactions: an oxidation half-reaction at the anode and a reduction half-reaction at the cathode. Normally, the two half-reactions would occur very slowly at the low operating temperature of the PEM fuel cell. Each of the electrodes is coated on one side with a catalyst layer that speeds up the reaction of oxygen and hydrogen. It is usually made of platinum powder very thinly coated onto carbon paper or cloth. The catalyst is rough and porous so the maximum surface area of the platinum can be exposed to the hydrogen or oxygen. The platinum-coated side of the catalyst faces the PEM. Platinum-group metals are critical to catalyzing reactions in the fuel cell, but they are very expensive. DOE's goal is to reduce the use of platinum in fuel cell cathodes by at least a factor of 20 or eliminate it altogether to decrease the cost of fuel cells to consumers.
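For reference, the half-reactions being catalysed are the standard ones for a hydrogen PEM cell; they are not written out on this page, so the equations below are supplied from basic electrochemistry.

```latex
\text{Anode (oxidation):} \quad \mathrm{H_2 \;\rightarrow\; 2H^{+} + 2e^{-}}
\text{Cathode (reduction):} \quad \mathrm{O_2 + 4H^{+} + 4e^{-} \;\rightarrow\; 2H_2O}
\text{Overall:} \quad \mathrm{2H_2 + O_2 \;\rightarrow\; 2H_2O}
```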
The backing layers, flow fields, and current collectors are designed to maximize the current from a membrane/electrode assembly. The backing layers—one next to the anode, the other next to the cathode—are usually made of a porous carbon paper or carbon cloth, about as thick as 4 to 12 sheets of paper. The backing layers have to be made of a material (like carbon) that can conduct the electrons that leave the anode and enter the cathode. The porous nature of the backing material ensures effective diffusion (flow of gas molecules from a region of high concentration to a region of low concentration) of each reactant gas to the catalyst on the membrane/electrode assembly. The gas spreads out as it diffuses so that when it penetrates the backing, it will be in contact with the entire surface area of the catalyzed membrane.
The backing layers also help in managing water in the fuel cell; too little or too much water can cause the cell to stop operating. Water can build up in the flow channels of the plates or can clog the pores in the carbon cloth (or carbon paper), preventing reactive gases from reaching the electrodes.
The correct backing material allows the right amount of water vapor to reach the membrane/electrode assembly and keep the membrane humidified. The backing layers are often coated with Teflon™ to ensure that at least some, and preferably most, of the pores in the carbon cloth (or carbon paper) do not become clogged with water, which would prevent the rapid gas diffusion necessary for a good rate of reaction at the electrodes.
Pressed against the outer surface of each backing layer is a piece of hardware called a bipolar plate that typically serves as both flow field and current collector. In a single fuel cell, these two plates are the last of the components making up the cell. The plates are made of a lightweight, strong, gas-impermeable, electron-conducting material—graphite or metals are commonly used even though composite plates are now being developed.
The first task served by each plate is to provide a gas "flow field." Channels are etched into the side of the plate next to the backing layer. The channels carry the reactant gas from the place where it enters the fuel cell to the place where it exits. The pattern of the flow field in the plate (as well as the width and depth of the channels) has a large impact on how evenly the reactant gases are spread across the active area of the membrane/electrode assembly. Flow field design also affects water supply to the membrane and water removal from the cathode.
Each plate also acts as a current collector. Electrons produced by the oxidation of hydrogen must (1) be conducted through the anode, through the backing layer, along the length of the stack, and through the plate before they can exit the cell; (2) travel through an external circuit, and (3) re-enter the cell at the cathode plate. With the addition of the flow fields and current collectors, the PEM fuel cell is complete; only a load-containing external circuit, such as an electric motor, is required for electric current to flow. | http://www1.eere.energy.gov/hydrogenandfuelcells/fuelcells/printable_versions/fc_parts.html |
4.03125 | Basics of the Immune System
The human body is constantly bombarded by millions of viruses, bacteria, and other disease-causing microorganisms, or pathogens. Fortunately, most of these are thwarted by the body's own protective physical and chemical barriers, such as the skin, saliva, tears, mucus, and stomach acid.
The millions of bacteria that live on the skin and the body's mucous membranes also help protect against certain invaders. When a pathogen does manage to evade these defenses and enter the body, it is attacked almost immediately by one or more components of the immune system.
The immune system uses extremely sensitive chemical sensors to recognize a foreign organism or tissue, especially one that can cause disease. Sometimes it overreacts to a harmless substance, such as pollen or a certain food or medication; this can set the stage for an allergic reaction. In other cases, the immune system mistakenly attacks normal body tissue as if it were foreign, resulting in an autoimmune disease such as lupus or rheumatoid arthritis. Most of the time, however, the immune system holds fast as our first line of defense against a host of potentially deadly diseases.
Most lymph nodes are clustered in the neck, armpits, abdomen, and groin. Fluid that drains from body tissues into the lymphatic system filters through at least one lymph node, where layers of tightly packed white blood cells attack and kill any harmful organisms. Blood vessels transport white blood cells, antibodies, and other protective substances produced by the immune system. The lymphatic system also returns body fluid to the bloodstream after it has been filtered through lymph nodes.
Disease-causing organisms vary from tiny viruses to parasites such as the tapeworm, which can grow 20 feet long. Regardless of the size or species of the invading organism, a healthy immune system will mount a vigorous defense against it. The exact nature of that defense varies, however, according to the type and number of invading organisms. | http://infolific.com/health-and-fitness/anatomy-and-physiology/the-immune-system/ |
4.21875 | River deep, mountain high...
In this lecture period, we learn about the shape and size of the Earth, its bimodal distribution of elevations, and the rock types and densities that explain it.
The Shape and Size of Earth
A good way to look at a planet is by taking a globe in your hands. Because 3-dimensional objects are not convenient to carry around, early in our traveling history the art of map making was invented. Maps of the Earth offer a 2-dimensional representation of a 3-dimensional object. Because Earth is a sphere, different projections were developed to emphasize different aspects. Perhaps you recall that the shortest distance between two points is often shown on a map as a curved trajectory; the route maps in airline magazines illustrate this property nicely. Another important aspect is the area distortion of many maps. Whereas Alaska is a large state, it appears even larger because the E-W distances are commonly drawn the same on maps, but are not the same on a sphere. Such E-W lines are called latitudes, whereas N-S lines are called longitudes. Note that longitudes are all of equal length (the circumference of the Earth), but that latitudinal lines are of different lengths. The longest latitude is the equator, which equals the circumference.
Already in the 3rd century BC, a Greek librarian named Eratosthenes accurately determined Earth's circumference. The method is very creative. When the sun stands vertical at one point, measured by its shining down to the bottom of a well, it casts a shadow elsewhere. At a distance of 800 km, Eratosthenes measured an angle of 7.2 degrees from vertical between the top of a wall and the tip of its shadow. Thus an angle of 7.2 degrees describes an arc of 800 km on the Earth's surface. In a full circumference of 360 degrees, this would describe an arc of 360/7.2 x 800 = 40,000 km. Eratosthenes' calculation is within tens of kilometers of today's determination.
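The same proportion can be checked in a couple of lines; the 7.2° and 800 km figures are the ones quoted above, and the only physical assumption is that the Sun's rays arrive essentially parallel at both locations.

```python
arc_angle_deg = 7.2    # shadow angle measured from vertical at the wall
arc_length_km = 800.0  # distance between the well and the wall

# The arc is the same fraction of the full circle as its angle is of 360 degrees.
circumference_km = (360.0 / arc_angle_deg) * arc_length_km
print(circumference_km)  # 40000.0 km, very close to the modern value
```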
Rather than looking at coastlines only, we examine the elevation (or topography) of the Earth.
We create a graph showing the total surface area at a certain elevation, which is called a hypsometric curve. The figure shows both a hypsometric curve (or cumulative frequency curve) and the more familiar histogram.
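As a sketch of how such a curve is assembled, the snippet below takes a set of elevation samples and accumulates the fraction of surface lying at or above each elevation. The toy elevation values are invented for illustration; real work would weight every sample by the area of its grid cell.

```python
import numpy as np

# Toy elevations in km relative to sea level (invented values, not real topography).
elevations = np.array([-5.1, -4.8, -4.2, -3.9, -0.3, 0.2, 0.5, 0.8, 1.4, 4.9])

levels = np.linspace(elevations.min(), elevations.max(), 50)
# Hypsometric (cumulative frequency) curve: fraction of surface at or above each level.
cumulative_fraction = [(elevations >= z).mean() for z in levels]

for z, f in zip(levels[::12], cumulative_fraction[::12]):
    print(f"elevation {z:6.2f} km : {100 * f:5.1f}% of surface at or above")
```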
Raise sea level by 200 m (through melting of continental ice sheets) and see what our continents would look like compared with today's coastlines. You can further experiment with sea levels and topography, and look at details for your favorite area, at the LDEO site.
Two rocks: Granite and Gabbro
We can generalize the composition of the continents and the ocean floor by two igneous rock types: granite and gabbro (or their extrusive equivalents, rhyolite and basalt). Granite is a light-colored rock consisting mainly of the minerals quartz and feldspar, with various minor phases (such as mica and hornblende). Chemically, granite is high in Si (~70%), Na and K; it has relatively low Ca and Mg content. Gabbro is a dark-colored rock consisting mainly of the minerals feldspar, olivine and pyroxene. Chemically, it has low Si, Na and K content, and relatively high Ca and Mg content. These compositions are responsible for a difference in density between the two rock types: granite has a density of ~2800 kg.m-3, whereas gabbro is slightly more dense (~2900 kg.m-3).
The density difference between granite and gabbro has an important consequence that we can illustrate by a simple experiment. If we float a piece of hardwood (like oak) and a piece of softwood (say, pine) of equal dimensions in a bucket of water, we see that the hardwood rides lower than the pine. The reason is that hardwood has a slightly higher density than softwood, and thus is heavier. Secondly, we float a piece of wood that is twice as thick as the original piece. The thicker piece sinks deeper and rides higher. Since the weight of a floating body equals the weight of the liquid it displaces (Archimedes' Principle), thicker or denser blocks will displace more water. We can apply this experiment to the Earth, with the granite and gabbro as our wood blocks and the deeper mantle as the water.
Thickness is important, as illustrated by icebergs. Ice floats in sea water because it has a lower density (ρ_ice = 920 kg.m-3, ρ_seawater = 1025 kg.m-3). Using the weight F = m · g (with m = volume × density) and Archimedes' principle, we get: vol_ice · ρ_ice · g = vol_displaced water · ρ_seawater · g. So, ρ_ice/ρ_seawater = vol_displaced water/vol_ice.
Thus the ratio of displaced water to ice volume equals about 0.9, meaning that 90% of an iceberg is below sea level, whereas only 10% is above sea level.
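The same balance written as a few lines of code, using only the densities quoted above:

```python
rho_ice = 920.0        # kg/m^3
rho_seawater = 1025.0  # kg/m^3

# Archimedes: the floating ice displaces its own weight of water, so the
# submerged fraction of the iceberg's volume equals the density ratio.
submerged_fraction = rho_ice / rho_seawater
print(f"below sea level: {submerged_fraction:.0%}, above: {1 - submerged_fraction:.0%}")
# roughly 90% below and 10% above
```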
Thus, density and thickness contrast between granite and gabbro (continent vs. ocean floor) both promote relatively high continents and relatively low ocean floor. Density, therefore, is a first order property that explains Earth's characteristic bimodal elevation distribution.
How do we know that there is radial variation in Earth? There are several ways this can be surmised, but one good indicator is average density of Earth. Rocks at the Earth's surface have a density around 3000 kg.m-3, whereas the average density of Earth exceeds 5000 kg.m-3. Let's figure out how we know this and along the way determine a few other properties of our planet.
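One standard way to get that average density without weighing the planet is to combine surface gravity with Newton's law of gravitation; the sketch below does this with textbook values for g, G and Earth's radius, which are my assumed constants rather than numbers given in the lecture.

```python
import math

g = 9.81         # m/s^2, surface gravity (assumed standard value)
G = 6.674e-11    # m^3 kg^-1 s^-2, gravitational constant (assumed standard value)
R = 6.371e6      # m, mean radius of Earth (assumed standard value)

# g = G*M/R^2  =>  M = g*R^2/G, and density = M / (4/3 * pi * R^3) = 3*g / (4*pi*G*R)
mass = g * R**2 / G
mean_density = 3.0 * g / (4.0 * math.pi * G * R)
print(f"mass ~ {mass:.2e} kg, mean density ~ {mean_density:.0f} kg/m^3")
# ~5.97e24 kg and ~5500 kg/m^3, well above the ~3000 kg/m^3 of surface rocks,
# which is the evidence that denser material must lie at depth.
```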
Copyright and Use Statement: Regents of the University of Michigan | http://www.globalchange.umich.edu/globalchange1/current/lectures/topography/topography.html
4.15625 | The Millers River was first formed in the early Tertiary period, 50–60 million years ago, and reached its present state over one million years ago. The river was originally much wider than it is now and has slightly altered its course over the years.
The Millers River drains the western part of the ancient, formerly volcanic, Bronson Hill Upland, the remnant of giant folds of rock created when super-continents collided. The Bronson Hill Upland is the divide between the Laurentian continent and pieces of an early supercontinent known as Gondwana that were joined in the collision that formed the continent Pangaea. Later, the Pangean supercontinent split apart, forming North America and Europe.
A succession of glaciers over several hundred thousand years eroded the mountainous landscape, stripping loose rock from the underlying bedrock as they advanced. At the end of the Wisconsin glacial period, the receding glacier released huge volumes of water, forming sizable glacial lakes. A network of rivers formed when the moraines, ice, and bedrock that impounded the glacial lakes eroded and drained into the Connecticut River.
Geology/Soils. The river network eventually cut through sedimentary deposits to bedrock comprised of gneiss domes mantled by schist and quartzite granite to create the current valleys. According to the USDA, the underlying geology of the watershed is comprised of gneiss and schist bedrock covered with deposits of glacial till. These deposits dominate the geology. However, this glacial till is acidic due to the weathering of sulfidic schists. Thus, the till has little buffering capacity against the effects of acid rain. Drumlins populate the area, the result of clay-rich sediments being compacted against rock outcroppings.
The surficial geology of the watershed shows vast regions in Winchendon, Ashburnham, Gardner, Templeton, Phillipston, Athol and Orange that contain sand and gravel deposits. These regions also coincide with some of the best quality water supplies in the watershed. The Town of Athol sits on a great sand plain, the remnant of a glacial lake. These sand deposits are sought after for sand and gravel operations and generally contain groundwater aquifers.
Generally, the soils belong to soil associations that are steep and extremely stony. These steep uplands are not well suited to farming, but support healthy forested expanses. The majority of the prime forestland soils occur on the hilly glacial till ridges upland from the rivers and lakes. The sand and gravel and alluvial deposits are confined to the narrow river valleys, providing excellent recharge areas to drinking water supplies. These sandy soils require moderate efforts to control erosion.
Topography. The watershed has high topographical relief ranging from 200 – 1500 feet above mean sea level. The eastern portion of the watershed is an elevated plateau over one thousand feet high. As one travels west, the actions of glaciers and the Millers, otter and Tully Rivers on the plateau become apparent, leaving looming hills. The towns of Royalston, Athol, Warwick, Erving, and Wendell are rugged, hilly areas with deep, trough-like valleys. These hills are the southern end of the Monadnock range. The banks of the Millers River in Athol, Erving and Wendell are steep, exceeding 25 percent slope in many areas.
Overall, the Millers River lowers in elevation moderately from the Worcester Plateau Region on the eastern end of the watershed, averaging about 18 feet per mile from the headwaters in New Hampshire to the USGS gauge station at Erving, MA, although a 5-mile reach of the Millers River between South Royalston, MA and Athol, MA drops an average of about 43 feet per mile. The Otter River descends at an average of 18 feet per mile over 11 river miles. The East branch of the Tully River drops an average of 52 feet per mile over 13 river miles.
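The per-mile figures quoted above are simple averages of total drop over channel length; the quick check below uses back-calculated numbers (the 215-foot drop is inferred from the quoted ~43 ft/mile over 5 miles, not a measured value from this source).

```python
def average_gradient_ft_per_mile(drop_ft: float, length_miles: float) -> float:
    """Average stream gradient as total elevation drop divided by river miles."""
    return drop_ft / length_miles

# Back-calculated illustration for the steep 5-mile reach between South Royalston and Athol.
print(average_gradient_ft_per_mile(215.0, 5.0))  # 43.0 ft per mile
```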
Water Resources. The Millers River Watershed is rich in water resources. Numerous streams and rivers thread through the landscape, feeding the Millers River from the upland terrain. The region has scores of ponds and lakes, many of which serve as recreation areas, fishing sites, and public drinking water supplies (some of which are classified as outstanding resource waters). Many of the waters have abundant fish populations, and some of the waters are stocked with trout. Wetlands exist in abundance, providing needed habitat to the flora and fauna of the region. Adjacent to the rivers and streams are 133,540 acres of floodplain, that serve to absorb high waters during winter and spring snowmelt periods and during major storm events. These areas are rich in alluvium, a sought after agricultural soil type. Vernal pools abound in the region.
Wildlife. The Millers River watershed has an abundance of forested areas that provide extensive and significant wildlife habitat. Rivers, wetlands, forests, meadows and mountain ridges provide sustenance, mating grounds, and vegetated cover, supporting stable populations of deer, otter, mink, muskrat, porcupine, fisher, and fox. There is evidence that populations of beaver, eastern coyote, black bear, and several species of migratory raptors and waterfowl have returned to the region. The watershed is also home to 26 species on the Massachusetts list of endangered, threatened and special concern species.
- MASSDEP 2000 Millers Watershed Assessment Report (2004)
- MRPC- FRCOG NPS Assessment Report (2002) | http://millerswatershed.org/natural-history/ |
4.09375 | Rosetta Stone
For thousands of years, the Egyptian civilization used a written language called hieroglyphics. This language was used from ancient times through the last several centuries B.C., when the Greeks under Alexander the Great conquered Egypt and introduced the Greek language. Then, around 30 B.C., the Roman Empire expanded into Egypt, and hieroglyphics was gradually abandoned in favor of Greek and Latin.
Sadly, within a few centuries of the Romans taking over Egypt, no one used or even understood hieroglyphics. None of the scholars at the time recorded any information on how to translate the language, so from that point on no one could read any of the Egyptian writings written in hieroglyphics. This was a huge obstacle for anyone wanting to study Egyptian history and culture.
Hieroglyphics remained a mystery for hundreds of years, even though many people tried to translate the language. It might still be a mystery if not for a chance discovery by some French soldiers who were fighting in Egypt. While working on constructing a fortress near Rosetta, a small city near Alexandria, a soldier named Pierre-Francois Bouchard found a flat block of black basalt about 4 feet long, 2 feet wide, and 1 foot thick. This block had three sections of writing on it--one in hieroglyphics, one in Demotic (another script used in Egypt), and one in Greek. This stone is known as the Rosetta Stone.
When scholars began studying the stone, they quickly realized that it contained one message written in three languages. They translated the Greek, and found that it was a decree praising the king of Egypt (Ptolemy V). Once they had the meaning of the message, they began to work on translating the other two languages. An English physicist named Thomas Young made the first key discovery when he showed that the signatures within the message (called cartouches) could be translated into the names of known rulers (Ptolemy and Alexander). Shortly after this, a French historian named Jean-Francois Champollion went on to completely translate the Rosetta Stone's hieroglyphics, opening the ancient Egyptian language to scholars and allowing them to read all the writings left behind by the Egyptians for the first time.
The Rosetta Stone provided a 'key' that was essential for our decipherment of hieroglyphics, and today we often hear something compared to the Rosetta Stone if it provides a way to decipher something that would otherwise be very difficult to understand. It is for this reason that the newest comet mission, Rosetta, was named after this famous stone. Hopefully, the Rosetta mission's findings will provide a key to the origin of comets and thus the origin of the solar system. | http://www.windows2universe.org/comets/rosetta_stone.html&edu=high |
4.21875 | Synchrotron light is used today to carry out fundamental research in areas as diverse as condensed matter physics, pharmaceutical research and cultural heritage.
What is synchrotron light?
Synchrotron light (also known as synchrotron radiation) is electromagnetic radiation that is emitted when charged particles moving at close to the speed of light are forced to change direction by a magnetic field. Synchrotron light can be produced naturally by astronomical objects, such as the Crab Nebula – a supernova remnant in the Taurus constellation. Since the late 1940s synchrotron light has been artificially generated using synchrotrons – particle accelerators that gave the phenomenon its name.
Synchrotron radiation spans a wide frequency range, from infrared up to the highest-energy X-rays. It is characterised by high brightness – many orders of magnitude brighter than conventional sources – and the light is highly polarised, tunable, collimated (consisting of almost parallel rays) and concentrated over a small area. When synchrotrons were first developed, their primary purpose was to accelerate particles for the study of the nucleus, not to generate light. Today on the other hand, while a few are still used as colliders for high-energy physics experiments such as the Large Hadron Collider at CERN, there are more than 50 synchrotron light sources around the world dedicated to generating synchrotron light and exploiting its special qualities. These machines support a huge range of applications, from condensed matter physics to structural biology, environmental science and cultural heritage.
Earlier accelerators, called cyclotrons, had fixed magnetic fields. Because the bending of a charged particle is inversely proportional to its momentum, cyclotrons were limited to fairly low energies; otherwise they became unaffordably large. By collecting the particles into bunches and synchronising a rise in magnetic-field strength with the increasing energy of the charged particles, the particles could be accelerated to higher energies while being constrained to a fixed circular path. Thus the synchrotron was born, with the first observation of artificial synchrotron light occurring at General Electric in the US in 1947.
British theoretical physicist James Clerk Maxwell's classical theory of electromagnetism (first fully documented in 1873) explains that charged particles moving through a magnetic field generate electromagnetic radiation. By requiring Maxwell's equations to be true in all inertial (non-accelerating) frames of reference, Einstein's theory of special relativity fully explained the characteristics of the synchrotron light generated by electrons travelling along a circular path at relativistic speeds. The radiation produced in synchrotrons is focused in a narrow cone, perpendicular to its acceleration direction and parallel to the direction of motion of the electrons.
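The "narrow cone" has a characteristic half-opening angle of roughly 1/γ, where γ is the electron's Lorentz factor; this relation comes from standard accelerator physics rather than from the text, and the 3 GeV beam energy below is an assumption of mine (it is typical of a modern third-generation source).

```python
electron_rest_energy_MeV = 0.511
beam_energy_GeV = 3.0  # assumed, typical of a third-generation light source

gamma = beam_energy_GeV * 1000.0 / electron_rest_energy_MeV
half_angle_rad = 1.0 / gamma  # characteristic half-angle of the emission cone
print(f"gamma ~ {gamma:.0f}, cone half-angle ~ {half_angle_rad * 1e6:.0f} microradians")
# roughly 5900 and 170 microradians: an extremely well collimated beam
```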
Synchrotron light is the brightest artificial source of X-rays, allowing the detailed study of molecular structures, which has led to the award of Nobel prizes in a number of fields (see timeline).
In 1956, two American scientists, Diran Tomboulian and Paul Hartman, were granted use of the 320 MeV synchrotron at Cornell University. In addition to confirming the spectral and angular distribution of synchrotron light, they carried out the first X-ray spectroscopy study using synchrotron light. Five years later, the National Bureau of Standards in the US modified its 180 MeV machine to allow synchrotron light to be harvested for experiments. These became known as the first-generation synchrotrons – machines built for smashing nuclei apart using electrons, which were later used for synchrotron-light experiments.
Over the coming decades, as demand grew for the use of first generation synchrotron machines, pioneering advances led to a number of developments, such as the storage ring, which allowed particles to circulate for long periods of time providing more stable beam conditions, benefiting both particle physicists and synchrotron users. One of the most significant developments took place in the late 1970s when plans were approved to build the world’s first dedicated synchrotron light source producing X-rays (Synchrotron Radiation Source – SRS) at Daresbury in the UK, which started user experiments in 1981. The US, Japan and others also built second-generation machines, while other first-generation machines received upgrades to allow for more experiments.
For scientists carrying out spectroscopy experiments, the brightness of the beam reaching the sample determined the resolving power of the results. For crystallographers, especially those looking at small crystals with large unit cells, high brightness was important to resolve closely spaced diffraction spots. As second-generation machines were optimised to produce brighter beams, a fundamental limit was approaching. To meet the increasing demands of a growing synchrotron user community, a new approach was required: insertion devices.
Insertion devices are arrays of magnets placed into the straight sections of the storage ring, which could be retro-fitted to second-generation machines and were quickly incorporated into existing synchrotrons. Insertion devices help to create a beam that is very bright and with intensity peaks with a wavelength that can be varied by adjusting the field strength (often the gap between two magnet arrays).
The increased brightness made data collection faster, and tunable wavelengths benefited crystallographers and spectroscopists alike. By the early 1990s machines were being designed with insertion devices in place from the start, and the first such third-generation source, the European Synchrotron Radiation Facility (ESRF) in Grenoble, France, started operating in 1994. There are now more than 50 dedicated light sources in the world, combining both second- and third-generation machines, which cover a wide spectral range from infrared to hard X-rays.
The experimental configurations of different synchrotron facilities are quite similar. The storage rings where the light is generated have many ports, which each open onto a beamline, where scientists set up their experiments and collect data. The beamlines, however, can vary a lot in the details depending on the experimental methods they are used for. The Diamond Light Source in Oxfordshire became Britain’s newest synchrotron light facility in 2007 and in August 2008 it was Britain’s only synchrotron source following the closure of the SRS. It represents the largest single UK science investment for 40 years and will have the capacity to host 40 beamlines. The Diamond Light Source supports a huge range of scientific disciplines, including condensed matter physics, chemistry, nanophysics, structural biology, engineering, environmental science and cultural heritage.
- Life science
Pharmaceutical companies and medical researchers are making increasing use of macromolecular crystallography. Improvements in the speed of data collection and solving structures mean that it is now possible to obtain structural information on a timescale that allows chemists and structural biologists to work together in the development of promising compounds into drug candidates. Both the anti-flu drug Tamiflu and Herceptin – used to treat advanced breast cancer – benefited from synchrotron experiments. Using synchrotron light in the infrared range, pioneering research is underway into developing new cancer therapies that can be tailored to the individual patient. In 2009, the Medical Research Council used the Diamond Light Source to compare the structure of hemagglutinin from the flu-virus strain that caused the 1957 “Asian” pandemic with the 1918 and 1968 outbreaks, to discover why some avian flu viruses are more able than others to jump the species gap.
- Engineering
Synchrotron X-ray beams allow detailed analysis and modelling of strain, cracks and corrosion as well as in situ study of materials during production processing. This research is vital to the development of high-performance materials and their use in innovative products and structures. The Diamond Light Source has been used to study the processes behind pitting corrosion, which attacks the so-called corrosion-resistant metals used in containers for nuclear waste, and to understand how applied stresses can cause cracks to propagate through materials.
- Environmental science
Synchrotron-based techniques have made a major impact in the field of environmental science in the last 10 years. High brightness allows high-resolution study of ultra-dilute substances, the identification of species and the ability to track pollutants as they move through the environment. Synchrotrons have been used to develop more efficient techniques for hydrogen storage and to study the way in which depleted uranium disperses into the local environment. Tiny heavy-metal samples excreted from earthworms have been compared with contaminated soil samples, revealing how earthworms survive in these environments and introducing the idea that earthworms could help to decontaminate land.
- Physics and materials science
Determining the properties and morphology of buried layers and interfaces is an important area in solid-state science with synchrotrons being the meeting ground of state-of-the-art theory and high-precision experimental results. Many of the technological products of materials science are based on thin-film devices, which consist of a series of such layers. Structural studies of in situ processing of semiconducting polymer films are also likely to be an important area of growth in the coming decade. Diffraction of high-intensity X-ray beams is an ideal technique to study spin, charge and orbital ordering in single-crystal samples to understand high-temperature superconductivity. The SRS was used to help study giant magneto-resistance (GMR), which is now used in billions of electronic devices worldwide.
- Cultural heritage
Cultural heritage is a rapidly expanding area of research using synchrotrons. Scientists are using non-destructive synchrotron techniques to find answers to big questions in palaeontology, archaeology, art history and forensics. Scientists in the UK have used the SRS and the Diamond Light Source to study samples from the Tudor warship the Mary Rose to enhance their conservation techniques, and the ESRF has been used to study insects more than 100 million years old, preserved in amber.
Seven Nobel Prizes in Physics have been awarded for X-ray related work. For example, British physicists Sir William Henry Bragg and William Lawrence Bragg shared the 1915 prize for using X-ray diffraction as a technique to determine crystal structure.
Synchrotron facilities have had a positive and significant impact on many areas. In technology, research into GMR is now benefiting data storage in billions of electronic devices like iPods – a market generating £1 bn per quarter. At the time the SRS closed in 2008, 11 of the top 25 companies in the UK R&D Scoreboard had used the facility.
Sir John Walker was awarded the Nobel Prize in Chemistry in 1997 for his work on the structure of Bovine F1 ATP Synthase – the first synchrotron-based Nobel prize. In 2006, the Nobel Prize in Chemistry was awarded to Prof. Roger Kornberg for his synchrotron-based research into how genes copy themselves, a process involved in many human diseases and stem-cell treatment. The structure of the foot-and-mouth-disease virus was determined first at the SRS, leading to potential new vaccines that could save the UK £80 m if another outbreak were to occur. Synchrotron light is considered essential in modern pharmaceutical research, illustrated by the 14% investment in the Diamond Light Source by the Wellcome Trust, the UK’s largest non-governmental funding body for biomedical research.
Throughout the lifetime of synchrotron facilities, 300 local businesses benefited from the SRS, with £300 m being awarded in contracts – the financial impact on the local economy throughout its lifetime is estimated to be almost £1 bn. Similarly, more than 1000 companies have benefited from construction or technology contracts for the Diamond Light Source and a quarter of the science carried out at the ESRF links directly to industry.
The demand for synchrotron light has meant that third-generation machines are being built around the world, and existing machines continue to be developed to provide brighter X-rays, increased user hours and more flexible experimental stations. The modular nature of modern synchrotrons means that new technologies can be incorporated into existing machines as they arrive. By using powerful linear accelerator technology, fourth-generation sources – known as free-electron lasers (FELs) – can generate shorter, femtosecond pulses but with the same intensity in each peak as synchrotron sources emit in one second, producing X-rays that are millions of times brighter in each pulse than the most powerful synchrotrons. FELs won’t replace third-generation machines, but will provide facilities that enable studies at higher peak brightness.
Thanks go to Gerhard Materlik and Sara Fletcher, the Diamond Light Source; Claire Dougan, STFC; and Emma Woodfield for their help with this case study. | http://www.iop.org/publications/iop/2011/page_47511.html |
4.28125 | Human activity contributes to the evolution of the landscape and movement of sediment, through the creation of artificial ground, including archaeological remains.
Artificial ground includes areas where the landscape has been modified through the removal or placement of rock, soil and waste material.
The type of excavation and composition of anthropogenic material reflect its origin and the process that emplaced it.
The composition of the material can be extremely variable both laterally and vertically, representing rapid land-use change in one locality.
Stratigraphically, artificial ground can be interpreted as sedimentary deposits or excavations representing the human geological record during the Anthropocene.
Artificial ground that is formed as a result of changing land use through time is commonly contaminated.
Natural sediments also preserve a distinctive chemical signature from anthropogenic activities including smelting of metals and burning fossil fuels.
Sediments recovered in areas like the Mersey Estuary, North West England, show distinct contamination at certain depths below the bed of the estuary. These signatures record the industrial development of the Mersey region as chemicals such as mercury were discharged into the Mersey during the Industrial Revolution.
Spoil from deep and opencast coal mining forms one of the most significant anthropogenic landforms in Great Britain.
Information from Ordnance Survey maps shows spoil heaps covering between 0.5 and 1 km² and containing millions of tonnes of waste.
Artificial ground comprising furnace slag covers large areas around many British towns and cities.
As humans moved from hunter-gatherer to farming and settlement communities, the modification of the landscape to agricultural areas can be found in seed and pollen remains in sediments.
Seeds and pollen from woodland trees and plants are followed by pollen from crops, providing evidence of the removal of large wooded areas as the land is converted to an agricultural landscape.
Increasing agricultural land use exposes the soil to the elements. Wind, water and slope processes lead to soil denudation and this is seen in pulses of sediment being deposited in valley bottoms.
Dam construction inadvertently traps sediment that would otherwise have been transported downstream by rivers into our seas and oceans.
Widespread industrial activity and urbanisation accelerated during the Industrial Revolution and are associated with increased emissions of carbon dioxide (CO2) and methane (CH4) into the atmosphere. This may represent a significant atmospheric marker defining the Anthropocene epoch. | http://www.bgs.ac.uk/research/climatechange/palaeo/anthropocene/Mapping.html |
4.375 | Math problems have a charm of their own. Besides, they help to develop a programmer's skill. Here, we describe a student's exam task: "Develop an application that models the behaviour of a Hypocycloid".
A cycloid is the curve defined by the path of a point on the edge of a circular wheel as the wheel rolls along a straight line. It was named by Galileo in 1599 (http://en.wikipedia.org/wiki/Cycloid).
A hypocycloid is a curve generated by the trace of a fixed point on a small circle that rolls within a larger circle. It is comparable to the cycloid, but instead of the circle rolling along a line, it rolls within a circle.
Use Google to find a wonderful book by Eli Maor, Trigonometric Delights (Princeton, New Jersey). The following passage is taken from this book.
I believe that a program developer must love deriving formulas. Hence, let us find the parametric equations of the hypocycloid.
A point P on a circle of radius r rolls on the inside of a fixed circle of radius R. Let C be the center of the rolling circle, and P a point on the moving circle. When the rolling circle turns through an angle b in a clockwise direction, its center C traces an arc of angular width t in a counterclockwise direction. Assuming that the motion starts when P is in contact with the fixed circle (see the figure in the original article), we choose a coordinate system in which the origin is at the center O of the fixed circle and the x-axis points to the initial position of P.
Let β denote the angle that the segment CP makes with the positive x-direction, measured clockwise. The coordinates of P relative to C are then
(r cos β, -r sin β)
The minus sign in the second coordinate is there because β is measured clockwise. The coordinates of C relative to O are
((R - r) cos t, (R - r) sin t)
The angle b may be expressed as b = t + β, so that β = b - t. Thus, the coordinates of P relative to O are
((R - r) cos t + r cos β, (R - r) sin t - r sin β)   (1)
But the angles t and b are not independent: as the motion progresses, the arcs of the fixed and moving circles that come in contact must be of equal length L:
L = R t = r b
Using this relation to express b in terms of t, we get b = R t / r, and hence β = b - t = (R / r - 1) t. Equations (1) become:
x = (R - r) cos t + r cos ((R / r - 1) t)   (2)
y = (R - r) sin t - r sin ((R / r - 1) t)
Equations (2) are the parametric equations of the hypocycloid, the angle t being the parameter (if the rolling circle rotates with constant angular velocity, t will be proportional to the elapsed time since the motion began). The general shape of the curve depends on the ratio R/r. If this ratio is a fraction m/n in lowest terms, the curve will have m cusps (corners), and it will be completely traced after moving the wheel n times around the inner rim. If R/r is irrational, the curve will never close, although going around the rim many times will nearly close it.
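As a quick numerical check of equations (2), the points of the curve can be tabulated directly from the parametric form. The following console sketch is illustrative only and is not part of the original article; the names HypocycloidDemo and PointAt are made up for this example.

using System;

class HypocycloidDemo
{
    // Evaluate equations (2) for a fixed circle of radius R, a rolling circle of radius r,
    // and parameter t.
    static (double X, double Y) PointAt(double R, double r, double t)
    {
        double k = R / r - 1.0;
        double x = (R - r) * Math.Cos(t) + r * Math.Cos(k * t);
        double y = (R - r) * Math.Sin(t) - r * Math.Sin(k * t);
        return (x, y);
    }

    static void Main()
    {
        double R = 3.0, r = 1.0;                  // R/r = 3 gives a deltoid with 3 cusps
        for (int i = 0; i <= 12; i++)
        {
            double t = 2.0 * Math.PI * i / 12.0;  // one full turn of the rolling circle's center
            var p = PointAt(R, r, t);
            Console.WriteLine($"t = {t:F2}  x = {p.X:F3}  y = {p.Y:F3}");
        }
    }
}

With R/r = 3 the curve returns to its starting point (3, 0) after one revolution of the center, as the m/n analysis above predicts for m = 3, n = 1.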
Using the code
The demo application provided with this article uses a Hypocycloid control derived from UserControl to model the behaviour of the hypocycloid described above.
The functionality of the hypocycloid is implemented in the Hypocycloid class. It has a GraphicsPath path data field that helps to render the hypocycloid path over time. A floating point variable, angle, corresponds to the angle t described earlier, and two helper fields hold
ratio = R / r
delta = R - r
All the math is done within the timer Tick event handler.
void timer_Tick(object sender, EventArgs e)
{
    angle += step;                            // advance the parameter t by one time step

    double cosa = Math.Cos(angle),
           sina = Math.Sin(angle),
           ct = ratio * angle;                // rotation angle used for the traced point

    // Center of the rolling circle: ((R - r) cos t, (R - r) sin t), shifted to the control's center
    movingCenter.X = (float)(centerX + delta * cosa);
    movingCenter.Y = (float)(centerY + delta * sina);

    // New position of the traced point P
    PointF old = point;
    point = new PointF(
        movingCenter.X + r * (float)Math.Cos(ct),
        movingCenter.Y - r * (float)Math.Sin(ct));

    // Count completed turns of the rolling circle's center around the fixed circle
    int n = (int)(angle / pi2);
    if (n > round)
        round = n;
    ParentNotify(msg + ";" + round);

    if (round < nRounds)
    {
        // The body of this branch was lost in the excerpt; presumably the new
        // segment is appended to the GraphicsPath here, e.g.:
        path.AddLine(old, point);
    }
    else if (!stopPath)
    {
        ParentNotify(msg + ";" + round + ";" + path.PointCount);
        stopPath = true;
    }
}
ParentNotify is an event of the generic delegate type Action<string>:
public event Action<string> ParentNotify;
We use it to notify a parent control of the current angle (round).
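For illustration, a parent form might subscribe to this event roughly as follows. This snippet is not from the article; the parameterless constructor and the statusLabel control are assumptions, and the code is meant to live inside the parent Form.

// Hypothetical wiring-up code in the parent form:
var hypo = new Hypocycloid();          // assumes a parameterless constructor
hypo.ParentNotify += s =>
{
    // The control sends semicolon-separated strings such as "msg;round"
    // or "msg;round;pointCount"; show them in a label on the form.
    statusLabel.Text = s.Replace(";", "  ");
};
Controls.Add(hypo);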
Besides a constructor, the class exposes several public methods, among them SaveToFile. Remember also that the Y axis in a Windows window points downwards. | http://www.codeproject.com/Articles/48576/Hypocycloid |
4.21875 | Rotation around a fixed axis is a special case of rotational motion. It does not involve rotation around more than one axis, and cannot describe such phenomena as wobbling or precession. The kinematics and dynamics of rotation around a fixed axis of a rigid object are mathematically much simpler than those for free rotation of a rigid body; they are entirely analogous to those of linear motion along a single fixed direction, which is not true for free rotation of a rigid body. The expressions for the kinetic energy of the object, and for the forces on the parts of the object, are also simpler for rotation around a fixed axis than for general rotational motion. For these reasons, rotation around a fixed axis is typically taught in introductory physics courses after students have mastered linear motion; the full generality of rotational motion is not usually taught in introductory physics classes. A rigid body is an object of finite extent in which all the distances between the component particles are constant. No truly rigid body exists; external forces can deform any solid. For our purposes, then, a rigid body is a solid which requires large forces to deform it appreciably. A change in the position of a particle in three-dimensional space can be completely specified by three coordinates. A change in the position of a rigid body is more complicated to describe. It can be regarded as a combination of two distinct types of motion: translational motion and rotational motion.
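To make the linear-rotational analogy concrete, the standard fixed-axis formulas can be set beside their linear counterparts. These equations are not part of the original excerpt; they use the usual symbols, with θ the angular position, ω the angular velocity, α the angular acceleration, I the moment of inertia about the axis, and τ the torque.
ω = ω0 + α t                     (compare v = v0 + a t)
θ = θ0 + ω0 t + (1/2) α t²       (compare x = x0 + v0 t + (1/2) a t²)
τ = I α                          (compare F = m a)
E_rot = (1/2) I ω²               (compare E_kin = (1/2) m v²)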
| http://educator.com/physics/physics-c/mechanics/jishi/energy-consideration-by-rotational-motion.php |
4.125 | What are sunspots?
The term "sunspot" appears in the news, often in connection with stories on aurorae, electrical outages, and problems with orbiting satellites. But often these stories don't really explain what sunspots are and why they're of interest to us on Earth. So what are sunspots?
The simplest answer is that sunspots are regions of the Sun's surface that are cooler than surrounding areas. Because they're cooler than the rest of the Sun's surface, they give off less light, and so appear to be dimmer than the rest of the Sun. Sunspots are still quite hot, and if they weren't contrasted with the much hotter surface of the Sun they'd appear to be very bright on their own. But because they're cooler than the rest of the Sun, they look very dark.
That's what they appear to be when we observe them, but what are they? What is their physical cause?
Why the Sun Shines
To understand sunspots, you have to understand a little about why the Sun shines. Heat is generated deep inside the Sun from the nuclear fusion of hydrogen atoms into helium. This process liberates lots of energy, but because the Sun is so large, it takes a long time for the energy generated deep inside to propagate outwards. It's the heat left over from the nuclear fusion process that makes the Sun's surface so hot.
The outermost layers of the Sun's interior behave like a pot of boiling water. In a boiling pot, you have the stove burner providing the heat; heat gets transferred to the water at the bottom of the pot, which then rises. Once the hot water gets to the top of the pot, it cools and sinks back to the bottom of the pot. This happens over and over again for as long as the stove is turned on (and you have water in the pot). In the Sun, it's a similar process. Hot gas deeper inside the Sun rises, and by doing so it transfers heat to higher and higher layers; we say that heat is being transferred convectively rather than radiatively. When astronomers take a high-resolution picture of the surface of the sun, we see millions and millions of tiny granules, each of which is a small convection cell that transmits heat from just under the surface to the surface itself, where the hot material then radiates its heat into space.
Magnetic fields and sunspots
What happens in a sunspot? The Sun is threaded with very strong magnetic fields, and sometimes these fields penetrate the surface. When this happens, the gas inside the Sun can become stuck or "frozen" to the magnetic fields like iron filings stuck to the sides of a magnet; the strength of the magnetic field overwhelms both convection and gravity, and holds the gas in place. When the gas is frozen to the magnetic fields, convection can be slowed or shut down entirely, preventing the convective transfer of energy from deeper inside the Sun to the surface. If there's less heat propagating through the Sun at the location of this magnetic field, the surface will cool down relative to its surroundings and form a sunspot.
Sunspots, solar activity, and the Earth
What makes sunspots so interesting to us is they're associated with lots of other activity on the Sun, some of which can affect the Earth. The strong magnetic fields that cause sunspots can be energetic enough to generate solar flares or even lift material off the Sun's surface! These energetic phenomena sometimes have consequences for the Earth, because they may generate X-rays or launch storms of energetic particles into interplanetary space, both of which can affect the Earth and its near-space environment.
One of the most notable effects can be the generation of aurorae. Over time, particles from the solar wind are trapped in the Earth's magnetic field, and when there is a solar outburst these trapped particles are re-energized. The particles funnel down the magnetic field lines into the atmosphere, where they collide with molecules in our atmosphere and excite it into emitting light of different colors (red and green being the most common). Aurorae are most common at high northern and southern latitudes (because that's where the Earth's magnetic poles are), but during an active solar cycle, we sometimes see aurorae much further south. The strongest solar storms can even disrupt electrical power grids on Earth, and they may also endanger orbiting satellites or the space station. While the temporary disruptions that solar activity can cause to satellites and electrical grids can be a nuisance, sunspots have little or no effect on our day-to-day lives and have no physiological effect on people or other living creatures on the Earth's surface. However, you may feel very lucky if you get to see a beautiful aurora in your nighttime sky!
The Solar Cycle
Individual sunspots may come and go on timescales of days or a few weeks, but the number of sunspots visible on the Sun's surface increases and decreases on a regular cycle of about 11 years. This 11-year cycle, the Solar Cycle, was discovered through the regular observation of sunspots by solar observers over several centuries. At the time this article is being written the Sun is approaching a new "solar maximum" for the current cycle; it should happen some time in 2013, after which the number of sunspots will slowly decline over the next several years.
AAVSO Solar observers are helping us understand sunspots and the behavior of the Sun with regular observations, and their data are even more important now with the imminent solar maximum. We encourage all solar observers to safely enjoy the nearest "variable star" to Earth over the coming months!
Safety note: The Sun should only be observed using specialized equipment. It is possible to permanently damage both your eyesight and your telescope by looking directly at the Sun -- never, ever look directly at the Sun through any optical device like binoculars or a telescope unless it is equipped with safe and well-tested filters specifically designed for doing so. If you do not have equipment or experience observing the Sun, you should seek assistance from experienced solar observers first. All observers should follow our Guidelines for Solar Observing, particularly if you are actively recording and reporting sunspot activity to the AAVSO Solar Program. You can read more about observing the Sun safely in this article from Sky & Telescope magazine.
| http://www.aavso.org/what-are-sunspots |
4 | Deforestation is the logging or burning of trees in forested areas. There are several reasons for doing so: trees or derived charcoal can be sold as a commodity and used by humans, while cleared land is used for pasture, plantations of commodities and human settlement. The removal of trees without sufficient reforestation has resulted in damage to habitat, biodiversity loss and aridity. Deforested regions also often degrade into wasteland.
Disregard or unawareness of intrinsic value, and lack of ascribed value, lax forest management and environmental law allow deforestation to occur on such a large scale. In many countries, deforestation is an ongoing issue which is causing extinction, changes to climatic conditions, desertification and displacement of indigenous people.
In simple terms, deforestation occurs because leaving land forested is often not economically viable. Clearing increases the amount of farmland, even though standing woods are used by native populations of over 200 million people worldwide.
The presumed value of forests as a genetic resource has never been confirmed by economic studies. As a result, owners of forested land lose money by not clearing the forest, and this affects the welfare of the whole society. From the perspective of the developing world, the benefits of forests as carbon sinks or biodiversity reserves go primarily to richer developed nations, and there is insufficient compensation for these services. As a result, some countries simply have too much forest. Developing countries feel that some countries in the developed world, such as the United States of America, cut down their forests centuries ago and benefited greatly from this deforestation, and that it is hypocritical to deny developing countries the same opportunities: the poor shouldn't have to bear the cost of preservation when the rich created the problem.
Aside from a general agreement that deforestation occurs to increase the economic value of the land, there is no agreement on what causes deforestation. Logging may be a direct source of deforestation in some areas and have no effect, or at worst be an indirect source, in others, because logging roads enable easier access for farmers wanting to clear the forest: experts do not agree on whether logging is an important contributor to global deforestation, and some believe that logging makes a considerable contribution to reducing deforestation because in developing countries logging reserves are far larger than nature reserves. Similarly, there is no consensus on whether poverty is important in deforestation. Some argue that poor people are more likely to clear forest because they have no alternatives; others that the poor lack the ability to pay for the materials and labour needed to clear forest. Claims that population growth drives deforestation are weak and based on flawed data, with population increase due to high fertility rates being a primary driver of tropical deforestation in only 8% of cases. The FAO states that the global deforestation rate is unrelated to the human population growth rate; rather, it is the result of a lack of technological advancement and inefficient governance. There are many causes at the root of deforestation, such as corruption and the inequitable distribution of wealth and power, population growth and overpopulation, and urbanization. Globalization is often viewed as a driver of deforestation.
According to British environmentalist Norman Myers, 5% of deforestation is due to cattle ranching, 19% to over-heavy logging, 22% due to the growing sector of palm oil plantations, and 54% due to slash-and-burn farming.
It is very difficult, if not impossible, to obtain reliable figures for the rate of deforestation. The FAO data are based largely on reporting from forestry departments of individual countries. The World Bank estimates that 80% of logging operations are illegal in Bolivia and 42% in Colombia, while in Peru illegal logging equals 80% of all activities. For tropical countries, deforestation estimates are very uncertain: based on satellite imagery, the rate of deforestation in the tropics is 23% lower than the most commonly quoted rates, and for the tropics as a whole deforestation rates could be in error by as much as +/- 50%. Conversely, a new analysis of satellite images reveals that deforestation in the Amazon basin is twice as fast as scientists previously estimated.
The UNFAO has the best long-term datasets on deforestation available; based on these datasets, global forest cover has remained approximately stable since the middle of the twentieth century, and based on the longest dataset available, global forest cover has increased since 1954. The rate of deforestation is also declining, with less and less forest cleared each decade. Globally the rate of deforestation declined during the 1980s, with even more rapid declines in the 1990s and still more rapid declines from 2000 to 2005. Based on these trends, global anti-deforestation efforts are expected to outstrip deforestation within the next half-century, with global forest cover increasing by 10 percent—an area the size of India—by 2050. Rates of deforestation are highest in developing tropical nations, although globally the rate of tropical forest loss is also declining, with tropical deforestation rates of about 8.6 million hectares annually in the 1990s, compared to a loss of around 9.2 million hectares during the previous decade.
The utility of the FAO figures has been disputed by some environmental groups. These questions are raised primarily because the figures do not distinguish between forest types. The fear is that highly diverse habitats, such as tropical rainforest, may be experiencing an increase in deforestation which is being masked by large decreases in less biodiverse dry, open forest types. Because of this omission it is possible that many of the negative impacts of deforestation, such as habitat loss, are increasing despite a decline in deforestation. Some environmentalists have predicted that unless significant measures, such as seeking out and protecting undisturbed old-growth forests, are taken on a worldwide basis to preserve them, by 2030 only ten percent will remain, with another ten percent in a degraded condition; 80 percent will have been lost, and with them the irreversible loss of hundreds of thousands of species.
Despite the ongoing reduction in deforestation over the past 30 years, deforestation remains a serious global ecological problem and a major social and economic problem in many regions. Thirteen million hectares of forest are lost each year, 6 million hectares of which are forest that had been largely undisturbed by man. This results in a loss of habitat for wildlife as well as reducing or removing the ecosystem services provided by these forests.
The decline in the rate of deforestation also does not address the damage already caused by deforestation. Global deforestation increased sharply in the mid-1800s, and about half of the mature tropical forests, between 7.5 million and 8 million square kilometres (2.9 million to 3 million sq mi) of the original 15 million to 16 million square kilometres (5.8 million to 6.2 million sq mi) that covered the planet until 1947, have been cleared.
The rate of deforestation also varies widely by region, and despite a global decline, in some regions, particularly in developing tropical nations, the rate of deforestation is increasing. For example, Nigeria lost 81% of its old-growth forests in just 15 years (1990-2005). All of Africa is suffering deforestation at twice the world rate. The effects of deforestation are most pronounced in tropical rainforests. Brazil has lost 90-95% of its Mata Atlântica forest. In Central America, two-thirds of lowland tropical forests have been turned into pasture since 1950. Half of the Brazilian state of Rondonia's 243,000 km² has been affected by deforestation in recent years, and tropical countries, including Mexico, India, the Philippines, Indonesia, Thailand, Myanmar, Malaysia, Bangladesh, China, Sri Lanka, Laos, Nigeria, Congo, Liberia, Guinea, Ghana and Côte d'Ivoire, have lost large areas of their rainforest. Because the rates vary so much across regions, the global decline in deforestation rates does not necessarily indicate that the negative effects of deforestation are also declining.
Deforestation trends could follow a Kuznets curve; however, even if true, this is problematic in so-called hot-spots because of the risk of irreversible loss of non-economic forest values, for example valuable habitat or species.
Deforestation is a contributor to global warming, and is often cited as one of the major causes of the enhanced greenhouse effect. Tropical deforestation is responsible for approximately 20% of world greenhouse gas emissions. According to the Intergovernmental Panel on Climate Change deforestation, mainly in tropical areas, account for up to one-third of total anthropogenic carbon dioxide emissions. Trees and other plants remove carbon (in the form of carbon dioxide) from the atmosphere during the process of photosynthesis and release it back into the atmosphere during normal respiration. Only when actively growing can a tree or forest remove carbon over an annual or longer timeframe. Both the decay and burning of wood releases much of this stored carbon back to the atmosphere. In order for forests to take up carbon, the wood must be harvested and turned into long-lived products and trees must be re-planted. Deforestation may cause carbon stores held in soil to be released. Forests are stores of carbon and can be either sinks or sources depending upon environmental circumstances. Mature forests alternate between being net sinks and net sources of carbon dioxide (see carbon dioxide sink and carbon cycle).
Reducing emissions from tropical deforestation and forest degradation (REDD) in developing countries has emerged as a new potential complement to ongoing climate policies. The idea consists of providing financial compensation for the reduction of greenhouse gas (GHG) emissions from deforestation and forest degradation.
The world's rainforests are widely believed by laymen to contribute a significant amount of the world's oxygen, although it is now accepted by scientists that rainforests contribute little net oxygen to the atmosphere and deforestation has no significant effect on atmospheric oxygen levels. However, the burning of forest plants to clear land releases tonnes of CO2, which contributes to global warming.
The water cycle is also affected by deforestation. Trees extract groundwater through their roots and release it into the atmosphere. When part of a forest is removed, the trees no longer evaporate away this water, resulting in a much drier climate. Deforestation reduces the content of water in the soil and groundwater as well as atmospheric moisture. Deforestation reduces soil cohesion, so that erosion, flooding and landslides ensue. Forests enhance the recharge of aquifers in some locales, however, forests are a major source of aquifer depletion on most locales.
Shrinking forest cover lessens the landscape's capacity to intercept, retain and transpire precipitation. Instead of trapping precipitation, which then percolates to groundwater systems, deforested areas become sources of surface water runoff, which moves much faster than subsurface flows. That quicker transport of surface water can translate into flash flooding and more localized floods than would occur with the forest cover. Deforestation also contributes to decreased evapotranspiration, which lessens atmospheric moisture which in some cases affects precipitation levels down wind from the deforested area, as water is not recycled to downwind forests, but is lost in runoff and returns directly to the oceans. According to one preliminary study, in deforested north and northwest China, the average annual precipitation decreased by one third between the 1950s and the 1980s.
Trees, and plants in general, affect the water cycle significantly in several ways.
As a result, the presence or absence of trees can change the quantity of water on the surface, in the soil or groundwater, or in the atmosphere. This in turn changes erosion rates and the availability of water for either ecosystem functions or human services.
The forest may have little impact on flooding in the case of large rainfall events, which overwhelm the storage capacity of forest soil if the soils are at or close to saturation.
Tropical rainforests produce about 30% of our planet's fresh water.
Undisturbed forest has very low rates of soil loss, approximately 2 metric tons per square kilometre (6 short tons per square mile). Deforestation generally increases rates of soil erosion, by increasing the amount of runoff and reducing the protection of the soil from tree litter. This can be an advantage in excessively leached tropical rain forest soils. Forestry operations themselves also increase erosion through the development of roads and the use of mechanized equipment.
China's Loess Plateau was cleared of forest millennia ago. Since then it has been eroding, creating dramatic incised valleys, and providing the sediment that gives the Yellow River its yellow color and that causes the flooding of the river in the lower reaches (hence the river's nickname 'China's sorrow').
Removal of trees does not always increase erosion rates. In certain regions of southwest US, shrubs and trees have been encroaching on grassland. The trees themselves enhance the loss of grass between tree canopies. The bare intercanopy areas become highly erodible. The US Forest Service, in Bandelier National Monument for example, is studying how to restore the former ecosystem, and reduce erosion, by removing the trees.
Tree roots bind soil together, and if the soil is sufficiently shallow they act to keep the soil in place by also binding with underlying bedrock. Tree removal on steep slopes with shallow soil thus increases the risk of landslides, which can threaten people living nearby. However most deforestation only affects the trunks of trees, allowing for the roots to stay rooted, negating the landslide.
Deforestation results in declines in biodiversity. The removal or destruction of areas of forest cover has resulted in a degraded environment with reduced biodiversity. Forests support biodiversity, providing habitat for wildlife; moreover, forests foster medicinal conservation. With forest biotopes being an irreplaceable source of new drugs (such as taxol), deforestation can destroy genetic variations (such as crop resistance) irretrievably.
Since tropical rainforests are the most diverse ecosystems on Earth and about 80% of the world's known biodiversity is found in them, the removal or destruction of significant areas of forest cover has resulted in a degraded environment with reduced biodiversity.
Scientific understanding of the process of extinction is insufficient to make accurate predictions about the impact of deforestation on biodiversity. Most predictions of forestry-related biodiversity loss are based on species-area models, with an underlying assumption that as forest declines, species diversity will decline similarly. However, many such models have been proven wrong, and loss of habitat does not necessarily lead to large-scale loss of species. Species-area models are known to overpredict the number of species known to be threatened in areas where actual deforestation is ongoing, and greatly overpredict the number of threatened species that are widespread.
It has been estimated that we are losing 137 plant, animal and insect species every single day due to rainforest deforestation, which equates to 50,000 species a year. Others state that tropical rainforest deforestation is contributing to the ongoing Holocene mass extinction. The known extinction rates from deforestation rates are very low, approximately 1 species per year from mammals and birds which extrapolates to approximately 23000 species per year for all species. Predictions have been made that more than 40% of the animal and plant species in Southeast Asia could be wiped out in the 21st century, with such predictions called into questions by 1995 data that show that within regions of Southeast Asia much of the original forest has been converted to monospecific plantations but potentially endangered species are very low in number and tree flora remains widespread and stable.
Damage to forests and other aspects of nature could halve living standards for the world's poor and reduce global GDP by about 7% by 2050, a major report concluded at the Convention on Biological Diversity (CBD) meeting in Bonn. Historically utilization of forest products, including timber and fuel wood, have played a key role in human societies, comparable to the roles of water and cultivable land. Today, developed countries continue to utilize timber for building houses, and wood pulp for paper. In developing countries almost three billion people rely on wood for heating and cooking. The forest products industry is a large part of the economy in both developed and developing countries. Short-term economic gains made by conversion of forest to agriculture, or over-exploitation of wood products, typically leads to loss of long-term income and long term biological productivity (hence reduction in nature's services). West Africa, Madagascar, Southeast Asia and many other regions have experienced lower revenue because of declining timber harvests. Illegal logging causes billions of dollars of losses to national economies annually.
The economic returns from the ventures that drive deforestation are often small compared with the value of the carbon they release. According to a study, "in most areas studied, the various ventures that prompted deforestation rarely generated more than US$5 for every ton of carbon they released and frequently returned far less than US$1." The price on the European market for an offset tied to a one-ton reduction in carbon is 23 euros (about $35).
See also: Timeline of environmental events.
Deforestation has been practiced by humans for tens of thousands of years before the beginnings of civilization. Fire was the first tool that allowed humans to modify the landscape. The first evidence of deforestation appears in the Mesolithic period. It was probably used to convert closed forests into more open ecosystems favourable to game animals. With the advent of agriculture, fire became the prime tool to clear land for crops. In Europe there is little solid evidence before 7000 BC. Mesolithic foragers used fire to create openings for red deer and wild boar. In Great Britain shade tolerant species such as oak and ash are replaced in the pollen record by hazels, brambles, grasses and nettles. Removal of the forests led to decreased transpiration resulting in the formation of upland peat bogs. Widespread decrease in elm pollen across Europe between 8400-8300 BC and 7200-7000 BC, starting in southern Europe and gradually moving north to Great Britain, may represent land clearing by fire at the onset of Neolithic agriculture.The Neolithic period saw extensive deforestation for farming land. Stone axes were being made from about 3000 BC not just from flint, but from a wide variety of hard rocks from across Britain and North America as well. They include the noted Langdale axe industry in the English Lake District, quarries developed at Penmaenmawr in North Wales and numerous other locations. Rough-outs were made locally near the quarries, and some were polished locally to give a fine finish. This step not only increased the mechanical strength of the axe, but also made penetration of wood easier. Flint was still used from sources such as Grimes Graves but from many other mines across Europe.
Throughout most of history, humans were hunter gatherers who hunted within forests. In most areas, such as the Amazon, the tropics, Central America, and the Caribbean, only after shortages of wood and other forest products occur are policies implemented to ensure forest resources are used in a sustainable manner.
In ancient Greece, Tjeerd van Andel and co-writers summarized three regional studies of historic erosion and alluviation and found that, wherever adequate evidence exists, a major phase of erosion follows the introduction of farming by about 500-1,000 years in the various regions of Greece, ranging from the later Neolithic to the Early Bronze Age. The thousand years following the mid-first millennium BCE saw serious, intermittent pulses of soil erosion in numerous places. Ports along the southern coasts of Asia Minor (e.g. Clarus, and the examples of Ephesus, Priene and Miletus, where harbors had to be abandoned because of the silt deposited by the Meander) and in coastal Syria silted up during the last centuries BC.
Easter Island has suffered from heavy soil erosion in recent centuries, aggravated by agriculture and deforestation. Jared Diamond gives an extensive look into the collapse of the ancient Easter Islanders in his book Collapse. The disappearance of the island's trees seems to coincide with a decline of its civilization around the 17th and 18th century.
The famous silting up of the harbor for Bruges, which moved port commerce to Antwerp, also follow a period of increased settlement growth (and apparently of deforestation) in the upper river basins. In early medieval Riez in upper Provence, alluvial silt from two small rivers raised the riverbeds and widened the floodplain, which slowly buried the Roman settlement in alluvium and gradually moved new construction to higher ground; concurrently the headwater valleys above Riez were being opened to pasturage.
A typical progress trap is that cities were often built in a forested area providing wood for some industry (e.g. construction, shipbuilding, pottery). When deforestation occurs without proper replanting, local wood supplies become difficult to obtain near enough to remain competitive, leading to the city's abandonment, as happened repeatedly in Ancient Asia Minor. The combination of mining and metallurgy often went along this self-destructive path.
Meanwhile most of the population remaining active in (or indirectly dependent on) the agricultural sector, the main pressure in most areas remained land clearing for crop and cattle farming; fortunately enough wild green was usually left standing (and partially used, e.g. to collect firewood, timber and fruits, or to graze pigs) for wildlife to remain viable, and the hunting privileges of the elite (nobility and higher clergy) often protected significant woodlands.
Major parts in the spread (and thus more durable growth) of the population were played by monastic 'pioneering' (especially by the Benedictine and Cistercian orders) and by some feudal lords actively attracting farmers to settle (and become taxpayers) by offering relatively good legal and fiscal conditions - even when they did so to launch or encourage cities, there was always an agricultural belt around and even quite some farming within the walls. When, on the other hand, demography took a real blow from such causes as the Black Death or devastating warfare (e.g. Genghis Khan's Mongol hordes in eastern and central Europe, the Thirty Years' War in Germany), this could lead to settlements being abandoned, leaving land to be reclaimed by nature, even though the secondary forests usually lacked the original biodiversity.
From 1100 to 1500 AD significant deforestation took place in Western Europe as a result of the expanding human population. The large-scale building of wooden sailing ships by European (coastal) naval powers from the 15th century onwards for exploration, colonisation, the slave trade and other trade on the high seas, for (often related) naval warfare (the failed invasion of England by the Spanish Armada in 1588 and the battle of Lepanto in 1571 are early cases of huge waste of prime timber; each of Nelson's Royal Navy warships at Trafalgar had required 6,000 mature oaks) and for piracy meant that whole woody regions were over-harvested, as in Spain, where this contributed to the paradoxical weakening of the domestic economy after Columbus' discovery of America made colonial activities (plundering, mining, cattle, plantations, trade ...) predominant.
In Changes in the Land (1983), William Cronon collected 17th century New England Englishmen's reports of increased seasonal flooding during the time that the forests were initially cleared, and it was widely believed that it was linked with widespread forest clearing upstream.
The massive use of charcoal on an industrial scale in Early Modern Europe was a new acceleration of the onslaught on western forests; even in Stuart England, the relatively primitive production of charcoal has already reached an impressive level. For ship timbers, Stuart England was so widely deforested that it depended on the Baltic trade and looked to the untapped forests of New England to supply the need. In France, Colbert planted oak forests to supply the French navy in the future; as it turned out, as the oak plantations matured in the mid-nineteenth century, the masts were no longer required.
Specific parallels are seen in twentieth century deforestation occurring in many developing nations.
The difficulties of estimating deforestation rates are nowhere more apparent than in the widely varying estimates of rates of rainforest deforestation. At one extreme, Alan Grainger, of Leeds University, argues that there is no credible evidence of any long-term decline in rainforest area, while at the other some environmental groups argue that one fifth of the world's tropical rainforest was destroyed between 1960 and 1990, that rainforests 50 years ago covered 14% of the world's land surface and have been reduced to 6%, and that all tropical forests will be gone by the year 2090. While the FAO states that the annual rate of tropical closed forest loss is declining (FAO data are based largely on reporting from forestry departments of individual countries) from 8 million hectares in the 1980s to 7 million in the 1990s, some environmentalists are stating that rainforests are being destroyed at an ever-quickening pace. The London-based Rainforest Foundation notes that "the UN figure is based on a definition of forest as being an area with as little as 10% actual tree cover, which would therefore include areas that are actually savannah-like ecosystems and badly damaged forests."
These divergent viewpoints are the result of the uncertainties in the extent of tropical deforestation. For tropical countries, deforestation estimates are very uncertain and could be in error by as much as +/- 50% while based on satellite imagery, the rate of deforestation in the tropics is 23% lower than the most commonly quoted rates . Conversely a new analysis of satellite images reveal that deforestation of the Amazon rainforest is twice as fast as scientists previously estimated. The extent of deforestation that has occurred in West Africa during the twentieth century is currently being hugely exaggerated .
Despite these uncertainties there is agreement that development of rainforests remains a significant environmental problem. Up to 90% of West Africa's coastal rainforests have disappeared since 1900. In South Asia, about 88% of the rainforests have been lost. Much of what remains of the world's rainforests is in the Amazon basin, where the Amazon Rainforest covers approximately 4 million square kilometres. The regions with the highest tropical deforestation rate between 2000 and 2005 were Central America -- which lost 1.3% of its forests each year -- and tropical Asia. In Central America, 40% of all the rainforests have been lost in the last 40 years. Madagascar has lost 90% of its eastern rainforests. As of 2007, less than 1% of Haiti's forests remain. Several countries, notably Brazil, have declared their deforestation a national emergency.
From about the mid-1800s, around 1852, the planet has experienced an unprecedented rate of destruction of forests worldwide. More than half of the mature tropical forests that covered the planet some thousands of years ago have been cleared.
A January 30, 2009 New York Times article stated, "By one estimate, for every acre of rain forest cut down each year, more than 50 acres of new forest are growing in the tropics..." The new forest includes secondary forest on former farmland and so-called degraded forest.
Africa is suffering deforestation at twice the world rate, according to the U.N. Environment Programme (UNEP). Some sources claim that deforestation has already wiped out roughly 90% of West Africa's original forests. Deforestation is accelerating in Central Africa. According to the FAO, Africa lost the highest percentage of tropical forests of any continent. According to figures from the FAO (1997), only 22.8% of West Africa's moist forests remain, much of this degraded. Massive deforestation threatens food security in some African countries. Africa experiences one of the highest rates of deforestation because 90% of its population depends on wood as the main source of fuel for heating and cooking.
Research carried out by WWF International in 2002 shows that in Africa, rates of illegal logging vary from 50% for Cameroon and Equatorial Guinea to 70% in Gabon and 80% in Liberia – where revenues from the timber industry also fuelled the civil war.
See main article: Deforestation in Ethiopia. The main cause of deforestation in Ethiopia, located in East Africa, is a growing population and subsequent higher demand for agriculture, livestock production and fuel wood. Other reasons include low education and inactivity from the government, although the current government has taken some steps to tackle deforestation. Organizations such as Farm Africa are working with the federal and local governments to create a system of forest management. Ethiopia, the third largest country in Africa by population, has been hit by famine many times because of shortages of rain and a depletion of natural resources. Deforestation has lowered the chance of getting rain, which is already low, and thus causes erosion. Bercele Bayisa, an Ethiopian farmer, offers one example why deforestation occurs. He said that his district was forested and full of wildlife, but overpopulation caused people to come to that land and clear it to plant crops, cutting all trees to sell as fire wood.
Ethiopia has lost 98% of its forested regions in the last 50 years. At the beginning of the 20th century, around 420,000 km² or 35% of Ethiopia's land was covered with forests. Recent reports indicate that forests cover less than 14.2% or even only 11.9% now. Between 1990 and 2005, the country lost 14% of its forests or 21,000 km².
Deforestation with resulting desertification, water resource degradation and soil loss has affected approximately 94% of Madagascar's previously biologically productive lands. Since the arrival of humans 2000 years ago, Madagascar has lost more than 90% of its original forest. Most of this loss has occurred since independence from the French, and is the result of local people using slash-and-burn agricultural practises as they try to subsist. Largely due to deforestation, the country is currently unable to provide adequate food, fresh water and sanitation for its fast growing population.
See main article: Deforestation in Nigeria. According to the FAO, Nigeria has the world's highest deforestation rate of primary forests. It has lost more than half of its primary forest in the last five years. Causes cited are logging, subsistence agriculture, and the collection of fuel wood. Almost 90% of West Africa's rainforest has been destroyed.
Iceland has undergone extensive deforestation since Vikings settled in the ninth century. As a result, vast areas of vegetation and land has degraded, and soil erosion and desertification has occurred. As much as half of the original vegetative cover has been destroyed, caused in part by overexploitation, logging and overgrazing under harsh natural conditions. About 95% of the forests and woodlands once covering at least 25% of the area of Iceland may have been lost. Afforestation and revegetation has restored small areas of land.
Victoria and NSW's remnant red gum forests including the Murray River's Barmah-Millewa, are increasingly being clear-felled using mechanical harvesters, destroying already rare habitat. Macnally estimates that approximately 82% of fallen timber has been removed from the southern Murray Darling basin, and the Mid-Murray Forest Management Area (including the Barmah and Gunbower forests) provides about 90% of Victoria's red gum timber.
One of the factors causing the loss of forest is expanding urban areas. Littoral Rainforest growing along coastal areas of eastern Australia is now rare due to ribbon development to accommodate the demand for seachange lifestyles.
See main article: Deforestation in Brazil. There is no agreement on what drives deforestation in Brazil, though a broad consensus exists that expansion of croplands and pastures is important. Increases in commodity prices may increase the rate of deforestation. Recent development of a new variety of soybean has led to the displacement of beef ranches and farms of other crops, which, in turn, move farther into the forest. Certain areas such as the Atlantic Rainforest have been diminished to just 7% of their original size. Although much conservation work has been done, few national parks or reserves are efficiently enforced. Some 80% of logging in the Amazon is illegal.
In 2008, Brazil's government announced a record rate of deforestation in the Amazon. Deforestation jumped by 69% in 2008 compared with the twelve months of 2007, according to official government data. Deforestation could wipe out or severely damage nearly 60% of the Amazon rainforest by 2030, says a new report from WWF.
One case of deforestation in Canada is happening in Ontario's boreal forests, near Thunder Bay, where 28.9% of a 19,000 km² of forest area had been lost in the last 5 years and is threatening woodland caribou. This is happening mostly to supply pulp for the facial tissue industry .
In Canada, less than 8% of the boreal forest is protected from development and more than 50% has been allocated to logging companies for cutting.
The forest loss is acute in Southeast Asia, the second of the world's great biodiversity hot spots. According to 2005 report conducted by the FAO, Vietnam has the second highest rate of deforestation of primary forests in the world second to only Nigeria. More than 90% of the old-growth rainforests of the Philippine archipelago have been cut.
Russia has the largest area of forests of any nation on Earth. There is little recent research into the rates of deforestation, but in 1992 two million hectares of forest were lost and in 1994 around three million hectares were lost. The present scale of deforestation in Russia is most easily seen using Google Earth; areas nearer to China are most affected, as it is the main market for the timber. Deforestation in Russia is particularly damaging as the forests have a short growing season due to extremely cold winters and therefore will take longer to recover.
At present rates, tropical rainforests in Indonesia would be logged out in 10 years, Papua New Guinea in 13 to 16 years. There are significantly large areas of forest in Indonesia that are being lost as native forest is cleared by large multi-national pulp companies and being replaced by plantations. In Sumatra tens of thousands of square kilometres of forest have been cleared often under the command of the central government in Jakarta who comply with multi national companies to remove the forest because of the need to pay off international debt obligations and to develop economically. In Kalimantan, between 1991 and 1999 large areas of the forest were burned because of uncontrollable fire causing atmospheric pollution across South-East Asia. Every year, forest are burned by farmers (slash-and-burn techniques are used by between 200 and 500 million people worldwide) and plantation owners. A major source of deforestation is the logging industry, driven spectacularly by China and Japan. . Agricultural development programs in Indonesia (transmigration program) moved large populations into the rainforest zone, further increasing deforestation rates.
A joint UK-Indonesian study of the timber industry in Indonesia in 1998 suggested that about 40% of throughput was illegal, with a value in excess of $365 million. More recent estimates, comparing legal harvesting against known domestic consumption plus exports, suggest that 88% of logging in the country is illegal in some way. Malaysia is the key transit country for illegal wood products from Indonesia.
Prior to the arrival of European-Americans about one half of the United States land area was forest, about 4 million square kilometers (1 billion acres) in 1600. For the next 300 years land was cleared, mostly for agriculture at a rate that matched the rate of population growth. For every person added to the population, one to two hectares of land was cultivated. This trend continued until the 1920s when the amount of crop land stabilized in spite of continued population growth. As abandoned farm land reverted to forest the amount of forest land increased from 1952 reaching a peak in 1963 of 3,080,000 km² (762 million acres). Since 1963 there has been a steady decrease of forest area with the exception of some gains from 1997. Gains in forest land have resulted from conversions from crop land and pastures at a higher rate than loss of forest to development. Because urban development is expected to continue, an estimated 93,000 km² (23 million acres) of forest land is projected be lost by 2050 , a 3% reduction from 1997. Other qualitative issues have been identified such as the continued loss of old-growth forest, the increased fragmentation of forest lands, and the increased urbanization of forest land.
According to a report by Stuart L. Pimm the extent of forest cover in the Eastern United States reached its lowest point in roughly 1872 with about 48 percent compared to the amount of forest cover in 1620. Of the 28 forest bird species with habitat exclusively in that forest, Pimm claims 4 become extinct either wholly or mostly because of habitat loss, the passenger pigeon, Carolina parakeet, ivory-billed woodpecker, and Bachman's Warbler.
A key factor in controlling deforestation could come from the Kyoto Protocol. Avoided deforestation also known as Reduced Emissions from Deforestation and Degradation (REDD) could be implemented in a future Kyoto Protocol and allow the protection of a great amount of forests. At the moment, REDD is not yet implemented into any of the flexible mechanisms as CDM, JI or ET.
New methods are being developed to farm more intensively, such as high-yield hybrid crops, greenhouses, autonomous building gardens, and hydroponics. These methods are often dependent on chemical inputs to maintain the necessary yields. In cyclic agriculture, cattle are grazed on farm land that is resting and rejuvenating; cyclic agriculture actually increases the fertility of the soil. Intensive farming, by contrast, can deplete soil nutrients by consuming the trace minerals needed for crop growth at an accelerated rate.
Deforestation presents multiple societal and environmental problems. The immediate and long-term consequences of global deforestation are almost certain to jeopardize life on Earth as we know it. Some of these consequences include loss of biodiversity, the destruction of forest-based societies, and climatic disruption. For example, extensive loss of the Amazon Rainforest could release enormous amounts of carbon dioxide back into the atmosphere.
Efforts to stop or slow deforestation have been attempted for many centuries because it has long been known that deforestation can cause environmental damage sufficient, in some cases, to cause societies to collapse. In Tonga, paramount rulers developed policies designed to prevent conflicts between the short-term gains from converting forest to farmland and the long-term problems forest loss would cause, while during the seventeenth and eighteenth centuries in Tokugawa Japan the shoguns developed a highly sophisticated system of long-term planning to stop and even reverse the deforestation of the preceding centuries, by substituting other products for timber and using land that had been farmed for many centuries more efficiently. In sixteenth-century Germany, landowners also developed silviculture to deal with the problem of deforestation. However, these policies tend to be limited to environments with good rainfall, no dry season and very young soils (through volcanism or glaciation). This is because on older and less fertile soils trees grow too slowly for silviculture to be economic, whilst in areas with a strong dry season there is always a risk of forest fires destroying a tree crop before it matures.
In the areas where "slash-and-burn" is practiced, switching to "slash-and-char" would prevent the rapid deforestation and subsequent degradation of soils. The biochar thus created, given back to the soil, is not only a durable carbon sequestration method, but it also is an extremely beneficial amendment to the soil. Mixed with biomass it brings the creation of terra preta, one of the richest soils on the planet and the only one known to regenerate itself.
In many parts of the world, especially in East Asian countries, reforestation and afforestation are increasing the area of forested land. The amount of woodland has increased in 22 of the world's 50 most forested nations. Asia as a whole gained 1 million hectares of forest between 2000 and 2005. Tropical forest in El Salvador expanded more than 20 percent between 1992 and 2001. Based on these trends, global forest cover is expected to increase by 10 percent (an area the size of India) by 2050.
In the People's Republic of China, where large-scale destruction of forests has occurred, the government has in the past required that every able-bodied citizen between the ages of 11 and 60 plant three to five trees per year or do the equivalent amount of work in other forest services. The government claims that at least 1 billion trees have been planted in China every year since 1982. This is no longer required today, but March 12 of every year is China's Planting Holiday. China has also introduced the Green Wall of China project, which aims to halt the expansion of the Gobi Desert through the planting of trees. However, because a large percentage of the trees die off after planting (up to 75%), the project has not been very successful, and regular carbon offsetting through the Flexible Mechanisms might have been a better option. There has been a 47-million-hectare increase in forest area in China since the 1970s. The total number of trees amounts to about 35 billion, and forest coverage has increased by 4.55% of China's land mass, from 12% two decades ago to 16.55% now.
In western countries, increasing consumer demand for wood products that have been produced and harvested in a sustainable manner is causing forest landowners and forest industries to become increasingly accountable for their forest management and timber harvesting practices.
The Arbor Day Foundation's Rain Forest Rescue program is a charity that helps to prevent deforestation. The charity uses donated money to buy up and preserve rainforest land before the lumber companies can buy it. The Arbor Day Foundation then protects the land from deforestation, which also protects the way of life of the indigenous tribes living on the forest land. Organizations such as Community Forestry International, The Nature Conservancy, World Wide Fund for Nature, Conservation International, African Conservation Foundation and Greenpeace also focus on preserving forest habitats. Greenpeace in particular has mapped the forests that are still intact and published this information on the internet. HowStuffWorks, in turn, has made a simpler thematic map showing the amount of forest present just before the age of man (8,000 years ago) and the current (reduced) levels of forest. Together, the Greenpeace map and the HowStuffWorks map indicate the amount of afforestation required to repair the damage caused by man.
To meet the world's demand for wood, the forestry writers Botkins and Sedjo have suggested that high-yielding forest plantations are suitable. It has been calculated that plantations yielding 10 cubic meters per hectare annually could supply all the timber required for international trade on 5 percent of the world's existing forestland. By contrast, natural forests produce about 1-2 cubic meters per hectare, so 5 to 10 times more forest land would be required to meet demand. Forester Chad Oliver has suggested a forest mosaic with high-yield forest lands interspersed with conservation land.
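To make the comparison concrete (the yields below simply restate the figures quoted above; the resulting percentages are a rough back-of-the-envelope estimate, not taken from Botkins and Sedjo): if plantations producing 10 cubic meters per hectare per year can meet internationally traded demand on 5 percent of existing forestland, then natural forests producing only 1-2 cubic meters per hectare would need 5 to 10 times that area, that is, roughly 25 to 50 percent of existing forestland, to supply the same volume of timber.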
According to an international team of scientists led by Pekka Kauppi, professor of environmental science and policy at Helsinki University, the deforestation already done could still be reversed by tree planting (e.g. CDM & JI afforestation/reforestation projects) within 30 years. The conclusion was based on an analysis of data acquired from the FAO.
Reforestation through tree planting (through, e.g., the noted CDM & JI A/R projects) might take advantage of changing precipitation due to climate change. This may be done by studying where precipitation is projected to increase (see the Globalis thematic map of 2050 precipitation) and setting up reforestation projects in those locations. Areas such as Niger, Sierra Leone and Liberia are especially important candidates, in large part because they also suffer from an expanding desert (the Sahara) and decreasing biodiversity (while being important biodiversity hotspots).
While the preponderance of deforestation is due to demands for agricultural and urban use for the human population, there are some examples of military causes. One example of deliberate deforestation is that which took place in the U.S. zone of occupation in Germany after World War II. Before the onset of the Cold War defeated Germany was still considered a potential future threat rather than potential future ally. To address this threat, attempts were made to lower German industrial potential, of which forests were deemed an element. Sources in the U.S. government admitted that the purpose of this was the "ultimate destruction of the war potential of German forests." As a consequence of the practice of clear-felling, deforestation resulted which could "be replaced only by long forestry development over perhaps a century."
War can also be a cause of deforestation, either deliberately such as through the use of Agent Orange during the Vietnam War where, together with bombs and bulldozers, it contributed to the destruction of 44 percent of the forest cover, or inadvertently such as in the 1945 Battle of Okinawa where bombardment and other combat operations reduced the lush tropical landscape into "a vast field of mud, lead, decay and maggots". | http://everything.explained.at/Deforestation/ |
4.46875 | On this day in 1861, Kansas is admitted to the Union as a free state. It was the 34th state to join the Union. The struggle between pro- and anti-slave forces in Kansas was a major factor in the eruption of the Civil War.
In 1854, Kansas and Nebraska were organized as territories with popular sovereignty (popular vote) to decide the issue of slavery. There was really no debate over the issue in Nebraska, as the territory was filled with settlers from the Midwest, where there was no slavery. In Kansas, the situation was much different. Although most of the settlers were anti-slave or abolitionists, there were many pro-slave Missourians lurking just over the border. When residents in the territory voted on the issue, many fraudulent votes were cast from Missouri. This triggered the massive violence that earned the area the name "Bleeding Kansas." Both sides committed atrocities, and the fighting over the issue of slavery was a preview of the Civil War.
Kansas remained one of the most important political questions throughout the 1850s. Each side drafted constitutions, but the anti-slave faction eventually gained the upper hand. Kansas entered the Union as a free state; however, the conflict over slavery in the state continued into the Civil War. Kansas was the scene of some of the most brutal acts of violence during the war. One extreme example was the sacking of Lawrence in 1863, when pro-slave forces murdered nearly 200 men and burned the anti-slave town. | http://www.history.com/this-day-in-history/kansas-enters-the-union |
4.09375 | Discover Canada, Mexico, and Central America from this selection of great resources.
This is a compilation of some of LEARN NC’s best instructional resources for teaching about the countries, people, cultures, and geography of North America.
- Comparing Governments - International
- This lesson focuses on comparing and contrasting national governments in North America and/or Central America. (Grade 5, English Language Arts, English Language Development, and Social Studies)
- Geo-friendly travel: Destination Honduras
- In this Xpeditions lesson, students explore a partnership between the government of Honduras and the National Geographic Society to highlight the concept of geotourism and its benefits. Activities in this lesson engage students in whole class discussion, online research, development of information literacy and map-reading skills, and use of an interactive online tool. (Grade 5 Social Studies)
- What they left behind: Early multi-national influences in the United States
- The lessons in this unit are designed to help your students make connections between European voyages of discovery, colonial spheres of influence, and various aspects of American culture. (Grade 5 English Language Arts and Social Studies)
- Simplicity: A Literature Based Paideia Seminar
- Students will apply their knowledge of how developments in the history of the United States, as well as the world, can impact the lives of people today. The lesson is based on the picture book The Simple People, written by Tedd Arnold and illustrated by Andrew Shachat. (Summary: The simple people enjoy the simple life until one of the character’s inventions is used to make life more complicated. As a result, everyone forgets the simple things in life.) After a Paideia seminar discussing the book, students will select a modern invention, research the history of its development and how it impacts society, and create a multi-media presentation. (Grade 5 English Language Arts and Social Studies)
- Using timeline games and Mexican history to improve comprehension
- This lesson explores Mexican history while engaging students’ active reading skills through the creation of a timeline. (Grade 5 English Language Arts and Social Studies)
- The Role of Mexican Folklore in Teaching and Learning
- One way teachers can connect with students of Mexican origin is by understanding the cultural knowledge they bring with them into the classroom, including the stories, proverbs, and legends they’ve learned. Learn more about Mexican folklore from this booklist and collection of online resources, and share this rich oral tradition with all your students.
- The Canadian Encyclopedia Histor!ca
- Discover Canada and its people, history and culture using the search tool or the Timelines; the “Explore Canada” section also has interactive maps, graphs and games.
- The Evidence Web
- Documents, including photographs, newspaper articles, letters and maps, about the exploration of Canada, the confederation (founding) of the nation, the prime minister of Canada, and more.
- Pathfinders and Passageways: The Exploration of Canada
- Learn about the explorers of Canada from prehistory to 20th century, presented, in most cases, from their own writings about their journey.
- Mexico Facts and Pictures
- Read about Mexico and get information, facts, photos, videos, and more from National Geographic Kids. | http://www.learnnc.org/lp/pages/4198 |
4.3125 | African Captives Yoked in Pairs
For Africans destined to be slaves in the New World, a long march lasting several months was not uncommon. This 19th century engraving by an unknown artist shows captives being driven by black slave traders.
European slave traders in Africa did not seize land from natives and colonize the coast, as they did in their New World settlements. Instead, they established a special relationship with local chieftains, who allowed them to maintain trading forts along the coast. Local Africans, rather than the Europeans themselves, acquired and supplied slaves to the white traders.
Image Credit: The Granger Collection, New York
| http://www.pbs.org/wgbh/aia/part1/1h316.html |
4.03125 | Cruising far beyond the outermost planets, two American spacecraft have discovered the first strong physical evidence of the long-sought boundary marking the edge of the solar system, where the solar wind ebbs and the cold of interstellar space begins.
Voyager 1, now 4.9 billion miles out from Earth, began detecting intense low-frequency radio emissions last August. Signals were received at the same time by Voyager 2, 3.7 billion miles away from Earth.
Now scientists, after long and careful analysis, have concluded that the radio waves were produced by electrically charged gases, or plasma, from the sun interacting with cold gases from interstellar space at the edge of the solar system, a boundary known as the heliopause.
In a report of the discovery yesterday at a meeting of the American Geophysical Union in Baltimore, Dr. Don Gurnett, a physicist at the University of Iowa who is a member of the Voyager science team, said, "Our assumption that this is the heliopause is based on the fact that there is no other known structure out there that could be causing these signals."
Other scientists agreed that the Voyager findings amounted to the first clear answer to what had been one of the great unanswered questions in space physics: the exact location of the outer boundary of the solar system. They said it appeared to confirm recent theories about how far the heliopause should lie from the sun.
Based on the radio data and readings from other Voyager instruments, Dr. Ralph McNutt of the Applied Physics Laboratory of Johns Hopkins University in Laurel, Md., estimated that the heliopause is somewhere from 82 to 130 times farther away from the sun than is the Earth.
The mean distance from Earth to the sun is 93 million miles, which is a standard measure known as an astronomical unit. Pluto, usually the most distant planet, is about 39 astronomical units from the sun.
The two Voyagers were launched in 1977 and long ago completed their primary missions of photographing the outer planets. Voyager 1 has traveled out 52 astronomical units, while Voyager 2 is 40 units distant from the sun.
So, if the heliopause is about 100 astronomical units, say, it would take Voyager 1 another 15 years to get there. Officials at the Jet Propulsion Laboratory, which directs the mission for NASA, have said the Voyagers could still be functioning and transmitting data well beyond that time.
"This discovery is an exciting indication that still more discoveries and surprises lie ahead for the Voyagers," said Dr. Edward C. Stone, the director of the laboratory in Pasadena, Calif., who is the chief scientist for the Voyager project.
For now, the radio signals from the boundary have had to travel far to reach the Voyagers. At first, scientists were mystified by the recordings, until they examined solar behavior in the weeks before the radio signals began to be heard, and found evidence mirrored in the data. | http://articles.mcall.com/1993-05-27/news/2915696_1_heliopause-interstellar-space-voyager-project |
4 | An asymptote is a line whose distance from a curve approaches zero as they extend toward infinity. The line stays close to the curve but does not intersect it.
Mainly there are three types of asymptotes:
1) Horizontal asymptotes,
2) Vertical asymptotes and
3) Oblique asymptotes.
For a graph represented by the function x = f(y), horizontal asymptotes are horizontal lines. These asymptotes are obtained when the function approaches a fixed constant value as 'y' tends to +∞ or −∞.
Vertical asymptotes are vertical lines near which the function grows without bound.
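To make these two definitions concrete, here is a small numerical sketch in Python; the sample function x = f(y) = 1/y + 2 and the chosen values of y are illustrative assumptions for this example only, not taken from any particular textbook.

# Illustrative sketch: checking horizontal and vertical asymptotes
# numerically for the sample function x = f(y) = 1/y + 2.
def f(y):
    return 1.0 / y + 2.0

# Horizontal asymptote: as y tends to +infinity, f(y) settles toward x = 2.
for y in [10, 1_000, 100_000]:
    print(f"f({y}) = {f(y):.6f}")   # prints 2.100000, 2.001000, 2.000010

# Vertical asymptote: as y approaches 0 from the right, f(y) grows without
# bound, so the line y = 0 is a vertical asymptote.
for y in [0.1, 0.001, 0.00001]:
    print(f"f({y}) = {f(y):.1f}")   # prints 12.0, 1002.0, 100002.0

The printed values only illustrate the limiting behaviour; the asymptotes themselves are the lines the curve approaches, not values the sketch computes exactly.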
Let’s try to understand oblique asymptote and Graphing Oblique Asymptotes.
An oblique asymptote is a linear asymptote. When a linear asymptote is not parallel to the x- or y-axis, it is called an oblique asymptote. An oblique asymptote is also called a slant asymptote.
Let’s consider the function f(y) = y + 1/y and plot its graph. In the resulting graph, the line x = y and the x-axis are both asymptotes.
A function f(y) is asymptotic to the straight line x = my + c (with m ≠ 0) if
Lim (y→ +∞) [f(y) − (my + c)] = 0, or
Lim (y→ −∞) [f(y) − (my + c)] = 0.
If the first limit holds, the line x = my + c is an oblique asymptote of f(y) as 'y' tends to +∞; if the second holds, it is an oblique asymptote of f(y) as 'y' tends to −∞.
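As a quick, purely illustrative check of this limit definition, the short Python sketch below applies it to the earlier curve f(y) = y + 1/y, whose oblique asymptote is the line x = y (so m = 1 and c = 0); the sample values of y are arbitrary choices made for this demonstration.

# Illustrative sketch: verifying the limit definition of an oblique
# asymptote for f(y) = y + 1/y with the candidate line x = y.
def f(y):
    return y + 1.0 / y

m, c = 1.0, 0.0  # candidate oblique asymptote x = m*y + c

# The gap f(y) - (m*y + c) should shrink toward 0 as y tends to +infinity,
# which confirms that x = y is an oblique asymptote of the curve.
for y in [10, 1_000, 100_000]:
    gap = f(y) - (m * y + c)
    print(f"y = {y:>7}: gap = {gap:.6f}")   # prints 0.100000, 0.001000, 0.000010

The same check with y tending to −∞ (for example y = -10, -1_000, -100_000) gives gaps of the same size with a negative sign, matching the second limit above.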
Oblique asymptotes can also be defined for rational functions. | http://www.tutorcircle.com/graphing-oblique-asymptotes-fQVlq.html |
4.03125 | The Secrets of the Ocean Floor
But that is only on land. If you were to measure from the bottom of the ocean, the tallest mountain in the world would probably be Mauna Kea in Hawaii. It rises more than 15,748 feet under the sea and another 13,779 feet above it, for a total of more than 29,527 feet.
There are such deep trenches in the deep sea that a mountain like Everest could disappear into them without a trace! (A trench is a long, narrow, steep depression in the ocean floor).
But, as US marine scientist Cindy Lee Dover points out in her eloquent book, ‘Deep Ocean Journeys: Discovering New Life at the Bottom of the Sea’ (1996), the seafloor is the "largest and least known wilderness on our planet. We know more about the surface of Mars and Venus and the back side of the moon than we know about the seafloor."
And that is a real tragedy, as she points out. For there is reason to believe that every process taking place on the ocean floor tells us about the processes that shaped the earth’s surface as we know it. "If we are to understand the geological forces that shape our planet, we must understand the geological processes expressed by features on the seafloor," says Dover.
The scientist has travelled to the depths of the ocean to explore the various kinds of complex creatures living in the deep waters near volcanic vents that could function as models for sites where life might have originated on earth.
The statistics she gives are startling: that the lava produced by underwater volcanoes in a year is enough to cover an area four times the size of Alaska with one metre of lava! More than one million volcanoes occur in the Pacific Ocean Basin alone, but less than one per cent of these volcanoes have been studied and monitored. That means, less than one per cent of the planet’s seafloor has been mapped, explains Dover.
That is why most of the deep-water fauna are unfamiliar even to scientists. Many of them perhaps do not even have names. But the richness of these deep-sea ecosystems can rival that of the tropical rainforests.
Some of the names given by scientists to freshly discovered animals are really funny, says Dover. Among them are the spaghetti worms, named for their resemblance to the pasta. Found in a tangle, it is often difficult to make out where one worm ends and another begins. Just like spaghetti, one might add.
At the time of the book’s publication, Dover was working as a visiting investigator at the Woods Hole Oceanographic Institution in Massachusetts, a pioneer in deep-sea research since the 60s.
By Chitra Padmanabhan; Illustration by Shinod AP
| http://www.pitara.com/discover/earth/online.asp?story=126 |
4.1875 | Fun Classroom Activities
The 20 enjoyable, interactive classroom activities that are included will help your students understand the text in amusing ways. Fun Classroom Activities include group projects, games, critical thinking activities, brainstorming sessions, writing poems, drawing or sketching, and more that will allow your students to interact with each other, be creative, and ultimately grasp key concepts from the text by "doing" rather than simply studying.
1. Write a haiku which you think summarizes the book.
2. Bin Game.
Divide into pairs. One member of the pair should answer questions and the other is the bin thrower. The idea is that the teacher asks a question about the book and the person who gets it right gives their teammate three chances to throw a ball in the bin. For every ball he gets in he scores 2 points. The winner is the pair with the most points.
| http://www.bookrags.com/lessonplan/salt/funactivities.html |
4.21875 | What is it?
Alliteration is a figure of speech in which the same sound appears at the beginning of two or more words. Alliterative words are consecutive or close to each other in the text.
Why is it important?
Alliteration focuses readers' attention on a particular section of text. Alliterative sounds create rhythm and mood and can have particular connotations. For example, repetition of the "s" sound often suggests a snake-like quality, implying slyness and danger.
How do I do it?
Use repeated sounds at the beginning of words to focus attention or convey an idea or emotion.
"Peter Piper picked a peck of pickled peppers.
Note: The repeated "p" sound punctuates each word of this well-known tongue-twister.
"Heavenly Hillsboro, the buckle on the bible belt."—Jerome Lawrence and Robert E. Lee, Inherit the Wind
Note: The authors repeat the "h" sound and then the "b" sound. Notice the soft, soothing effect of the "h" sounds and the sharp, percussive effect of the "b" sounds. | http://udleditions.cast.org/craft_ld_alliteration.html |