Slavery spread across the South, but not evenly and not with uniform consequences. In the 1850s, enslaved labor expanded into the mountains of southwest Virginia, western North Carolina, and eastern Tennessee, where it had not flourished before. The slave states of the upper South - from Maryland, through Virginia, North Carolina, Tennessee, Kentucky, and Missouri - saw themselves as buffers between a fanatical upper North and a fanatical lower South. The dominant crops along the border were wheat, corn, and livestock; they were slave states, not cotton states. The areas within those states with fewer slaves resented the domination of state government by the slaveholding interests.
The North experienced similar geographic divisions. New England, upstate New York, and the upper Great Lakes region increasingly defined themselves against the South, its economy, its values, and its politics. Much of the abolitionist crusade grew in those areas. New York City, New Jersey, Pennsylvania, and the lower Midwest, by contrast, prided themselves on their moderation. They often traded with southern neighbors, shared their prejudices against black Americans, and tended to vote for the Democrats. These areas, like those of the upper South, thought they could help steer the nation through any sectional crisis.
This complex mosaic of region, interest, ideology, culture, and party, ironically, helped unify the nation. The more variegated the regions, the less likely that northern interests and southern interests would lead to the division of the country that people had feared since the time of the Constitution and the 1820 Missouri Compromise. The stronger northern manufacturing grew, the more it depended on the cotton produced by slaves; the more specialized southern plantations became, the more they relied on northern manufacturers and importers. Voters in the cities in the North often proved sympathetic to the South and to slavery; cities in the South often supported talk of secession more than large planters did.
The Northern Flicker is native to North America and is the largest terrestrial, or ground-dwelling, woodpecker found throughout the continent. It is one of the few woodpeckers that are strongly migratory; flickers in the north, for example, may move south for the winter. Their main food source is ants, and they can be found climbing trees looking for ants or digging in the dirt for them on the ground. They generally nest in holes in trees and will reuse cavities from previous years or those made by other species. They also drum on objects other than trees, such as light posts, as a form of communication and territory defense. In the western form, the males have a red malar, or mustache, marking, while females do not.
Explore and share tips, strategies, and resources for helping students develop in the social sciences.
- Students deepen their understanding and build a sense of community by engaging with their peers’ reasoned arguments.
- Students learn how to share and listen to opposing beliefs with empathy.
- Look through the door of one classroom and you might see the students hunched over, not engaged, even frowning. Look through the door of another classroom, and you might see a room full of lively students, eager, engaged and participating. What is the second teacher doing that the first one isn't? He or she is using creativity in that classroom.
Article by Phil Livermore, Associate Professor of Geophysics, University of Leeds, and Jon Mound, Associate Professor of Geophysics, University of Leeds
The Earth’s magnetic field surrounds our planet like an invisible force field – protecting life from harmful solar radiation by deflecting charged particles away. Far from being constant, this field is continuously changing. Indeed, our planet’s history includes at least several hundred global magnetic reversals, where north and south magnetic poles swap places. So when’s the next one happening and how will it affect life on Earth?
During a reversal the magnetic field won't be zero, but will assume a weaker and more complex form. It may fall to 10 percent of the present-day strength, with magnetic poles at the equator or even multiple "north" and "south" magnetic poles existing simultaneously.
Geomagnetic reversals occur a few times every million years on average. However, the interval between reversals is very irregular and can range up to tens of millions of years.
There can also be temporary and incomplete reversals, known as events and excursions, in which the magnetic poles move away from the geographic poles – perhaps even crossing the equator – before returning to their original locations. The last full reversal, the Brunhes-Matuyama, occurred around 780,000 years ago. A temporary reversal, the Laschamp event, occurred around 41,000 years ago. It lasted less than 1,000 years, with the actual change of polarity lasting around 250 years.
Above: Supercomputer models of Earth’s magnetic field. On the left is a normal dipolar magnetic field, typical of the long years between polarity reversals. On the right is the sort of complicated magnetic field Earth has during the upheaval of a reversal.
Power cut or mass extinction?
The alteration in the magnetic field during a reversal will weaken its shielding effect, allowing heightened levels of radiation on and above the Earth’s surface. Were this to happen today, the increase in charged particles reaching the Earth would result in increased risks for satellites, aviation, and ground-based electrical infrastructure. Geomagnetic storms, driven by the interaction of anomalously large eruptions of solar energy with our magnetic field, give us a foretaste of what we can expect with a weakened magnetic shield.
In 2003, the so-called Halloween storm caused local electricity-grid blackouts in Sweden, required the rerouting of flights to avoid communication blackout and radiation risk, and disrupted satellites and communication systems. But this storm was minor in comparison with other storms of the recent past, such as the 1859 Carrington event, which caused aurorae as far south as the Caribbean.
The impact of a major storm on today’s electronic infrastructure is not fully known. Of course any time spent without electricity, heating, air conditioning, GPS or internet would have a major impact; widespread blackouts could result in economic disruption measuring in tens of billions of dollars a day.
In terms of life on Earth and the direct impact of a reversal on our species, we cannot definitively predict what will happen, as modern humans did not exist at the time of the last full reversal. Several studies have tried to link past reversals with mass extinctions – suggesting some reversals and episodes of extended volcanism could be driven by a common cause. However, there is no evidence of any impending cataclysmic volcanism, and so we would likely only have to contend with the electromagnetic impact if the field does reverse relatively soon.
We do know that many animal species have some form of magnetoreception that enables them to sense the Earth’s magnetic field. They may use this to assist in long-distance navigation during migration. But it is unclear what impact a reversal might have on such species. What is clear is that early humans did manage to live through the Laschamp event and life itself has survived the hundreds of full reversals evidenced in the geologic record.
Can we predict geomagnetic reversals?
The simple fact that we are "overdue" for a full reversal, combined with the fact that the Earth's field is currently decreasing at a rate of 5 percent per century, has led to suggestions that the field may reverse within the next 2,000 years. But pinning down an exact date – at least for now – will be difficult.
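To see why such extrapolations are rough, here is a back-of-the-envelope sketch (an illustration only, assuming the current 5-percent-per-century decline simply continued as steady compound decay, which real core dynamics need not do):

```python
import math

# Hypothetical extrapolation: suppose the field kept weakening by a
# steady 5% per century. How long until it reaches ~10% of today's
# strength, the level the article associates with a reversal?
decline_per_century = 0.05
target_fraction = 0.10

centuries = math.log(target_fraction) / math.log(1 - decline_per_century)
print(f"~{centuries:.0f} centuries, i.e. ~{centuries * 100:,.0f} years")
# -> ~45 centuries (~4,500 years): the same order of magnitude as the
#    2,000-year figure above, but highly sensitive to the assumed decay.
```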
The Earth's magnetic field is generated within the liquid core of our planet, by the slow churning of molten iron. Like the atmosphere and oceans, it moves according to the laws of physics. We should therefore be able to predict the "weather of the core" by tracking this movement, just like we can predict real weather by looking at the atmosphere and ocean. A reversal can then be likened to a particular type of storm in the core, where the dynamics – and magnetic field – go haywire (at least for a short while), before settling down again.
The difficulties of predicting the weather beyond a few days are widely known, despite us living within and directly observing the atmosphere. Yet predicting the Earth’s core is a far more difficult prospect, principally because it is buried beneath 3,000 km of rock such that our observations are scant and indirect. However, we are not completely blind: we know the major composition of the material inside the core and that it is liquid. A global network of ground-based observatories and orbiting satellites also measure how the magnetic field is changing, which gives us insight into how the liquid core is moving.
The recent discovery of a jet-stream within the core highlights our evolving ingenuity and increasing ability to measure and infer the dynamics of the core. Coupled with numerical simulations and laboratory experiments to study the fluid dynamics of the planet’s interior, our understanding is developing at a rapid rate. The prospect of being able to forecast the Earth’s core is perhaps not too far out of reach.
This article was originally published on The Conversation. Read the original article. Graphics added by WUWT
Scientists at the University of Rochester in New York have found that the sex hormone oestrogen controls how the brain processes sounds.
This is the first time that any study has shown that a sex hormone can directly affect auditory function.
The researchers say that their study points toward the possibility that oestrogen controls other types of sensory processing as well.
According to them, understanding how oestrogen changes the brain's response to sound may open the door to new ways of treating hearing deficiencies.
"We've discovered estrogen doing something totally unexpected. We show that estrogen plays a central role in how the brain extracts and interprets auditory information. It does this on a scale of milliseconds in neurons, as opposed to days, months or even years in which estrogen is more commonly known to affect an organism," says Raphael Pinaud, assistant professor of brain and cognitive sciences at the University of Rochester and lead author of the study.
The researchers note that past studies have already hinted at a connection between oestrogen and hearing in women with low levels of the hormone, something that often occurs after menopause.
However, Pinaud insists that no one previously knew that oestrogen plays such a direct role in determining auditory functions in the brain.
"Now it is clear that estrogen is a key molecule carrying brain signals, and that the right balance of hormone levels in men and women is important for reasons beyond its role as a sex hormone," says Pinaud.
Working in collaboration with Assistant Professor Liisa Tremere and postdoctoral fellow Jin Jeong, Pinaud showed that increasing oestrogen levels in brain regions that process auditory information heightened the sensitivity of sound-processing neurons, which then encoded more complex and subtle features of the sound stimulus.
He reveals that when the actions of oestrogen were blocked, or brain cells were prevented from producing the hormone within auditory centres, the signalling that is necessary for the brain to process sounds shut down.
His team have also shown that oestrogen is required to activate genes that instruct the brain to lay down memories of those sounds.
"It turns out that estrogen plays a dual role. It modulates the gain of auditory neurons instantaneously, and it initiates cellular processes that activate genes that are involved in learning and memory formation," says Pinaud.
Pinaud and his colleagues made these findings while studying how oestrogen may help change neuronal circuits to form memories of familiar songs in a type of bird typically used to understand the biology of vocal communication.
"Based on our findings we must now see estrogen as a central regulator of hearing. It both determines how carefully a sound must be processed, and activates intracellular processes that occur deep within the cell to form memories of sound experiences," he says.
The researchers will continue their studies to find out how neurons adapt their functionality when encountering new sensory information, and how these changes may ultimately enable the formation of memories.
They also will continue exploring the specific mechanisms by which estrogen might impact these processes.
"While we are currently conducting further experiments to confirm it, we believe that our findings extrapolate to other sensory systems and vertebrate species," says Pinaud. "If this is the case, we are on the way to showing that estrogen is a key molecule for processing information from all the senses."
The study has been published in The Journal of Neuroscience.
Bourke's parrots are small members of the parrot family and the only member of the genus Neopsephotus. They are named after General Sir Richard Bourke, a former governor of New South Wales. The back and wings are mainly brown in colour with blue markings on the underside of the wings and head. The breast and abdomen of the Bourke's parrot is pink. Females have less distinct pink and blue colouration than the males.
Found normally in groups of between four and six individuals, Bourke’s parrots have been known to form flocks of over 100 birds in drought conditions.
These parrots usually breed between August and December when there is high food availability, but this is dependent on rainfall. Nests are typically built in tree hollows between one and three metres above the ground. Whilst the female attends the eggs, the male will collect food for her. Both parents feed the hatchlings. The chicks fledge around four weeks after hatching.
Bourke’s parrots eat mainly seeds and also some grass shoots. They will fly to the ground to feed or use nearby bushes. Feeding usually occurs around dawn and dusk to avoid the hottest part of the day.
These parrots are classed as Least Concern, and the population is currently rising because habitat change has created suitable habitat for them. Historical introductions of cats, rabbits and sheep to their range reduced the population through predation and overgrazing of scrubland.
Model Good Behavior
A child’s social behavior is best reinforced when parents are kind, sincere and non-judgmental. Remember that they are looking to you to set an example of how to interact with others, and that taking a moment to consider how you interact with others is an important part of nurturing their social skills.
Share Your Family Values With Your Child
To help your 4th grader learn about the need for respectful behavior, help them create a family credo, coat of arms or crest. Talk with them about your beliefs and expectations, and work with them to come up with a list of your family’s values, like trust, respect, kindness and generosity. After you have this list, ask them to identify three different ways they can apply these values in social situations. You may also want to write out all of this information on a poster board and hang it in a central area in your home as a reminder of your family’s values and expectations.
Discuss Different Perspectives
To help your child understand and respect the perspectives of others, talk with them about a book that they're reading or a television show or movie that they watched recently, and ask them what would happen if the story were written from another perspective. For example, a book about King Arthur and Merlin the sorcerer can be told from Merlin’s sister Morgana’s perspective. Or Charlie and the Chocolate Factory can be told from Charlie’s grandfather’s point of view. By doing this, you are not only teaching your child how to see life through different lenses, but also building their capacity for empathy and understanding.
Discuss Social Issues Like Immigration and Racial and Gender Inequality
When you're watching the evening newscast or reading the morning paper, ask your child to give you their opinion on these issues and talk to them about the people involved on both sides. These types of stories make children aware of historical events and allow them to relate to the hardships and joys of others. They also help children to learn more about conflict resolution and the importance of respecting others and their opinions.
4th Grade Social Awareness Skills
During the late elementary years, your child is learning how to better manage and control their feelings when interacting with others.
In this lesson, students learn about the various factors and theories that help explain why the Roman Empire declined and fell.
A brief PowerPoint sets the lesson up and gives students their directions. For the activity, students are given a 2-page article that discusses 7 theories on why Rome declined. They then partner up to create a newspaper about the end of the Roman Empire. For the assignment, they write 1 editorial on what they believe was the biggest problem facing Rome, 1 editorial on which problem they believe required the least attention, and then create a comic strip for the comics page that explains 3 of the other theories.
The 7 factors the article discusses are:
3. Political Corruption
4. Decline in Morals and Values
5. Lead poisoning
6. Excessive Military Spending
7. Barbarian Invasions
Optionally, the remainder of the PowerPoint goes over the theories briefly for the teacher to use to help discuss them with the class.
- Dan Nguyen
2-page article, 12-slide PowerPoint
This question is important because old-growth tropical and (some) temperate forests have diverse and abundant epiphyte populations which are integral for the sustainable function of the forest.
The answer is... we don't know. However, some (lucky!) researchers in the tropics are working on this. A paper recently published by Woods & DeWalt studied the epiphyte populations of four secondary forests in Panama that had been left to recover after disturbance for 35, 55, 85 and 115 years.
- Species richness (number of different species) reached 74% of that of old-growth forests.
- Community composition (the combination of species) reached 75% of that of old-growth forests after 115 years.
So it appears that, given enough time, epiphyte richness and composition will recover. However, not everything recovered:
- The density of epiphytes (plants per tree) reached only 49% of that of old-growth forests.
The authors speculate that this slow recovery of epiphyte density "may be due to a low probability of colonisation of young host trees caused by epiphyte dispersal limitation".
They also say:
"Given another 100 years, epiphyte densities in secondary forests in central Panama might approach old-growth levels, but we conclude that, in the short-term, secondary moist forests are unlikely to compensate biologically for the loss of biological diversity and ecosystem functioning that high epiphyte densities provide. In tropical moist forests, oldgrowth forests are invaluable for the conservation of epiphytes, and secondary forests need more than 115 yr to recover all aspects of old-growth forest community structure." |
The increase of devastating weather extremes in summer is likely linked to human-made climate change, mounting evidence shows. Giant airstreams are circling the Earth, waving up and down between the Arctic and the tropics. These planetary waves transport heat and moisture. When these planetary waves stall, droughts or floods can occur. Warming caused by greenhouse-gases from fossil fuels creates favorable conditions for such events, an international team of scientists now finds.
"The unprecedented 2016 California drought, the 2011 U.S. heatwave and 2010 Pakistan flood as well as the 2003 European hot spell all belong to a most worrying series of extremes," says Michael Mann from the Pennsylvania State University in the U.S., lead-author of the study now to be published in Scientific Reports. "The increased incidence of these events exceeds what we would expect from the direct effects of global warming alone, so there must be an additional climate change effect. In data from computer simulations as well as observations, we identify changes that favor unusually persistent, extreme meanders of the jet stream that support such extreme weather events. Human activity has been suspected of contributing to this pattern before, but now we uncover a clear fingerprint of human activity."
How sunny days can turn into a serious heat wave
"If the same weather persists for weeks on end in one region, then sunny days can turn into a serious heat wave and drought, or lasting rains can lead to flooding", explains co-author Stefan Rahmstorf from the Potsdam Institute for Climate Impact Research (PIK) in Germany. "This occurs under specific conditions that favor what we call a quasi-resonant amplification that makes the north-south undulations of the jet stream grow very large. It also makes theses waves grind to a halt rather than moving from west to east. Identifying the human fingerprint on this process is advanced forensics."
Air movements are largely driven by temperature differences between the Equator and the Poles. Since the Arctic is warming more rapidly than other regions, this temperature difference is decreasing. Also, land masses are warming more rapidly than the oceans, especially in summer. Both changes have an impact on those global air movements. This includes the giant airstreams that are called planetary waves because they circle Earth's Northern hemisphere in huge turns between the tropics and the Arctic. The scientists detected a specific surface temperature distribution apparent during the episodes when the planetary waves' eastward movement has been stalling, as seen in satellite data.
Using temperature measurements since 1870 to confirm findings in satellite data
"Good satellite data exists only for a relatively short time - too short to robustly conclude how the stalling events have been changing over time. In contrast, high-quality temperature measurements are available since the 1870s, so we use this to reconstruct the changes over time," says co-author Kai Kornhuber, also from PIK. "We looked into dozens of different climate models - computer simulations called CMIP5 of this past period - as well as into observation data, and it turns out that the temperature distribution favoring planetary wave airstream stalling increased in almost 70 percent of the simulations since the start of the industrial age."
Interestingly, most of the effect occurred in the past four decades. "The more frequent persistent and meandering jet stream states seem to be a relatively recent phenomenon, which makes it even more relevant," says co-author Dim Coumou from the Department of Water and Climate Risk at VU University in Amsterdam (Netherlands). "We certainly need to further investigate this - there is some good evidence, but also many open questions. In any case, such non-linear responses of the Earth system to human-made warming should be avoided. We can limit the risks associated with increases in weather extremes if we limit greenhouse-gas emissions."
Article: Michael E. Mann, Stefan Rahmstorf, Kai Kornhuber, Byron A. Steinman, Sonya K. Miller, Dim Coumou (2017): Influence of Anthropogenic Climate Change on Planetary Wave Resonance and Extreme Weather Events. Scientific Reports [DOI: 10.1038/srep45242]
Genetic Analysis of Heart Defects
Birth defects appear in about 1 in 33 babies. Most birth defects are caused by a complex mix of factors including genetic, environmental and lifestyle factors. For the most part, these factors are not understood and it is rare to see many babies born with the same birth defect. In order to combine efforts and increase the capacity of state programs to carry out research on the etiology of birth defects, special centers were set up in different regions of the country to collect phenotypic information and biological materials from children with birth defects and their parents. The CDC has established centers in states with existing birth defect programs that have nationally recognized expertise in birth defects surveillance and research. All centers work together on the National Birth Defects Prevention Study (NBDPS).
The NBDPS is one of the largest case-control studies ever done on the causes of birth defects. Case mothers are women who have had babies or pregnancies affected by birth defects, and control mothers are women who had babies with no birth defects. The goal for each center is to talk to 300 case mothers and 100 control mothers per year. The study has three parts: 1) review of all cases by specialized doctors, 2) interviews with the mothers, and 3) collection of cheek cell samples from the families. This research will increase our understanding of the causes of birth defects and provide information that can be used to prevent birth defects.
As part of the NBDPS, our local lab is investigating the etiology and pathogenesis of malformations of the heart, concentrating on abnormalities of the left side of the heart and outflow tract defects. The specific research strategy is to identify gene variants that influence risk for heart malformations, especially gene variants causing Mendelian heart malformation disorders, and subsequently compile and analyze the clinical findings in these patients and their families in order to explore the relationship between specific gene mutations (i.e., genotype) and clinical characteristics (i.e., phenotype).
Thirteen percent of western snowy plovers breeding along the coast of California’s Humboldt County are responsible for producing half of all the snowy plover chicks born there every year, and their success comes down to one thing: where they put their nests. Plovers and their eggs are small and well camouflaged, which keeps them hidden from the probing eyes of hungry predators. But some plovers up their survival by choosing better real estate: those that build their nests on riverside gravel bars have an advantage over those that opt for sandy beaches.
The sparrow-sized birds are listed as threatened by the US Fish and Wildlife Service. The agency’s recovery plan lists three factors that limit the snowy plovers’ potential for recovery: habitat degradation due to invasive plants, human disturbance, and predation of eggs and chicks by ravens and crows.
To learn which of those threats has the biggest effect, wildlife biologists from Humboldt State University spent 12 years painstakingly monitoring a population of 200 snowy plovers nesting in Humboldt County. They tracked each male and female bird over the course of its lifetime to determine how many eggs it was responsible for, and how many of the resulting chicks survived to fledge. By calculating each bird's reproductive success, researchers Dana Herman and Mark Colwell determined which threats made some eggs less likely to hatch and some chicks less likely to survive.
The researchers found that neither human activity nor predator abundance affected the plovers' success. And building enclosures to protect the eggs didn't help either—the cages couldn't protect the chicks once they'd hatched and also made convenient perches for raptors in search of adult plovers. Instead, the researchers found that plovers nesting on gravel bars along the Eel River produced more than twice as many surviving offspring as those nesting on nearby beaches. (Even more impressively, they did so while laying fewer eggs than those nesting on sand.) Average gravel-nesters fledged three to four chicks over the course of their lives, while sand-nesters fledged one to two of their offspring.
“Higher reproductive success on gravel substrates, despite similar predator abundance on beaches, suggests that ravens and crows are less able to detect plover eggs and chicks [on gravel],” says Herman. In other words, gravel bars offer more egg-shaped rocks for enhanced camouflage.
Unfortunately, the area’s plovers mysteriously stopped nesting on gravel entirely in 2010. “Maybe they came back to nest and the water levels [in the river] were too high that year, so they started nesting on the beaches,” Herman speculates. And because the birds tend to nest in the same places at which they hatched, they’re stuck in a negative, sandy feedback loop.
Perhaps the beach will eventually become so uninviting that the plovers will again attempt nesting along the river's gravel-covered bars. Until then, wildlife officials can sprinkle the beaches with crushed seashells to better conceal the eggs and hatchlings from the prying eyes of crows and ravens, a method that's been successfully used in Oregon and Southern California. It wouldn't exactly turn the beach into prime shorebird real estate, but it can still make the neighborhood safer. And what homeowner wouldn't want that?
Naphthalene (also called white tar and tar camphor) is a white solid that evaporates easily. It's present in fuels such as petroleum and coal and has been used in mothballs and moth flakes. Burning tobacco or wood produces naphthalene. It has a strong but not unpleasant smell.
Naphthalene can be produced from coal or petroleum. Production volume in the United States decreased significantly from a peak of 409,000 tons in 1968 to 101,000 tons in 1994. Production capacity has remained relatively stable in recent years, with estimated capacity for 2004 at 97,700 tons. The major commercial use of naphthalene is in the manufacture of polyvinyl chloride (PVC) plastics. The major consumer products made from naphthalene are moth repellents, in the form of mothballs or crystals, and toilet deodorant blocks. It is also used for making dyes, resins, leather tanning agents, and the insecticide carbaryl. It enters the environment from industrial uses, from its use as a moth repellent, from the burning of wood or tobacco, and from accidental spills.
Most of the naphthalene entering the environment is discharged to the air. The largest releases result from the combustion of wood and fossil fuels and from the use of naphthalene-containing moth repellents. Smaller amounts of naphthalene are introduced to water as the result of discharges from coal-tar production and distillation processes.
In the atmosphere naphthalene is broken down rapidly, usually within one day. From the atmosphere it can also very slowly be deposited into water. It has a low water solubility of 31.7 mg/l and a rather low tendency to adsorb to particles. It is expected that only 10% of the naphthalene in water bodies is associated with particles. The main loss of naphthalene from water bodies is by evaporation into the atmosphere.
Naphthalene concentrations in the South Atlantic Ocean are around 6.3 ng/l.
Photo credit: Vasyl/Adobe Stock
The term vulnerable learner is becoming more commonly used, but what does it mean? Who are these learners, and why are they "vulnerable"? For the purpose of our work, we use the definition developed by the Organisation for Economic Co-operation and Development (OECD), which states that a vulnerable learner is any student who has one or more of the following identifiers: a lower socioeconomic background, special needs, a diverse gender, or status as a language learner or a member of a minority group (Reimers & Schleicher, 2020).
Any one of these factors may impact a student's educational achievement because it acts as a barrier to their ability to focus on learning without strong classroom and school support. Many language learners within our schools have more than one factor that places them at risk of failing academically, a risk that has been exacerbated by the COVID-19 pandemic. For vulnerable learners, the impact will be significantly more devastating.
Issues Facing Vulnerable Learners
Multicultural students come into our classrooms with diverse backgrounds that must be acknowledged by the teacher and school. Although perhaps well-intentioned, misguided actions that minimize the cultural and linguistic assets of multicultural students (e.g., giving every student an “American name” or lumping all Spanish speakers together as “Hispanic”) result in vulnerable learners not feeling like individuals, and not feeling socially and emotionally supported.
There are several teacher actions that can lead to undesirable student responses, and these can, in turn, lead to negative teacher perceptions; it is an unfortunate cycle. Figure 1 shows examples of such behavior and the damaging cycle that results, spiraling toward a widening achievement gap:
Figure 1. Expansion of the achievement gap for vulnerable students.
Lowering rigor puts already at-risk students in a position of falling even further behind their peers without a clear path to how that gap will be closed.
At a schoolwide level, vulnerable learners are often educationally disengaged. They might question the value of going to school, as many of them have experienced continued failure in multiple settings. This disengagement is demonstrated in tardiness and inappropriate behavior. Language learners and those with a disability may also struggle to focus on multistep directions, questions, or activities. Rather than reveal a lack of confidence or ability or show their sense of exclusion, many vulnerable learners would prefer to misbehave and skip classes. Adults then perceive that these students lack motivation and, therefore, do not want to learn.
However, by the secondary level, vulnerable learners are not motivated to keep trying in a system designed to keep them from succeeding.
These observable behaviors are universal across school districts, states, and countries. The question that needs to be asked is: How do we identify these behaviors for what they are and create an environment where these learners can feel secure knowing that our schools will help them? The changes that classrooms and schools may need to make are not expensive and do not require the purchase of a new program. They just need the dedication of teachers and administrators to create an equitable space for all learners.
Strategies and Supports That Can Lead to Success
Creating an environment that is all-inclusive and accepting of diversity will result in a welcoming space, which students can easily recognize. This can happen concurrently in the school and community and in the classroom.
In the School and Community
Reengage School and Community. Creating an equitable space begins with the school and community. Administrators can set the tone by helping their school and community reengage with each other as well as by taking the time to collaborate and engage with teachers. According to Siler (2020), "The best solution is often found after seeking information and engaging in dialog and thoughtful deliberation" (p. 108). These do not have to be in-depth sit-downs; they can be conducted through daily and/or weekly interactions making use of emails, short phone calls, or quick drop-ins with teachers, parents, and the community. Similarly, schoolwide initiatives that involve teachers, community, and parents as valuable members of the learning team will lead to increased dialog between teachers and families. Families should be recognized and used as a part of the team.
Set a Tone of Rigor. Administrators can also set the tone for how vulnerable learners are perceived academically in their schools. Building and supporting a demand for a rigorous curriculum goes a long way toward developing a school culture that emphasizes that all students can learn. Teachers can feel that they are being supported to push their students to their full potential and that their administrators will support them with job-embedded professional development, resources, and materials to reach all of their students.
Provide Extended Learning Opportunities. Purposefully scheduling extension and extended learning opportunities for all students increases equitable learning. Adequate time should be designed in the schedule for small-group instruction, high-dosage tutoring, and 1:1 instruction, as needed. This can be done during the school day as well as before and after, or even on the weekends.
Implement Restorative Practices. Last, but not least, it would be remiss not to talk about restorative practices (see www.iirp.org) and the need to take a step back from traditional and punitive practices when dealing with student discipline and infractions. "Restorative practices in schools create a healthy atmosphere that supports positive development, teaching, and learning" (Brown, 2018, p. 7). Social and emotional support for the whole school is just as important as academic needs.
In the Classroom
Create a Rigorous and Supportive Classroom Environment. Invoking an equitable space for all students in the school also demands that teachers design their classrooms with a similar focus. A rigorous, grade-level curriculum needs to be the standard, along with high expectations and academic support. Classrooms should be learner-focused spaces where students can struggle and work with the curriculum as well as each other, but also know that scaffolds and accommodations are in place as needed. Students are the focus of the class, not teachers. The expectation should be clear that students will articulate what they are learning with each other.
Create a Safe and Trusting Classroom Environment. Unquestionably, classrooms need to become places where students can receive restorative strategies and support. It is key to teach students the fundamentals of restorative practices so that when a conflict occurs, all involved parties can move towards repairing relationships. Restorative practices create spaces “…where students, teachers and staff want to be; where they feel safe, trusted, and accepted; and where they experience care and belonging” (Brown, 2018, p. 7). Challenges will occur, but our vulnerable learners need to learn how to reenter the classroom after a conflict and move forward.
When teachers are enacting the aforementioned strategies and supports, the classroom becomes a true place of learning for all. Learners are taught to celebrate mistakes, assume the best, and applaud their awesomeness (Boynton & Moore, 2021).
Students who are vulnerable learners are increasingly falling behind in schools, but when school administrators, teachers, and the community work together, these learners can achieve. Focus needs to be placed upon academics as well as positive social-emotional learning. It begins with administrators placing trust and support in their teachers. When the staff begins to feel the positive efficacy from the leadership team and know that they are valued members of the school, it can trickle down to how they perceive the learners in their classroom. This, in turn, will foster a positive classroom climate that promotes social as well as academic success.
Boynton, M. J., & Moore, J. (2021, November 4–6). Creating equity for vulnerable learners [Conference presentation]. Association for Middle Level Educators, Virtual.
Brown, M. A. (2018). Creating restorative schools: Setting schools up to succeed. Living Justice Press.
Reimers, F., & Schleicher, A. (2020). Schooling disrupted, schooling rethought. OECD.
Siler, J. M. (2020). Thrive through the five: Practice truths to powerfully lead through challenging times. Dave Burgess Consulting.
This article first appeared in TESOL Connections. Reprinted with permission.
What you’ll learn to do: describe the role financial markets play in an economy
In any given period, some households, businesses and governments earn more income than they spend. What do they do with their savings? Usually, it doesn’t make sense to put savings under your mattress or bury them in your backyard. Neither of those options will help your savings grow.
Other households, businesses and governments spend more than they earn. Households borrow money for new homes and new cars. Businesses borrow money to finance new physical capital investments. Governments borrow money to finance budget deficits. Where can these households, businesses and governments find the money to finance their expenditures?
The answer to each of these questions is financial markets. Financial markets are where savers put their savings to work and borrowers find the funds they need. In this section, we will provide an overview of financial markets to provide context for the subsequent discussion of money and the banking system.
- Describe financial markets and assets, including securities
In earlier modules, we observed that individuals can either consume or save their income. We also noted that investment in physical capital is the primary way businesses grow. Where do individuals put their savings, and where do businesses obtain the funding for investment expenditure? The answer to both of these questions is financial markets.
U.S. households and businesses saved almost $2.9 trillion in 2012. Where did that savings go and what was it used for? Some of the savings ended up in banks, which in turn loaned the money to individuals or businesses that wanted to borrow money. Some was invested in private companies or loaned to government agencies that wanted to borrow money to raise funds for purposes like building roads or mass transit. Some firms reinvested their savings in their own businesses.
Financial markets include the banking system, equity markets like the New York Stock Exchange or the NASDAQ Stock Market, bond markets, commodity markets, and more. In the 21st century, financial markets are global: Americans put their savings into foreign as well as domestic bank accounts, foreign and domestic stocks, and foreign and domestic bonds. All financial assets are called securities. Equities (i.e., stocks) give savers ownership in a company in return for dividends (a regular payment from the company) and/or capital gains (e.g., when you sell the stock at a profit). Bonds are a type of debt. All forms of debt are IOUs, where a saver lends money to a borrower in return for an interest payment.
Borrowing: Banks and Bonds
Businesses need money to operate and to grow. When a firm has a record of earning revenues, or better yet, of earning profits, it becomes possible for the firm to borrow money. Firms have two main borrowing methods: banks and bonds.
A bank loan for a firm works in much the same way as a loan for an individual who is buying a car or a house. The firm borrows an amount of money and then promises to repay it, including some rate of interest, over a predetermined period of time. If the firm fails to make its loan payments, the bank (or banks) can take the firm to court and require it to sell its buildings or equipment to pay its debt.
Another source of financial capital is a bond. A bond is a financial contract like a loan, but with two additional properties: Typically, bond interest rates are lower than loan interest rates, and there are organized secondary markets for bonds, making them more liquid to bondholders than loans. Bonds are issued by major corporations and also by various levels of government. For example, cities borrow money by issuing municipal bonds, states borrow money by issuing state bonds, and the federal government borrows money when the U.S. Department of the Treasury issues Treasury bonds.
Watch the clip from this video to see an explanation of how the government could sell bonds in order to raise funds to build a new stadium.
A large company, for example, might issue bonds for $10 million. The firm promises to make interest payments at an annual rate of 8%, or $800,000 per year, and then, after 10 years, will repay the $10 million it originally borrowed.
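A quick sketch of the cash flows in this hypothetical example:

```python
# Cash flows for the hypothetical bond above: $10 million principal,
# 8% annual coupon, repaid after 10 years.
principal = 10_000_000
coupon_rate = 0.08
years = 10

for year in range(1, years + 1):
    payment = principal * coupon_rate        # $800,000 interest per year
    if year == years:
        payment += principal                 # principal repaid at maturity
    print(f"Year {year:2d}: ${payment:,.0f}")

print(f"Total interest over the life of the bond: ${principal * coupon_rate * years:,.0f}")
```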
Treasury Bills, Notes and Bonds
When the U.S. federal government runs a deficit, it borrows the money from financial markets. The U.S. Treasury sells three types of debt: Treasury Bills, Treasury Notes and Treasury Bonds. Each of these debt instruments represents an IOU from the federal government. The difference between bills, notes and bonds is in their maturities: Bills are the shortest term debt with maturities less than one year. Notes have maturities between one and ten years. Bonds have maturities longer than ten years.
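The maturity rules above translate directly into a small classifier (a sketch; the function name and exact boundary handling are our own, not official Treasury definitions):

```python
def classify_treasury(maturity_years: float) -> str:
    """Classify U.S. Treasury debt by maturity, per the rules above."""
    if maturity_years < 1:
        return "Treasury bill"    # shortest term: under one year
    if maturity_years <= 10:
        return "Treasury note"    # between one and ten years
    return "Treasury bond"        # longer than ten years

for maturity in (0.25, 5, 30):
    print(maturity, "->", classify_treasury(maturity))
```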
The other major way that firms can acquire financial capital is by selling shares of stock. Stock represents ownership in a firm, or more precisely, a claim to a portion of the firm's profits and assets.
A corporation is a business that "incorporates"; that is, it is owned by shareholders who have limited liability for the company's debt but share in its profits (and losses). Corporations may be private or public, and may or may not have publicly traded stock. They may raise funds to finance their operations or new investments by raising capital through selling stock or issuing bonds.
Those who buy the stock become the firm’s owners, or shareholders. Stock represents firm ownership; that is, a person who owns 100% of a company’s stock, by definition, owns the entire company. The company’s stock is divided into shares. Corporate giants like IBM, AT&T, Ford, General Electric, Microsoft, Merck, and Exxon all have millions of stock shares. In most large and well-known firms, no individual owns a majority of the stock shares. Instead, large numbers of shareholders—even those who hold thousands of shares—each have only a small slice of the firm’s overall ownership.
When a large number of shareholders own a company, there are three questions to ask:
- How and when does the company obtain money from the sale of its stock?
- What rate of return does the company promise to pay when it sells stock?
- Who makes decisions in a company owned by a large number of shareholders?
First, a firm receives money from the stock sale only when the company sells its own stock to the public (the public includes individuals, mutual funds, insurance companies, and pension funds). We call a firm's first stock sale to the public an initial public offering (IPO). The IPO is important for two reasons. For one, the IPO, and any stock issued thereafter, such as stock held as treasury stock (shares that a company keeps in its own treasury) or new stock issued later as a secondary offering, provides the funds to repay the early-stage investors, like the angel investors and the venture capital firms. A venture capital firm may have a 40% ownership in the firm. When the firm sells stock, the venture capital firm sells its part ownership of the firm to the public. A second reason for the importance of the IPO is that it provides the established company with financial capital for substantially expanding its operations.
However, most of the time when one buys and sells corporate stock the firm receives no financial return at all. If you buy General Motors stock, you almost certainly buy them from the current share owner, and General Motors does not receive any of your money. This pattern should not seem particularly odd. After all, if you buy a house, the current owner receives your money, not the original house builder. Similarly, when you buy stock shares, you are buying a small slice of the firm’s ownership from the existing owner—and the firm that originally issued the stock is not a part of this transaction.
Second, when a firm decides to issue stock, it must recognize that investors will expect to receive a rate of return. That rate of return can come in two forms. A firm can make a direct payment to its shareholders, called a dividend. Alternatively, a financial investor might buy a share of stock in Wal-Mart for $45 and then later sell it to someone else for $60, for $15 gain. We call the increase in the stock value (or of any asset) between when one buys and sells it a capital gain. Note that it is also possible that a stockholder can suffer a capital loss, if the price of the stock when sold is less than the price when it was purchased. Thus, while the potential benefits of stock ownership are unlimited, there is a risk of losing some or all of what was invested.
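The two forms of return combine into a simple total-return calculation (a sketch using the hypothetical Wal-Mart figures above; the zero dividend is our own simplifying assumption):

```python
# Total return = (dividends + capital gain) / purchase price.
buy_price = 45.00
sell_price = 60.00
dividends = 0.00          # assume no dividends were paid, for simplicity

capital_gain = sell_price - buy_price            # $15; negative => capital loss
total_return = (dividends + capital_gain) / buy_price

print(f"Capital gain: ${capital_gain:.2f}")
print(f"Total return: {total_return:.1%}")       # 33.3% on the $45 invested
```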
Third: Who makes the decisions about when a firm will issue stock, or pay dividends, or re-invest profits? To understand the answers to these questions, it is useful to separate firms into two groups: private and public.
A private company is owned by the people who run it on a day-to-day basis. Individuals can run a private company. We call this a sole proprietorship. If a group runs it, we call it a partnership. A private company can also be a corporation, but with no publicly issued stock. A small law firm run by one person, even if it employs some other lawyers, would be a sole proprietorship. Partners may jointly own a larger law firm. Most private companies are relatively small, but there are some large private corporations, with tens of billions of dollars in annual sales, that do not have publicly issued stock, such as farm products dealer Cargill, the Mars candy company, and the Bechtel engineering and construction firm.
When a firm decides to sell stock, which financial investors can buy and sell, we call it a public company. Shareholders own a public company. Since the shareholders are a very broad group, often consisting of thousands or even millions of investors, the shareholders vote for a board of directors, who in turn hire top executives to run the firm on a day-to-day basis. The more stock a shareholder owns, the more votes that shareholder is entitled to cast for the company’s board of directors.
In theory, the board of directors helps to ensure that the firm runs in the interests of the true owners—the shareholders. However, the top executives who run the firm have a strong voice in choosing the candidates who will serve on their board of directors. After all, few shareholders are knowledgeable enough or have enough personal incentive to spend energy and money nominating alternative board members.
Watch the clip for a brief introduction and explanation of stock markets.
Glossary
- bills:
- short term (less than one year) debt instruments
- bond:
- a financial contract through which a borrower like a corporation, a city or state, or the federal government agrees to repay the amount that it borrowed and also a rate of interest over a period of time in the future; usually long-term (greater than 10 year) debt instruments
- bondholder:
- someone who owns bonds and receives the interest payments
- capital gain:
- a financial gain from buying an asset, like a share of stock or a house, and later selling it at a higher price
- corporation:
- a business owned by shareholders who have limited liability for the company's debt yet a share of the company's profits; may be private or public and may or may not have publicly-traded stock
- debt instruments:
- IOUs, such as bills, notes, and bonds, through which a saver lends money to a borrower in return for interest payments
- dividend:
- a direct payment from a firm to its shareholders
- equities or stocks:
- ownership in a private company (unlike debt which conveys no ownership)
- financial markets:
- marketplace where money is invested and borrowed, or in other words, where securities are traded
- initial public offering (IPO):
- original sale of stock by a corporation
- mutual funds:
- funds that buy a range of stocks or bonds from different companies, thus allowing an investor an easy way to diversify
- notes:
- intermediate term (1-10 year) debt instruments
- private company:
- a firm owned by the people who run it on a day-to-day basis
- public company:
- a firm that has sold stock to the public, which in turn investors then can buy and sell
- securities:
- synonym for financial assets, or a certificate or other financial instrument that has monetary value and can be traded. These can be debt securities like bonds or equity securities like stocks.
- shareholders:
- people who own at least some shares of stock in a firm
- shares:
- a firm's stock, divided into individual portions
- sole proprietorship:
- a company run by an individual as opposed to a group
- stock:
- a claim on partial ownership of a specific firm
- Treasury bond:
- a bond issued by the federal government through the U.S. Department of the Treasury
- venture capital:
- financial investments in new companies that are still relatively small in size, but that have potential to grow substantially
Microbial life dominates the earth, but many species are difficult or even impossible to study under laboratory conditions. Sequencing DNA directly from the environment, a technique commonly referred to as metagenomics, is an important tool for cataloging microbial life. This culture-independent approach involves collecting samples that include microbes in them, extracting DNA from the samples, and sequencing the DNA. A sample may contain many different microorganisms, macroorganisms, and even free-floating environmental DNA. A fundamental challenge in metagenomics has been estimating the abundance of organisms in a sample based on the frequency with which the organism's DNA was observed in reads generated via DNA sequencing.
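To make the estimation problem concrete, here is a minimal sketch (hypothetical read counts and genome sizes, not data from this study): because a larger genome contributes more DNA per cell, raw read counts are typically normalized by genome length before comparing organisms.

```python
# Hypothetical example: estimate relative cell abundance from read counts.
reads_mapped = {"organism_A": 50_000, "organism_B": 50_000}
genome_size_bp = {"organism_A": 2_000_000, "organism_B": 8_000_000}

# Reads per base of genome ~ relative number of cells (assuming unbiased
# extraction and sequencing -- exactly the assumption this study tests).
weights = {org: reads_mapped[org] / genome_size_bp[org] for org in reads_mapped}
total = sum(weights.values())

for org, weight in weights.items():
    print(f"{org}: {weight / total:.0%} of cells")
# -> organism_A: 80%, organism_B: 20%: equal read counts can imply very
#    unequal cell counts once genome size is taken into account.
```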
We created mixtures of ten microbial species for which genome sequences are known. Each mixture contained an equal number of cells of each species. We then extracted DNA from the mixtures, sequenced the DNA, and measured the frequency with which genomic regions from each organism were observed in the sequenced DNA. We found that the observed frequency of reads mapping to each organism did not reflect the equal numbers of cells that were known to be included in each mixture. The relative organism abundances varied significantly depending on the DNA extraction and sequencing protocol utilized.
We describe a new data resource for measuring the accuracy of metagenomic binning methods, created by in vitro simulation of a metagenomic community. Our in vitro simulation can be used to complement previous in silico benchmark studies. In constructing a synthetic community and sequencing its metagenome, we encountered several sources of observation bias that likely affect most metagenomic experiments to date and present challenges for comparative metagenomic studies. DNA preparation methods have a particularly profound effect in our study, implying that samples prepared with different protocols are not suitable for comparative metagenomics.
Citation: Morgan JL, Darling AE, Eisen JA (2010) Metagenomic Sequencing of an In Vitro-Simulated Microbial Community. PLoS ONE 5(4): e10209. https://doi.org/10.1371/journal.pone.0010209
Editor: Francisco Rodriguez-Valera, Universidad Miguel Hernandez, Spain
Received: December 11, 2009; Accepted: March 12, 2010; Published: April 16, 2010
This is an open-access article distributed under the terms of the Creative Commons Public Domain declaration which stipulates that, once placed in the public domain, this work may be freely reproduced, distributed, transmitted, modified, built upon, or otherwise used by anyone for any lawful purpose.
Funding: This project was funded primarily by Laboratory Directed Research and Development Program funds from the Lawrence Berkeley National Laboratory. The work was conducted in part at the U.S. Department of Energy Joint Genome Institute which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. A. Darling was supported by NSF fellowship DBI-0630765. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: Jonathan Eisen is associated with PLoS as Editor-in-Chief of PLoS Biology.
The vast majority of life on earth is microbial, and efforts to study many of these organisms via laboratory culture have met with limited success, leading to use of the term "the uncultured majority" when describing microbial life on earth. Metagenomics holds promise as a means to access the uncultured majority, and can be broadly defined as the study of microbial communities using high-throughput DNA sequencing technology without requirement for laboratory culture. Metagenomics might also offer insights into population dynamics of microbial communities and the roles played by individual community members. Toward that end, a typical metagenomic sequencing experiment will identify a community of interest, isolate total genomic DNA from that community, and perform high-throughput sequencing of random DNA fragments in the isolated DNA. The procedure is commonly referred to as shotgun metagenomics or environmental shotgun sequencing. Sequence reads can then be assembled in the case of a low-complexity sample, or assigned to taxonomic groupings using various binning strategies without prior assembly. As binning is a difficult problem, many methods have been developed, each with its own strengths.
Assuming the shotgun metagenomics protocol represents an unbiased sampling of the community, one could analyze such data to infer the abundance of individual species or functional units such as genes across different communities and through time. However, many sources of bias may exist in a shotgun metagenomics protocol. These biases are not unique to random sequencing of environmental DNA; they have also been addressed in studies of uncultured microbial communities using PCR-amplified 16S rRNA sequence data. For example, it has been shown that differences in cell wall and membrane structures may cause DNA extraction to be more or less effective for some organisms, and differences in DNA sequencing protocol might introduce biases in the resulting sequences. We also expect that methods to assign metagenomic reads to taxonomic groupings may introduce their own biases and performance limitations.
In selecting a particular metagenomic protocol, an awareness of alternative approaches and their limitations is essential. Toward this end, others have endeavored to benchmark the various steps of a typical metagenomic analysis. A few studies have attempted to quantify the efficiency and organismal bias of various DNA extraction protocols using environmental samples, but these have included unknown, indigenous microbes. One other benchmark of metagenomic protocols focused mainly on the informatic challenge of assigning reads from a priori unknown organisms to taxonomic groups in a reference phylogeny. In that in silico simulation, the authors randomly sampled sequence reads from 113 isolate genomes and mixed them to create three “communities” of varying complexity. While that type of informatic simulation of metagenomic reads is a useful approach for benchmarking different binning methods, the models used for such simulations simply cannot capture all factors affecting read sampling from a real metagenome sequencing experiment. Even if the model complexity were increased, appropriate values would need to be experimentally determined for the new simulation model parameters.
In this work, we describe an in vitro metagenomic simulation intended to inform and complement the in silico simulations used by others for benchmarking. Using organisms for which completed genome sequences were available, we created mixtures of cells with equal quantities of each organism. We then isolated DNA from the mixtures and used two approaches to obtain sequence data. For all simulated metagenomic samples, we created small-insert clone libraries that were end-sequenced using Sanger chain termination sequencing and capillary gel electrophoresis. For one of the samples, we generated additional sequence using the cloning-independent pyrosequencing method on the Roche GS20. The resulting sequence data were then analyzed for biases introduced during metagenome sequencing. For this study, organisms were chosen to represent a breadth of phylogenetic distance, cell morphology, and genome characteristics in order to provide useful test data for benchmarking binning methods. This experiment was not designed to test specific hypotheses about how those factors or others may influence the distribution of reads in a metagenomic survey. Nevertheless, these data can be used to determine appropriate parameter ranges for metagenomic simulation studies, or directly as a test dataset for binning.
Results and Discussion
Constructing a simulated metagenome
Organism selection was guided by the data available in the Genomes On Line Database as of November 2007. Pathogens, obligate symbionts, and obligate anaerobes were removed from consideration for the simulated metagenome because these organisms are difficult to culture in our laboratory setting. We selected ten organisms representing all three domains of life and several levels of phylogenetic divergence. Halobacterium sp. NRC-1 and Saccharomyces cerevisiae S288C were selected to represent the archaeal and eukaryotic domains, respectively. Because it has been shown that cell membrane structure can have a significant effect on DNA extraction efficiency, we included both Gram-positive and Gram-negative bacterial species. Five relatively closely related organisms were selected from among the lactic acid bacteria, a clade of low-GC, Gram-positive Firmicutes (Pediococcus pentosaceus, Lactobacillus brevis, Lactobacillus casei, Lactococcus lactis cremoris SK11, and Lactococcus lactis cremoris IL1403). To provide phylogenetic breadth within the Bacteria, we also included Myxococcus xanthus DK 1622 (a delta-proteobacterium), Shewanella amazonensis SB2B (GenBank accession CP000507; a gamma-proteobacterium), and Acidothermus cellulolyticus 11B (an Actinobacterium). Figure 1 gives the placement of the organisms on the tree of life and Table 1 lists some general features of each organism. These ten organisms were not selected to represent a real, functional community; rather, they were chosen to provide sequence data that would best allow testing of the accuracy and specificity of various binning methods. To this end, we chose five phylogenetically diverse species with very different genome compositions and five species that are relatively closely related to each other, with very similar genome compositions.
A phylogenetic tree of three domains with representative groups is shown. Organisms used in this study are indicated by *. The organisms used represent all known domains of life, include four bacterial phyla, a variety of genome sizes, GC compositions, and cell wall types. Large font size indicates clades where multiple isolate genomes have been collapsed into a single leaf node.
As described in Methods below, cultures for each organism were grown and cells from each culture were counted using flow cytometry. We then constructed two distinct simulated microbial communities that were made by mixing all organisms with different approaches (see Figure 2). The first approach involved mixing the cultures directly prior to extracting DNA from the collection of mixed cells. To this mixture, two DNA extraction techniques were applied in parallel: an enzymatic extraction with a bead beater (referred to throughout as “EnzBB”), and the Qiagen DNeasy kit (referred to throughout as “DNeasy”). Preliminary sequence data from this mixture included no reads from the halophilic archaeon, Halobacterium sp. NRC-1. One possible explanation for this observation is that upon mixing, the high-salt culture medium in which the Halobacterium cells were growing was diluted, causing them to lyse. If cell lysis occurred rapidly, before recovery of the mixed cell pellet, no DNA would be recovered from the lysed cells. To address this possibility, we made a second mixture of cells using a different approach. The second approach involved pelleting a known number of cells from each individual culture, mixing the cell pellets, then performing DNA extraction on the mixed pellets using an enzymatic DNA extraction (referred to throughout as “Enz”). Simulated metagenomic DNA samples were then subjected to high-throughput sequencing using Sanger sequencing and pyrosequencing technologies (see Methods for a description of the sequencing protocols). Finally, to assess DNA extraction efficiency for each organism in isolation, an enzymatic extraction with a bead beating step (EnzBB) was applied to each isolate culture separately. Table 1 documents the quantification of total DNA extracted from each organism individually.
Taxonomic Assignment of Reads
For each simulated metagenome, we used a BLAST search to map quality-controlled reads back to the set of reference genomes, yielding a count of reads assigned to each organism (see Methods for details). A complete set of read mappings and summaries of the numbers of reads assigned to each organism are given in Table 2.
Many reads did not map back to reference genomes using our stringent criteria. Such reads may represent highly conserved sequences that hit multiple genomes, making unambiguous mapping impossible; may have had too few high-quality bases; or may represent an unknown source of sequence library contamination. To further investigate the origins of unmapped reads, we searched those reads using BLAST against the NCBI non-redundant nucleotide database (see Table 3). We find that many unmapped reads do hit organisms present in our sample, but do so with less than 95% sequence identity. Sequencing errors, either in our data or in the published genome data, may contribute to this category of reads. In general, the lower-identity reads follow the taxonomic abundance distribution of mapped high-identity reads. We also found a substantial number of hits to parts of a Lactococcus bacteriophage, phismq8. This phage genome was not present (lysogenized) in either of the two reference Lactococcus genome sequences. All of the Lactococcus strains used for this study are the same strains, from the same lab, that were the source for the genome sequencing projects, suggesting that at least one of the Lactococcus cultures had been infected with a virus of external origin in the time since its genome was originally sequenced. The phage may have been actively infecting one of the Lactococcus cultures. Finally, several unmapped reads showed high identity to members of the genus Bacillus. Those reads suggest a low level of Bacillus contamination in one of the simulated metagenomes.
Observed and predicted number of reads for each organism
By counting the number of reads mapped to each reference genome and normalizing by the total read count, it is possible to estimate the relative abundance of organisms in each simulated metagenomic sample. Figure 3 shows the frequency at which reads are observed for each organism in our samples. These observed read frequencies can be considered as possibly biased estimates of the organism relative abundance in our simulated environmental samples.
The fraction of reads assigned to organisms for each sample preparation method is shown at top. The fraction expected given the measured quantities of mixed DNA from each organism, assuming unbiased library prep and sequencing, is given as “DNA quantification”, and the fraction of reads predicted based on cell count and genome size is given as “cc*gs prediction.” Sampling error was estimated assuming a multinomial distribution (not shown) and indicated that estimates of relative abundance are accurate to ±5% for dominant organisms given the number of Sanger reads obtained, and to ±1% for pyrosequencing reads. Note that the top two bars, labeled Enz+Pyrosequencing and Enz+Sanger, offer a comparison of Sanger and pyrosequencing technology on the same extracted DNA.
Given that a known quantity of each organism was mixed in the metagenomic simulation, we next investigated whether estimates of organism relative abundance based on sequencing read counts would match the predicted abundance given the way in which our sample was created. To do so, we must first derive a predicted relative DNA abundance based on the known cell count relative abundances. Because we included an equal number of cells per organism in our mixtures, a simple prediction would be that the number of reads per organism in each sequencing library would be directly proportional to their genome sizes. The relative abundance predicted based on genome size and cell counts (cc*gs) is shown in Figure 3. Using the cc*gs predictor of relative organism abundance, we tested whether the observed abundances followed the expected distribution. We found that cc*gs is a poor predictor of organism abundance in our sequence libraries (χ2 test, all p-values <<0.001, Bonferroni multiple test correction). However, some organisms in our experiment, such as Halobacterium, may be polyploid, and for many microbes the copy number of the entire chromosome (or some segments of it) can vary depending on growth phase or other factors. Also, the amount of DNA from an organism that is available to become part of a sequencing library depends on the efficiency of the DNA extraction protocol. In a mixed sample, organisms with thick cell walls may yield relatively little DNA, leading to an under-representation of that organism in the final sequencing library.
For these reasons, simply counting cells and accounting for genome size may not provide us with an accurate prediction of relative organism DNA abundance. We developed an alternative means to predict the relative DNA abundance of organisms by extracting DNA from a known number of cells of each organism in isolation and quantifying the amount of extracted DNA (see Table 1). We did so using the extraction method (EnzBB) that has been demonstrated in previous studies to achieve the maximum DNA yield from even the most recalcitrant cells. This DNA quantification provides another way to estimate the amount of DNA per cell that we should expect from the simulated metagenomic samples. We predict the reads per organism to be directly proportional to the amount of DNA that can be extracted from each cell. Of course, this prediction based on isolate DNA extraction (DNA quantification) does not provide a perfect expectation of the relative organism abundance in extractions of mixed communities, but it does, at least theoretically, better account for the effects of DNA extraction efficiency and genome copy number per cell. Nevertheless, the observed organism abundance in our sequence libraries does not match the expectation based on DNA quantification (χ2 test, all p-values <<0.001).
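To make the comparison concrete, the sketch below shows the kind of calculation involved: building the cc*gs expectation from genome sizes (equal cell counts assumed) and testing the observed counts against it with a χ2 goodness-of-fit test. The genome sizes and read counts here are invented placeholders, not values from this study; the same test applies to the DNA-quantification prediction by swapping in measured DNA-per-cell values.

```python
import numpy as np
from scipy.stats import chisquare

# Hypothetical values for illustration only (not data from this study).
genome_sizes_mb = np.array([2.6, 2.4, 12.1, 2.0, 4.7])  # one entry per organism
observed_reads = np.array([3100, 90, 1450, 2200, 4800])

# cc*gs prediction: with equal cell counts, expected reads are
# proportional to genome size.
expected_reads = genome_sizes_mb / genome_sizes_mb.sum() * observed_reads.sum()

chi2_stat, p_value = chisquare(observed_reads, f_exp=expected_reads)
print(f"chi2 = {chi2_stat:.1f}, p = {p_value:.3g}")
# A very small p-value means the observed abundances do not follow
# the cc*gs prediction.
```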
While this experiment was not designed to test specific hypotheses about how phylogeny, cell morphology, or genome characteristics may affect the outcome of a metagenomic survey, some interesting observations can be made. For example, because they have been shown to be more recalcitrant to lysis, one might expect that the organisms with the Gram-positive cell wall structure might consistently be under-represented in our libraries relative to the prediction based on isolate DNA extraction. This was not the case in our libraries, where in any given sample, some Gram-positive organisms were more abundant and others less abundant relative to our prediction (Figure 3). One also might expect that closely related organisms that share many genome characteristics would show the same distribution under a given preparation protocol. However, this is not the case with the five lactic acid bacteria, wherein even two strains of the same named species (Lactococcus lactis) differ in their read counts by more than an order of magnitude. In the EnzBB library, for example, of the 11552 mapped reads, 3389 reads mapped to the Lactococcus lactis IL1403 genome while only 86 mapped to the Lactococcus lactis SK11 genome (see Table 2 and Figure 3).
The difference in read frequencies among members of the same named species cannot be ascribed to a lack of sequence differences between the two strains' genomes causing a failure in read assignment. Whole-genome alignment using the Mauve genome alignment software reveals that the two Lactococcus isolates have approximately 87% average nucleotide identity throughout their genomes, and fewer than 1% of subsequences of the length of our reads lack differences to guide taxonomic assignment.
Of course, factors other than DNA extraction efficiency may contribute to differences between the predicted number of reads based on isolate DNA extraction and the observed number of reads. These include 1) cloning bias, which refers to the phenomenon whereby some DNA sequences are more readily propagated in E. coli; 2) sequencing bias, which can refer to the propensity of the polymerase enzyme used for Sanger sequencing to stall and fall off when regions of the molecule with secondary structure are encountered, or to errors introduced into pyrosequencing reads where there are homopolymeric runs; and 3) computational difficulties with accurately and specifically binning reads. Future studies might attempt to disentangle the contribution of each of these factors to overall bias.
Comparison of DNA extraction methods
In terms of the relative abundance of organisms based on sequence reads, all metagenomic samples were significantly different from each other and significantly different from the estimated expected distribution (χ2 test, p-value <<0.001 for all pairwise comparisons; see Table 2 for data). Halobacterium sp. NRC-1, Saccharomyces cerevisiae S288C, and Lactococcus lactis cremoris SK11 were under-represented in all libraries relative to the prediction based on isolate DNA extraction, whereas Acidothermus cellulolyticus and Shewanella amazonensis SB2B were over-represented in every library. Some organisms, e.g., Pediococcus pentosaceus, Lactococcus lactis cremoris IL1403, and Myxococcus xanthus DK 1622, were much more abundant in one library than in others (Figure 3).
The results demonstrate that two libraries created from a single mixture of organisms, but prepared using DNA that has been extracted by different protocols (i.e., Enz, EnzBB, or DNeasy), can produce reads that seem to represent two very different underlying communities. Therefore, the purpose of a metagenomic survey must be taken into consideration when choosing a DNA extraction protocol. While using multiple DNA extraction procedures on a single environmental sample can increase the likelihood that every organism in an environment will be sampled, doing so can also complicate quantitative comparisons of multiple samples.
Comparison of Sanger vs pyrosequencing
One advantage of pyrosequencing over clone library-based (Sanger) sequencing is the elimination of cloning bias. The Enz DNA extraction was split into two samples (Figure 2), one of which was cloned and sequenced using Sanger sequencing while the other was used to construct a library for pyrosequencing. These two libraries, like all others, yielded significantly different taxonomic distributions of reads (all χ2 tests have p-values <<0.001). However, the χ2 statistic was lower (χ2 = 381.69) than any of the Sanger library pairwise comparisons, all of which had χ2>10397. This suggests that the effect of DNA extraction is more pronounced than the bias introduced by clone-based sequencing. Additionally, cloning bias has been shown to be influenced by GC content, and in this experiment, the GC content of the Sanger-sequenced sample (56.0% GC) and the pyrosequenced sample (56.7% GC) using the same DNA extraction protocol were very similar. On the other hand, the GC content of the Sanger-sequenced libraries, using different DNA-extraction methods, ranged from 48% to 61% (Table 3).
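For comparing two libraries directly (rather than against a prediction), a contingency-table χ2 test over per-organism read counts is one way such pairwise statistics can be computed. The counts below are invented for illustration, not the study's data:

```python
from scipy.stats import chi2_contingency

# Invented per-organism read counts for two libraries (illustration only).
enz_sanger = [520, 35, 1240, 88, 2410]
enz_pyro = [5100, 400, 11900, 860, 23800]

chi2_stat, p_value, dof, _expected = chi2_contingency([enz_sanger, enz_pyro])
print(f"chi2 = {chi2_stat:.2f}, dof = {dof}, p = {p_value:.3g}")
# A lower chi2 for one pair of libraries than for another suggests their
# taxonomic read distributions are more similar to each other.
```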
The Enz+pyrosequenced metagenome differs from the Enz+Sanger metagenome in the types of reads that failed taxonomic assignment. Whereas very few Enz+Sanger reads failing taxonomic assignment had recognizable sequence identity to organisms in the NCBI non-redundant nucleotide database (547/2638 or 21% of unmapped reads), the majority of the unmapped pyrosequencing reads did have recognizable identity to NCBI database sequences (10171/10347, 98%). Both methods had a modest number of reads that failed taxonomic assignment because the read's sequence identity to the reference organism was below the stringent identity threshold (316 Enz+Sanger reads, between 791 and 2932 Enz+pyrosequencing reads). Additionally, about 0.3% of the Enz+pyrosequencing unmapped reads exhibited sequence identity to an unknown member of the Bacillus genus. We speculate that a small amount of Bacillus DNA may have entered the Enz+pyrosequencing sample prior to emulsion PCR (see Methods), which may have amplified the contaminant.
Additional simulated metagenomes
As mentioned before, the primary purpose of this experiment was to generate sequence data that could be used to test the computational tools that are used to analyze metagenomic sequence data. With this in mind, we opted to use several DNA extraction methods in order to maximize the likelihood of recovering sequence data for every organism in our sample. We did not perform technical replicates for each DNA extraction method. However, post hoc comparisons of the different DNA extraction protocols produced interesting results, prompting us to repeat the same experiments on a smaller scale. While these are not perfect technical replicates, they were performed using exactly the same starting material. These additional simulated metagenomes were created by thawing additional aliquots of the primary frozen culture stocks and mixing them as described below. We performed two additional simulations for each of the Enz, EnzBB, and DNeasy protocols and performed Sanger sequencing on the extracted DNA (Figure 4). One of the additional simulations used frozen stock of isolate cultures; the other used frozen stock of isolate cultures with glycerol added to a final concentration of 10%. The resulting sequence libraries are not technical replicates of the original simulation because they include effects introduced by long-term frozen storage of isolate cultures at −80°C with and without glycerol. Use of glycerol should help prevent cells from lysing, so if large differences were observed between the repeated samples with and without glycerol, it would be reasonable to suspect that cell lysis is an important factor to consider when doing metagenomics with frozen samples.
Bars represent the observed frequency of organisms in sequenced metagenomes. We constructed and sequenced metagenomes according to the Enz, EnzBB, and DNeasy protocols using the long term frozen isolate culture stocks with glycerol and without glycerol. Reads were mapped to reference genomes as described in Methods. The additional metagenomes show some differences to each of the original libraries. Such differences might be caused by variation across DNA preparations and sequencing runs, age of the frozen samples, or other factors. The libraries constructed using the DNeasy Kit produced the most consistent results.
For each additional simulation, we began by retrieving aliquots of Mix #1 (for the additional EnzBB and DNeasy libraries) or by re-creating Mix #2 (for the Enz library). For the additional libraries using glycerol stocks, both Mix #1 and Mix #2 were re-created from the individual stock cultures. As before, the taxon relative abundance distribution for each library is significantly different from every other library (χ2 test, all p-values <0.001). However, if we consider the original libraries to represent an expected organism relative abundance for each DNA extraction protocol, then we can compare the average χ2 statistic within each DNA extraction protocol to determine which protocol yields the most consistent results. The average χ2 statistic for the additional libraries is much lower for the DNeasy extraction (average χ2 = 377.26) than for either the Enz extraction (average χ2 = 5013.12) or the EnzBB extraction (average χ2 = 774.96) protocols. This result indicates that the repeatability of the kit extraction method is better than that of the other two extraction methods (Figure 4). This is in line with expectation, since a possible advantage of kit-based DNA extraction protocols is that variation due to stochastic error should be minimized.
Conclusions

In silico simulations of metagenome sequencing are cheap, quick, and easy. The type of in vitro simulation presented here is comparatively expensive, difficult, and time-consuming, but it captures bias in the metagenomic sampling procedure more faithfully than in silico simulations. Studies such as ours add a layer of complexity and biological realism beyond that attainable with computational simulations alone. With in silico simulations, one can model complex and highly diverse communities, but the models used to sample reads from isolate genomic data are limited in their ability to capture biases introduced by experimental protocol. In particular, biases in sequence coverage (per genome) can be due to growth conditions, organismal growth phase, DNA extraction efficiency, cloning bias, sequencing efficiency, or relative genome copy number.
In no case did the relative organism abundance in our sequence libraries reflect the known composition of our simulated community. This suggests that sequencing-based methods alone are insufficient to assess the relative abundance of organisms in an environmental sample. If calibrated by another method, such as fluorescent microscopy, sequencing might be more useful in this regard. The results also highlight the need to standardize as many laboratory techniques as possible when comparing metagenomic samples across environments, timescales, or environmental conditions. Currently, there is no standard approach for metagenomic surveys, making it difficult to make useful inferences when comparing data among different studies.
It is important to note that the purpose of a given metagenomic sampling effort will vary, and the methods used should be chosen to best suit that purpose. For example, here we found that using a kit-based DNA extraction protocol produced the most consistent results with repeated sampling. This is important if the goal of a study is to track differences across environments, treatments, or timescales. However, if the goal is to fully catalog all organisms or to know with certainty the relative abundance of organisms in a sample, our results suggest that the kit-based DNA extraction could offer the worst performance of the methods tested here. Of course, there are other factors to consider: the DNA yield from kit-based DNA extractions is considerably lower than alternative methods, it is typically of a lower molecular weight, and it is more costly to acquire.
Our ability to make strong conclusions about the source of variation across samples is unfortunately limited by our lack of technical replicates. However, we find the magnitude of this variation striking, even in this simple, well-understood, artificially constructed microbial “community.” Future experiments to tease apart the sources of bias, especially those designed with specific natural communities in mind, will be valuable. In addition to providing sequence data that can be used for benchmarking analytical techniques for metagenomics, it is our hope that this type of simulation can aid model development for future in silico simulations. For this purpose, the sequence data generated in our study are available via IMG/M, on the BioTorrents file sharing site (http://www.biotorrents.net/details.php?id=47), and via the NCBI's Trace and Short Read Archives.
Materials and Methods

Myxococcus xanthus DK1622 cells were grown in CTTYE (1% Casitone [Difco], 10 mM Tris-HCl (pH 7.6), 1 mM KH2PO4, 8 mM MgSO4) broth at 33°C with vigorous aeration. Cells were harvested when a Klett-Summerson colorimeter read 100 Klett units, or approximately 2×10⁸ cells/ml. Acidothermus cellulolyticus 11B was grown in liquid culture at 55°C on a shaker at 150 rpm. The growth medium consisted of American Type Culture Collection medium 1473, modified by use of glucose (5 g/l) in place of cellulose, pH 5.2–5.5. The five lactic acid bacteria were provided as streaked MRS agar plates, from which single colonies were used to start pure cultures in liquid MRS broth. Halobacterium sp. NRC-1 (ATCC#700922), Saccharomyces cerevisiae S288C (ATCC#204508), and Shewanella amazonensis SB2B (ATCC# BAA-1098) were obtained as freeze-dried stocks and used per recommended protocol to start cultures in the prescribed media. Cultures were grown 12–48 hours until turbid. The cell density of each culture was determined by counting DAPI-stained cells using a Cytopeia Influx flow cytometer. Immediately after counting, the cultures were aliquoted into ten 2 mL cryotubes, flash-frozen in liquid nitrogen, and stored at −80°C. Glycerol was added to one of the tubes before freezing to make a 10% glycerol stock solution (except for the Myxococcus xanthus, which was provided as flash-frozen liquid culture).
Two techniques were employed for mixing. Mix #1: One tube of each of the ten cultures was thawed on ice. An aliquot from every tube was added to a single new tube such that each organism contributed an equal number of cells to the final mixture. This final mixture was aliquoted into four 2 mL cryotubes which were flash-frozen and returned to −80°C. Immediately prior to DNA extraction, one of the 2 mL cryotubes of the final mixture was centrifuged for 10 minutes at 10,000 rpm to pellet cells. The supernatant was removed, and the cell pellet was resuspended in TES buffer (10 mM Tris-HCl pH 7.5, 1 mM EDTA, 100 mM NaCl). Mix #2: One tube of each of the ten cultures was thawed on ice. An aliquot from every tube was transferred to a new tube so that the new set of tubes contained an equal number of cells per tube. Immediately prior to DNA extraction, each tube was centrifuged for 10 minutes at 10,000 rpm to pellet the cells. Each cell pellet was resuspended in the lysis buffer that is provided with the DNeasy kit (Qiagen, Valencia, CA), and the contents of all ten tubes were pooled into a single tube.
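The arithmetic behind mixing equal numbers of cells is simple: given each culture's measured cell density, the aliquot volume is the target cell number divided by the density. A minimal sketch, with hypothetical densities and target (not values taken from this paper):

```python
# Hypothetical flow-cytometry densities in cells/mL (illustration only).
densities = {
    "Halobacterium sp. NRC-1": 5.2e8,
    "Saccharomyces cerevisiae": 1.1e7,
    "Myxococcus xanthus": 2.0e8,
}

target_cells = 1.0e8  # assumed number of cells of each organism per mixture

for organism, cells_per_ml in densities.items():
    volume_ul = target_cells / cells_per_ml * 1000  # convert mL to uL
    print(f"{organism}: add {volume_ul:.0f} uL")
```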
DNA Prep #1 (EnzBB): The resuspended cells were incubated with a final concentration of 50 U/µL lysozyme (Ready-Lyse, Epicentre Technologies) at room temperature for 30 minutes. Further lysis was accomplished by the addition of proteinase K and SDS to a final concentration of 0.5 mg/mL and 1%, respectively, and incubation at 55°C for 4 hours. Finally, the lysate was subjected to mechanical disruption with a bead beater (BioSpec Products, Inc., Bartlesville, OK) on the Homogenize setting for 3 minutes. Protein removal was accomplished by extracting twice with an equal volume of 25∶24∶1 phenol:chloroform:isoamyl alcohol. The aqueous phase was incubated at −20°C for 30 minutes with 2.5 volumes of 100% ethanol and 0.1 volumes of 3 M sodium acetate before centrifugation at 16,000 g for 30 minutes at 4°C. The DNA pellet was washed with cold 70% ethanol and allowed to air dry before resuspension in TE (10 mM Tris-HCl pH 7.5, 1 mM EDTA). DNA quantitation was performed using the Qubit fluorometer (Invitrogen).
DNA Prep #2 (DNeasy): DNA was extracted using Qiagen's DNeasy kit (Qiagen, Valencia, CA) per the manufacturer's protocol for bacterial cultures.
DNA Prep #3 (Enz): Identical protocol to DNA Prep #1 but without the bead beating step.
Library construction and sequencing.
Three small-insert (∼2 kb) libraries were constructed by randomly shearing 10 µg of metagenomic DNA using a HydroShear (GeneMachines, San Carlos, CA). The sheared DNA was electrophoresed on an agarose gel, and fragments in the 2–3 kb range were excised and purified using the QIAquick Gel Extraction Kit (Qiagen, Valencia, CA). The ends of the DNA fragments were made blunt by incubation, in the presence of dNTPs, with T4 DNA Polymerase and Klenow fragment. Fragments were ligated into the pUC18 vector using the Fast-Link(TM) Ligation Kit (Epicentre, Madison, WI), transformed via electroporation into ElectroMAX DH10B(TM) Cells (Invitrogen, Carlsbad, CA), and plated onto agar plates with X-gal and 150 µg/mL carbenicillin. Colony PCR (20 colonies) was used to verify a <10% insertless rate and ∼1.5 kb insert size. White colonies were arrayed into 384-well plates for sequencing.
For Sanger sequencing, plasmids were amplified by rolling circle amplification using the TempliPhi(TM) DNA Sequencing Amplification Kit (Amersham Biosciences, Piscataway, NJ) and sequenced using the M13 (−28 or −40) primers with the BigDye kit (Applied Biosystems, Foster City, CA). Sequencing reactions were purified using magnetic beads and run on an ABI PRISM 3730 (Applied Biosystems) sequencing machine.
The library for pyrosequencing was constructed using ∼5 µg of metagenomic DNA, which was nebulized (sheared into small fragments) with nitrogen and purified with the MinElute PCR Purification Kit (Qiagen, Valencia, CA). The GS20 Library Prep Kit was used per manufacturer's protocol to make a ssDNA library suitable for amplification using the GS20 emPCR Kit and then prepared for sequencing on the Genome Sequencer 20 Instrument using the GS 20 Sequencing Kit.
Vector sequences were removed with cross_match, a component of the Phrap software package, and low-quality bases, i.e., those with a PHRED quality score below 15 (Q < 15), were converted to “N”s using JAZZ, the JGI's in-house genome sequence assembly algorithm.
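The quality-masking step amounts to replacing any base whose PHRED score falls below 15 with an N. A minimal sketch of that logic (not the JAZZ implementation itself):

```python
def mask_low_quality(bases: str, quals: list[int], threshold: int = 15) -> str:
    """Replace bases with PHRED quality below `threshold` with 'N'."""
    return "".join(b if q >= threshold else "N" for b, q in zip(bases, quals))

print(mask_low_quality("ACGTACGT", [30, 30, 9, 30, 14, 30, 30, 2]))
# -> ACNTNCGN
```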
Taxonomic assignment of reads.
We mapped reads back to reference genomes by means of a BLAST search. A BLAST database containing the nucleotide sequence of each of the ten genomes (chromosomes and plasmids) was constructed. Reads were searched against that BLAST database, and low-scoring hits (e-value >0.0001) were discarded, except for the pyrosequencing-generated reads, for which a threshold of 0.01 was used. Reads not passing BLAST's low-complexity filter were considered to have failed QC; this happened frequently for reads containing a large number of <Q15 bases replaced with N. Some reads contained a high fraction of N bases but still passed the low-complexity filter; such reads frequently had no significant hit to the 10 reference organisms. Reads with hits were assigned to the genome corresponding to their top BLAST hit only if the top hit had sequence identity >95% and the next highest hit to a different organism had a bit score at least 20 points lower. Such reads are considered “mapped.” In order to investigate possible contamination in sequence libraries, reads without hits were searched against the NCBI non-redundant amino acid database in parallel using mpiBLAST.
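The assignment rule above can be summarized as a small filter over BLAST hits. The sketch below assumes each hit is a tuple of (organism, percent identity, bit score) already screened by e-value; the data structures and names are illustrative, not the study's actual code:

```python
def assign_read(hits, min_identity=95.0, min_score_gap=20.0):
    """Return the organism for a read, or None if the mapping is ambiguous.

    `hits` is a list of (organism, percent_identity, bit_score) tuples
    that already passed the e-value cutoff (illustrative format).
    """
    if not hits:
        return None
    hits = sorted(hits, key=lambda h: h[2], reverse=True)
    top_org, top_identity, top_score = hits[0]
    if top_identity <= min_identity:  # require >95% identity
        return None
    for org, _identity, score in hits[1:]:
        # The best hit to any *other* organism must score at least
        # 20 bits lower; otherwise the read is ambiguous.
        if org != top_org and top_score - score < min_score_gap:
            return None
    return top_org

print(assign_read([("L. lactis IL1403", 98.2, 410.0),
                   ("L. lactis SK11", 91.0, 350.0)]))  # -> L. lactis IL1403
```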
We thank David Mills, Mitchell Singer, and Alison Berry for supplying cultures for the lactic acid bacteria, Myxococcus xanthus, and Acidothermus cellulolyticus, respectively. We thank Morgan G. I. Langille for comments on a draft of this manuscript. Sequencing was performed at the DOE Joint Genome Institute in Walnut Creek, CA.
Conceived and designed the experiments: JLM JAE. Performed the experiments: JLM. Analyzed the data: JLM AED. Wrote the paper: JLM AED JAE.
- 1. Hugenholtz P, Goebel BM, Pace NR (1998) Impact of culture-independent studies on the emerging phylogenetic view of bacterial diversity. J Bacteriol 180: 4765–4774.
- 2. Handelsman J (2004) Metagenomics: application of genomics to uncultured microorganisms. Microbiol Mol Biol Rev 68: 669–685.
- 3. Riesenfeld CS, Schloss PD, Handelsman J (2004) Metagenomics: genomic analysis of microbial communities. Annu Rev Genet 38: 525–552.
- 4. Blow N (2008) Metagenomics: exploring unseen communities. Nature 453: 687–690.
- 5. Daniel R (2005) The metagenomics of soil. Nat Rev Microbiol 3: 470–478.
- 6. Singh J, Behal A, Singla N, Joshi A, Birbian N, et al. (2009) Metagenomics: Concept, methodology, ecological inference and recent advances. Biotechnol J 4: 480–494.
- 7. Venter JC, Remington K, Heidelberg JF, Halpern AL, Rusch D, et al. (2004) Environmental genome shotgun sequencing of the Sargasso Sea. Science 304: 66–74.
- 8. Johnson PL, Slatkin M (2006) Inference of population genetic parameters in metagenomics: a clean look at messy data. Genome Res 16: 1320–1327.
- 9. Palenik B, Ren Q, Tai V, Paulsen IT (2009) Coastal Synechococcus metagenome reveals major roles for horizontal gene transfer and plasmids in population diversity. Environ Microbiol 11: 349–359.
- 10. Tyson GW, Chapman J, Hugenholtz P, Allen EE, Ram RJ, et al. (2004) Community structure and metabolism through reconstruction of microbial genomes from the environment. Nature 428: 37–43.
- 11. McHardy AC, Rigoutsos I (2007) What's in the mix: phylogenetic classification of metagenome sequence samples. Curr Opin Microbiol 10: 499–503.
- 12. Chan CK, Hsu AL, Halgamuge SK, Tang SL (2008) Binning sequences using very sparse labels within a metagenome. BMC Bioinformatics 9: 215.
- 13. Chan CK, Hsu AL, Tang SL, Halgamuge SK (2008) Using growing self-organising maps to improve the binning process in environmental whole-genome shotgun sequencing. J Biomed Biotechnol 2008: 513701.
- 14. Huson DH, Auch AF, Qi J, Schuster SC (2007) MEGAN analysis of metagenomic data. Genome Res 17: 377–386.
- 15. Kunin V, Copeland A, Lapidus A, Mavromatis K, Hugenholtz P (2008) A bioinformatician's guide to metagenomics. Microbiol Mol Biol Rev 72: 557–578.
- 16. Mavromatis K, Ivanova N, Barry K, Shapiro H, Goltsman E, et al. (2007) Use of simulated data sets to evaluate the fidelity of metagenomic processing methods. Nat Methods 4: 495–500.
- 17. McHardy AC, Martin HG, Tsirigos A, Hugenholtz P, Rigoutsos I (2007) Accurate phylogenetic classification of variable-length DNA fragments. Nat Methods 4: 63–72.
- 18. Carrigg C, Rice O, Kavanagh S, Collins G, O'Flaherty V (2007) DNA extraction method affects microbial community profiles from soils and sediment. Appl Microbiol Biotechnol 77: 955–964.
- 19. Krsek M, Wellington EM (1999) Comparison of different methods for the isolation and purification of total community DNA from soil. J Microbiol Methods 39: 1–16.
- 20. Temperton B, Field D, Oliver A, Tiwari B, Muhling M, et al. (2009) Bias in assessments of marine microbial biodiversity in fosmid libraries as evaluated by pyrosequencing. ISME J 3: 792–796.
- 21. Bertrand H, Poly F, Van VT, Lombard N, Nalin R, et al. (2005) High molecular weight DNA recovery from soils prerequisite for biotechnological metagenomic library construction. J Microbiol Methods 62: 1–11.
- 22. Frostegard A, Courtois S, Ramisse V, Clerc S, Bernillon D, et al. (1999) Quantification of bias related to the extraction of DNA directly from soils. Appl Environ Microbiol 65: 5409–5420.
- 23. McOrist AL, Jackson M, Bird AR (2002) A comparison of five methods for extraction of bacterial DNA from human faecal samples. J Microbiol Methods 50: 131–139.
- 24. Sanger F, Nicklen S, Coulson AR (1977) DNA sequencing with chain-terminating inhibitors. Proc Natl Acad Sci U S A 74: 5463–5467.
- 25. Ronaghi M (2001) Pyrosequencing sheds light on DNA sequencing. Genome Res 11: 3–11.
- 26. Liolios K, Mavromatis K, Tavernarakis N, Kyrpides NC (2008) The Genomes On Line Database (GOLD) in 2007: status of genomic and metagenomic projects and their associated metadata. Nucleic Acids Res 36: D475–479.
- 27. Ng WV, Kennedy SP, Mahairas GG, Berquist B, Pan M, et al. (2000) Genome sequence of Halobacterium species NRC-1. Proc Natl Acad Sci U S A 97: 12176–12181.
- 28. Goffeau A (1996) 1996: a vintage year for yeast and Yeast. Yeast 12: 1603–1605.
- 29. Makarova K, Slesarev A, Wolf Y, Sorokin A, Mirkin B, et al. (2006) Comparative genomics of the lactic acid bacteria. Proc Natl Acad Sci U S A 103: 15611–15616.
- 30. Jakobsen JS, Jelsbak L, Welch RD, Cummings C, Goldman B, et al. (2004) Sigma54 enhancer binding proteins and Myxococcus xanthus fruiting body development. J Bacteriol 186: 4361–4368.
- 31. Venkateswaran K, Dollhopf ME, Aller R, Stackebrandt E, Nealson KH (1998) Shewanella amazonensis sp. nov., a novel metal-reducing facultative anaerobe from Amazonian shelf muds. Int J Syst Bacteriol 48 Pt 3: 965–972.
- 32. Barabote RD, Xie G, Leu DH, Normand P, Necsulea A, et al. (2009) Complete genome of the cellulolytic thermophile Acidothermus cellulolyticus 11B provides insights into its ecophysiological and evolutionary adaptations. Genome Res 19: 1033–1043.
- 33. Altschul SF, Gish W, Miller W, Myers EW, Lipman DJ (1990) Basic local alignment search tool. J Mol Biol 215: 403–410.
- 34. Breuert S, Allers T, Spohn G, Soppa J (2006) Regulated polyploidy in halophilic archaea. PLoS One 1: e92.
- 35. Cooper S, Helmstetter CE (1968) Chromosome replication and the division cycle of Escherichia coli B/r. J Mol Biol 31: 519–540.
- 36. Donachie WD (2001) Co-ordinate regulation of the Escherichia coli cell cycle or The cloud of unknowing. Mol Microbiol 40: 779–785.
- 37. Kugelberg E, Kofoid E, Reams AB, Andersson DI, Roth JR (2006) Multiple pathways of selected gene amplification during adaptive mutation. Proc Natl Acad Sci U S A 103: 17319–17324.
- 38. van Burik JA, Schreckhise RW, White TC, Bowden RA, Myerson D (1998) Comparison of six extraction techniques for isolation of DNA from filamentous fungi. Med Mycol 36: 299–303.
- 39. Darling AC, Mau B, Blattner FR, Perna NT (2004) Mauve: multiple alignment of conserved genomic sequence with rearrangements. Genome Res 14: 1394–1403.
- 40. Sorek R, Zhu Y, Creevey CJ, Francino MP, Bork P, et al. (2007) Genome-wide experimental determination of barriers to horizontal gene transfer. Science 318: 1449–1452.
- 41. McMurray AA, Sulston JE, Quail MA (1998) Short-insert libraries as a method of problem solving in genome sequencing. Genome Res 8: 562–566.
- 42. Huse SM, Huber JA, Morrison HG, Sogin ML, Welch DM (2007) Accuracy and quality of massively parallel DNA pyrosequencing. Genome Biol 8: R143.
- 43. Markowitz VM, Ivanova NN, Szeto E, Palaniappan K, Chu K, et al. (2008) IMG/M: a data management and analysis system for metagenomes. Nucleic Acids Res 36: D534–538.
- 44. Langille M, Eisen JA (2010) BioTorrents: A File Sharing Service for Scientific Data. PLoS One. In press.
- 45. Ewing B, Green P (1998) Base-calling of automated sequencer traces using phred. II. Error probabilities. Genome Res 8: 186–194.
- 46. Ewing B, Hillier L, Wendl MC, Green P (1998) Base-calling of automated sequencer traces using phred. I. Accuracy assessment. Genome Res 8: 175–185.
- 47. Darling A, Carey L, Feng W (2003) The Design, Implementation, and Evaluation of mpiBLAST. 4th International Conference on Linux Clusters and ClusterWorld 2003. San Jose, CA. |
The Shroud Research Network
Seeking Solutions to the Mysteries of the Shroud.
This website is devoted to pursuing, as far as possible, the best explanations for the mysteries of the Shroud of Turin through research and conferences. The mysteries of the Shroud include:
Why can we see the image of a crucified man on the Shroud?
How was the image formed?
When was the Shroud made?
What about the 1988 carbon dating of the Shroud?
What is the nature of the apparent blood on the Shroud?
How did the blood get onto the Shroud?
Could the Shroud of Turin be the burial cloth of Jesus?
1. What is the Shroud of Turin?
The Shroud of Turin is also called the Turin Shroud, or simply the Shroud. A shroud is a piece of cloth used to wrap a dead body for burial. Turin is a city in north-western Italy. Thus, the Shroud of Turin is a burial cloth located in Turin, Italy. It has been in Turin since 1578, and measures 14 feet 4 inches long by 3 feet 8 inches wide. The remarkable thing about this burial shroud is that it contains front and back (dorsal) images of a man who was crucified exactly as the New Testament says that Jesus of Nazareth was crucified, yet the images contain no pigment. It is the only known burial shroud with an image on it.
The Shroud of Turin has a continuously documented history back to about 1355 or 1356, when it went on display in Lirey, France, as the burial cloth of Jesus, but there is convincing evidence that it was in Constantinople prior to 1204. The images of the crucified man are full size and of good resolution. When the Shroud is put on display in Turin, which usually occurs only a few times each century, millions of people file past it and see the images of the crucified man. Long-standing tradition claims that the Shroud of Turin is the authentic burial cloth of Jesus, and ancient coins and artistic works are consistent with this view. The scientific investigation of the Shroud began in 1898, when Secondo Pia took the first photograph of the Shroud, which revealed that the image is a good-resolution negative image. The historical and scientific research on the Shroud since then makes it the most studied ancient artifact in existence. This scientific research has shown that the characteristics of the image are so unique that it could not be the result of a human agent, either an artist or a forger, because the technology to create this image did not exist in any previous era and still does not exist even today. Based on this scientific research, the majority opinion of Shroud researchers is that the Shroud wrapped the dead body of a crucified man and that in some way this body encoded front and back images of itself onto the inside of the Shroud.
We believe that the evidence on the Shroud indicates that the front and back images of the crucified man were formed by radiation emitted from the body that altered the atomic structure of the atoms in the linen. We believe that the claimed C-14 date (1260 to 1390 AD) is a misinterpretation of the C-14 measurement data, because neutrons were evidently included in the radiation from the body that caused the image. A small fraction of these neutrons would have been absorbed by the trace amount of N-14 in the cloth to form new C-14 atoms by the (N-14 + neutron → C-14 + proton) reaction. The amount of C-14 at the sample location had to increase by only about 16% to shift the C-14 date from the time of Jesus (about 33 AD) to the range of the carbon dating (1260 to 1390 AD). (See paper 19 on the RESEARCH page.)
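As a rough, illustrative check of that 16% figure (our back-of-envelope arithmetic, not a calculation taken from paper 19): conventional radiocarbon ages use the Libby mean-life of 8033 years, so enriching the C-14 content by a factor f makes a sample appear younger by 8033 × ln(f) years, ignoring calibration-curve details.

```python
import math

LIBBY_MEAN_LIFE = 8033  # years; conventional age t = -8033 * ln(C-14 ratio)

true_years_elapsed = 1988 - 33  # calendar years from ~33 AD to the 1988 test
enrichment = 1.16               # hypothesized 16% increase in C-14 content

# Extra C-14 makes the cloth look younger by 8033 * ln(1.16), about 1190 years.
apparent_age = true_years_elapsed - LIBBY_MEAN_LIFE * math.log(enrichment)
apparent_date = 1988 - apparent_age
print(f"apparent age ~ {apparent_age:.0f} years -> ~AD {apparent_date:.0f}")
# ~AD 1225; an enrichment of roughly 16-17% is the size of effect needed
# to land in the reported 1260-1390 AD window.
```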
2. History of the Shroud

Jesus’ linen burial shroud was found by Peter and John in the tomb after Jesus’ crucifixion in Jerusalem (John 20:3-9). It is very unlikely to have been ignored, reused, burnt, or thrown out, so, if protected from moisture, insects, and intentional destruction, it could easily have survived to the present. Galatians 3:1 (written about 47 to 56 AD) indicates that the believers in Galatia were shown something that “clearly” or “publicly portrayed” “Jesus Christ … as crucified” (NIV & NASB). Based on the Greek word “proegrapha,” translated “portrayed” in this verse, and the context, they could have been shown Jesus’ burial shroud containing his blood and possibly his image.
The image that is now on the Shroud of Turin was frequently copied in Byzantine art. The earliest surviving example is the Christ Pantocrator painting from St. Catherine’s Monastery at Sinai, which probably dates to about 550 AD. The Shroud was most likely brought to Constantinople, the capital of the Byzantine Empire, in 574 as the Image of God Incarnate, or it may have come into Constantinople in 944 as the Mandylion or Image of Edessa. Its presence in Constantinople long before the C-14 date of 1260 to 1390 is confirmed by Byzantine coins starting in 692, the Hungarian Pray Manuscript (1192-1195), and the report (1203-1204) of the French crusader Robert de Clari that Jesus’ burial cloth was exhibited weekly at a church in Constantinople. It may have been sold by Byzantine emperor Baldwin II to his cousin, King Louis IX of France, between 1237 and 1261, or it may have been stolen from Constantinople in the sack of the city in 1204. In about 1355 it was exhibited in Lirey, France, as the true burial cloth of Jesus by the French knight Geoffrey de Charny. In 1453, it was sold by Geoffrey de Charny’s granddaughter to Louis, the Duke of Savoy, and was gradually transported across France until it arrived in Turin, Italy, in 1578. (Summarized from section 4 of paper 19 on the RESEARCH page.)
3. Evidence from the Shroud

1. Rigor mortis in the feet shows that the victim was on the cross for a significant amount of time after he had died.
2. Two nails are through one foot, but only one of the nails is through the other foot. This allows one foot to rotate, so that the victim can push up and down on the cross in order to breathe during crucifixion. If the victim of a crucifixion is not pushing up and down, then it is clear he is dead. The soldiers had no doubt that Jesus was dead (Mark 15:43-45, John 19:31-35).
3. In 1532, the church where the Shroud was located caught fire. This fire produced two scorch lines on either side of the front and dorsal images. Water stains can also be seen on the Shroud from water thrown onto the metal box containing the Shroud after it was rescued from the fire. The heat from the fire did not produce a gradation in the intensity of the image discoloration, even near the scorch lines where the temperature was highest; an applied organic compound would have darkened with that heat, indicating that the image is not due to the application of an organic compound.
4. Shortly after the fire in 1532, charred material was removed and replaced by patches. The repeating pattern of patches and scorch marks that can be seen on the Shroud resulted from the way in which the cloth was folded at the time of the fire. One corner of the folded Shroud that burned resulted in the many areas that had to be patched.
5. The Shroud has four sets of burn holes in an L-shaped pattern. This same pattern of holes appears in a picture in a document known as the Hungarian Pray Manuscript, which is dated to 1192-1195 AD. This indicates that the Shroud of Turin ought to be identified as the cloth that was in Constantinople until the city was sacked during the fourth crusade in 1204 AD, and therefore that the conclusion of the 1988 C-14 dating of the Shroud to 1260 to 1390 AD must be erroneous. Other dating methods are consistent with a first-century date for the Shroud: 1) test results of tensile strength and reflectivity of linen as it ages, 2) stitching used to sew the 3-inch wide side piece onto the main Shroud that is nearly identical to stitching found at Masada, which was destroyed in 73-74 AD, 3) the size of the Shroud being very close to 2 by 8 cubits, the ancient unit of measurement, 4) crucifixion being outlawed after the fourth century, and 5) a possible Roman lepton over one eye dating to 29 to 32 AD. Several hypotheses have been made to explain the erroneous conclusion of the C-14 dating, including an invisible reweave of the sample area and neutron absorption in the trace amount of nitrogen in the linen shifting the C-14 date by the (N-14 + neutron → C-14 + proton) reaction. Details of this last option are discussed further on the RESEARCH page.
6. The back (dorsal) image on the Shroud shows a separation of blood and clear blood serum that flowed from the wound in the side that shows on the front image. This separation indicates that the victim’s heart had stopped beating long enough before the side wound was made for the red blood cells to settle out of the clear blood serum. Compare this with the “blood and water” that is said to have exited from Jesus’ side wound in John 19:34.
7. The Shroud shows 100 to 120 scourge marks from two Roman flagra, one striking from each side, with dumbbell-shaped weights on the ends of the straps. The blood marks from these wounds show blood serum rings (visible only under UV) around the dried blood exudate.
8. There are abrasions on both shoulders, evidently caused by the victim carrying a heavy, rough object. Compare this with Jesus carrying his own cross (John 19:17). The victim would have carried the horizontal piece (patibulum) but not the vertical piece, which would have been stationary in the ground at the location of the crucifixion.
9. The front and back of the head show puncture wounds from sharp objects. Jesus had a cap of thorns beaten into his scalp with rods (Matthew 27:30, Mark 15:17-19).
10. Pollen is on the Shroud that is unique to the area around Jerusalem. Pollen from a plant with long thorns was found around his head.
11. The front and back (dorsal) images of the crucified man are negative images and contain 3D or topographical information related to the distance of the cloth from the body (see the sketch following this list). Of the 100 to 200 fibers in a thread, only the top one or two layers of fibers are discolored to form the image. The thickness of the discoloration in a fiber is about 0.2 microns, which is less than a wavelength of light. There is no indication of capillarity (soaking up of a liquid) between the fibers or the threads, which means that the image could not have been made by a liquid. The discolored regions of the fibers in the image result from a change in the electron bonding of the carbon atoms that were originally in the cellulose molecules of the linen. This change in the electron bonding of the carbon atoms is equivalent to a dehydration and oxidation of the cellulose molecules, but how could this form the image of a naked crucified man? The conclusion is that an artist or forger could not have produced the unique characteristics of the images in any era, either ancient or modern.
12. The image on the Shroud has swollen cheeks and a possible broken nose from a beating (John 18:3) or a fall. Abrasions on the tip of the nose contain a microscopic amount of dirt. Jesus probably fell while carrying his cross (Matthew 27:32, Mark 15:21).
13. The side of the front image on the Shroud shows a 2-inch wide elliptical wound, which is the size of a typical Roman spear (John 19:34). Post-mortem (after death) blood and watery fluid flowed down from this wound.
14. The blood running down his arms is at the correct angles for a crucifixion victim. Two angles of blood flow can be seen on his arms. These two angles are consistent with the crucifixion victim shifting between two positions while on the cross in order to breathe (see #2 above). What appears to be blood on the Shroud has passed 13 tests indicating that it is consistent with human blood. The evidence indicates that the blood is probably type AB. Most significantly, the blood is high in bilirubin, a compound produced by the liver when it processes damaged red blood cells, as occurs when a victim is severely beaten, as Jesus was. Normal blood turns very dark brown to black as it ages over days and weeks, but the blood marks on the Shroud show a reddish hue. There are various proposed causes for this coloration.
15. Paintings from the Middle Ages show the nail wounds in the palms, but nails through the palms do not support the required weight since there is no bone structure above this location. Archeology has confirmed that during crucifixion, the nails were driven through the wrists. The Shroud shows the correct nail locations - through the wrist instead of through the palm. If the image on the Shroud was made by an artist or forger during the time interval indicated by carbon dating (1260-1390), it would have the nail wounds in the middle of the palms. This indicates that the image on the Shroud is not from the Middle Ages.
16. On the Shroud, the thumbs are folded under, contrary to paintings of the Middle Ages. Nails through the wrists automatically fold the thumbs under due to contact of the nail with the nerve that goes through the wrist. This also indicates the image was not made during the Middle Ages.
17. Abrasions on one knee show a microscopic amount of dirt, which is evidence of a fall.
18. The three-inch wide side strip is sewn on with a unique stitch nearly identical to that found only at Masada, which was destroyed in 73-74 AD. This is evidence that the Shroud was made in the first century. The reason for this three-inch side piece is not certain, but the most likely explanation is that it was sewn on in the process of originally making the Shroud.
19. Small chips of travertine aragonite limestone were found in dirt near the feet. This rare form of limestone is commonly called “Jerusalem limestone” because Jerusalem is the main location in the world where it is found. The limestone found in dirt on the Shroud has a spectral signature nearly identical to that of a sample of limestone taken from the Damascus Gate, the closest gate to Golgotha. No other place on earth is known to have an identical spectral signature. This indicates that the victim whose image is shown on the Shroud probably walked on the streets of Jerusalem before being crucified. (See books by Mark Antonacci.)
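The 3D property mentioned in item 11 can be illustrated with a toy calculation: if image brightness is taken as a proxy for cloth-to-body distance, a grayscale scan converts directly into a relief map of the kind the VP-8 analyzer rendered. The array and distance range below are invented for the demonstration; this is not actual Shroud data.

```python
import numpy as np

# Stand-in for a normalized grayscale scan of the image, values in [0, 1].
image = np.random.rand(8, 8)

max_distance_cm = 4.0  # assumed maximum cloth-to-body distance for the demo

# Brighter (more discolored) pixels are treated as closer to the body.
distance = max_distance_cm * (1.0 - image)
height = max_distance_cm - distance  # relief map: high where cloth was close

# Rendering `height` as a 3-D surface (e.g., with matplotlib's plot_surface)
# produces the kind of topographic relief reported for the Shroud image.
```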
4. Scientific Investigation of the Shroud
Scientific investigation of the Shroud of Turin began in 1898, when an amateur photographer named Secondo Pia took the first photograph of the Shroud and found to his amazement that his negative was a high-resolution positive image, which meant that the image on the Shroud is a high-resolution negative image. This implied that it could not be a painting, since artists cannot accurately produce a negative image because they never see one. Subsequent investigation of the wounds observed on the Shroud by experts in anatomy and medicine led them to conclude that the images and blood marks on the Shroud were in some way the result of a real human body that had been wrapped in the Shroud. In 1976, analysis with a VP-8 image analyzer revealed that there is 3D or topographical information in the image on the Shroud related to the body-to-cloth vertical distance. Since such information does not exist in any painting or photograph, this indicated that the image on the Shroud could not be a painting or a photograph. This motivated scientists at leading national laboratories and research facilities in the United States to form the Shroud of Turin Research Project (STURP) to apply the best scientific methods and equipment to determine how the image on the Shroud was formed. About 24 members of their team went to Turin in 1978, where they were allowed five days, 24 hours a day, to perform non-destructive testing on the Shroud. The STURP investigation found that:
· The image has no pigment, no carrier, no brush strokes, no clumping of material between the fibers or threads, no cracking due to centuries of folding or rolling the Shroud, and no stiffening of the cloth. This means that the image could not be due to paint, dye, or stain.
· There is no capillarity (soaking up of a liquid) of the discoloration in the fibers or threads, so the image could not be due to application of a liquid such as an acid or a chemical in a liquid state.
· The image is not luminescent under ultra-violet light. This means that the image could not be due to a scorch from contact of a hot object with the cloth.
· The image is only visible in front lighting. It is not visible in back lighting. From this, the STURP team concluded that the image does not result from any substance placed on the cloth, which means that the image could not be a rubbing, a dusting, or a print.
· A typical thread contains about 100 to 200 fibers. The image is caused by discoloration of only the top one or two layers of fibers in a thread.
· On a discolored fiber, the discoloration is located on the outside circumference of the fiber, usually 360 degrees around the fiber. The thickness of this discolored layer is about 0.2 microns, which is less than a wavelength of light, and only a small fraction of the 15 to 20-micron diameter of a fiber. The inside of the fiber is not discolored.
· The discoloration of any fibers in the image results from a change in the electron bonding of the carbon atoms that were already in the cellulose molecule. This change in the electron bonding of the carbon atoms is equivalent to a dehydration and oxidation of the cellulose molecule. But how can this change in the electron bonding of the carbon atoms be accomplished to create an image of a crucified man?
(See paper 15 of the RESEARCH page.)
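The VP-8 finding described at the start of this section lends itself to a simple demonstration: the analyzer in effect mapped image brightness to relief height. Below is a minimal sketch of that general brightness-to-relief idea using off-the-shelf tools; the input file name and the vertical scaling are illustrative assumptions, and this is not a reconstruction of the VP-8 hardware.

```python
# Minimal sketch of the brightness-to-relief idea behind the VP-8 result:
# pixel intensity is treated as a proxy for body-to-cloth distance.
# The input file name and the vertical scale are illustrative assumptions.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (registers the 3D projection)

img = plt.imread("shroud_face.png")  # hypothetical grayscale input image
if img.ndim == 3:                    # collapse RGB channels if present
    img = img[..., :3].mean(axis=2)

relief = (img - img.min()) / (img.max() - img.min())  # normalize to 0..1
relief = relief * 30.0               # arbitrary vertical exaggeration

y, x = np.mgrid[0:relief.shape[0], 0:relief.shape[1]]
fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(x, y, relief, cmap="gray", linewidth=0)
ax.set_title("Brightness interpreted as relief")
plt.show()
```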
5. Could the Shroud have been made by an artist or a forger?
The two most common explanations of the Shroud are: 1) based on the C-14 dating, the Shroud was made by an artist or forger between 1260 and 1390 AD, probably in northern France, and 2) it is the authentic burial cloth of Jesus from about 33 AD. Note that several of the above items are inconsistent with the Shroud being a forgery from the Middle Ages. A forger would not have known or been able to:
· Place invisible serum rings around the blood exudate of the scourge marks.
· Add pollen to the Shroud that is unique to the Jerusalem area.
· Add pollen around the head that is from a plant with long thorns.
· Put a microscopic amount of dirt in abrasions on the nose and one knee.
· Put bilirubin and nanoparticles of creatinine and ferritin into the blood, indicating torture.
· Locate the nails in the wrists with the thumbs folded under, contrary to paintings from the Middle Ages.
· Put microscopic chips of limestone into dirt near the feet that match Jerusalem limestone.
· Use a stitch unique to the first century to sew the three-inch wide side strip to the main shroud.
· Create a negative image that contains 3D information related to the body-to-cloth distance in the image.
· Create an image of a crucified man on the Shroud based on only the top one or two layers of fibers being discolored, and only the outer 0.2 microns of the 15 to 20-micron fiber diameter being discolored.
· Create an image of a crucified man on the Shroud based on a change in the electron bonding of the carbon atoms that were already in the cellulose molecules.
6. Dating the Shroud
There are 14 indicators of the date of the Shroud, 13 of which are consistent with the time of Jesus. Only the C14 date is inconsistent with the time of Jesus. These dating techniques are summarized below. (Summarized from section 6C of paper 15 on the RESEARCH page.)
1. In 1988, samples were cut from a corner of the Shroud for carbon dating at three laboratories. The results of the 16 measurements were interpreted to mean that the Shroud dated to 1260-1390.
2. The Hungarian Pray Codex or Manuscript is historically dated to 1192 to 1195 AD. It includes a painted drawing that must have been copied from the Shroud of Turin based on the pattern of burn holes on the painting and on the Shroud, so the Shroud must have existed in 1192-1195 AD.
3. It is believed that the spinning wheel was invented in Asia by the 11th century and had spread to Europe by the 13th century. Since the Shroud is made of hand-spun thread, the threads that compose the Shroud were probably spun before the 12th century.
4. The international standard of the marketplace at the time of Jesus was the Assyrian cubit, equal to about 21.6 inches (54.9 cm). The dimensions of the Shroud in this unit are very close to 8 by 2 cubits, indicating it was made in ancient times when the cubit was used as a unit of measurement.
5. Ancient coins that contain the same image as the Shroud of Turin go back to about 675 AD, thus showing that the Shroud must have existed prior to about 675 AD.
6. The face cloth of Jesus is believed to be in Oviedo, Spain, where it arrived in 840 AD. It is called the Sudarium of Oviedo. The similarity of the blood stains on the Sudarium and the Shroud indicates that they covered the same body, which means the Shroud can also be dated back to at least 840 AD.
7. Ancient paintings and other works of art that contain the same image as the Shroud of Turin go back to about 550 AD.
8. The image on the Shroud is that of a crucified man. Specifics of this image indicate that it was made at a time when there was current knowledge of Roman crucifixion, which was outlawed in 337 AD. Thus, the image on the Shroud was probably made earlier than 337 AD.
9. Galatians 3:1 (~ 47 to 56 AD) indicates the believers in Galatia were shown something that “clearly” or “publicly portrayed” “Jesus Christ … as crucified” (NIV & NASB). They had seen it with their “very eyes” (NIV). A very reasonable explanation is that they saw Jesus’ burial shroud containing his blood and possibly his image.
10. There is a 3.5-inch wide piece of linen that is sewn onto the main piece of the Shroud. The stitch used to connect this side piece is a unique stitch, most similar to a stitch on a piece of cloth found at Masada, which was destroyed in 73 to 74 AD.
11. The image on the Shroud is that of a naked man who was crucified exactly as the Bible says that Jesus was crucified. Much evidence indicates that it is most reasonable to believe that the image was made by his dead body. Jesus probably died in either 30 or 33 AD, so the Shroud must also date to 30 or 33 AD.
12. A photograph of the face on the Shroud taken by professional photographer Giuseppe Enrie in 1931 indicates a possible coin over one eye. It has been identified as a Roman Lepton minted by Pontius Pilate in 29 to 32 AD. This evidence is tentative.
13. Giulio Fanti developed three different types of physical tests to determine how flax fibers change with age. When these tests were applied to the Shroud, they gave an average date of 33 BC ± 250 years.
14. Fibers from the Shroud show damage from sources of natural background radiation similar to that found on the Dead Sea Scrolls, which are dated to about 250 BC to 70 AD. Thus, the Shroud should date to about this same period.
7. The 1988 Carbon Dating Problem
Results of the Shroud of Turin Research Project (STURP) in 1978 supported the authenticity of the Shroud, but this was brought into question by carbon dating. In 1988, samples were cut from the corner of the cloth and sent for carbon dating at three laboratories in Tucson, Zurich, and Oxford. Carbon dating is done by measuring the C14 to C12 ratio of samples, and the date is inferred from the ratio. The three laboratories made 16 measurements of the C14 to C12 ratio. The average date from the 16 measurements at the three laboratories was 1260 ± 31 AD, which produced a range of 1260 to 1390 AD when corrected for the variable amount of C14 in the atmosphere. This 1260-1390 range was claimed to be a two-sigma range, which means that there should be a 95% probability that the true date falls within it. However, subsequent statistical analysis of the 16 measured values by multiple individuals found strong evidence that the variation in the laboratories' measurements was not due only to random measurement errors but very probably also to something that could have altered the measured dates from the first century to the Middle Ages. In statistical analysis terminology, this "something" is called a systematic error or bias. Since it cannot be determined from the measurements how much they were affected by this systematic bias, the conclusion of Damon et al. (the 1989 paper reporting the carbon dating results) that the Shroud dates to 1260 to 1390 AD should be rejected. The evidence can be summarized as follows:
· Due to its unique characteristics, the image could not have been made between 1260 and 1390 AD because the technology did not exist. The technology to form this image still does not exist.
· 13 other date indicators are consistent with a first century date for the Shroud and inconsistent with the carbon date of 1260 to 1390 AD. This is discussed in the previous section.
· Dates from the three laboratories don’t agree with each other. The average dates from the laboratories in Tucson (1303.5 ± 17.2) and Oxford (1200.8 ± 30.7) are statistically different (difference = 102.7 ± 35.2) from each other at the 102.7 / 35.2 = 2.9 sigma level, which is above the normal 2.0 sigma acceptance level.
· Plotting the average values from the three laboratories shows a gradient or slope in the carbon dates of about 36 years per cm of distance from the bottom of the Shroud. This indicates that something altered the date measurements as a function of the distance of the samples' original locations from the bottom of the cloth. Nuclear analysis computer calculations indicate this slope in the carbon dates is about what would result from new C14 produced on the Shroud by neutron absorption, given the distribution of neutrons in the tomb if they were emitted from within the body.
· When a chi-squared statistical analysis is performed on the 16 measurements and their uncertainties, the C14 date measurements have only a 1.4% probability of being consistent with the uncertainties (the arithmetic is sketched in the code after this list). This indicates about a 98% probability that something altered the measurements. This something, or bias, changed the measurements by about 36 years per cm, as stated above.
· The date of 1260 to 1390 AD for the Shroud was based on ignoring half the data, i.e., all the measurement uncertainties. It is not legitimate to simply ignore the measurement uncertainties: 1) they were obtained using the same equipment and procedures as the measurements, 2) they were reasonably consistent across the laboratories, and 3) they were reasonably consistent with the uncertainties for the three standards that were run at the same time as the Shroud samples.
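The inter-laboratory arithmetic in the bullets above can be reproduced from the quoted averages alone. The Python sketch below uses only the Tucson and Oxford means and uncertainties given above (the Zurich average and the 16 raw measurements are not reproduced in this text), so the chi-squared function is shown as general machinery rather than a re-run of the full published analyses; scipy is an assumed tool choice.

```python
# Reproducing the inter-laboratory consistency arithmetic from the averages
# quoted above. Only the Tucson and Oxford means are given in the text; the
# full published analyses use all 16 raw measurements, not reproduced here.
import math
from scipy.stats import chi2  # assumed tool choice

# Average dates (AD) and 1-sigma uncertainties quoted in the bullets above.
tucson, s_tucson = 1303.5, 17.2
oxford, s_oxford = 1200.8, 30.7

# Significance of the Tucson-Oxford difference: errors add in quadrature.
diff = tucson - oxford
s_diff = math.sqrt(s_tucson**2 + s_oxford**2)
print(f"difference = {diff:.1f} +/- {s_diff:.1f} years -> {diff / s_diff:.1f} sigma")

def chi2_pvalue(values, sigmas):
    """Chi-squared consistency test of measurements about their weighted mean.
    A small p-value means the scatter exceeds the quoted uncertainties,
    i.e. evidence of a systematic bias rather than pure random error."""
    weights = [1.0 / s**2 for s in sigmas]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    stat = sum(((v - mean) / s) ** 2 for v, s in zip(values, sigmas))
    return chi2.sf(stat, df=len(values) - 1)

# With only the two quoted lab means the test already signals inconsistency;
# the analyses cited above report about 1.4% using all 16 measurements.
print(f"p-value (two labs): {chi2_pvalue([tucson, oxford], [s_tucson, s_oxford]):.4f}")
```

Running this prints a difference of 102.7 ± 35.2 years, i.e. the 2.9 sigma figure quoted above.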
What altered the measured dates? Evidence indicates the image was formed by a burst of radiation emitted from within the body. Atoms that make up the body are composed of protons, neutrons, and electrons. If neutrons were included in this burst of radiation, a small fraction of them would have been absorbed in the trace amount of N14 in the threads to create new C14 in the Shroud by the [N14 + neutron → C14 + proton] reaction. To shift the C14 date for the samples from about 30-33 AD (the death of Jesus) to 1260 AD requires only a 16% increase in the C14 content at the sample location. If only one neutron were emitted from the body for every ten billion that were in the body, this would have been enough to increase the C14 content by the required 16% at the sample location. (See papers 10, 11, 12, 13, 20, 21, and 23 on the RESEARCH page.)
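The 16% figure itself follows from the standard radioactive-decay law. A minimal sketch of the check (using the modern 5,730-year half-life; radiocarbon laboratories conventionally use the Libby value of 5,568 years, which changes the result only marginally):

```python
# Checking the 16% figure with the radioactive-decay law N/N0 = exp(-t*ln2/T).
import math

HALF_LIFE = 5730.0  # C14 half-life in years (modern "Cambridge" value)

def c14_fraction(age_years):
    """Fraction of the original C14 remaining after age_years of decay."""
    return math.exp(-age_years * math.log(2) / HALF_LIFE)

MEASURED_IN = 1988
f_authentic = c14_fraction(MEASURED_IN - 33)    # cloth made ~33 AD
f_medieval = c14_fraction(MEASURED_IN - 1260)   # cloth made ~1260 AD

# Extra C14 needed to make a 33 AD sample read as 1260 AD:
increase = f_medieval / f_authentic - 1
print(f"required C14 increase: {increase:.1%}")  # prints ~16.0%
```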
8. How can the mysteries of the Shroud be explained?
In our judgment, good short answers to the main mysteries related to the Shroud of Turin are the following: (See paper 16 on the RESEARCH page.)
Q1. Why do we see the image of a naked crucified man on the Shroud?
A1. We can see the image of a naked crucified man on the Shroud because the information that defines the appearance of a naked crucified man has been encoded into the pattern of discolored fibers on the Shroud. Photons of light that reflect off the Shroud carry this information to our eyes, where the rods and cones at the back of our eyes translate it into electrical signals that travel up our optic nerves to our brains. Our brains have learned to interpret this information as the appearance of a naked crucified man (See paper 5 on the RESEARCH page).
Q2. Previous scientific research on the Shroud indicates that the characteristics of the image are so unique that no one (artist or forger) could have created the image either in a previous era or even today. How then was the image made on the Shroud?
A2. The required information, discussed above, was carried from the body to the cloth by radiation (charged particles or infrared, visible, or ultraviolet light) which deposited it on the cloth when it was absorbed. (See paper 6 on the RESEARCH page.) This radiation probably caused a static discharge from the high points of the fibers which could have discolored them by electrical heating and/or ozone production. (See paper 22 on the RESEARCH page).
Q3. How can the carbon dating to 1260-1390 not be correct?
A3. In 1988, samples were taken from the bottom corner of the Shroud and sent to three laboratories in Oxford, Zurich, and Tucson for C-14 dating, which is done by measuring the C-14 to C-12 ratio, since C-14 decays but C-12 is stable. The average date from the three laboratories was 1260 ± 31 AD, which produced a two-sigma (95% probability) range of 1260 to 1390 AD when corrected for the changing concentration of C-14 in the atmosphere. However, multiple subsequent statistical analyses of the data from the 16 measurements by the three laboratories indicate that the measured values were not consistent with the measurement uncertainties. In statistical analysis terminology, the three samples were heterogeneous (inconsistent with each other) because their C-14 to C-12 ratios had been altered by a systematic bias that was a function of the distance from the bottom of the Shroud. This very strange result indicates that the measurement data should have been rejected for dating the Shroud. (See section 7)
Q4. Most of the blood would have dried on the body by the time that the body was placed into the Shroud in the tomb. Dried blood will not soak into a piece of cloth placed over the blood. In fact, blood that is dried on skin must be scrubbed off the skin to remove it. The blood marks on the cloth are pristine in appearance with no cracking or chipping on the outer edge. This indicates that the Shroud was not lifted off a body from which it had soaked up the blood. How then could the dried blood have transferred from the body to the cloth, even where the Shroud would not have been touching the body and where there was no underlying wound?
A4. The evidence is not adequate to explain this fully yet. However, the concept of a burst of radiation emitted from the body that forms the image on the Shroud raises an interesting possibility. Particle radiation and electromagnetic radiation (infrared, visible, and ultraviolet light) carry momentum. When these forms of radiation hit an object, they can transfer momentum to the object and thus force it to move. An extremely intense rapid burst of radiation from the body would have the capability to force the blood off the body and thrust it onto and into the Shroud. If the radiation were vertically collimated, as would be needed to form the image on the cloth without a lens between the body and the cloth, the blood would be transferred vertically up and down away from the body so that when it reached the cloth, it would still be in the configuration that it was in on the body.
Q5. The ultimate question is whether the Shroud of Turin could be the authentic burial cloth of Jesus.
A5. See the next section.
9. Could it be the authentic burial cloth of Jesus?
When you look at the cloth, you see good-resolution front and back images of a crucified man: a severe flogging, a nail wound in the only wrist that is visible, blood running down the arms with the angles of blood flow consistent with crucifixion, and nail wounds in the feet. Additional aspects relate to how Jesus was specifically crucified: a cap of thorns that was beat into his scalp, a puncture wound in the side with a hole the size of a Roman thrusting spear, post-mortem (after death) blood running down from the side wound, and legs not broken. The image also indicates he was dead: the curvature of the feet due to rigor mortis, and blood from the side wound separated into its components, referred to as “blood and water” (John 19:34). Closer examination indicates swollen cheeks from a beating to the head, damaged nose from this beating or a fall, abrasions on both shoulders from carrying a rough heavy object, a section of his beard missing, and no body-decay products present, all consistent with the image being Jesus. Microscopic examination is also consistent with the image being Jesus: dirt was found in abrasions on the tip of his nose and on one knee consistent with a fall, there was pollen from Jerusalem on the Shroud and pollen around his head from a plant with long thorns, and there were small chips of limestone near the feet containing impurities that match limestone in Jerusalem. Chemical analysis indicates that what appears to be blood contains bilirubin, made by the liver in processing damaged red blood cells that would have resulted from the severe flogging. The face on the Shroud also agrees with our concept of how Jesus looked. This is because our concept is based on the earliest paintings of Jesus (~ 550 AD) which were evidently based on the Shroud. All evidence is consistent with the image being Jesus.
No human body, alive or dead, is known to have ever produced an image of itself on fabric, with the sole exception of the Shroud and its image of a crucified man. The two criteria for identifying this man are:
· Based on the nature of the blood on the Shroud (pristine appearance, intact edges, clear blood serum around the dried blood), the blood must have come from a real human body that was wrapped within the Shroud. Based on the image on the Shroud, the body wrapped within the Shroud was the dead body of a man who had been crucified.
· Based on the STURP analysis, the image on the Shroud is not due to paint, dye, stain, liquid, scorch, or a photographic process. The evidence indicates the image is due to radiation damage to the linen caused by radiation emitted from the body that was wrapped within the cloth. We have no other example of this happening. It was evidently a unique event.
Thus, the question is, what man who died by crucifixion could have gone through a unique event in which his dead body emitted such a powerful burst of radiation that it encoded an image of itself onto the linen cloth in which it was wrapped? If one looks through all mankind’s historical records, only Jesus and his reported disappearance from within his burial shroud satisfy these two criteria. Thus, the most reasonable conclusion is that the image on the Shroud of Turin is Jesus. There is no other explanation that is consistent with the historical and scientific research. Using this conclusion, a holistic explanation for the mysteries of the Shroud can be developed (See paper 16 on the RESEARCH page.).
The front and back images of a crucified man can be seen on the Shroud of Turin because the information that defines the appearance of a crucified man has been encoded into the pattern of discolored fibers that make the image on the Shroud. This information was only inherent to the body and was not in the limestone or air in the tomb. Thus, this information had to be transported from the body to the cloth, where it had to be deposited. It had to be deposited on the cloth to control the mechanism that discolored the fibers, i.e. to control which fibers were discolored and the length of the discoloration on the fibers. Of the various ways that information can be communicated, only radiation could have transported the focused information from the body to the cloth required to form the good resolution front and back images on the Shroud. This radiation was probably emitted in a very brief powerful burst from within the body to create the good resolution images on the Shroud, with their very unusual characteristics. (See papers 5, 6, and 22 on the RESEARCH page.)
If neutrons were included in this burst of radiation from the body, a small fraction of them would have been absorbed in the trace amount of N14 in the cloth to create new C14 on the Shroud, primarily by the [N14 + neutron → C14 + proton] reaction, thus explaining the 1988 carbon dating of the Shroud to 1260-1390 AD. The C14 concentration at the 1988 sample location would have to be increased by only 16% to cause the carbon date to be changed from about 33 AD to 1260 AD. This 16% increase would result if only one neutron were emitted from the body for every ten billion that were in the body. This neutron absorption hypothesis is the only hypothesis that is consistent with the four things that we know about carbon dating as it relates to the Shroud of Turin: the date, the slope, and the range of the measurement data obtained in the 1988 carbon dating of the Shroud, and the 700 AD carbon date for the Sudarium. The only person in all our historical records who was crucified exactly like Jesus and could have emitted a burst of radiation from his dead body sufficiently intense to create an image of his body on fabric is the historic Jesus of Nazareth. Thus, the evidence from the Shroud indicates that it is most reasonable to believe that it is the authentic burial cloth of Jesus. (See papers 10, 11, 12, 13, 21, and 23 on the RESEARCH page.)
The use of honey has its roots in the first human settlements, as known from records of archaeological discoveries dating from 7000 BC. It is also mentioned in Sumerian tablets (tablets written in the language of ancient Sumer), dated approximately between 2100 and 2000 BC. These mention the use of this product both for its nutritional characteristics and as a medicine.
This medicinal power is lost when big companies heat their honey to make it easier to process, so make sure your honey is RAW, never heated above 42 C (107 F), because heating causes a damaging chemical change.
Honey has been considered one of the oldest medicines, and in many cultures it has been prescribed to treat burns or sunspots. The Egyptians, in particular, used it for the preservation of corpses and demonstrated empirically that it has a high antiseptic power, as well as promoting the healing of wounds. Nowadays it is popularly used for therapeutic purposes, for example for sore throats, infected leg ulcers, earaches, etc.
African American History Month Federal Resources The Library of Congress, National Archives and Records Administration, National Endowment for the Humanities, National Gallery of Art, National Park Service, Smithsonian Institution and United States Holocaust Memorial Museum have joined in paying tribute to the generations of African Americans who struggled with adversity to achieve full citizenship in […]
Emancipation Proclamation Commemorative Coloring Book provides an overview of the history of the Emancipation Proclamation for children and coloring pages featuring Abraham Lincoln and notable African Americans, including Frederick Douglass, Harriet Tubman, Martin Luther King, Jr., Rosa Parks, and Barack Obama.
The Emancipation Proclamation was issued by President Abraham Lincoln on January 1, 1863, as the nation approached its third year of bloody civil war. The proclamation declared “that all persons held as slaves” within the rebellious states “are, and henceforward shall be free.”
TeachingHistory.org provides lessons, teaching guides, best practices, and other resources for teaching history. See videos on “what is historical thinking,” teaching history in elementary school, and the digital classroom. Find primary sources and reviews of more than 1,000 websites and primary document resources. Search by topic and grade level.
Ronald Reagan Documents features 24 memos, speeches, proclamations, and other primary documents from Reagan’s presidency. Topics include economic recovery, energy, national defense, oil imports, small business, wilderness preservation, and more.
How Do I Become President? displays winning posters designed by students to show what a person must do to become president of the U.S.
John and Abigail Adams offers insights into the birth of American democracy, the American Revolution, life in the colonies, the Founding Fathers, the branches of government, lawmaking, and politics. Learn about key people and events: John and Abigail, John Quincy Adams, Benjamin Franklin, King George III, Thomas Jefferson, George Washington, the Boston Massacre, the Revolutionary […]
One Life: The Mask of Lincoln celebrates the bicentennial of the birth of one of our greatest presidents, Abraham Lincoln (1809-1865), with an online exhibit of 30 portraits aimed “to show the changing face that Abraham Lincoln presented to the world as he led the fight for the Union.” Select “audio tour” to hear answers […]
Historians on America looks at 11 developments that altered the course of U.S. history: the trial of John Peter Zenger and the birth of freedom of the press, the Constitutional Convention (1787), George Washington’s concept of a limited Presidency, the Common School movement, the Sherman Anti-Trust Act of 1890, the Interstate Highway System (1939-1991), the […]
Getting the Message Out! National Political Campaign Materials, 1840-1860 looks at politics in antebellum America. Read about the presidential campaigns. See campaign biographies of the candidates — from William Harrison, Martin Van Buren, and James Birney to Abraham Lincoln and Stephen Douglas. Learn about the “second party system.” |
This collection of pages is meant for teachers of second or foreign languages. The aim is to provide a broad overview of the main principles and provide a framework for principled vocabulary instruction and learning. The pages will explain how to set up a framework for vocabulary instruction.
Basic principles and practice in vocabulary instruction. (The links will be put in when they are finished.)
The case for an early emphasis on vocabulary
An efficient word pair learning system.
Go to the Vocabulary Frequency Lists.
Go to the Vocabulary Reference database
Go to the Main Vocabulary index |
It is a commonly known fact that fabrics are constructed through two major techniques, weaving and knitting, apart from other minor techniques. In these processes, two distinct sets of yarns called the warp and the weft are interlaced with each other to form a fabric. "Warp" is the set of yarns that are laid out first on a loom or frame, and "weft" is the yarn that is woven under and over the warp yarns that are already stretched onto the loom. Thus the warp is the continuous row of yarns and the wefts are the yarns that are woven in from side to side. By these definitions, it is clear that textile warping is the process of creating the base yarn that runs top to bottom on woven cloth.
Beginning of Warp Knitting
Warp knitting is an important and ever-growing industry. Compared to weaving, it can be considered a newer industry. Nobody knows who invented weaving and hand knitting, but it is known that mechanical knitting, in the form of a sock-producing machine, was invented by Reverend William Lee in 1589. Crane of Nottingham applied warp yarn guides to the knitting frame invented by Lee in around 1775, which initiated warp knitting. Paget, in 1861, and William Cotton, in 1864, made certain improvements in the looms. The compound needle was invented by Matthew Townsend in 1849, which contributed to making the textile knitting machine simpler and faster.
Warp Knitting Machines
There are two basic types of warp knitting machines: the Tricot knitting machine and the Raschel knitting machine. Earlier, Tricot machines were equipped with bearded needles and Raschel machines with compound needles. However, the modern versions of both types are equipped with compound needles. The distinction between the two machines is therefore made by the type of sinkers in them and the roles these sinkers play in loop formation. The sinkers in a Tricot machine control the fabric throughout the knitting cycle. In the Raschel machines, however, sinkers are only used to ensure that the fabric stays down when the needles rise. The type of knitting machine influences the product construction specifications and is therefore an important factor in the whole process.
Yarn preparation in warp knitting combines methods used in weaving and knitting. In some cases, the ends of yarn can be fed directly off cones into the knitting machine, but the number of cones needed restricts this working method. The large floor space required for a creel is justified only when it is technologically essential, for example with Jacquard and curtain machines. In all other cases, the yarn ends are fed off warp beams. Yarn preparation can be reduced to a simple winding of yarn ends onto the warp beams of a knitting machine, since artificial yarn is mainly used and only moderate tensions are applied to the knitting yarn. As such, smooth operation can be ensured without sizing the yarn.
The quality of the warp beam is crucial in determining the quality of the knitted fabric. Variations in yarn thickness, tension, twist and other factors may result in a defective fabric. In most cases, warping mistakes are difficult, if not impossible, to correct during the knitting process.
Methods of Warping
The yarn manufacturers these days can supply prepared warps, but most knitting firms prefer to prepare their own warping equipment and warp beams independently, mostly selecting the standard types of yarns and warp effect yarns in the plant. There are two basic methods of warping that can be used to prepare the warps for the knitting machines: indirect warping and direct warping.

Indirect warping: The yarns from the yarn packages are wound onto an intermediate cylinder (mill) in many parallel groups with a specified density, and then they are back-wound onto the warp beam.

Direct warping: The ends of the yarn are wound in one operation, from the yarn packages onto the warp beam.
However, there are certain requirements that have to be kept in mind while using both the methods, information about which has been given in the table below.
[Table: Requirements for Direct and Indirect Warping. The parameters compared include yarn ends density, yarn ends per section, number of revolutions, and number of sections, with each entry marked R (Required), O (Optional), or NA (Not Applicable).]
There are certain major warping defects on beam warpers, identified below.

Lapped ends: The broken end of yarn is not tied to the end on the warp beam and overlaps the adjoining yarn. This happens when the beam is not properly braked and the signal hook fails to operate, or when yarn ends are drawn from the middle and the broken end is not correctly pieced up to the adjoining yarn.

Broken ends on the beam: This occurs for the reasons mentioned in the previous point; a group of ends is broken and tied as a bunch or worked in with overlapping.

Yarn cut at the butts of the warp beam / slackness of extreme yarns: This occurs when the reed is improperly set with respect to the warp beam flanges or there is a deformation of the warp beam flange.

Excessive or insufficient number of yarn ends: The number of yarn ends on the beam becomes excessive or insufficient due to an incorrect number of bobbins in warping.

Conical winding on the beam: This occurs due to incorrect load applied by the pressure roller.

Slacks and irregular yarn tension: This happens due to any one of these reasons: improper threading of the yarn into the tension devices, ejection of yarn from under the disc of the yarn tensioning device, or yarn tension devices of poor quality.

Frequent yarn breakages at the beam edges: These result from burrs and nicks on the surface of the warp beam flanges.

Improper length of warping: This is due to malfunction of the counter and of the brakes of the measuring device and warp beams.

Coarse knots: These are due to manual tying-up.

Loose yarn winding: This happens when the pressure roller is pressed only lightly against the warp roller.

Fluff, oily ends and yarn of different density: These are due to careless work by the operator, creeler and oiler.

Bulgy winding on the warp beam: This is due to irregular laying of yarn ends in the reed, missing a dent and placing two ends in the adjoining dent.
We hear a lot about the obesity epidemic in the United States, especially among children and adolescents. However, the impact that school meals have on childhood weight and overall health has been overlooked. A piece in the Wall Street Journal earlier this year looks at five creative ways schools can encourage students to eat more healthily. These interventions have been formulated to help schools meet the guidelines under the Healthy Hunger-Free Kids Act of 2010.
Intervention One: Product Placement
Salad bars placed at the front, or in the center, of a school lunch line are much more likely to attract students. Some Maryland elementary schools opened all-you-can-eat salad bars that featured five different fruits and five different vegetables a day and saw the number of students buying salad go up. In fact, one study found that strategic placement of vegetable options can increase consumption by as much as five times. Other schools planned the timing of vegetable snacks so that hungry students were more likely to reach for them before a meal.
Intervention Two: New and Improved Advertising
Changing children's preferences can be as simple as slicing up fruits for those with orthodontic appliances, or using more colorful bins to display fruits at lunch. "Stealth nutrition," according to the WSJ piece, can also come in the form of food names that appeal to a young crowd (e.g., "X-Ray Carrots" or "Turbo Tomatoes"). Attention-grabbing cartoon stickers on fruits can also increase consumption.
Intervention Three: Tracking Real Consumption
This intervention can reduce waste and, at the same time, determine which foods are popular with students and which are not. Researchers at some Chicago elementary schools recorded what foods were purchased and thrown out in order to determine the relative popularity of certain food groups.
Such measures can also increase parental involvement: some schools send home weekly report cards that record what a child ate throughout the week, based on lunch swipe summaries. Instead of remaining in the dark about what their children eat at school, parents can talk with their children about their meals or even compensate for missing nutrients at home.
Intervention Four: Bring in the Experts
Children can’t be expected to enjoy food that adults would also avoid. Chefs can consult for school menus or cook directly in schools. Over time, partnerships with chefs and local food sources can have a big impact.
Intervention Five: Field Trip!
Nutrition education should not have to be boring. In fact, it absolutely should not be, since a child’s first impression of a food item is crucial. Some elementary schools have started taking students on field trips to local farms, teaching ways of sustainability along with familiarizing students with new fruits and vegetables. They encourage students to make note of how a fruit smells, or what color a vegetable might be. In NYC, the Wellness in School Program encourages students to make healthy choices for themselves based on what they observe in the fresh produce and nutrition labels they encounter. |
What is tooth decay?
Tooth decay, also known as a cavity or cavities, occurs when bacteria living in your mouth make acid that then begins to eat away at your teeth. Untreated tooth decay may cause infection, extreme pain and the loss of the tooth. The decay process begins with unnoticeable damage to the enamel of your teeth and then steadily progresses to deeper layers of the tooth, eventually reaching the pulp. The pulp of your teeth contains highly sensitive blood vessels and nerves.
The top 10 causes of tooth decay include:
Poor Oral Hygiene Practices: Poor oral hygiene includes not brushing your teeth regularly, not flossing regularly, not brushing your tongue, and not using mouthwash. You should brush your teeth at least twice a day, morning and night, but it is ideal to brush after every meal. And remember to brush for at least two minutes; set a little timer for yourself while you're brushing to ensure that you brush your teeth for the full two minutes. Improper oral hygiene will ultimately lead to tooth decay. Tooth decay due to poor oral hygiene is avoidable.
Deep Tooth Crevices and Enamel Issues: Individuals with enamel issues and deep crevices in their teeth are highly likely to have problems with tooth decay, because the deep crevices allow bacteria and plaque easy access to grow. Dental sealants are typically used to prevent tooth decay in patients with deep tooth crevices. A dental sealant is only safe for uninfected teeth as a preventive measure against tooth decay.
Improper Nutrition: Avoiding foods that are high in sugar, high in carbohydrates and high in acid is the best way to avoid tooth decay due to improper nutrition. Eating a healthy diet, which includes healthy foods and the avoidance of sugary acidic drinks is the way to go.
Sugary Foods: Sugary foods are the best friends of the bacteria in your mouth. The bacteria in your mouth literally feed off of sugary foods, and then begin to coat your teeth in damaging acid. This can all happen in a matter of seconds and can occur several times over the course of just one meal, which is why it’s recommended to brush your teeth after each meal to eliminate acid. When thinking of sugary foods, you more than likely think of “candy” and things like that, when in fact, there are many foods that contain “hidden sugars.” So be careful and always be on the lookout for hidden sugars. Remember, sugary drinks such as juice are just as damaging to your teeth as soda.
Acidic Foods and Drinks: When most people think of “acidic” they more than likely think of “soda,” when in fact many common foods which people consume on a daily basis contain acid. Shockingly, even foods such as fish and bread contain acid. Of course, carbonated beverages such as soda, as well as fruit juice are all acidic agents which cause tooth decay. Unlike the way that bacteria feed off of sugary foods so they can coat the teeth in acid, acidic foods and drinks immediately begin to damage tooth enamel with their own acid.
Dry Mouth Issues: Due to the fact that saliva helps inhibit the growth of plaque, persons with dry mouth conditions will more than likely have dental issues which lead to tooth decay. Dry mouth may be caused by prescription medications, it may be genetic, or it may be caused by medical conditions such as Diabetes. A vigilant dentist will work closely with a patient to prevent tooth decay or further tooth decay due to dry mouth issues.
Tooth Grinding: Many people grind their teeth and do not even realize that they do this. Tooth grinding typically occurs when persons are asleep or when they’re under immense stress. Tooth grinding leads to tooth decay due to the fact that it strips away the outer layer of tooth enamel. Tooth grinding is preventable with the use of a “bite guard,” also known as a “night guard,” and with the reduction of stress.
Genetics: Oftentimes, people have issues with tooth decay thanks to genetics. Just as you inherit the color of your eyes and hair from your family, you can also inherit deep tooth crevices and enamel issues, which lead to cavities.
Age: There are many reasons that cavities become more common with age, but some include common prescription medications which cause dry mouth, the recession of gums with age, and improper oral hygiene finally catching up with age.
An Apple A Day Won’t Keep The Dentist Away
Avoiding the Dentist: Lastly, avoiding a visit to the dentist is not good for your teeth and will ultimately lead to future tooth decay. If you’re afraid of the dentist, don’t be. The dentist is here to help you. Ideally, you should visit the dentist every six months for routine cleaning and an examination. During the examination at the dentist, your dentist will examine your mouth for any signs of tooth decay. If signs of tooth decay exist, your dentist will work quickly to treat the issues, as well as providing any preventive measures to avoid future tooth decay all together. |
Image by Walters Art Museum Illuminated Manuscripts via Flickr
Arabic calligraphy is more than just a written script. According to a recent post in Muslim Matters, it was used as a tool for communication, like most languages, but later evolved to be used in "architecture, decoration and coin design."
Not surprising, because the Arabic script, including the beauty of Islamic calligraphy, is spellbinding to the eye.
The part I'd like to focus on in this article is the history of the Kufic script.
If we examine Kufic script inscriptions, we'll notice particular characteristics, such as the angular shapes and long vertical lines. The script letters used to be wider, which made writing long content more difficult. These characteristics affected the usability of the script and made it more suitable for architectural and written Islamic titles, instead of long texts. The script was used for the architectural decoration of buildings, such as mosques, palaces and schools.
Below are some examples of Kufic scripts and their different developmental stages:
- The thick Kufic script
This is one of the earliest forms of the Kufic script and was used in the early copies of the Holy Qur'an.
- Magribi Kufic script
This script was used in Morocco and includes curves and loops, unlike the original Kufic script.
- Mashriqi Kufic Script
The letters in this script are similar to the original Kufic, with a thinner look and decorative lines.
- Piramouz script
This script is another version of the Mashriqi script that was developed in Iran.
- Ghaznavid and Khourasan scripts
These two other forms of the Kufic script were developed in Iran. These scripts have the same thickness as the original Kufic script, with long vertical lines and decorative ends.
- Fatimid Kufic
This form developed in North Africa, especially in Egypt. It was written in thick lines and short curves.
- Square Kufic
This form is very noticeable, with its straight letters and no curves at all. |
Attachment theory is highly regarded as a well-researched theory of infant and toddler behaviour and in the field of mental health. What is attachment? Attachment is a special relationship that involves an exchange of comfort, care, and pleasure. Bowlby shared the psychiatric view that early experiences in childhood have an important influence on development and behaviour in later life. Early attachment styles are established in childhood through the infant/caregiver relationship. The theory identifies four features of attachment: Proximity maintenance – the desire to be near the people we are attached to. Safe haven – returning to the attachment figure for comfort and safety in the face of a threat. Secure base – the attachment figure acts as a base of security from which the child can explore the surrounding environment. Separation distress – anxiety that occurs in the absence of the attachment figure.
Tina Bruce is a social learning theorist influenced by the work of Froebel. She is a leading figure in early childhood education and an expert in children's learning. Considering early childhood education, Tina Bruce looks at three parts of the curriculum: 1. the child; 2. the context (the people and places); 3. the content (what the child knows and wants and needs to know).
Theory of Psychosocial Development
Erik Erikson's theory of psychosocial development is one of the best-known theories of personality in psychology. Erikson believed that personality develops in a series of stages. One of the main elements is the development of ego identity, a conscious sense of self that we develop through social interaction. Our ego identity is constantly changing due to new experiences and information we acquire in our daily interactions with others.
Social Cognitive Theory
Albert Bandura is well regarded for his Social Cognitive Theory. It is a learning theory based on the ideas that people learn by watching others, and that human thought processes are central to understanding personality. The theory provides a framework for understanding, predicting and changing human behaviour.
The main tenets of Bandura's theory are that:
- People learn by observing others.
- The same set of stimuli may provoke different responses from different people, or from the same person at different times.
- The world and a person's behaviour are interlinked.
- Personality is an interaction between three factors: the environment, behaviour, and a person's psychological processes.
- Discovery learning
Inquiry-based learning, constructivism. Discovery learning is a method of inquiry-based instruction which holds that it is best for learners to discover facts and relationships for themselves. It is an inquiry-based, constructivist learning theory that takes place in problem-solving situations where the learner draws on his or her own past experience and existing knowledge to discover facts, relationships and new truths to be learned. Proponents of the theory believe that discovery learning has many advantages, including:
- encouraging active engagement
- promoting motivation
- promoting autonomy, responsibility and independence
- developing creativity and problem-solving skills
- a tailored learning experience
Theory on Language Development in Children
Children are born in possession of an innate ability to comprehend language structures, according to influential linguist Noam Chomsky. In his theory of Universal Grammar, Chomsky postulates that all human languages are built upon a common structural basis. Thus, Chomsky argues that language acquisition occurs as a consequence of a child's capacity to recognize the underlying structure at the root of any language.
Isaacs argued that it is important to develop children's skills to think clearly and exercise independent judgement. Developing a child's independence is beneficial to their...
When it comes to planning lessons in special education, or general education for that matter, the goal is for all students to be able to access, understand, and be able to successfully apply the content to show evidence of full understanding. The application can take many forms: performance assessments, formative assessments, summative assessments, teacher observations, etc…
But how do you get to that final assessment piece? This post is about the planning process that goes into successful lessons for all students. Let’s begin with Universal Design for Learning (UDL). UDL is a model for planning lessons and units that creates access to the content for all students. Here is a cartoon that embodies the philosophy of UDL.
One major component of planning in special education math classes is prioritizing the mathematical goals and the needs of the students to access the mathematics in a lesson. A Teaching Children Mathematics article from 2004 suggests the following steps for beginning to plan a successful math lesson for students with disabilities:
1. Focus on the mathematics
- What are the mathematical goals?
- What is most important for all students to learn?
2. Focus on the students
- What are each student’s strengths?
- What are each student’s weaknesses?
3. Identify barriers
- What is the match or mismatch of the lesson with the student’s strengths and weaknesses?
4. Brainstorm accessibility strategies
- What scaffolding and support will students need to reach the mathematical goals?
The article says you need to first start with the math, then the students, then the barriers, finally the accommodations. One controversial accommodation is the use of calculators. This has become especially topical because of the advent of this handy dandy app.
But the use of calculators has long been a major component of special education math classes. DeAnna Horstmeier, Ph.D. says in her book, “Teaching Math to People with Down Syndrome and Other Hands-On Learners” (2004)
Students with Down syndrome frequently have difficulties with short-term and working memory, especially if the occasion is not very meaningful to them. For example, have you ever known a child with Down syndrome to forget his Christmas list or forget when you said you would take him to the zoo? Those kinds of things they remember too well. On the other hand, they often have trouble holding abstract numbers in mind and have even more difficulty with using working memory to tell them what they need to do with those numbers, such as add or subtract (p. 52).
Horstmeier also quotes the National Council of Teachers of Mathematics about the importance of technology in math class: "Appropriate calculators should be available to all students at all times." Availability and advocating for use are two VERY different things. Should students with disabilities, such as Down syndrome, be handed calculators as soon as math class begins? Or should the students be given the option of using calculators depending on their needs during a mathematical task?
Horstmeier was quoting the 1989 NCTM Curriculum and Evaluation Standards. NCTM weighed in again in a 2005 article entitled Calculators for Students with Special Needs.
In general, even with the calculator, students with disabilities face difficulties in learning many mathematical ideas. In a rich mathematics problem, however, the calculator shifts the focus of attention from computation, which the calculator can do, to thinking, which the calculator cannot do, and may help students with disabilities attain levels of understanding that are equal to those of their fellow students. In these situations, a calculator serves as a tool rather than a crutch because it requires the students to think through and solve the mathematics problems. Even with a calculator, students need to learn to make sense of the answers attained with the calculator, which procedures and operations are required in a problem, and how to generalize results to other situations.
Horstmeier advocates that a calculator will assist the student’s working memory while working on a mathematical problem, but NCTM maintains that a calculator can only help a student so far, disability or not. The article contains this handy flow chart for determining the use of a calculator during a lesson.
In essence, if all you are asking a student to do is practice computation, then don't let them use a calculator. If the task is a richer, deeper mathematical task, then a calculator can only help a student so far. It is also my opinion that if your math task or problem can be completed solely by a calculator, then you may want to re-evaluate the task and its usefulness to the mathematical development of your students.
Synchrotron Reveals Human Children Outpaced Neanderthals by Slowing Down
While it may seem like kids grow up too fast, evolutionary anthropologists see things differently. Human childhood is considerably longer than chimpanzees, our closest-living ape relatives. A study by Tanya Smith (Harvard University and Max Planck Institute for Evolutionary Anthropology), Paul Tafforeau (European Synchrotron Radiation Facility) and colleagues in the November issue of the Proceedings of the National Academy of Sciences USA found a similar pattern when human kids are compared to Neanderthals. A multinational team of specialists applied cutting-edge synchrotron X-ray imaging to resolve microscopic growth in 10 young Neanderthal and Homo sapiens fossils. They found that despite some overlap, which is common in closely-related species, significant developmental differences exist. Modern humans are the slowest to the finish line, stretching out their maturation, which may have given them a unique evolutionary advantage.
Scientists have been debating for decades whether Neanderthals grew differently than modern humans. An ambitious project on the development of these archaic humans was launched by Jean-Jacques Hublin and colleagues at the Max Planck Institute for Evolutionary Anthropology (MPI-EVA). A remarkable finding of this five-year study is that Neanderthals grow their teeth significantly faster than members of our own species, including some of the earliest groups of modern humans to leave Africa between 90-100,000 years ago. The Neanderthal pattern appears to be intermediate between early members of our genus (e.g., Homo erectus) and living people, suggesting that the characteristically slow development and long childhood are a recent condition unique to our own species. This extended period of maturation may facilitate additional learning and complex cognition, possibly giving early Homo sapiens a competitive advantage over their contemporaneous Neanderthal cousins.
Evolutionary biology has shown that small changes during early development may lead to differences that result in new species. These changes may take the form of modifications in the sequence or timing of developmental events; thus understanding developmental transformation is key to reconstructing evolutionary history.
Anthropologists have documented many differences in adult characteristics among closely related species, such as humans and chimpanzees. Genomic data combined with fossil evidence indicate that these two lineages split six to seven million years ago, and have since been evolving separately. However, we know much less about which changes led to the separate lineages, how these changes arose, and when they occurred. Research during the past two decades has shown that early fossil humans (australopithecines and early members of the genus Homo) possessed short growth periods, which were more similar to chimpanzees than to living humans. However, it has been unclear when, and in which group of fossil humans, the modern condition of a relatively long childhood arose.
One poorly understood change is our unique life history, or the way in which we time growth, development, and reproductive efforts. Compared to humans, non-human primate life history is marked by a shorter gestation period, faster post-natal maturation rates, younger age at first reproduction, shorter post-reproductive period, and a shorter overall lifespan. For example, chimpanzees reach reproductive maturity several years before humans, bearing their first offspring by age 13, in contrast to the human average of 19.
It might seem that life history is invisible in the fossil record, but it turns out that many life history variables correlate strongly with dental development. “Teeth are remarkable time recorders, capturing each day of growth much like rings in trees reveal yearly progress. Even more impressive is the fact that our first molars contain a tiny ‘birth certificate,’ and finding this birth line allows scientists to calculate exactly how old a juvenile was when it died” says Smith.
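The bookkeeping behind such an age-at-death estimate is simple arithmetic: count growth increments from the neonatal "birth line" to the last-formed tissue and convert days to years. The sketch below illustrates the idea only; the increment counts and the 8-day long-period (Retzius line) periodicity are hypothetical values, not data from this study.

```python
# Illustrative age-at-death bookkeeping from dental growth increments.
# All counts and the 8-day periodicity below are hypothetical values;
# real studies count increments microscopically from the neonatal
# (birth) line through the last-formed enamel and dentine.

DAYS_PER_YEAR = 365.25

def age_at_death_years(daily_increments, retzius_lines=0, periodicity_days=8):
    """Total days of growth = directly counted daily cross-striations plus
    long-period Retzius lines scaled by their daily periodicity."""
    total_days = daily_increments + retzius_lines * periodicity_days
    return total_days / DAYS_PER_YEAR

# e.g. 300 daily increments counted in cuspal enamel plus 120 Retzius lines
# at an 8-day periodicity comes to about 3.45 years:
print(f"{age_at_death_years(300, retzius_lines=120):.2f} years")
```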
This forensic approach to the past is possible with a "super-microscope": extremely powerful X-ray beams produced at the European Synchrotron Radiation Facility (ESRF) in Grenoble, France, one of the largest synchrotrons in the world. A synchrotron is a machine that produces beams of light from accelerated electrons deviated by magnetic fields. Depending on the energy, the light spectrum may range from radio frequencies to high-energy (hard) X-rays. This type of light source allows scientists to image fossils more efficiently, and with additional imaging techniques, compared with medical or industrial laboratory X-ray sources. The process of imaging teeth involves taking a series of X-ray images while the sample rotates; software transforms these into cross-sectional slices that reveal internal structure and can be modeled in three dimensions (video demonstration shown on the right).
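Mathematically, this slice-reconstruction step is an inverse Radon transform: each rotation angle contributes one projection, and filtered back-projection recovers the cross-section. Below is a toy demonstration with scikit-image's radon/iradon functions and the standard Shepp-Logan phantom, a stand-in illustration of the principle rather than the ESRF's actual reconstruction pipeline.

```python
# Toy filtered back-projection: the mathematical core of reconstructing
# cross-sectional slices from a rotation series of X-ray projections.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

image = shepp_logan_phantom()                 # stand-in "cross-section"
angles = np.linspace(0.0, 180.0, 180, endpoint=False)

sinogram = radon(image, theta=angles)         # simulate the rotation series
reconstruction = iradon(sinogram, theta=angles, filter_name="ramp")

rms_error = np.sqrt(np.mean((reconstruction - image) ** 2))
print(f"reconstruction RMS error: {rms_error:.4f}")
```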
Tafforeau notes: "At the ESRF, we are able to look inside invaluable fossils without damaging them by using the special properties of high-energy synchrotron X-rays. We can investigate fossils at different scales and in three dimensions, ranging from studies of overall 3D shape down to microscopic daily growth lines. This is currently the only place where these studies of fossil humans are possible." Scientists and curators have been quietly visiting the French synchrotron, often with some of the rarest hominin fossils in the world, for imaging with this state-of-the-art technique (See Figure 1 below & Video 1 on the right).
Team members in the ESRF experimental hutch with the 90-100,000 year old Israeli Homo sapiens skull from Qafzeh Cave (center). This remarkable 5.1 year-old was found buried at the feet of an adult female, and it is believed to be one of the earliest instances of intentional burial. Video 1 (right) is a 3D Animation of what the synchrotron revealed inside the child.
From left to right: Paul Tafforeau, Assaf Marom, Tanya Smith, Jakov Radovcic, Almut Hoffman, and Jean-Jacques Hublin.
* Fossil Courtesy of the Department of Anatomy and Anthropology (Tel Aviv University) and Rockefeller Museum (Tel Aviv)
* Photo credit: Chantal Argoud (ESRF)
The study by Smith, Tafforeau and other specialists covers some of the most famous Neanderthal children, including the first hominin fossil ever discovered (See Figure 2 below and Video 2 on the right).
*Fossil courtesy of the Université de Liège
*Photo credits: Larry Engel (PBS), Paul Tafforeau (ESRF), and Tanya Smith (Harvard University and MPI-EVA)
This Belgian Neanderthal child, discovered in the winter of 1829-1830, was thought to be 4-5 years of age when it died. Powerful synchrotron X-rays and cutting-edge imaging software revealed that it actually died at a mere 3 years of age. Another invaluable Neanderthal, discovered in Le Moustier, France in 1908, barely survived the shelling of its German repository during the Second World War (See Figure 3 below).
* Fossil courtesy of the Museum für Vor- und Frühgeschichte (Berlin)
* Photo credit Chantal Argoud (ESRF)
Studies by Smith, Tafforeau and other scientists at MPI-EVA are adding to the growing body of evidence that subtle developmental differences exist between us and our Neanderthal cousins. Forthcoming work in Current Biology reports that brain development also differs between Neanderthals and modern humans. Moreover, the recent sequencing of the Neanderthal genome by MPI-EVA molecular biologists has provided tantalizing genetic clues that point to differences in cranial and skeletal development between Neanderthals and modern humans. These new methods present a unique opportunity to assess the origins of a fundamentally human condition: the costly yet advantageous shift from a primitive “live fast and die young” strategy to the “live slow and grow old” strategy that has helped to make us one of the most successful organisms on the planet.
Smith, T.M., Tafforeau, P., Reid, D.J., Pouech, J., Lazzari, V., Zermeno, J.P., Guatelli-Steinberg, D., Olejniczak, A.J., Hoffman, A., Radovčić, J., Masrour, M., Toussaint, M., Stringer, C., Hublin, J-J. (2010) Dental evidence for ontogenetic differences between modern humans and Neanderthals. Proc. Natl. Acad. Sci. USA. 107:20923-20928.
Tanya M. Smith
Department of Human Evolutionary Biology
Harvard University
11 Divinity Avenue, Cambridge, MA 02138, USA
Phone: +1 617 496 8259
European Synchrotron Radiation Facility
6 rue Horowitz BP 220 38046
Grenoble Cedex, France
Phone: +33 (0)438 88 1974
Department of Human Evolution
Max Planck Institute for Evolutionary Anthropology
Deutscher Platz 6, D-04103 Leipzig, Germany
Phone: +49 (0)341 355 0350 |
The Living Classroom Curriculum: First Grade, PDF
- What’s My Habitat?: First graders participate in a garden-based scavenger hunt during which they discover mutualistic relationships between plants and various garden critters.
- Food Factories: Students are introduced to the concept of photosynthesis and the vital part that plants play in the food chain. During this lesson, students plant seedlings, snack on edible plant parts, and come back several weeks later to measure the growth of their plants.
- Growing Vegetable Soup: After harvesting the vegetables and fruits they have been growing, students prepare a nutritious vegetable soup for all the class to share.
- The Best Nest: Students survey a variety of different nests and work in pairs to build bird nests from natural materials found around the school grounds.
- Animal Homes: Students examine the structure and function of various animal homes and conduct an outdoor search for animal homes around the garden and school grounds.
- Garden Tea: Students gather fresh herbs from the garden, capture the sun’s energy and make delicious tea. Students also learn more about tea traditions around the world. |
This course is great for learners of all levels. You will be taught how to listen, speak, write, and read in Chinese from native-speaking experts. Whether you’re completely new to Chinese or are searching for a more advanced course, you’ll find what you’re looking for here. There are nine levels to choose from in this course, and each level includes a large number of useful topics.
1. Students who are completely new to Chinese will do well at this level, as well as those who have limited experience with the language. Starting at this level, you will not be expected to be able to read Chinese characters yet. Those who begin at level one may not know simple Chinese words and may have a hard time responding to basic questions in Chinese.
2. Chinese pronunciation, which is key to effectively learning Chinese, will be taught at this level. In addition, students will be taught to begin simple conversations and respond to basic questions. Students will be prepared for daily life encounters: introducing themselves, greeting other people, engaging in simple bargaining, ordering at a restaurant, and more.
1. Students who begin at level two are able to communicate in Chinese but only in a very basic way; they may need to listen several times before understanding what is said. At this level, students are able to respond to and ask basic questions about ordinary topics. Level two students are just learning to pick up on familiar words, phrases, and names, and can understand short texts one phrase at a time.
2. Topics such as introducing friends and family members, making reservations, asking for the time and directions, and exchanging money will be taught. Level two students will also learn the basic structure that forms Chinese characters.
1. Those at this level have learned to answer and ask questions, exchange information and ideas on ordinary topics, and can understand speech that is slowly and clearly articulated.
2. Level three students will be taught more about the structure and basic rules of Chinese characters, in addition to everyday subjects including sports, hobbies, travel plans, and personal preferences. Students will be able to read some short essays that center on everyday life.
For intermediate learners:
1. Level four students can comprehend the main ideas conveyed via standard, clear speech on familiar topics.
2. Students will learn important sentence construction methods. At this level, topics including educational background, lifestyle, career, and more will be explored. Students will also read detailed descriptions that cover a wide variety of familiar topics.
1. At this level, students are able to read straightforward texts on topics concerning their interests with a good amount of understanding. In addition, level five students are able to confidently communicate in Chinese regarding topics that relate to their career and personal interests. Students can convey and receive information, but there are still some mistakes and pauses that occur.
2. Topics including the traditional Chinese festivals, scenic areas, historical locations, and the differences between Western and Eastern culture will be taught at this level.
1. Level six students are able to read Chinese mostly on their own and have a substantial reading vocabulary but may struggle with less frequently used words and phrases. Students are able to communicate with enough fluency and spontaneity to converse with native Chinese speakers without difficulty for either person.
2. Students at level six will be taught how to describe their dreams and hopes for the future, tell a story, describe emotions and reactions, and speak in detail about personal experiences.
For advanced learners:
1. Students at this level are able to read longer texts and comprehend them without any problems. Level seven students are able to communicate spontaneously with few restrictions on expressing themselves, though they may need to search for the correct Chinese saying. In addition, students are able to effectively and accurately speak about a variety of subjects.
2. Students will read articles centered on more advanced topics, such as the economy, technology, social issues, and more. In addition, they will be taught complicated subjects such as discussing marital problems, addressing environmental problems, and filing complaints.
1. Students who are at level eight have learned to read and comprehend lengthy, detailed, complex texts that address any subject, so long as they can reread challenging sections. At this level, students can comprehend a variety of challenging subjects and pick up on implied meanings. In addition, they do not need to pause much to search for expressions, and are able to communicate with spontaneity and fluency.
2. Students will be taught subjects including social issues, the stock market, and other complex topics. They will read newspaper and internet articles, as well as literary writing.
1. Level nine students have advanced their learning to be able to comprehend practically everything they read or hear in Chinese. They are able to precisely, spontaneously, and fluently communicate, and even in very complex circumstances they are able to distinguish implications and other subtle meanings. Students at this level can comprehend and critically interpret practically any form of the written Chinese language; this includes structurally complex, abstract, or highly colloquial writings, whether literary or not.
2. Students will be taught about China’s business investment environment, commercial culture, and the emerging market in China. They will read business texts and learn to pick up on the underlying meanings, and they will write a lengthy thesis with which to showcase their business-related findings. In addition, level nine students will be taught sophisticated topics that are found in both writings and the media. |
Theoretical and percent yield
You learned how to calculate theoretical yield and percent yield in general chemistry lab. Since chemistry is a cumulative discipline, we expect students to remember topics from previous chemistry courses. Anyway, here is a brief recap:
write the balanced chemical equation of the reaction
if there is more than one product, determine which one (or ones) are the products of interest. If one of the products is organic and the other is inorganic, in an organic chemistry synthesis the product of interest will be the organic compound (duh...). If you are not certain which is the desired product, read the introduction to the procedure again.
determine the moles of each reactant you will use, using literature physical constants (e.g. density, MW). This includes any inorganic reactants, but not any catalysts used or reaction solvents.
determine which reactant is the limiting reagent
determine the moles of product expected using the balanced equation of reaction and the moles of limiting reagent. This is the theoretical yield, expressed in moles. This can also be expressed in units of mass using the literature MW of the product.
use the mass of product obtained to determine the percent yield:
percent yield = (grams of product obtained / theoretical yield in grams) × 100%
or convert the mass of product obtained to the moles of products obtained (using the MW of the product) to determine the percent yield:
percent yield = (moles of product obtained / theoretical yield in moles) × 100%
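As a worked illustration of the whole procedure, here is a short Python sketch for a hypothetical reaction A + 2B → C. All masses and molecular weights are invented for the example, not taken from any particular experiment.

```python
# Worked percent-yield calculation for a hypothetical reaction A + 2B -> C.
# All masses and molecular weights below are invented for illustration.
mw = {"A": 100.0, "B": 50.0, "C": 180.0}   # molecular weights, g/mol
grams_used = {"A": 5.0, "B": 4.0}          # masses of each reactant used
stoich = {"A": 1, "B": 2}                  # moles needed per mole of C

moles = {r: grams_used[r] / mw[r] for r in grams_used}

# Limiting reagent: smallest ratio of moles available to moles required.
limiting = min(stoich, key=lambda r: moles[r] / stoich[r])

theoretical_moles = moles[limiting] / stoich[limiting]
theoretical_grams = theoretical_moles * mw["C"]

grams_obtained = 5.8                       # hypothetical isolated product
percent_yield = grams_obtained / theoretical_grams * 100
print(f"limiting reagent: {limiting}")
print(f"theoretical yield: {theoretical_grams:.2f} g")
print(f"percent yield: {percent_yield:.1f}%")
```

Running this identifies B as the limiting reagent, a theoretical yield of 7.20 g, and a percent yield of about 81%.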
United States Army Reserve
The United States Army Reserve is the reserve force of the United States Army. People in the reserve forces are not full-time soldiers, they also have a civilian career. The reserve forces of the US Army are made of the US Army reserve and the Army National Guard.
The Army Reserve was formed in 1908 to provide a reserve of medical officers to the Army. After the First World War, Congress reorganized the U.S. land forces. The National Defense Act of 4 June 1920 authorized a Regular Army, a National Guard, and an Organized Reserve (Officers Reserve Corps and Enlisted Reserve Corps) of unrestricted size. The Organized Reserve later became the Army Reserve.
UV beads absorb ultraviolet light. UV radiation is electromagnetic radiation with a wavelength just shorter than that of visible light. The beads then release the energy of the UV light at a wavelength we can see. The wavelength emitted by the beads is different depending on the dye, and we see this as different colors.
Many molecules absorb UV light. Some, like DNA, can be damaged by the energy in UV light. Sunscreen blocks UV light, protecting the DNA in your skin.
The UV flashlight is not dangerous for short periods of time. It emits UV A wavelengths, which cause less damage than UV B.
Mess Kits are single-serving experiments that allow kids to explore scientific concepts at their own pace. Visitors are encouraged to complete as many as they want! See all Mess Kits. |
Anthropology and Human Modernity
Part of the Anthropology For Dummies Cheat Sheet
Modern humans have physical and behavioral differences from ancient humans. When you're studying anthropology — specifically, modernity in humans — keep these points in mind. They highlight the most important characteristics of anatomical and behavioral human modernity:
Anatomical modernity is having anatomical characteristics indistinguishable from modern, living humans. Appearing by 100,000 years ago, these characteristics include a larger brain (averaging 1,450 cc), a larger body overall, and the presence of a chin.
Behavioral modernity is behaving in ways that are indistinguishable from modern humans; it appears by 100,000 to 50,000 years ago and includes symbolism (the use of one thing to represent another thing), complex language (with complex grammar), and complex tool use (such as the use of symmetrical tools).
Modern humans colonized sub-Himalayan East Asia by 80,000 BP (before present), Southeast Asia and Australia by 40,000 BP, Europe at least by 30,000 BP, the New World by 14,000 BP, and the Pacific and Arctic by 1,000 BP.
- Cubism is a form of abstract art.
- Cubism was developed as a way of portraying multiple dimensions and perspectives of a subject or object onto a two dimensional canvas.
- Spanish artist Pablo Picasso and French artist Georges Braque pioneered the art form in 1907. The movement was also influenced by the earlier work of French artist Paul Cézanne.
- There are two styles of cubism. Analytic cubism uses geometrical forms and subdued colours. Synthetic cubism draws on decorative shapes, collage and bright colours.
JW Power Seaside Still Life, 1926. JW Power Collection, University of Sydney, managed by Museum of Contemporary Art, Mrs Edith Power Bequest 1961.
- Any artwork that does not truly represent what it is depicting can be classified as abstract. Abstract art depicts forms in a conceptual manner.
- Cubism contributed to later forms of abstract art. Most cubist artworks show items as geometric solids or outlines.
- The cubist work of Spanish artist Pablo Picasso and French artist Georges Braque was influenced by the earlier work of artist Paul Cézanne, who said: "Everything in nature takes its form from the sphere, the cone, and the cylinder".
Forms of cubism
- Cubism can broadly be broken up into two sub-categories.
- Analytic cubism was the first phase of cubism and lasted until about 1912. Subjects were reconstructed in a number of intricate, sometimes interlocking geometric forms. A muted, limited colour palette unified the images.
- The second phase of cubism was known as synthetic cubism. This phase was more colourful and decorative, and incorporated the use of different textures and materials.
- Ionizing radiation consisting of alpha particles, emitted by some substances undergoing radioactive decay.
- Long before artificial probes were available, subatomic researchers such as Rutherford relied on concentrated alpha radiation.
- Capturing neutrons causes the boron nuclei to break apart, resulting in the emission of alpha radiation and lithium nuclei.
- After the boron is selectively localized to the tumor, it is irradiated with neutrons that cause the release of alpha radiation from the boron atoms.
If a little-known microbe called Arcobacter butzleri has contaminated the water you drink or the food you eat, this troublesome pathogen could make you sick. Symptoms include diarrhea, stomach cramps, nausea, vomiting and fever, all of which can become chronic if left untreated.
But investigations by Agricultural Research Service (ARS) microbiologist William G. Miller and colleagues may speed discovery of innovative ways to control this microbe.
In 2007, Miller and co-researchers deciphered the sequence of the pathogen's genetic material, or genome. This work was a scientific "first" for any of the world's Arcobacters. Based at the ARS Western Regional Research Center in Albany, Calif., Miller did the research with co-investigators there and with others in the United States and abroad.
Since then, Miller has employed the genomic data in developing what's known as a "typing method" to differentiate A. butzleri from look-alike species, and to distinguish specific strains within those species. Medical professionals, public health agencies and researchers can use it when they're tracking the source of foodborne-illness outbreaks. In the past, for example, A. butzleri has been implicated as a cause of such outbreaks in Europe and Southeast Asia.
Being able to communicate effectively in a foreign language is a challenge faced by many of us. If you’re a newcomer to a country, conveying a message in a language that is not your mother tongue is often necessary to access vital services, perform well on the job, achieve good grades and integrate into society. But it’s possible that speakers of different native languages face different challenges in making themselves easily understood.
In new research, we compared the speaking performances of 60 adult learners of English from four different language groups: Chinese, Hindi/Urdu, Romance languages (French/Spanish) and Farsi. We found dramatic differences in how their use of language determines how understandable they are.
But our study showed that the language-related factors that underlie what makes someone sound accented were very similar regardless of a person’s mother tongue. For example, vowel and consonant errors universally make people sound accented.
Yet it’s not always these factors that affect how easy or difficult to understand a person is. Whereas producing inaccurate vowels and consonants impeded how easy Chinese learners were for English listeners to understand, for Hindi or Urdu learners, it was appropriate use of vocabulary and grammar that helped their ability to be understood.
Too much focus on accent
Foreign accents often receive an undue amount of attention because they are highly noticeable to listeners. Previous research has shown that untrained listeners can tell native and non-native speakers apart after listening to speech that is just 0.03 seconds long, is played backwards, or is in an unfamiliar language.
Despite listeners’ sensitivity to accent, there is growing agreement among language teachers and researchers that trying to reduce a learner’s accent is not an appropriate goal. This is mostly because people do not need to sound like native speakers to successfully integrate into a new society or to effectively carry out their professional tasks.
In addition, sounding like a native speaker is an unrealistic language learning goal for adults and also perhaps an undesirable one due to issues of identity. So most language experts agree that what counts the most in oral communication is for learners to be readily understandable or comprehensible to their conversational partners.
By teasing apart the aspects of speech that are essential for being understood from those factors that might be noticeable or irritating but do not actually impede communication, English teachers can target the most vital aspects of speech their students need to get their messages across.
Making yourself understood
We wanted to find out what impact an adult learner’s mother tongue has on how easy they are to understand when they speak a foreign language, and how important a part their accent played.
In our experiment, ten experienced English teachers scored the speech of four groups of 15 international students telling a story in English. The 60 students spoke Chinese, Hindi or Urdu, Romance languages (French or Spanish), and Farsi.
The teachers first provided judgements on how accented each speaker sounded and how difficult he or she was to understand. Next, they provided judgements using ten language variables including pronunciation, fluency, vocabulary, and grammar.
Here are some example recordings of speakers who scored relatively low and high. First, from the Chinese native speakers:
And then from the Farsi speakers:
What difference an accent makes
Statistical tests were carried out to examine language-related influences on the listeners’ judgements of accent and comprehensibility, first for the entire group of 60 speakers, then broken down by each of the four language groups.
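The article does not specify which statistical tests were used. Purely as a hypothetical illustration of how judgements might be related to language variables, a multiple regression over invented ratings could look like this:

```python
# Hypothetical illustration only: regress listeners' comprehensibility
# ratings on two speech variables. All numbers are invented; the study's
# actual statistical procedure may differ.
import numpy as np

rng = np.random.default_rng(0)
n_speakers = 60
segmental_accuracy = rng.uniform(0, 1, n_speakers)  # vowel/consonant accuracy
lexicogrammar = rng.uniform(0, 1, n_speakers)       # vocabulary/grammar score

# Simulated ratings: both factors contribute, plus listener noise.
comprehensibility = (3 * segmental_accuracy + 2 * lexicogrammar
                     + rng.normal(0, 0.5, n_speakers))

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n_speakers), segmental_accuracy, lexicogrammar])
coef, *_ = np.linalg.lstsq(X, comprehensibility, rcond=None)
print("intercept, pronunciation weight, lexicogrammar weight:", coef)
```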
When it came to scoring the speakers on how accented they sounded, variations in their pronunciation were the strongest contributing factors. Our listeners – all English teachers – paid most attention to vowel and consonant errors regardless of the speaker’s native language background. Chinese accents sounded stronger than those of the other language groups.
Different stumbling blocks
The picture was different for ease of understanding. The graph below shows that – for the entire group of 60 international students – pronunciation variables (a combination of vowel/consonant accuracy, word stress, intonation and speech rate) are not the only contributing factors to how easy a speaker is to understand. Vocabulary, grammar accuracy and complexity, or “lexicogrammar” variables, also play a part.
But there are no universal rules when it comes to making yourself understood. For Chinese learners, who were the lowest rated group overall, vowel and consonant errors were detrimental to being understood. Although such errors made Hindi and Urdu speakers sound more accented, it was grammatical errors, and not errors of pronunciation, that affected their comprehensibility.
For French and Spanish speakers, both pronunciation and grammar were linked to their comprehensibility. It could be that for Romance learners, rhythm, vocabulary, and grammar structures are closely tied together. Here is an example of a French native speaker who scored low relative to other French speakers:
In contrast, for Farsi learners, no single language variable was striking enough to be strongly linked with comprehensibility. But our listening English teachers may have had difficulty pinpointing problematic aspects of Farsi learners’ speech – who were rated as the most uniformly comprehensible of all groups in the study.
Pronunciation lessons for non-native English speakers should make it a priority to help learners be more easily understandable to their conversational partners rather than minimising their accents. Our study helps to shed light on the marked influence that people’s first language background can have on their ability to communicate in a comprehensible way.
Ultimately, instructional materials and teaching techniques should take into account the factors that are most important for helping learners communicate more effectively depending on their native language background. |
These seven Hingham boys posed with three bicycles are witnessing the birth of modern cycling. Behind them are two older bicycles—so-called “high wheelers” or “penny farthings” (the latter nickname descriptive of the relative sizes of the two wheels). High-wheelers originated in England and became popular in the United States in the early 1880s. As this photo lets us see clearly, these early bicycles had a “direct drive” mechanism, that is, the pedals attach directly to the wheel, so that the cyclist’s motion turns the wheel directly. Enlarging the front wheel, therefore, was the only way to make the bicycles go faster–and this is what happened. Front wheels often five feet in diameter, with the cyclist perched directly over the wheel, meant an increased risk of the cyclist pitching headfirst from the front of his bike. Cycling in the era of the high wheelers was a sport for athletic young men.
By the early 1890s, however, “safety bicycles”—like the one lying on the ground in front of the boys—had been introduced and quickly grown in popularity. With two wheels of equal size and pedals connected to a chain that propelled the rear wheel, this direct ancestor of our modern bicycles had a lower center of gravity and was easier to ride. With these technological advances—and the pneumatic tires which smoothed out the ride—bicycling became a very popular pastime, with men, women, and children all participating.
Among the features that generally distinguish mammals are hair, three middle ear bones, mammary glands in females, and a neocortex (a region of the brain). In the largest group of mammals, the placentals, the female generates a placenta from which her offspring feeds during pregnancy. The mammary glands of mammals produce milk for newborns as their primary source of nutrition. Except for the five species of monotremes (egg-laying mammals), all modern mammals give birth to live young. Like birds, mammals can forage or hunt in weather and climates too cold for nonavian reptiles or large insects. The mammal class includes some of the most intelligent animals on earth, such as elephants, rats, some cetaceans and certain primates (I think that must be the bonobos?). Mammals range in size from the 30–40 mm (1.2–1.6 in) bumblebee bat to the 33-meter (108 ft) blue whale.
The fennec fox is the smallest species of canid in the world. It weighs about 1.5–3.5 lbs (0.68–1.6 kg), with a body length of between 24–41 cm (9–16 in) and a height of around 20.3 cm (8 in). Found in the Sahara desert of North Africa, its most distinctive feature is its unusually large ears. It mainly eats insects, small mammals, and birds, and its main predators are the African varieties of eagle owl.
Fennec foxes are social animals that mate for life, with each pair’s family controlling their own territory. The basic social unit is thought to be a mated pair and their offspring, and the young of the previous year are believed to remain in the family even after a new litter is born. Families of fennecs dig out dens in sand for habitation and protection, which can be as large as 120 m2 (1,292 sq ft) (?! I have lived in apartments a quarter of that size!) and adjoin the dens of other families. Playing behavior is common, including among adults of the species. Fennec foxes make a variety of sounds, including barking, a purring sound similar to a domestic cat’s, and a snarl if threatened. The fennec has a life span of up to 14 years in captivity, and is not presently endangered. Although it cannot technically be considered domesticated, if socialized with humans when it is young it can be kept as a pet in a domestic setting similar to dogs or cats.
A zonkey is a cross between a zebra and a donkey. “Zonkey” is not the technically correct name for such an animal; accepted terms include zebonkey, zebronkey, zebrinny, zebrula, zebrass, zebadonk, and zedonk (or zeedonk), all of which are fun to say. But it’s my zoo, and I like “zonkey” the best. (“Zebroid” is the generic name for zebra hybrids with any other member of the horse family; a zorse is the offspring of a male zebra and a female horse, and is sometimes called a zebrula, zebrule, zebra mule or golden zebra. The rarer reverse pairing is sometimes called a horbra, hebra, zebrinny or zebret. A zony is the offspring of a zebra stallion and a pony mare. All of these are also fun to say, but “zonkey” is still clearly superior.) Zonkeys are extremely rare in the wild, occurring only in South Africa where zebras and donkeys can be found in close proximity to each other. Most are deliberately bred by humans as riding and draft animals, curiosities for circuses and zoo specimens.
Like most other hybrid animals, zebroids are almost always sterile and cannot reproduce. A donkey has 62 chromosomes; a zebra has between 32 and 46 (depending on the species). The hybrid offspring will have a number of chromosomes somewhere in between; a plains zebra (44 chromosomes) crossed with a donkey, for example, produces offspring with 53. Zonkeys vary considerably depending on how the genes from each parent are expressed, and how they interact; although by no means universal, many zebroids develop some form of dwarfism.
Bats (order Chiroptera) are the only mammals naturally capable of true and sustained flight. Bats are found in almost every habitat on Earth, except for the Arctic, Antarctic and a few isolated islands. Predators of bats include bat hawks, bat falcons and even spiders. They are divided into two suborders: the largely fruit-eating megabats (“flying foxes”), and the echolocating microbats. About 70% of bats are insectivores; most of the rest are frugivores (fruit eaters). A few species, such as the fish-eating bat, feed on animals other than insects. And then there is the vampire bat, which is hematophagous — a bloodsucker. Bwahahahaha!
The social structure of bats varies, with some bats leading solitary lives and others living in caves colonized by more than a million bats. A single bat can live over 20 years, but bat population growth is limited by a slow birth rate.
Bat echolocation is a perceptual system whereby sounds are emitted specifically to produce echoes. By comparing the outgoing pulse with the returning echoes, the brain and auditory nervous system produce detailed “images” of the bat’s surroundings, much the same way human brains create images of our surroundings via visual input. Echolocation allows bats to detect, localize, and classify prey in complete darkness. At about 130 decibels, bat calls can be very loud. But fortunately for us, their calls are ultrasonic. Bats rarely fly in rain: it interferes with their echolocation and they are unable to locate their food.
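The ranging arithmetic behind echolocation is simple: the sound travels out and back, so target distance is half of speed times delay. A minimal sketch, using the approximate speed of sound in air:

```python
# Back-of-the-envelope echo ranging, the core arithmetic of echolocation:
# the pulse makes a round trip, so distance is half of speed x delay.
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 C (approximate)

def target_distance(echo_delay_s: float) -> float:
    """Distance to a target given the round-trip echo delay in seconds."""
    return SPEED_OF_SOUND * echo_delay_s / 2

# An insect whose echo returns after 10 ms is roughly 1.7 m away.
print(f"{target_distance(0.010):.2f} m")
```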
Most microbats are nocturnal and are mainly active at twilight. Many species migrate great distances to winter hibernation dens; some pass into torpor in cold weather, only rousing and feeding once insects become active again in warmer weather. Others retreat to caves for winter to hibernate for six months.
People are beginning to understand the crucial role bats play in insect control and pollination. If bats were to become extinct, insect populations would explode to alarming levels. While conservation efforts are in place in many places to protect bats, many threats still remain.
These are baby fruit bat orphans. WANT.
Batzilla the Bat (Facebook page) chronicles the adventures at an Australian bat rescue organization, with pictures, videos and informative commentary.
RED-BEARDED TITI MONKEY (Callicebus caquetensis; Order: primate)
First described in 2010 (pdf), C. caquetensis is found only in the forests of the Caquetá region of Colombia’s Amazon basin. They are similar to other titi monkeys in many respects: for example, all 13 groups studied by the researchers consisted of a monogamous, bonded pair of adults and between one and four immature offspring (the pairs raise about one baby per year); after weaning, their diets consist primarily of fruit, with leaves the second most important food item and seeds only occasionally. Red-bearded titi monkeys are about the size of domesticated cats (Felis catus).
But their most unusual feature is also their most adorable. According to lead researcher and primatologist Thomas Defler, baby red-bearded titi monkeys purr like kitties:
“All of the babies purr like cats too,” Defler added. “When they feel very content they purr towards each other, and the ones we raised would purr to us.”
Defler says the monkeys also engage in “space saving” behaviors, wherein they encourage another monkey to get closer to them.
Baby pictures of Chloe:
C. caquetensis is critically endangered due to habitat destruction and fragmentation by the agricultural activity of a particularly invasive primate species (Homo sapiens). Reaching new nearby forest fragments is extremely dangerous and next to impossible for the little monkeys, as they must cross open savanna—or barbed wire. Defler and the other researchers first describing C. caquetensis estimate the population size may be fewer than 250 adult animals, whereas a healthy population should number in the thousands. The species presently has a geographic range of about 100 square kilometers (39 sq mi) and actually occupies only about 10 square kilometers (3.9 sq mi) within that range.
Brown-throated three-toed sloth
Sloths are extremely slow-moving, arboreal (tree-dwelling), medium-sized mammals, native to the jungles of Central and South America. Their hands and feet have long, curved claws that allow them to hang upside down from branches without effort. While they sometimes sit up on top of branches, they usually eat, sleep, and even give birth hanging from tree limbs. Even after death, they sometimes remain hanging from branches.
The bulk of their diets consist of buds, tender shoots, and leaves, mainly of Cecropia trees. Sloths have made extraordinary adaptations to this arboreal browsing lifestyle. Leaves, their main food source, provide very little energy or nutrients, and do not digest easily. Sloths have evolved large, specialized, slow-acting stomachs with multiple compartments in which symbiotic bacteria break down the tough leaves. Since leaves provide so little energy, sloths have also evolved a range of measures to economize energy: they have very low metabolic rates (less than half of that expected for a mammal of their size), and maintain low body temperatures when active (30–34°C or 86–93°F), and even lower temperatures when resting. As much as two-thirds of a well-fed sloth’s body weight consists of the contents of its stomach, and the digestive process can take a month or more to complete. They love to eat hibiscus flowers like I love to eat chocolate. Sloths have about a quarter as much muscle tissue as other animals of similar weight, and sleep about 10 hours a day. (OMFG. I could totally be a sloth!)
Here is a baby sloth named Matty offering to share his hibiscus with you.
[h/t David Neale]
A sloth is also a remarkable ecosystem unto itself: a single sloth may be home to non-parasitic insects such as moths, beetles, and cockroaches as well as ciliates, fungi, algae and two species of symbiotic cyanobacteria, which provide the sloth camouflage. Sloths benefit from their relationship with moths, for example, because the moths fertilize the algae on the sloth, which in turn provides the sloth with nutrients.
Here are cute baby sloths getting a bath:
Within the tropical rainforests of South and Central America, sloths are outstandingly successful creatures. Four of the six living species, including the brown-throated three-toed sloth, are presently rated “least concern”; the maned three-toed sloth (Bradypus torquatus), which inhabits Brazil’s dwindling Atlantic Forest, is classified as “endangered”, while the island-dwelling pygmy three-toed sloth (B. pygmaeus) is critically endangered. The primary predators of sloths are the jaguar, the harpy eagle, and human poachers. Although all extant species are tree dwellers, extinct sloth species include many ground sloths, some of which attained the size of elephants.
In Costa Rica, the Aviarios Sloth Sanctuary cares for wounded and abandoned sloths. To date about 130 animals have been released back into the wild.
The conservative is a variety of the species Homo sapiens (order: primate). The two most common wild types are Democrat and Republican, although some Libertarian subspecies have also been identified. Their plumage is indistinct, bland and highly conformist. Conservatives display a marked propensity for flags, in the form of lapel pins and nest decor.
Conservative behavior is discernible in aggressive dominance displays to enforce strict socio-economic hierarchies—by violence if necessary—not unlike their close cousins the chimpanzees. The Palace houses the world’s preeminent institution for the study of conservatives; these particular specimens are presently on loan to the zoo from the lab.
WARNING: KEEP YOUR DISTANCE FROM THESE ANIMALS.
DO NOT ALLOW THEM ACCESS TO ANYTHING
THAT CAN BE USED AS A WEAPON.
They give the zoo staff enough trouble with the flying flag pins. |
ERIC Number: ED364052
Record Type: RIE
Publication Date: 1989
Reference Count: N/A
Destrezas de Matematica: Curriculo Basico. Guia para el Maestro (Mathematics Skills: Basic Curriculum. Teacher's Guide).
Puerto Rico State Dept. of Education, Hato Rey. Office of Special Education.
The fundamental importance of basic mathematics to daily life is emphasized in this teacher's guide for special education teachers in Puerto Rico. While it is necessary for the teacher to determine the needs and abilities of each student and adapt the curriculum accordingly, this guide presents, in Spanish, a set of lesson plans, each with an objective and suggested activities toward the objective. Activities are chosen for their practical relevance. Skills are presented in the following areas: (1) numeration, including cardinal, ordinal, and Roman numerals; (2) basic mathematics operations, including fractions and decimals; (3) measurement of time, temperature, volume, weight in the English and metric systems, and the monetary system; and (4) principles of geometry. An appendix provides specific guidance in teaching basic mathematical operations. (Contains 12 references.) (SLD)
Descriptors: Arithmetic, Computation, Daily Living Skills, Disabilities, Elementary Secondary Education, Fractions, Geometry, Lesson Plans, Mathematical Concepts, Mathematics Curriculum, Mathematics Instruction, Mathematics Skills, Measurement, Metric System, Monetary Systems, Numbers, Spanish Speaking, Special Education, Teaching Guides, Teaching Methods, Time, Volume (Mathematics), Weight (Mass)
Publication Type: Guides - Classroom - Teacher
Education Level: N/A
Audience: Teachers; Practitioners
Authoring Institution: Puerto Rico State Dept. of Education, Hato Rey. Office of Special Education.
Identifiers: Puerto Rico |
NANOMATERIALS: Nanotubes reveal their true strength
Date of this Version: 10-2008
This document has been peer-reviewed.
Humankind has made continuous efforts to improve the mechanical performance of materials, with different eras of history being named after specific developments in metallurgical technology such as the Bronze Age and the Iron Age. Since their discovery in the early 1990s, carbon nanotubes have attracted extraordinary attention as potentially revolutionary mechanical elements: they are inherently light and stiff, and have been predicted to be extraordinarily strong. However, experimental values of various mechanical properties have always been much lower than theoretical predictions. On page 626 of this issue Horacio Espinosa and co-workers at Northwestern University and the Argonne National Laboratory report experimental results that, for the first time, show that multi-walled carbon nanotubes can have failure strengths and strains near those that have been predicted by quantum mechanical simulations. Moreover, they report that creating controlled mechanical linkages between the different walls of a nanotube can increase the maximum load that can be supported, offering the promise of even greater utility.
Nanoscience and Nanotechnology |
Darfur’s people are a complex mosaic of between 40 and 90 ethnic groups, some of ‘African’ origin (mostly settled farmers), some Arabs. All Darfurians are Muslim. The Arabs began arriving in the 14th century and established themselves as mainly nomadic cattle and camel herders. Peaceful coexistence has been the norm, with inevitable disputes over resources between fixed and migratory communities resolved through the mediation of local leaders. For much of its history, the division between ‘Arab’ and ‘African’ has been blurred at best, with so much intermarriage that all Darfurians can claim mixed ancestry. Identities have been defined in different ways at different times, based on race, speech, appearance or way of life.
An Independent Sultanate
At the heart of Darfur is an extinct volcano in a mountainous area called Jebel Marra. Around it the land is famously fertile, and it was here that the earliest known inhabitants of Darfur lived – the Daju. Very little is known about them. The recorded history of Darfur begins in the 14th century, when the Daju dynasty was superseded by the Tunjur, who brought Islam to the region.
Darfur existed as an independent state for several hundred years. In the mid-17th century, the Keyra Fur Sultanate was established, and Darfur prospered.
In its heyday in the 17th and 18th centuries the Fur Sultanate’s geographical location made it a thriving commercial hub, trading with the Mediterranean in slaves, ivory and ostrich feathers, raiding its neighbours and fighting wars of conquest in the surrounding region.
Darfur under siege
In the mid-19th century, Darfur’s sultan was defeated by notorious slave trader Zubayr Rahma, who was in turn subjugated by the Ottoman Empire. At the time, this included Egypt and what is now northern Sudan. The collapse of the Keyra dynasty plunged Darfur into lawlessness. Roaming bandits and local armies preyed on vulnerable communities, and Islamic ‘Mahdist’ forces fighting British colonial control of the region sought to incorporate Darfur into a much larger Islamic republic. A period of almost constant war followed, until 1899 when the Egyptians – now under British rule – recognized Ali Dinar, grandson of one of the Keyra sultans, as Sultan of Darfur. This marked a de facto return to independence, and Darfur lived in peace for a few years.
Colonial ‘benign neglect’
Ali Dinar refused to submit to the wishes of either the French or the British, who were busy building their empires around his territory. Diplomatic friction turned into open warfare. Ali Dinar defied the British forces for six months, but was ambushed and killed, along with his two sons, in November 1916. In January 1917 Darfur was absorbed into the British Empire and became part of Sudan, making this the largest country in Africa.
The only aim of Darfur’s new colonial rulers was to keep the peace. Entirely uninterested in the region’s development (or lack thereof), no investment was forthcoming. In stark contrast to the north of Sudan, by 1935 Darfur had only four schools, no maternity clinic, no railways or major roads outside the largest towns. Darfur has been treated as an unimportant backwater, a pawn in power games, by its successive rulers ever since.
Independence brings war
The British reluctantly but peacefully granted Sudan independence in 1956. The colonialists had kept North and South Sudan separate, developing the fertile lands around the Nile Valley in the North, whilst neglecting the South, East and Darfur to the west. They handed over political power directly to a minority of northern Arab élites who, in various groupings, have been in power ever since. This caused the South to mutiny in 1955, starting the first North-South war. It lasted until 1972 when peace was signed under President Nimeiry. But the Government continually flouted the peace agreement. This, combined with its shift towards imposing radical political Islam on an unwilling people, and the discovery of oil, reignited conflict in the South in 1983.
Darfur, meanwhile, became embroiled in the various conflicts raging around it: not just internal wars by the centre over its marginalized populations – many of the soldiers who fought for the Government against the South were Darfurian recruits – but also regional struggles. The use of Darfur by Libya’s Colonel Qadhafi as a military base for his Islamist wars in Chad promoted Arab supremacism, inflamed ethnic tensions, flooded the region with weaponry and sparked the Arab-Fur war (1987-89), in which thousands were killed and hundreds of Fur villages burned. The people’s suffering was exacerbated by a devastating famine in the mid-1980s, during which the Government abandoned Darfurians to their fate.
Bashir seizes power
In 1989 the National Islamic Front (NIF), led by General Omar al-Bashir, seized power in Sudan from the democratically elected government of Sadiq al Mahdi, in a bloodless coup. The NIF revoked the constitution, banned opposition parties, unravelled steps towards peace and instead proclaimed jihad against the non-Muslim South, regularly using ethnic militias to do the fighting. Although depending on Muslim Darfur for political support, the NIF’s programme of ‘Arabization’ further marginalized the region’s ‘African’ population.
The regime harboured several Islamic fundamentalist organizations, including providing a home for Osama bin Laden from 1991 until 1996, when the US forced his expulsion. Sudan was implicated in the June 1995 assassination attempt on Egyptian President Mubarak. Its support for terrorists and increasing international isolation culminated in a US cruise-missile attack on a Sudanese pharmaceutical factory in 1998, following terrorist bombings of the US embassies in Nairobi and Dar es Salaam.
The Janjaweed: ‘counterinsurgency on the cheap’
Janjaweed fighters, with their philosophy of violent Arab supremacism, were first active in Darfur in the Arab-Fur war in the late 1980s. Recruited mainly from Arab nomadic tribes, demobilized soldiers and criminal elements, the word janjaweed means ‘hordes’ or ‘ruffians’, but also sounds like ‘devil on horseback’ in Arabic. The ruthlessly opportunistic Sudanese Government first armed, trained and deployed them against the Massalit people of Darfur in 1996-98. This was an established strategy by which the Government used ethnic militias to fight as proxy forces for them. It allowed the Government to fight local wars cheaply, and also to deny it was behind the conflict, despite overwhelming evidence to the contrary.
The Comprehensive Peace Agreement
When President George W Bush came to power in 2000, US policy shifted from isolationism to engagement with Sudan. After 11 September 2001 Bashir ‘fell into line’, started to co-operate with the US in their ‘war on terror’ and a peace process began in earnest in the South. After years of painstaking negotiations, and under substantial pressure from the US, in January 2005 a Comprehensive Peace Agreement (CPA) was signed between the Government and the Sudan People’s Liberation Movement/Army (SPLM/A), ending 21 years of bloody war which killed two million people, displaced another four million and razed southern Sudan to the ground.
A surprisingly favourable deal for the South, the CPA included a power-sharing agreement leading up to a referendum on independence for the South in 2011, a 50-50 share of the profits from its lucrative oilfields, national elections in 2009, and 10,000 UN peacekeepers to oversee the agreement’s implementation. But the ‘comprehensive’ deal completely ignored Darfur, catalyzing the conflict that is currently engulfing the region.
The rebels attack
Rebellion had been brewing in marginalized, poverty-stricken Darfur for years. After decades in the political wilderness, being left out of the peace negotiations was the final straw. Inspired by the SPLA’s success, rebel attacks against Government targets became increasingly frequent as two main rebel groups emerged – the Sudan Liberation Army (SLA) and the Justice and Equality Movement (JEM). By early 2003 they had formed an alliance. Attacks on garrisons, and a joint attack in April on an airbase that reduced several Government planes and helicopters to ashes, were causing serious damage and running rings around the Sudanese army.
Facing the prospect of its control over the entire country unravelling, in 2003 the Government decided to counterattack. Manipulating ethnic tensions that had flared up in Darfur around access to increasingly scarce land and water resources, they unleashed the Janjaweed to attack communities they claimed had links to the rebels.
Yesterday, it was soap making computer components. Today, it's shampoo - specifically, an ingredient of shampoo called ethylenediaminetetraacetic acid, or EDTA for short.
Chemists playing around with DNA have known for a while that acid causes the structure of DNA to fold up into what's called an 'i-motif', but now they've found that adding positively-charged copper ions can switch the structure a second time into a hair-pin shape. That change can be reversed using EDTA.
Having two shapes that can be switched between allows DNA to act like the 1s and 0s that computers are built on, paving the way for silicon to be replaced by strands of DNA. The same process could also be used to make nanoscale machines and to detect the presence of toxic copper ions in water.
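To make the computing analogy concrete, the double switch can be pictured as a tiny state machine. The sketch below is purely illustrative; the states and triggers are summarized from the article, and the code is not a chemical model.

```python
# Toy state machine for the double switch described above: acid folds the
# DNA into an i-motif, copper ions flip it to a hairpin, and EDTA (which
# chelates the copper) reverses that second step.
TRANSITIONS = {
    ("unfolded", "acid"): "i-motif",
    ("i-motif", "Cu2+"): "hairpin",
    ("hairpin", "EDTA"): "i-motif",
}

def apply(state: str, reagent: str) -> str:
    """Return the new conformation, or the old one if nothing happens."""
    return TRANSITIONS.get((state, reagent), state)

state = "unfolded"
for reagent in ["acid", "Cu2+", "EDTA", "Cu2+"]:
    state = apply(state, reagent)
    print(f"after {reagent}: {state}")
```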
Zoë Waller, who led the discovery at the University of East Anglia, said: "Our research shows how the structure of our genetic material - DNA - can be changed and used in a way we didn't realise. A single switch was possible before - but we show for the first time how the structure can be switched twice."
She added: "This research expands how DNA could be used as a switching mechanism for a logic gate in DNA-based computing or in nanotechnology."
The details of the discovery were published in the journal Chemical Communications.
Image credit: Thierry Ehrmann // CC BY 2.0 |
Many question the moral authority of the rioting in Ferguson that was triggered by yet another killing of an unarmed teen. The rioting is a symptom of a larger nationwide trend of resistance to the encroaching police state, and specifically the ability of law enforcement officials to kill, maim, and harass without consequence.
The riots began before all of the facts were gathered about the shooting. The timing of the riot is important. Preemptive rioting and the destruction of private property is the third step in a historical cycle that has played out since the foundation of governments. It is a five step cycle that ends in widespread rebellion and insurrection.
5 steps to insurgency
Pamphlets:
Prior to the digital age, pamphlets were the main method of spreading dissent around the world. The pamphlets examined and questioned the authority of the contemporary governments and control systems. In the modern world, pamphlets have been replaced by blogs, social media, and to a smaller degree, adversarial journalists.
Reactive protests:
Once the seed of dissent is planted, people take to the streets to voice their opposition to the government. These protests occur after the control systems of the era attempt to defuse an offending incident.
Preemptive rioting:
Preemptive rioting follows a period of reactive protests that go unanswered by the government. The people begin taking to the streets and destroying private and public property as soon as an offending incident takes place, rather than waiting and hoping for the government to police itself.
Military or Law Enforcement backlash and crackdowns:
These riots and small incidents of resistance trigger a government reaction. The control systems of the country tighten their grip on the people and further curtail civil liberties and infringe on people’s rights. The government crackdown fuels the resistance movement as more people tire of government intrusion.
Widespread rebellion and insurrection:
At some point during the crackdown, an incident occurs that tosses a match into the powder keg of dissent. At this point, open rebellion occurs.
To showcase an example most Americans are familiar with, the American Revolution provides a clear instance of every phase of the cycle. As early as 1765, agitators were distributing pamphlets and making speeches. Patrick Henry made his “If this be treason, make the most of it!” speech that year (Pamphlet Phase). The government was unresponsive to the cries of the people and protests began. In 1770, British troops opened fire on one such protest in Boston. It is known today as the Boston Massacre (Reactive Protest Phase). The King continued to press colonials until preemptive rioting and violence began. While the 1773 Boston Tea Party is the most famous, violence was initiated in almost all colonial ports. In Annapolis, a ship was set ablaze along with its tea (Preemptive Rioting Phase). British authorities could see the coming conflict and attempted to seize arms from the colonials (Military or Law Enforcement Crackdown Phase). When the colonists of Lexington and Concord resisted, the colonies were plunged into open insurrection (Rebellion Phase).
Throughout history the same five phases repeat. The Tea Parties were not just about the Tea Act; they were a strike against the government for a collection of insults and intrusions. Much like the rioters in Ferguson, the participants gave little thought to the private property destroyed during the action. Much the same way law enforcement looks at all of the innocent people killed as collateral damage or an unfortunate accident, the rioters in Ferguson (and during the Revolution) see the destruction as justified.
This will not be an isolated case. More and more people are warming to the idea of using violence against police forces that are constantly overstepping their bounds. An armed citizenry will resist oppression, and it will be those carrying out the enforcement of unjust laws that become the target of resistance. In other countries during the Preemptive Rioting phase, law enforcement officers are targeted while they are on duty, and their families are targeted while at home. Even the respected Founding Fathers of this country engaged in attacking government officials at home. Lieutenant Governor Thomas Hutchinson’s home was attacked and his family barely escaped with their lives.
Without serious reform in what’s left of the justice system, the future is not one of officers walking free after killing an unarmed person; it will be one of officers becoming the target of sporadic violence. Despite the propaganda, being a cop in the United States is safer than being a trash collector. That will change, and officers will become targets of opportunity for those that previously sought reform through peaceful means.
Those in departments that have excused the actions of their officers and made significant peaceful reform impossible have now set the stage for their officers to be shot while sitting at traffic lights. Only 61% of murders are solved in the United States. Imagine how hard it will be to solve an officer’s murder that is completely random and lacks a direct connection to the shooter. Without a clear motive, there is no place to even start investigating.
This riot is the warning sign of a very dangerous future for cops. This wasn’t an isolated shooter taking out cops; this was a large percentage of the population expressing rage at the rampant police brutality and corruption. It is doubtful that departments will get a second warning. The time for reform is now. Otherwise, the body count the news is reciting is more likely to be that of officers than of unarmed citizens.
“Those who make peaceful revolution impossible will make violent revolution inevitable.”
–John F. Kennedy
Editor’s Addition: Don’t take this as a threat to anyone, particularly law enforcement. It is a simple statement of fact with predictions based on logical consequences. As the police isolate themselves behind gas masks, barricades and the thin blue line, they become the anonymous “other”. As their authority and capricious violence tighten, they become the symbol as well as the means of oppression. This is the second armed rebellion in the US this year. There was the Tea Party-inspired Sagebrush Rebellion, which effectively forced the withdrawal of federal forces, and now the ongoing insurrection in Ferguson, which can be characterized as Occupy-inspired. Should these two seemingly disparate movements merge, and with the militarized police propensity for overreaction, we could be headed toward very interesting/dangerous times.
Children often resist when we say no. Naturally, they have ideas about how they want things to be and they want some control over their lives. Parents and caregivers can create a positive environment that encourages cooperation through some simple communication skills. One of these skills is positive noticing: encouraging the behavior you want to see by acknowledging it when it happens.
To engage in positive noticing, first think of the positive qualities you would like a child to develop. When you see those behaviors, notice them and use specific praise, such as:
• “I saw you help your sister with the scissors earlier. That was thoughtful.”
• “I noticed that you took a deep breath and calmed down. That took a lot of self-control.”
• “You did your homework without a reminder today! You’re becoming a responsible person.”
Specific praise is much more effective than global praise such as “Good job!” because it highlights particular traits and actions. A great thing about using it often is that it helps us become more positively focused ourselves.
When it becomes a habit to respond this way, we see more positive things all around us.
• When/then: If a child resists doing something that needs to be done, try saying, “When you have finished (the resisted task), then you may (do the preferred activity).”
An example of this would be if a child won’t come put on pajamas when it’s time to get ready for bed, a caregiver might say, “When you have put on your pajamas, then we can read a story together.” This keeps the child focused on the positive thing that will happen afterward.
• Offer choices: Children do get told what to do often; offering choices allows them an opportunity to have a bit of control. For young children, choices should be limited to one or two things, both of which are equally acceptable to the caregiver.
For example, ask, “Would you rather wear the blue pants or the red ones?” or “Would you like an apple or an orange?” For the child who is resisting getting into the car when it’s time to go: “Would you rather hop to the car like a bunny or see if you can walk backwards to the car?” This turns the perceived challenge into a game and could get a child more focused on fun.
The when-then and offer choices strategies go together well when a child is having a particularly difficult time going with the schedule. For example, the child who wasn’t getting pajamas on might need an additional positive focus after hearing the “when-then” statement. The adult may say, “After your P.J.’s are on, would you rather brush your teeth before or after your story?”
These strategies are just a few of many others that can contribute to positive interactions and cooperation.
For more ideas, a great resource is “The No-Cry Discipline Solution” by Elizabeth Pantley (www.pantley.com/elizabeth).
Your intestines are home to many different kinds of bacteria (and some non-bacterial organisms as well). Together they’re called the “gut microbiome.” They come from the food you eat – and whatever else gets into your mouth. Bacteria start colonizing your gut at birth.
Your gut microbiome aids in digestion and produces vitamins and other compounds that affect your health. It seems to play a role in many other health-related functions, including metabolism, cardiac health and mood.
New evidence shows that the bacteria in our gut also interact with our immune systems, and might even influence the body’s immune reaction to vaccines.
How can bacteria in your gut interact with your immune system?
We are still learning how gut bacteria and the immune system interact. Research suggests that the interaction evolved over time to manage the balance between reacting to harmful pathogens and tolerating non-harmful organisms. You want your immune system to react to the pathogens that can make you sick, while letting the beneficial bacteria living in your gut go about their business.
We are still learning what a healthy gut microbiome looks like. Evidence suggests that a balanced and diverse microbiome might contribute to better health overall, and a less diverse or less balanced microbiome can have a negative impact on health.
A review article from 2014 suggests that the overuse of antibiotics, changes in diets and the elimination of beneficial organisms that work with bacteria (like nematodes, a kind of worm) in high income countries may have resulted in gut microbiomes that lack the resilience and diversity of functions required to establish balanced immune responses. Why does that matter?
For instance, a 2013 study found that children living in Bangladesh have more diverse gut microbiomes than children from the United States. Researchers suggest that dietary differences – with children in the US eating more animal fats and protein – are a factor.
How do vaccines work?
Let’s start at the beginning. Vaccines work by introducing dead or weakened pathogens (viruses or bacteria), or pieces of them, to your body. Your immune system finds them and generates protective antibodies and other responses to that pathogen. Because they are dead or weakened, vaccines cannot cause disease symptoms in the majority of people.
This means that your body will have the antibodies to fight the pathogen and will be ready to mount a quick immune response if it’s ever encountered again. So if you are exposed to the pathogen – the kind that can cause real symptoms – your body already knows how to fight it. You don’t need to develop immunity by actually catching that disease and suffering its real, and sometimes dangerous or deadly, symptoms. You can go your entire life without ever suffering the symptoms of that disease. This is why the word “vaccine” has become synonymous with protection.
Unhealthy gut bacteria can make vaccines less effective
Scientists have started examining the interactions between gut bacteria and responses to vaccines. A recent review article concluded that the composition of your gut microbiome can influence whether a vaccine has an effect in your body.
Unhealthy gut microbiome composition (or “dysbiosis”) can lead to inflammation. And that means more bacterial cells pass through the damaged lining of the gut, which stimulates further immune system responses. This is called “leaky gut.” Vaccines may not be as effective because the immune system is already busy dealing with these bacterial cells “leaking” through the gut.
On the other hand, having a diverse and “healthy” gut microbiome, and thus no gut inflammation and “leakiness,” might allow a person’s immune system to focus on responding to the vaccine effectively.
Recent research has also found that the effectiveness of the seasonal flu shot could be enhanced by intestinal bacteria. The immune system detects specific proteins from the bacteria, and this detection seems to increase the immune system’s response to the flu vaccine. Then your body has an easier time mounting an immune response if you are exposed to the real flu virus.
Gut bacteria aren’t the only thing influencing your immune system
Could an unhealthy gut microbiome be the culprit in the rare cases when a person has an unexpected immune reaction to a vaccine, such as an anaphylactic reaction? We don’t know for certain yet, but it is a possibility.
Science is nowhere near being able to tell you which bacteria will always cause what immune system responses. And keep in mind that your gut bacteria are by no means the only factor affecting your immune system. Nutrition, age, sex, genetics and the kinds of pathogens you’ve been exposed to can all have an effect.
We don’t yet know exactly what a health-beneficial gut microbiome may look like, though recent research points to the fact that the [specific biochemical functions](http://www.cell.com/cell-host-microbe/abstract/S1931-3128%2815%2900021-9) that different bacteria can carry out are more important than the species present in your gut.
Keeping your microbiome in good shape
As far as we know, the best way to establish and maintain a healthy gut microbiome is to get enough sleep and exercise, eat healthy meals that include lots of fruits and vegetables, avoid chronic and excessive stress and not drink too much. You can also help maintain healthy gut bacteria by taking antibiotics only when they are necessary. Remember, antibiotics don’t help if you have a virus, such as a cold or the flu.
What is Measles?
Measles is a very contagious disease that produces a pink rash all over the body. It is caused by a virus that affects the respiratory system, skin, and eyes. The first symptoms appear about 10 days after becoming infected. A fever, cough, and runny nose develop, and the eyes become red, watery and sensitive to light. The fever may reach 105 degrees F (41 degrees C). Small pink spots with gray-white centers develop inside the mouth. A few days later, pink spots break out on the face. The rash then spreads all over the body. Once the rash reaches the feet -- in two or three days -- the fever drops and the runny nose and cough disappear. The rash on other parts of the body begins to fade, and the infected person starts to feel better.
Antibiotics and drugs do not work to shorten the duration or alleviate the symptoms of measles once it is contracted. Treatment mainly consists of allowing the disease to run its course. However, cool sponge baths and soothing lotions to relieve the itchy rash may be helpful. Drinking lots of liquids to prevent dehydration is recommended as well. The disease confers permanent immunity; the infected person will not contract it again.
Is measles dangerous?
Prior to the 1960s, most children in the United States and Canada caught measles. Complications from the disease were unlikely. Previously healthy children usually recovered without incident.(1) However, measles can be dangerous in populations newly exposed to the virus,(2) and in malnourished children living in undeveloped countries.(3,4) Ear infections, pneumonia, brain damage (subacute sclerosing panencephalitis), and death are some of the possibilities.(5) In advanced countries, measles can be severe when it infects people living in impoverished communities with poor nutrition, sanitation, and inadequate health care.(6) Complications are also more likely when the disease strikes infants, adults, and anyone with a compromised immune system.(7)
Scare Tactics: Doctors and other health authorities often try to frighten parents about measles by exaggerating the risks. For example, vaccine pamphlets published by the CDC claim that 1 out of every 1000 children who contract measles will get encephalitis, an infection of the brain.(8) However, Dr. Robert Mendelsohn, renowned pediatrician and vaccine researcher, had this to say: "The incidence of 1/1000 may be accurate for children who live in conditions of poverty and malnutrition" but for just about everyone else "the incidence of true encephalitis is probably more like 1/10,000 or 1/100,000."(9) Furthermore, about 75 percent of these cases will not show evidence of brain damage.(10)
Vitamin A and Nutrition: Several studies show that when patients with measles are given vitamin A supplements, their complication rates and chances of dying are significantly reduced. For example, as early as 1932 doctors used cod-liver oil -- high in vitamin A -- to treat measles and lower mortality by 58 percent.(11) Studies conducted in 1958 and 1961 confirmed that the wild measles virus has a severe short-term effect on immunity and the child's nutritional status, especially vitamin A and nitrogen metabolism.(12,13) But antibiotics -- later shown to be ineffective at treating measles -- soon replaced vitamin A therapy, and by the 1960s vaccinations gained preference over treatment protocols. However, during the mid-1980s new studies demonstrated an increased risk of diarrhea, respiratory disease, and death in children with mild vitamin A deficiency.(14,15)
In a 1987 study conducted in Tanzania, Africa, 180 children with measles were randomly divided into two groups and received routine treatment alone or with 200,000 i.u. of orally administered vitamin A. Mortality rates in the vitamin A group were cut in half. In fact, children under two years of age who did not receive vitamin A were nearly eight times more likely to die (Figure 1).(16)
In 1990, the New England Journal of Medicine confirmed that vitamin A supplements significantly reduce measles complication and death rates.(17) In 1992, researchers measured vitamin A levels in children with measles and determined that deficiencies were associated with lower levels of measles-specific antibodies, higher and longer lasting fevers, and a greater probability of being hospitalized.(18) The authors of the study recommended Vitamin A therapy for children under two years of age with severe measles.(19) And a 1993 study showed that 72 percent of all measles cases in the U.S. requiring hospitalization are deficient in vitamin A. The greater the deficiency, the worse the complications and higher the probability of dying.(20)
Malnutrition is clearly responsible for higher disease complication and death rates.(21) According to David Morley, infectious disease expert, "Severity of measles is greatest in the developing countries where children have nutritional deficiencies... The child with severe measles and an immune system suppressed by malnutrition secretes the virus three times longer than does a child with normal nutrition."(22) Dr. Viera Scheibner, vaccine researcher, summarizes the data more succinctly: "Children in Third World countries need improved vitamin A and general nutritional status, not vaccines."(23)
Fever Reducers: Poor nutrition and a vitamin A deficiency are not the only factors known to increase measles complication and mortality rates. Standard treatment protocols may be detrimental as well. For example, when doctors administer antipyretics (fever reducers, such as aspirin) to control the rising temperature in measles patients, greater problems are likely. In one study during a measles epidemic in Ghana, Africa, children were divided into two groups. One group received antipyretics -- typical at many hospitals. Mortality was five times greater than in the group that did not receive this treatment (Figure 2).(24) Researchers concluded that "children with the most violent, highly febrile form of the disease actually had the best prognosis."(25)
In another study conducted in Afghanistan, 200 children with measles were divided into two groups. Once again, members of one group received aspirin to lower fever. The study revealed that children receiving the antipyretics had prolonged illness, more diarrhea, ear infections and respiratory ailments, such as pneumonia, bronchitis and laryngitis, and significantly greater mortality rates.(26) According to Dr. Harold Buttram, who studied the data, "it could be inferred that interference with the natural course of the disease significantly dampened the immune responses of the children."(27) The authors of the study noted that the "adverse effect of antipyretics, which makes the course of the disease longer, facilitates superinfections which give rise to high mortality."(28) This study also suggests that "children suffering from measles should be kept warm enough in order to have fever and pass the disease safely."(29)
Dr. Robert Mendelsohn agrees that fevers should not be suppressed: "Doctors do a great disservice to you and your child when they prescribe drugs to reduce his fever... When your child contracts an infection, the fever that accompanies it is a blessing, not a curse... A rising body temperature simply indicates that the process of healing is speeding up. It is something to rejoice over, not to fear."(30) Other researchers have noted that "the development of cancer may quite possibly have been given a boost in certain cases through the repression of febrile conditions."(31) In fact, pyrexia (deliberately induced fever) has been used in the prevention and treatment of carcinomas.(32) Despite the evidence implicating antipyretics in prolonging disease and raising mortality rates, Dr. Scheibner ruefully observes that "the relentless suppression of fever in children with measles is still widely practiced."(33)
Does a measles vaccine exist?
In 1758, Francis Home conducted the first experiments to prevent measles by inserting measles-infected blood into deliberate cuts made on healthy people.(34) He claimed that his "variolation" technique caused a milder form of the disease. However, the procedure was not without danger; variolation was known to spread syphilis, tuberculosis, and several other diseases.(35)
In 1940, the U.S. military tested an experimental measles vaccine on enlisted personnel. Following severe reactions, the program was ended.(36) In 1954, a team of virologists headed by John F. Enders, an American scientist, found a way to separate the measles virus from other substances and grow it in living cells.(37) In 1960, Enders' vaccine was tested, and in 1963 both a live-virus shot and an inactivated vaccine were licensed. By the mid-1960s, several measles vaccines were available and being administered to millions of young children in the U.S. However, in 1967 the inactivated vaccine was removed from the market because it did not provide long-term immunity and was causing "atypical" measles.(38) By the early 1970s, Canada and other countries had begun nationwide measles vaccination campaigns.(39,40)
How safe is the measles vaccine?
The measles vaccine has a long history of causing serious adverse reactions. The pharmaceutical company responsible for producing the measles vaccine publishes an extensive list of ailments known to have occurred following the shot. Severe afflictions affecting nearly every body system -- blood, lymphatic, digestive, cardiovascular, immune, nervous, respiratory, and sensory -- have been linked to this "preventive" inoculation. These include: encephalitis, subacute sclerosing panencephalitis, Guillain-Barre syndrome, febrile and afebrile convulsions, seizures, ataxia, ocular palsies, anaphylaxis, angioneurotic edema, bronchial spasms, panniculitis, vasculitis, atypical measles, thrombocytopenia, lymphadenopathy, leukocytosis, pneumonitis, Stevens-Johnson syndrome, erythema multiforme, urticaria, deafness, otitis media, retinitis, optic neuritis, rash, fever, dizziness, headache, and death (Figure 3).(41)
The manufacturer also warns that the measles vaccine "has not been evaluated for carcinogenic or mutagenic potential" and "it is...not known whether [it] can cause fetal harm when administered to a pregnant woman or can affect reproductive capacity." Thus, "it would be prudent to assume that the vaccine strain of virus is...capable of inducing adverse fetal effects." Also, "caution should be exercised when...administered to a nursing woman."(42)
The most up-to-date information on the measles (and MMR) vaccine may be found in the book: Vaccine Safety Manual.
When children enter elementary school, they are full of curiosity about the world around them. They want to know how things work, where things come from, what various words mean, and why people and animals in stories act the way they do. As they begin to recognize words and listen to stories with others at school, their curiosity builds, as does their knowledge.
Some children entering kindergarten have begun to learn to read; others have not. The early elementary years are the prime time for children to make strides in reading, no matter what their level upon entering school. If they learn to sound out words accurately in the first few years of school, while building vocabulary, knowledge, and understanding, then reading in itself should not pose a problem for them later; but, if they enter third or fourth grade without knowing how to read confidently, it will be difficult for them to handle the schoolwork. They may need intensive remedial help with basic skills while other students are studying literature and other subjects. For this reason, it is essential that they learn to read confidently and fluently early. Therefore the curriculum for grades K–2 includes a pacing guide of milestone instructional goals. This guide was written by Louisa Moats, a reading expert who was on the team that wrote the CCSS reading standards. Louisa has paced the reading foundation standards logically across the units in the Wheatley Portfolio. Concepts of print, phonological awareness, and text-reading fluency are all addressed in developmental progressions that interact with and influence each other.
In the initial years of elementary school, children also discover new worlds of literature. Immersed in stories, sounds, and letters, they make connections between what they hear and what they read. Starting in Kindergarten, children listen to a wide variety of excellent literary and nonfiction texts: stories, poems, songs, fables, myths, legends, biographies, and books on historical and scientific subjects. They hear literature read aloud to them daily, and they act out select stories. Through the diverse use of texts, topics are introduced and reintroduced in greater depth deliberately across the grades. The arts are integrated into the units; for example, students look at a collage by Henri Matisse in a unit on animals and listen to Sergei Prokofiev’s Peter and the Wolf in a unit on tales passed down from generation to generation. Certain units (such as “The Wild West” in Grade 2) are organized around geographical and historical topics, allowing students to compare descriptions of events and characters. In their dramatic readings, students build expressiveness while experimenting with different voices. Thus, they start to grasp the rhythms, forms, and themes of literary language and build knowledge and vocabulary throughout the content areas. Every day, they are immersed in sounds, stories, and ideas, and the teacher leads them in lively discussions and activities.
By the end of second grade, students should be able to read simple storybooks fluently and to write in print. In addition, they have learned to use graphic organizers to plan their writing. In third grade they read chapter books and write reports, letters, stories, poems, and descriptions. Throughout the early grades, they learn basic grammatical and literary terms and start to understand word structure. As the act of reading becomes second nature, students can focus on the content of what they read. By reading a wide variety of genres centered on important historic and scientific themes, students build content knowledge and begin to comprehend at greater levels of depth and in a wider range of topics.
When students enter fourth grade, they have a background in mythology, poetry, fiction, folktales, and nonfiction texts on a variety of subjects. They may be interested in specific topics that have come up in their classes or that they have pursued on their own. They may have taken an interest in a particular subject or begun to study a musical instrument, dance, or art. All of this will fuel their reading and writing.
Students in fourth and fifth grades delve into literature and nonfiction: They continue to read and discuss a wide variety of literature, nonfiction, fables, and mythology, as well as essays and speeches, and to make connections with the arts—for instance, by examining art by Michelangelo and a photograph from the Civil War. They learn about play, invention, investigation, and exploration, among other topics related to the life of the mind. They begin to understand the way in which literature offers insight into culture and history—for instance, by comparing Native American narratives with those of European settlers. As they read poetry, students practice their expressive delivery and learn about poetic elements, such as rhyme scheme, meter, stanza, metaphors, and similes, and how these contribute to the beauty and meaning of the poems. They learn to spend time with works that they do not immediately understand, allowing time for their understanding to grow. Certain units (such as “Clues to a Culture” in Grade 5) are organized around historical topics, allowing students to compare descriptions of events and characters. By the end of fifth grade, students are able to tackle serious topics such as life’s challenges and obstacles; civil and cultural strife; intellectual courage; and coming of age. They also recognize literature’s sounds, word play, nonsense, invention, beauty, mystery, and sheer fun. Through the diverse use of texts, topics are introduced and reintroduced in greater depth deliberately across the grades.
Students in grades four and five learn to collaborate with peers while writing reflective essays, reports, journals, stories, and responses to literature and arts. They also learn to create multimedia presentations. They build on their grammatical knowledge and demonstrate command of grammar and usage. Word study is an essential part of the units: Students learn multiple meanings of words, continuing their study of morphology and beginning to study etymology, thus gaining insight into the relationship between English and other languages, ancient and modern. In their essays, they are able to articulate a central idea and illustrate it with examples; to discuss themes in the works they have read; and to respond both formally and informally to literature. Class discussions allow students to explore questions and ideas together; oral presentations allow them to draw on multiple resources, refine their speaking skills, and learn from each other.
The content cloud below distills the key content knowledge in the Elementary School maps. The larger an event, name, or idea appears, the more emphasis it receives in the maps. As you examine this cloud, do keep in mind that the Elementary School maps contain much that is not included here.
Childbirth is the most critical period for the mother and her baby. Every pregnant woman must have a skilled birth attendant, such as a midwife, doctor or nurse, assisting her during childbirth, and she must also have timely access to specialized care if complications should occur.
Every pregnancy deserves attention because there is always a risk of something going wrong with the mother, baby or both. Many dangers, illnesses or even death can be avoided if the woman plans to give birth attended by a skilled birth attendant, such as a doctor, nurse or midwife, and makes at least four prenatal visits to a trained health worker during the pregnancy.
The likelihood of the mother or the baby becoming ill or dying is reduced when childbirth takes place in a properly equipped health facility with the assistance of a skilled birth attendant, who also checks regularly on the mother and baby in the 24 hours after delivery.
When the pregnant woman is ready to give birth, she should be encouraged to have a companion of her choice accompany her to provide her with continuous support during childbirth and after birth. In particular, the companion can support the woman in labour to eat and drink, use breathing techniques for different stages of childbirth, and arrange for pain and discomfort relief as needed and advised by the skilled birth attendant.
During and immediately following childbirth, the skilled birth attendant will:
Radar is an acronym for “RAdio Detection And Ranging,” and it uses radio waves to detect objects in the atmosphere. It was first devised in 1904 by the German inventor Christian Hülsmeyer (1881–1957), who called his radar detector a “telemobiloscope” and patented the device in 1906. The original purpose of his invention was for ships to be able to detect each other so that in poor weather conditions (e.g., heavy fog) they would not run into each other. Sadly, this brilliant invention did not catch on at the time; if it had, some speculate that the 1912 Titanic disaster could have been avoided. Another concept that Hülsmeyer came up with was the remote control. He believed, correctly, that radio waves could be used to turn mechanical devices on and off. Again, people ignored the concept and he never got the credit he deserved.
SUCCESSFUL TEST TAKING INVOLVES:
- Attention to details - to understand the nuances of the question as well as provide the necessary content, depth and details in the response. IF the exam is an essay, it also involves attention to the writing process, making sure spelling, grammar, and punctuation are correct. IF the exam involves math, the student must also attend to the written work, making sure numbers were copied correctly and computations were completed accurately.
- Memory - making sure ALL the required materials were recalled and presented accurately, and that everything the questions required were addressed.
- Language - students must be able to read and comprehend exactly what the questions are asking, and then be able to recognize the 'best' answer from a multiple choice array, OR relate the necessary material in a comprehensive and comprehensible manner.
- Sequencing - students must often recall ORDERS of events (be they historical, fictional, numerical, etc.) and must be able to relate them in the correct sequence and comprehensive manner.
- Grapho-motor skills - are also involved, as students must have the physical control and stamina to write responses for each question in the proper test location and in 'readable' handwriting.
- Cognition/Comprehension - aside from simply understanding the question, students must recognize and follow patterns, construct main ideas, compare and contrast information from memory, compare and contrast what is necessary from what is not, compare and contrast choices as to how to respond. Students must also analyze questions and options, they must brainstorm how to respond as well as brainstorm what is the 'best' response. In essay exams students must often analyze, explain, relate, criticize, and evaluate content information, ALL of which requires higher order cognitive skills.
This post hopes to relieve test-taking anxiety while improving test-taking and study skills.
"A hundred cartloads of anxiety will not pay an ounce of debt" ---Italian Proverb.
While it's often ineffective to tell someone not to be anxious when they are, ANXIETY will not only not pay an ounce of debt, it won't help improve test scores either. In fact, if you're like me it will actually hurt test taking.
So aside from boning up on relaxation techniques and/or yoga, the remainder of this post helps parents, teachers and students PREPARE for tests, TAKE tests, and GAIN PERSPECTIVE to bust anxiety down to a manageable inconvenience. [Feel free to add your own strategies in the comments.]
TIPS FOR STUDYING:
- Begin studying as soon as you know of the test. Schedule your time with breaks. The more time you have to prepare, the more opportunities to review, the less pressure on time management and (hopefully) the stronger your memory links.
- Make outlines, review sheets, flash cards, "cram sheets" to review and establish multi-sensory memory paths and 'recall routes'.
- Here is a great note-taking/summarizing technique:
- Break your page into four sections: First, fold the paper in half vertically so you have a right side and a left side. Leave space at the top of the paper (for a title describing the page's contents) and space across the bottom (to write a brief summary).
- In the top section across the page, label or title the contents (a brief summary)
- On the right side, write your notes in bullets
- On the left side, under each bullet of notes, leave a key word or include some visual icon or image to represent its content
- On the bottom of the page write a summary
- Prepare review sheets, writing key points and ideas on one side of the paper (in outline or concept map format) showing relationships between key points. Use the other side of the paper for definitions, examples, formulas, etc.
- Review your notes, and you may want to review others' notes as well. Older kids may want to form study groups and parents (for kids in newly established groups) may want to monitor making sure the group actually studies.
- Take breaks. Get up and walk around. You may want to walk around while reciting - the kinesthetic aspect has been found to help with memory retrieval. Sometimes creating songs or chants also helps memorization.
TEST TAKING TIPS:
- Eat a light meal before the test (food is necessary for energy but heavy foods can make you groggy).
- Sleep before a test - 8 hours of sleep is recommended.
- As soon as you receive your test...mind dump: ask for/use a scrap piece of paper to jot down any memorized information you think you might forget.
- Quickly scan the test, thinking of how best to budget your time, making sure you allow time to read the directions and questions carefully.
- Read the entire question carefully. Don't assume you know what it is asking until you have completely read it.
- IF you are concerned about time, answer the questions you know you can answer first and mark the more challenging questions to return to later.
- Focus on your test, don't bother looking to see how others are responding - it will distract you, it will take time away from your work and it will not help you.
- IF you don't understand a question, ask the teacher to clarify it (if appropriate). You may also want to write a note in the margin explaining your response.
- Circle key words in difficult questions. This may help you focus on the main point.
For ESSAY QUESTIONS the objective is to demonstrate that you know the topic, can explain it, AND can support your explanation using vocabulary and technical jargon used in class and readings. When taking the test:
- Start your essay with a clear opening sentence that DIRECTLY responds to the question prompt.
- Make sure you understand the questions:
- Compare questions usually want students to focus on SIMILARITIES as well as DIFFERENCES.
- Contrast questions ask students to focus on differences between related items, qualities, events or problems.
- Criticize questions usually call for YOUR JUDGEMENT with respect to merits or factors of given ideas, statements, or events.
- Define questions require concise, clear meanings presented in an authoritative manner.
- Discuss questions ask students to ANALYZE carefully and present considerations (pros and cons) of targeted subjects. This requires a complete and detailed response.
- Evaluate questions require a careful look at an idea/event and responses should stress ADVANTAGES and LIMITATIONS. Responses should be written in an authoritative tone with some personal comments.
- Explain questions usually ask students to CLARIFY and INTERPRET the material. Here, students should address the "how's" and "why's" of a given event/situation, often consolidating and/or reconciling differences of opinions.
- Illustrate questions require students to translate, clarify, diagram a given topic/situation with CONCRETE examples to support their positions.
- Relate questions ask students to SHOW RELATIONSHIPS, emphasizing connections and associations, usually in descriptive form.
- Review questions typically require students to CRITICALLY EXAMINE a topic. Students should analyze and comment briefly about the topic in an organized, sequential manner which addresses the major points of an issue/problem/event.
- Summarize questions typically ask students to concisely relate the main points, ideas or facts of an issue.
For MULTIPLE CHOICE QUESTIONS:
- Read the question a few times to make sure you understand what it is asking.
- Underline key terms and clue words.
- If you run into vague terminology, define it in your own terms and then look for the best alternative answer.
- After reading the question, come up with your 'most likely' response BUT read ALL the response choices before selecting your 'best choice'.
TIPS TO MANAGE TEST ANXIETY:
- BE PREPARED - studying and self-testing can help. Toward this end, set up realistic study goals. Goals should be:
- specific (reading or writing "x" amount of work, or reading/writing for "x" amount of time per sitting)
- measurable (for example set aside a specific amount of time or a concrete goal or number of pages to read, etc.)
- challenging but attainable
- Evaluate your success with goal setting and adjust future goals when necessary.
- THINK POSITIVELY and IF you find you're worrying too much, try to consciously change gears.
- Break studying into chunks. Make a study schedule that includes breaks which will distract you from your tension and anxiety while empowering you to better incorporate what you've studied.
- Anxiety feeds on itself. IF you tend to freeze, you may want to talk to a teacher and see if there are alternatives WHILE you address the anxiety.
- No one I've known - even very gifted students - aces every test. Realize this. Also realize that there are often several ways to achieve your goal, and often circuitous routes are more enriching.
- Think about tests you may not have done well on. Was there a pattern or type of question that you had/have difficulty with? If so try to address these issues. You may find you need to:
- Slow down and read the directions more carefully
- Slow down (especially in math or physics), making sure you don't copy an equation wrong, misread a number, or simply make careless calculation errors.
- If there is extra time - go over your work before handing in the test.
- Congratulate yourself on completing what you have done - focus on the good and easy and not on the overwhelming.
- Use relaxation techniques while studying and before exams.
- Here is a Five Finger Relaxation Technique I found via the Student Counseling Services at the University of Chicago.
TEST TAKING STRATEGY LINKS:
- Understanding Underachievers
- Academic Success Center, George Washington University
- Survival Strategies for Test Taking
- Test Taking Tips
- Test taking tips from Teen Health
- Understanding common words used in essay prompts
- General strategies for objective tests
- How to prepare for ESSAY tests
- Strategies for multiple choice tests
- Managing Test Anxiety - advice from University of Western Ontario
Finally, some INSPIRATIONAL / MOTIVATIONAL QUOTES to help you relax and gain perspective, or simply laugh. [Please feel free to leave your own in the comments.]
"I have missed more than 9,000 shots in my career. I have lost almost 300 games. On 26 occasions I have been entrusted to take the game winning shot...and I missed. I have failied over and over and over again in my life. And that's precisely why I succeed" ----Michael Jordan, NBA Allstar
"I'm not telling you it is going to be easy, I'm telling you it's going to be worth it"---Art Williams, Professional basketball player
"Neither you nor the world knows what you can do until you have tried." ---Ralph Waldo Emerson
"So many times people end up fixated on doing things right, that they end up doing nothing at all." ---The Wright Brothers
"When I was young I observed that nie out of ten things I did were failures, soI did ten times ore work." ---George Bernard Shaw, Irish playwright
"Though no one can go back and make a brand new start, anyone can start form now and make a brand new ending." --- Carl Bard, Scottish theologian religious writer broadcaster
"Try not. Do or do not; there is no try." ---Yoda, Jedi Master, Star WarsIn closing, here is a YouTube clip from http://ipassthecpaexam.com/exam-quotes/:
As always, thank you for your visit. Please leave your own test-taking advice, motivational anecdotes, or funny test-taking experiences in the comments.
Given concerns such as environmental sustainability, waste management is a crucial task for cities. Today, municipalities must rethink their strategies for sustainable waste management to balance the city ecosystem and ensure a healthy living environment. By leveraging key emerging digital technologies like the Internet of Things (IoT), cities can adopt recycling strategies for the efficient management of waste and garbage, including e-waste. Replacing traditional, outdated waste collection methods with IoT sensor-enabled bins and sophisticated waste management applications is key to building a clean smart city.
With advancements in sensor technology, massive datasets can be put to use in deploying IoT waste management applications. The more real-time data these sensors gather, the better city municipalities can plan. Automating the route optimization of garbage trucks with sensor data can save considerable time, fuel, and money. Pickup trucks generally follow a fixed daily route; without IoT connectivity, drivers have no advance information on how full a trash bin is until they encounter it. With IoT sensors, the sanitation department can tell drivers the actual fill level of each bin in advance. Armed with this insight, and with trash loads that vary on a daily, weekly, or seasonal basis known ahead of time, a great deal of time, fuel, and money can be saved.
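To make the idea concrete, here is a minimal sketch in Python of fill-level filtering plus a greedy pickup ordering. It is an illustration only: the bin coordinates, the 75% threshold, the depot location, and the planar distance function are all invented assumptions, not any particular city's system or vendor API.

```python
import math

# Hypothetical sensor readings: (bin_id, latitude, longitude, fill_percent).
bins = [
    ("A", 51.9225, 4.4792, 85),
    ("B", 51.9244, 4.4777, 30),
    ("C", 51.9198, 4.4853, 92),
    ("D", 51.9267, 4.4723, 60),
]

DEPOT = (51.9200, 4.4800)   # assumed truck starting point
THRESHOLD = 75              # assumed policy: only visit bins at least this full

def distance(a, b):
    # Rough planar distance; adequate for a small service area.
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Keep only the bins worth visiting today.
due = [b for b in bins if b[3] >= THRESHOLD]

# Greedy nearest-neighbour ordering: a simple stand-in for real route optimization.
route, here = [], DEPOT
while due:
    nxt = min(due, key=lambda b: distance(here, (b[1], b[2])))
    route.append(nxt[0])
    here = (nxt[1], nxt[2])
    due.remove(nxt)

print("Today's pickup order:", route)  # -> ['A', 'C']
```

A production system would use road-network distances and a proper vehicle-routing solver, but the payoff is the same: trucks skip half-empty bins entirely.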
Drilling for oil in Ohio began in 1860. Drillers opened the first oil well in Ohio history near Macksburg, in Washington County. Additional wells soon appeared in Washington County and Noble County as well. By 1950, various companies had drilled more than 175,000 wells in forty-five Ohio counties. These wells had produced approximately 615 million barrels of oil. Most of Ohio's oil reserves are located in the northwestern and eastern parts of the state, with the largest concentrations located south of Toledo. As of 1950, oil companies estimated that the equivalent of another twenty-eight million barrels of oil remained under Ohio's surface.
During the late nineteenth century and the early twentieth century, numerous Ohio companies amassed fortunes from the oil industry. The Standard Oil Company came to dominate oil refining during this era, having a virtual monopoly. As the federal government sought to prohibit monopolies during the late 1800s and the early 1900s, Standard Oil lost its stranglehold over the industry. By the start of the twentieth century, oil drilling in the United States had shifted from states like Ohio to locations in the American Southwest. Ohio companies also moved westward. The Ohio Oil Company began to drill for oil in the Rocky Mountains during the early 1900s, although it also continued to extract oil from Ohio's soil. At the start of the twenty-first century, Ohio still produces some oil. In 1981, more than six thousand new wells appeared in the state, although companies drilled fewer than seven hundred new wells in 1993.
… two thirds of the world’s population will suffer from moderate to severe water shortage by 2025?
- 90 percent of fresh water consumed is used in agriculture.
- In arid regions agricultural and human consumption are in stiff competition. As a result, currently at least 500 million people suffer from fresh water shortages.
- There is an urgent need to reduce water used in protein production in affected areas.
- Supplementing animal feed with Evonik amino acids leads to a reduction in the required amount of vegetable protein sources, e.g. soya and corn: fewer vegetable protein sources mean a smaller water footprint of production.
- Feeding animals with healthy, Evonik amino-acid-balanced feed eliminates their need to metabolize and excrete superfluous protein: less excretion means a smaller water footprint of production.
We help lower water use in animal protein production.
Dec. 15, 2017: On Friday, Dec. 15th, at the Cape Canaveral Air Force Station in Florida, SpaceX launched a new sensor to the International Space Station named TSIS-1. Its mission: to measure the dimming of the sun. As the sunspot cycle plunges toward its 11-year minimum, NASA satellites are tracking a decline in total solar irradiance (TSI). Across the entire electromagnetic spectrum, the sun’s output has dropped nearly 0.1% compared to the Solar Maximum of 2012-2014. This plot shows the TSI since 1978 as observed from nine previous satellites:
Click here for a complete explanation of this plot.
The rise and fall of the sun’s luminosity is a natural part of the solar cycle. A change of 0.1% may not sound like much, but the sun deposits a lot of energy on the Earth, approximately 1,361 watts per square meter. Summed over the globe, a 0.1% variation in this quantity exceeds all of our planet’s other energy sources (such as natural radioactivity in Earth’s core) combined. A 2013 report issued by the National Research Council (NRC), “The Effects of Solar Variability on Earth’s Climate,” spells out some of the ways the cyclic change in TSI can affect the chemistry of Earth’s upper atmosphere and possibly alter regional weather patterns, especially in the Pacific.
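A quick back-of-the-envelope calculation shows why a 0.1% swing is so significant. The sketch below (using a rounded Earth radius and treating the Earth as a simple disk intercepting sunlight) is an approximation, not a figure from the NRC report:

```python
import math

TSI = 1361.0          # total solar irradiance, W/m^2
R_EARTH = 6.371e6     # mean Earth radius in meters (approximate)

# Earth intercepts sunlight over its cross-sectional disk, pi * R^2.
intercepted = TSI * math.pi * R_EARTH**2
variation = 0.001 * intercepted   # a 0.1% change in TSI

print(f"Total intercepted solar power: {intercepted:.2e} W")  # ~1.7e17 W
print(f"0.1% variation:                {variation:.2e} W")    # ~1.7e14 W, i.e. ~170 TW
# For comparison, Earth's internal (geothermal) heat flow is roughly 47 TW,
# so the 0.1% solar swing alone is several times larger.
```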
NASA’s current flagship satellite for measuring TSI, the Solar Radiation and Climate Experiment (SORCE), is now more than six years beyond its prime-mission lifetime. TSIS-1 will take over for SORCE, extending the record of TSI measurements with unprecedented precision. Its five-year mission will overlap a deep Solar Minimum expected in 2019-2020. TSIS-1 will therefore be able to observe the continued decline in the sun’s luminosity followed by a rebound as the next solar cycle picks up steam. Installing and checking out TSIS-1 will take some time; the first science data are expected in Feb. 2018. Stay tuned.
Dec. 11, 2017: You’ve heard of comets. But have you ever heard of a rock comet? They exist, and a big one is approaching Earth this week. 3200 Phaethon will fly past our planet on Dec. 16th only 10 million km away. Measuring 5 km in diameter, this strange object is large enough for amateur astronomers to photograph through backyard telescopes. A few nights ago, the Astronomy Club of the Sing Yin Secondary School in Hong Kong video-recorded 3200 Phaethon’s approach using a 4-inch refractor:
“We observed 3200 Phaethon from the basketball court of our school campus,” the club reports. “Our school is located close to the city center where the visual limiting magnitude is about 2 to 3. Despite the glare, we were able to record the motion of this object.” (For others who wish to do this, Bob King of Sky & Telescope has written an excellent set of observing tips.)
3200 Phaethon is the source of the annual Gemini meteor shower, which is also coming this week. Sky watchers can see dozens of Geminids per hour on Dec. 13th and 14th as gravelly bits of the rock comet disintegrate in Earth’s upper atmosphere. The best time to look is during the dark hours before sunrise when Gemini is high in the sky.
“This is 3200 Phaethon’s closest encounter with Earth until December of 2093, when it will come to within 1.8 million miles,” notes Bill Cooke of NASA’s Meteoroid Environment Office. Despite the proximity of the rock comet, he doesn’t expect to see any extra Geminids this year. “It would take at least another revolution around the sun before new material from this flyby could encounter Earth – probably longer.”
A “rock comet” is an asteroid that comes very close to the sun–so close that solar heating scorches plumes of dust right off its stony surface. 3200 Phaethon comes extremely close to the sun, only 0.14 AU away, less than half the distance of Mercury, making it so hot that lead could flow like water across its sun-blasted surface. Astronomers believe that 3200 Phaethon might occasionally grow a comet-like tail of gravelly debris–raw material for the Geminid meteor shower. Indeed, NASA’s STEREO-A spacecraft may have seen this happening in 2010. There is much to learn about 3200 Phaethon, which is why NASA radars will be pinging it as it passes by. Stay tuned for updates.
Dec. 9, 2017: Since the spring of 2015, Spaceweather.com and the students of Earth to Sky Calculus have been flying balloons to the stratosphere over California to measure cosmic rays. Soon after our monitoring program began, we quickly realized that radiation levels are increasing. Why? The main reason is the solar cycle. In recent years, sunspot counts have plummeted as the sun’s magnetic field weakens. This has allowed more cosmic rays from deep space to penetrate the solar system. As 2017 winds down, our latest measurements show the radiation increase continuing apace–with an interesting exception, circled in yellow:
In Sept. 2017, the quiet sun surprised space weather forecasters with a sudden outburst of explosive activity. On Sept. 3rd, a huge sunspot appeared. In the week that followed, it unleashed the strongest solar flare in more than a decade (X9-class), hurled a powerful CME toward Earth, and sparked a severe geomagnetic storm (G4-class) with Northern Lights appearing as far south as Arkansas. During the storm we quickened the pace of balloon launches and found radiation dropping to levels we hadn’t seen since 2015. The flurry of solar flares and CMEs actually pushed some cosmic rays away from Earth.
Interestingly, after the sun’s outburst, radiation levels in the stratosphere took more than 2 months to fully rebound. Now they are back on track, increasing steadily as the quiet sun resumes its progress toward Solar Minimum. The solar cycle is not expected to hit rock bottom until 2019 or 2020, so cosmic rays should continue to increase, significantly, in the months and years ahead. Stay tuned for updates as our balloons continue to fly.
Technical note: The radiation sensors onboard our helium balloons detect X-rays and gamma-rays in the energy range 10 keV to 20 MeV. These energies, which span the range of medical X-ray machines and airport security scanners, trace secondary cosmic rays, the spray of debris created when primary cosmic rays from deep space hit the top of Earth’s atmosphere.
Greater Than Less Than Worksheets 1st Grade. Less than greater than worksheet generator.
Some of the worksheets for this concept are greater lesson 122 work 1 equal or not equal than, comparing numbers up to 2 digit s1, greater than less than preschool math work, identifying which number is. Such a fun way to practice comparing numbers to 100.
Look into the relevant standards here, or dig.
If your preschooler easily recognizes all numbers between 1 and 10, this is the next natural progression. Top 10 1st grade greater than, less than kids activities. He often knew which number was larger; the worksheets below build upon those themes. First and second grade Christmas math worksheets.
- by Jeroen Fijan, 27/03/20
Richard Mollier was a professor of Applied Physics and Mechanics and a pioneer of experimental research in thermodynamics in the late 19th century. He carried out meticulous calculations for every state and property of air.
The result: the emblematic HX diagram.
An easy-to-read tool still in use today. In this issue of our blog, we explain how the diagram works and how to read it.
Why make thousands of calculations every time you need to predict the state of a medium? Richard Mollier saved us a tremendous amount of time by making all those calculations for us and instead giving us this powerful tool.
The diagram provides a graphic representation of the relationship between physical conditions and the corresponding changes in the system: the two can be linked simply by drawing some lines and knowing what their intersections represent.
Constant temperature lines in the diagram are largely horizontal, but slightly tilted. Each line corresponds to a temperature, and they are simple and proportionate – in other words, if you need the line for 21.5°C and this is not indicated in the graph, you can simply imagine a line exactly in the middle between those for 21 and 22°C.
The vertical lines in the diagram represent absolute humidity in grams per kilogram, with a range from 0 to 40 g/kg. They show how much water vapour the air can contain at different temperatures: the warmer the air, the more water vapour it can contain.
The curved lines in the diagram represent the relative humidity of air. As we mentioned above, air can hold a fixed amount of water vapour. Relative humidity is the ratio of existing water vapour in the air to the maximum possible amount of vapour the air could potentially contain.
The 100% humidity line is also called the saturation line. This is the maximum amount of vapour that air in a given condition can contain.
The diagonal lines in the diagram represent specific enthalpy, which indicates the internal energy of the air. Again, as with humidity, this is higher when the air is hotter.
The last set of lines in the diagram are the lines of air density, which range from 1.1 to 1.35 kg/m³. Colder air is heavier than hotter air because its molecules are packed more closely together, making it denser at low temperatures. As temperature rises, the molecules move faster, the space between them increases, and density drops.
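Those density figures can be checked with the ideal gas law for dry air. The short sketch below is a simplification (it ignores humidity, which lowers density slightly, and assumes standard sea-level pressure):

```python
# Density of dry air from the ideal gas law: rho = p / (R_specific * T).
P_ATM = 101325.0   # standard atmospheric pressure, Pa
R_AIR = 287.05     # specific gas constant for dry air, J/(kg*K)

for temp_c in (0, 20, 40):
    temp_k = temp_c + 273.15           # convert Celsius to Kelvin
    rho = P_ATM / (R_AIR * temp_k)
    print(f"{temp_c:>3} degC -> {rho:.3f} kg/m^3")

# Output: 0 degC -> 1.292, 20 degC -> 1.204, 40 degC -> 1.127,
# squarely within the 1.1-1.35 kg/m^3 range of the diagram's density lines.
```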
The dew point and wet-bulb temperature are two important variables that can be read indirectly from the Mollier diagram. The dew point is the temperature at which the water vapour in the air starts to condense. The wet-bulb temperature is the theoretical temperature read by a thermometer covered in a water-soaked cloth over which air is passed.
As an example, let us imagine an arbitrary state, like 25°C with 50% relative humidity. You can find the dew point by drawing a line from the point where the 50% relative humidity curve intersects with an imaginary line indicating a temperature of 25°C in the graph straight down to the saturation line (which, as you remember, represents 100% relative humidity). The temperature corresponding to this point is the dew point temperature – in this case 14°C.
For the wet bulb temperature, we again start from the point where relative humidity is 50% and temperature is 25°C, but instead of a vertical line, we follow the specific enthalpy line down to the saturation line. The temperature at this point is the wet bulb temperature, or around 18.3°C in our example.
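If you prefer to compute these values rather than read them off the chart, here is a minimal sketch (our addition, not part of the original diagram) using two standard approximations: the Magnus formula for the dew point and Stull's 2011 empirical formula for the wet-bulb temperature, both valid for ordinary conditions near sea level.

```python
import math

def dew_point(t_c, rh_pct):
    """Dew point in deg C from temperature (deg C) and relative humidity (%), Magnus formula."""
    gamma = math.log(rh_pct / 100) + 17.62 * t_c / (243.12 + t_c)
    return 243.12 * gamma / (17.62 - gamma)

def wet_bulb(t_c, rh_pct):
    """Wet-bulb temperature in deg C (Stull 2011 approximation)."""
    return (t_c * math.atan(0.151977 * math.sqrt(rh_pct + 8.313659))
            + math.atan(t_c + rh_pct)
            - math.atan(rh_pct - 1.676331)
            + 0.00391838 * rh_pct ** 1.5 * math.atan(0.023101 * rh_pct)
            - 4.686035)

print(round(dew_point(25, 50), 1))  # ~13.9, matching the ~14 deg C read from the chart
print(round(wet_bulb(25, 50), 1))   # ~18.0, close to the ~18.3 deg C chart reading
```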
In the next blog, we’ll show you how to make practical use of the diagram by making calculations with it.
Jeroen Fijan | R&D Manager
Jeroen Fijan has been working at Heinen & Hopman since 2001. He started as a draughtsman and, over the years, worked his way up to the top of the R&D division. Sustainability is a top priority in the quest to improve H&H's products and processes.
One of three elements that affect a photograph's exposure (along with ISO and aperture), shutter speed is the amount of time that passes while the shutter is open and a single image is being taken. When the shutter is open, a picture is being taken; when it closes, the image is complete. The button that takes the photo is called the shutter release, because it opens the shutter to take the image.
Shutter speed has a direct effect on how much light is in an image. When the shutter is open for a longer period of time, more light is let in, so the image will be brighter. The opposite is also true—shorter shutter speeds mean less light is let in, so the image is darker. Of course, the amount of light, or the image's exposure, can also be adjusted through aperture and ISO.
Shutter speed is written in seconds, or fractions of seconds that indicate how long the shutter is open. For example, a one second shutter speed is much slower than 1/500. On most cameras, the slowest shutter speed is called Bulb—this isn't actually a speed, but under this setting, the shutter will remain open until you press the shutter release a second time.
Most cameras will use a number instead of a fraction to indicate the shutter speed inside the viewfinder or on the LCD screen to save space. A 1/500 shutter speed is often designated in the viewfinder as 500, for example. Shutter speeds that are not fractions (i.e. a second or longer) are accompanied by a quotation mark: 1” in the viewfinder is a one-second shutter speed.
How Does Shutter Speed Impact Blur?
So, if a slower shutter speed lets in more light, it makes sense that, for low light pictures, you should use a slow shutter speed. But there's more to choosing a shutter speed than just light: Any object that moves while the shutter is open will be blurred. Sometimes, the blur is intentional, but other times you want a crisp subject frozen in time with zero blur.
Most of the time, blur is unwanted, so a fast shutter speed is used. When taking pictures of sports or snapping shots of kids or babies, for example, a fast shutter speed is usually best. The faster the subject is moving, the faster the shutter speed should be to freeze the action. Since the best shutter speed depends on the speed of your subject, there's no hard and fast rule as to what setting to use. For fast subjects, try starting out at 1/500. Check your shots in the LCD screen and increase if you have some blur, or decrease if the subject is sharp but too dark.
Sometimes, blur can be a good thing. Blur can give an image a sense of movement. Many waterfall images, for example, use a slow shutter speed so the water appears as a smooth, white blur. Intentionally using a slow shutter speed to blur motion is called long exposure photography—it's great for smoothing out water, showing the motion of traffic or people or photographing fireworks, just to name a few. Just like freezing the action, there's no hard, fast rule for the right shutter speed to use for a long exposure.
You can start at one second, then increase or decrease from there to add more or less blur. Bulb mode (set on most cameras by turning your shutter speed all the way down) allows you to choose based on the motion you see, stopping when the action is complete.
To (successfully) take an image with a slow shutter speed, you'll need a tripod. Since any motion while the shutter is open becomes blur, if the entire camera moves, even just slightly, the whole image will be blurry. A tripod prevents unwanted blur by keeping the camera steady during the entire exposure. To further prevent camera shake, you can also use the camera's self timer or a remote so that touching the camera to take the picture doesn't move the camera either.
What shutter speeds require a tripod? Well, it depends on a few factors. Image stabilization allows you to shoot at one to three stops slower before needing a tripod, compared to cameras (or lenses) without stabilization. Zoom also plays a role—more zoom requires a faster shutter speed. As a general rule for DSLR and mirrorless users, the minimum handheld shutter speed = 1/focal length. What does that mean? Remember the focal length is how far your lens is zoomed in. So if you are shooting at 50mm, your shutter speed should be at least 1/50. If you are using a 300mm telephoto, you'll want a much faster 1/300. This guideline just applies to using the camera without a tripod—remember you'll want to increase the speed for faster subjects.
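To make the rule concrete, here is a small hypothetical helper (an illustration, not part of any camera's firmware or API) that applies the 1/focal-length rule, with each stop of stabilization doubling the usable exposure time:

```python
def min_handheld_shutter(focal_length_mm, stabilization_stops=0):
    """Slowest recommended handheld shutter speed, in seconds."""
    # Base rule: 1 / focal length; each stop of stabilization doubles it.
    return (1 / focal_length_mm) * (2 ** stabilization_stops)

print(min_handheld_shutter(50))      # 0.02 s, i.e. 1/50
print(min_handheld_shutter(300, 3))  # ~0.027 s, i.e. roughly 1/37 with 3 stops of IS
```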
How to Set Shutter Speed
Ready to try it out? Shutter speed can be set in two ways. Manual mode allows you to set the shutter speed as well as the aperture and ISO—allowing the photographer to have complete control over the image. But managing all three at once can be rather daunting for beginners—if you are new to shutter speed, start by using Shutter Priority mode. In Shutter Priority, the user sets the shutter speed, the camera will automatically select the aperture and ISO for the proper exposure. That way, you can focus on mastering shutter speed before throwing in the other two elements to exposure.
In Shutter Priority, the shutter speed is typically adjusted by using the control wheel at the back of the camera. Some advanced compacts don't have a control wheel at all and the shutter speed is adjusted using the arrow keys or a touchscreen. When shooting in manual mode, the camera has to have controls dedicated to aperture as well. If your camera has only one control wheel, a function button typically lets you swap between the two. Cameras with two wheels will have one dedicated to each function for faster adjustments. If you're not sure, check your camera's manual for specifics on your model.
Shutter speed allows you to control whether an image has blur or whether it's sharply frozen in time, as well as affecting exposure. As one of three parts of the exposure triangle, understanding shutter speed is essential to mastering manual modes and taking complete control over your photography. |
Calypso (Saturn XIV)
A satellite of Saturn discovered in 1980 on the images taken by Voyager 1. It shares Tethys's orbit, together with Telesto, at a distance of 294,660 km, and circles the planet with a period of 1.888 days. It is 34 x 22 x 22 km in size.
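As a quick cross-check we have added (not part of the original entry), the quoted period follows from Kepler's third law, T = 2π√(a³/GM), with Saturn's standard gravitational parameter GM ≈ 3.7931 × 10¹⁶ m³/s²:

```python
import math

GM_SATURN = 3.7931e16   # Saturn's gravitational parameter, m^3 s^-2
a = 294_660e3           # Calypso's orbital radius in metres

period_s = 2 * math.pi * math.sqrt(a ** 3 / GM_SATURN)
print(period_s / 86_400)  # ~1.89 days, consistent with the 1.888 days above
```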
In Greek mythology, Calypso was a sea nymph and the daughter of the Titan Atlas.
Dione (Saturn IV)
The fourth largest moon of Saturn and the second densest after Titan. Its diameter is 1,120 km and its orbit 377,400 km from Saturn. It is composed primarily of water ice but must have a considerable fraction of denser material like silicate rock.
Discovered in 1684 by Jean-Dominique Cassini, Italian born French astronomer (1625-1712). In Gk. mythology Dione was the mother of Aphrodite (Venus) by Zeus (Jupiter).
Enceladus (Saturn II)
The eighth of → Saturn's known → satellites, discovered by Herschel in 1789. It is about 500 km in diameter and orbits Saturn at a mean distance of 238,000 km with a period of 1.37 days. Enceladus has the highest → albedo (> 0.9) of any body in the → Solar System. Its surface is dominated by clean ice. Geophysical data from the → Cassini-Huygens spacecraft imply the presence of a global → ocean below an ice shell with an average thickness of 20-25 km, thinning to just 1-5 km over the south polar region. There, → jets of → water vapor and icy grains are launched through fissures in the → ice. The composition of the ejected material measured by Cassini includes salts and silica dust. In order to explain these observations, an abnormally high heat power is required, about 100 times more than is expected to be generated by the natural → decay of → radioactive elements in rocks in its core, as well as a means of focusing activity at the south pole. According to simulations, the core is made of unconsolidated, easily deformable, porous rock that water can easily permeate. The → tidal friction from Saturn is thought to be at the origin of the eruptions deforming the icy shell by push-pull motions as the moon follows an elliptical path around the giant planet. But the energy produced by tidal friction in the ice, by itself, would be too weak to counterbalance the heat loss seen from the ocean; the globe would freeze within 30 million years. More than 10 GW of heat can be generated by tidal friction inside the rocky core. Water transport in the tidally heated permeable core results in hot narrow upwellings with temperatures exceeding 90 °C, characterized by powerful (1-5 GW) hotspots at the seafloor, particularly at the south pole. The release of heat in narrow regions favors intense interaction between water and rock, and the transport of hydrothermal products from the core to the plume sources (Choblet et al., 2017, Nature Astronomy, doi:10.1038/s41550-017-0289-8)
In Gk. mythology Enceladus was a Titan who battled Athene in their war against the gods. When he fled the battlefield, Athene crushed him beneath the Sicilian Mount Etna.
Enkelâdos, from the original Gk. pronunciation of the name.
Hyperion (Saturn VII)
The sixteenth of → Saturn's known → natural satellites. It is shaped like a potato with dimensions of 410 x 260 x 220 km and has a bizarre porous, sponge-like appearance. Many of the sponge holes or craters have bright walls, which suggests an abundance of → water → ice. The crater floors are mostly the areas of the lowest → albedo and greatest red coloration. This may be because the average temperature of roughly -180 °C might be close enough to a temperature that would cause → volatiles to → sublimate, leaving the darker materials accumulated on the crater floors. Hyperion is one of the largest bodies in the → Solar System known to be so irregular. Its density is so low that it might house a vast system of caverns inside. Hyperion rotates chaotically and revolves around Saturn at a mean distance of 1,481,100 km. It was discovered by two astronomers independently in 1848, the American William C. Bond (1789-1859) and the British William Lassell (1799-1880).
Hyperion, in Gk. mythology was the Titan god of light, one of the sons of Ouranos (Heaven) and Gaia (Earth), and the father of the lights of heaven, Eos the Dawn, Helios the Sun, and Selene the Moon.
Saturn
The sixth → planet from the Sun and the second largest, with an equatorial diameter of 120,536 km, orbiting at an average distance of 1,429,400,000 km (9.54 → astronomical units) from the Sun. With an → eccentricity of 0.05555, its distance from the Sun ranges from 1.35 billion km (9.024 AU) at its → perihelion to 1.509 billion km (10.086 AU) at its → aphelion. Its average orbital speed being 9.69 km/s, it takes Saturn 29.457 Earth years (or 10,759 Earth days) to complete a single revolution around the Sun. However, Saturn takes just over ten and a half hours (10 hours 33 minutes) to rotate once on its axis. This means that a single year on Saturn lasts about 24,491 Saturnian solar days. Saturn has a mass of 5.6836 × 10²⁶ kg (95.159 → Earth masses) and a mean density of 0.687 g cm⁻³. Like Jupiter, Saturn is about 75% → hydrogen and 25% → helium with traces of → water, → methane, and → ammonia, similar to the composition of the primordial Solar Nebula from which the solar system was formed. The temperature on Saturn is ~ -185 °C. Like Jupiter, Saturn has a solid core of iron-nickel and rock (silicon and oxygen compounds). The core has an estimated mass of 9-22 Earth masses and a diameter of about 25,000 km (about 2 Earth diameters). The core is enveloped by a liquid → metallic hydrogen layer and a → molecular hydrogen layer. Saturn's interior is hot (12,000 K at the core). The planet radiates more energy into space than it receives from the Sun; most of the extra energy is generated by the → Kelvin-Helmholtz mechanism, as in Jupiter. Saturn has 62 known satellites. → Saturn's rings. On 1 July 2004 NASA/ESA's → Cassini-Huygens became the first spacecraft to orbit Saturn, beginning a 13-year mission that revealed many secrets and surprises about Saturn and its system of rings and moons.
O.E. Sætern "Italic god," also "most remote planet" (then known), from L. Saturnus, Italic god of agriculture, possibly from Etruscan.
Keyvân Mid.Pers. Kêwân, borrowed from Aramean kâwân, from Assyrian kaiamânu.
Saturn Nebula
Fr.: nébuleuse Saturne
A planetary nebula in the Aquarius constellation discovered by William Herschel in 1782. It has a size of about 0.3 x 0.2 light-years and lies about 1400 light-years away. Also known as NGC 7009.
Saturn's rings
Fr.: anneaux de Saturne
A system of rings around Saturn made up of countless small particles, ranging in size from micrometers to meters, that orbit the planet. The ring particles are made almost entirely of → water ice, with some contamination from → dust and other chemicals. The ring system is divided into six major components: the D, C, B, A, F, and G rings, listed from inside to outside. But in reality, these major divisions are subdivided into thousands of individual → ringlets. The large gap between the A and B rings is called the Cassini division. Saturn's rings are extraordinarily thin: though they are 250,000 km or more in diameter, they are less than one kilometer thick. → A ring, → B ring, → C ring, → D ring, → F ring, → G ring.
halqehâ-ye Keyvân (#)
Chorale Prelude on “All Glory, Laud and Honor” (the melody is in the pedals) J.S. Bach (1685–1750)
“Oh Morning Star, How Fair and Bright” Johann Christoph Bach (J.S. Bach’s great uncle, 1642–1703)
Question: What do you get when you combine a medical doctor, a renowned concert organist and musicologist, a significant New Testament scholar, a heaping helping of Mother Theresa, and a Nobel Peace Prize?
Answer: You get Albert Schweitzer (1875–1965), whose birthday was the 14th of this month.
Schweitzer was born in Alsace, on the border between France and Germany — a village so small that the Catholic and Lutheran congregations shared the same sanctuary. The 18-year-old Schweitzer played for the eminent organist and composer Charles-Marie Widor (1844–1937), who was so impressed with Schweitzer's playing that he agreed to teach him without fee.
Schweitzer went on to study theology and music at Kaiser Wilhelm University of Strasbourg. At age 23 he returned to Paris to write his PhD dissertation on _The Religious Philosophy of Kant_ at the Sorbonne, and to study in earnest with Widor. Schweitzer rapidly gained prominence as a musical scholar and organist, and was dedicated to the restoration of historic pipe organs. He even developed a unique configuration of microphones for recording concerts, still known as the “Schweitzer Technique”. With theological insight he explicated J. S. Bach’s deep use of musical symbolism in his sacred music, astonishing Widor with his insights. Widor’s encouragement resulted in Schweitzer’s two-volume biography of Bach — a landmark in Bach scholarship.
At age 24 Schweitzer became a deacon at the church of Saint Nicholas in Strasbourg, a year later he was ordained as curate, and a year after that became provisional Principal of the Theological College of Saint Thomas (from which he had just graduated) — at age 28 his appointment was made permanent. At age 31 he published his _Quest of the Historical Jesus_ — a landmark work in New Testament studies. Particularly in the 19th century there was a major effort to excavate the ‘historical Jesus’, the “real Jesus” behind the ‘facade’ of the Gospel account. Schweitzer analyzed over 50 of those efforts, and demonstrated that in every case the investigating scholar ended up ‘discovering’ a ‘historical Jesus’ who happened to match the scholar’s expectations (prejudices?) with which he had undertaken his investigations in the first place. This was a major blow to the entire ‘historical Jesus’ movement — a project that continues to this day but with nary the same enthusiasm. (The last marvelous paragraph of Schweitzer’s book is the text for today’s anthem.)
At age 30 Schweitzer answered the call of “The Society of the Evangelist Missions of Paris” which was looking for a medical doctor. Alas, the committee wouldn’t accept his offer because of his “incorrect” Lutheran theology. Nevertheless, in spite of a chorus of dismay from his friends, family and colleagues, he resigned his theological post and entered medical school. He planned to spread the Gospel by the example of his Christian labor of healing, rather than through the verbal process of preaching.
By extreme efforts he completed his studies in six years. Now armed with a medical degree, Schweitzer made a proposal they could hardly refuse: to go as a medical doctor to present day Gabon (on the western African coast and the equator) to work at his own expense. He refused to attend a committee inquiring into his doctrine, but met each committee member personally and was at last accepted. Through concerts and other fund-raising, he was able to equip a small hospital.
In their first nine months he and his wife Helene (an anesthesiologist) treated about 2000 patients. After briefly occupying a shed formerly used as a chicken coop, in 1913 they built their first hospital of corrugated iron, with two 13-foot rooms (consulting and operating rooms) and with a dispensary and sterilising room in spaces below the broad eaves. The waiting room and dormitory (42 by 20 feet) were built, like native huts, of unhewn logs.
Schweitzer frequently visited Europe to raise money and awareness of his efforts, and eventually his hospital became self sufficient — it is still in operation. With an international stature in his day something like Mother Theresa in ours, he was awarded the Nobel Peace Prize in 1952. He died age 90 at his hospital in Africa. Helene predeceased him by three years, and they are both buried there. |
What is energy?
In order to explain how wind energy works, let’s start by asking what is energy?
Simply put – energy is the ability to do work. For example, when we eat, our bodies transform the energy from food into movement in our muscles.
Generally, energy can be categorised into either kinetic energy (the energy of moving objects) or potential energy (energy that is stored). The different types of energy include thermal energy, radiant energy, chemical energy, electrical energy, motion energy, sound energy, elastic energy and gravitational energy.
In the case of wind energy, wind turbines take the kinetic energy that’s in the wind and convert that kinetic energy into mechanical power. We mostly use this mechanical power in the form of electricity.
Want to know how it works? Scroll down!
What is wind?
Wind. It’s always been with us, and it always will be. So, where does it come from?
Basically, wind is caused by 3 things:
- The heating of the atmosphere by the sun,
- The rotation of the Earth, and
- The Earth’s surface irregularities.
Air under high pressure moves toward areas of low pressure – and the greater the difference in pressure, the faster the air flows and the stronger the wind!
What is wind energy?
Wind turbines capture the energy of the wind and convert it to electricity.
Wind energy is an alternative to energy produced by burning fossil fuels.
Wind energy comes from a natural and renewable resource (it will never run out). It is clean: it produces no greenhouse gas emissions and emits no air pollutants, and it uses very little water.
So, how do we produce it?
What is a wind turbine?
A wind turbine is a device that converts kinetic energy from the wind into electricity.
A group of wind turbines is called a wind farm. On a wind farm, turbines provide power to the electrical grid. These turbines can be found on land (onshore) or at sea (offshore).
Wind turbines are manufactured in a wide range of shapes and sizes, but the most common design is one with three blades mounted on a horizontal axis. Their output ranges from as little as 100 kilowatts to as much as 12 megawatts.
They can be placed in a huge range of locations: on hills, in open landscapes, fixed to the bottom of the sea – and we can even have floating turbines in deep waters!
There are three main variables that determine how much electricity a turbine can produce:
- Wind speed – Stronger winds allow us to produce more electricity, and taller turbines reach stronger, steadier winds. Wind turbines typically generate electricity at wind speeds of 4 – 25 metres per second.
- Blade radius – The larger the radius or “swept area” of the blades, the more electricity can be produced. Doubling the blade radius can result in four times more power.
- Air density – “Heavier” air exerts more lift on a rotor. Air density is a function of altitude, temperature and air pressure. High altitude locations have lower air pressure and “lighter” air so they are less productive turbine locations. The dense “heavy” air near sea level drives rotors more effectively.
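All three variables combine in the standard wind-power equation P = ½ρAv³Cp, where A = πr² is the swept area and Cp is the power coefficient (physically capped by the Betz limit at about 59%). Below is a minimal sketch; the radius, wind speed and Cp values are illustrative assumptions, not WindEurope figures.

```python
import math

def turbine_power_w(radius_m, wind_speed_ms, air_density=1.225, cp=0.45):
    """Electrical power in watts from the standard wind-power equation."""
    swept_area = math.pi * radius_m ** 2  # blade radius sets the swept area A
    return 0.5 * air_density * swept_area * wind_speed_ms ** 3 * cp

print(turbine_power_w(60, 10) / 1e6)   # ~3.1 MW
print(turbine_power_w(120, 10) / 1e6)  # ~12.5 MW: double the radius, four times the power
```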
How does a wind turbine work?
There is a wind vane 1 at the top of each turbine: this tells the turbine the speed and direction the wind is blowing.
The turbine then rotates on the tower to face into the wind, and the blades 2 rotate on their axis to create maximum resistance against the wind.
The wind starts turning the blades which are connected to a hub 3 and a low-speed shaft 3.
The low-speed shaft spins at the same speed as the blades (7-12 revolutions per minute). But we need a much faster rotational speed for the generator to produce electricity.
That’s why most wind turbines have a gearbox 4, which multiplies the rotational speed of the low-speed shaft by over 100 times to the high-speed shaft 5, which rotates up to 1,500 revolutions per minute.
This is connected to a generator 6, which converts the kinetic energy into electricity.
Turbines that do not have a gearbox are connected directly from the hub to the generator 6 through their axis (this is called ‘direct-drive’).
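To put rough numbers on those speeds (a back-of-the-envelope sketch; the 4-pole generator below is an assumption for illustration), the gear ratio is simply the ratio of shaft speeds, and a synchronous generator's output frequency is f = rpm × poles / 120:

```python
def gear_ratio(generator_rpm, rotor_rpm):
    """How many times the gearbox multiplies the rotor's rotational speed."""
    return generator_rpm / rotor_rpm

def synchronous_frequency_hz(rpm, poles=4):
    """Output frequency of a synchronous generator: f = rpm * poles / 120."""
    return rpm * poles / 120

print(gear_ratio(1500, 12))               # 125x at the fast end of 7-12 rpm
print(gear_ratio(1500, 7))                # ~214x at the slow end
print(synchronous_frequency_hz(1500, 4))  # 50 Hz, the European grid frequency
```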
How does electricity get from the turbine to our homes?
The generator in the nacelle typically produces alternating current (AC) electricity.
The electricity is sent down a thick cable inside the tower, and then via underground cables to a substation.
At the substation, the voltage is modified so it can be fed into the power grid and transported to electricity consumers.
This is how the electricity produced by wind is able to power electrical appliances in our homes, schools, hospitals and offices.
What happens when the wind doesn’t blow?
Wind project developers carefully choose the sites where they install wind farms. When a turbine is not turning, it is usually because it is under maintenance, or because it must be stopped for safety reasons in case of strong winds or a storm.
But occasionally, there is not enough wind to turn the turbine. Does this mean that we lose out on wind energy? No!
By combining wind turbines with energy storage systems we can store that energy for later use whenever wind levels are low.
There are many, many different forms of storage (batteries, pumped hydro, heat storage, high-energy supercapacitors, etc.), and the storage of the electricity generated by wind energy is vital to the energy transition.
Why should we use wind energy instead of other sources of energy?
In addition to being clean, wind energy is the cheapest form of new power generation in most of Europe today. A cost-efficient way to reduce our greenhouse gas emissions and meet our climate targets by 2050!
And – because wind energy can happily coexist with other sectors, the land where wind farms are located can also be used for farming and other purposes.
What are the other benefits of wind energy?
Wind energy brings significant benefits to local communities. Wind farms pay taxes to the local municipalities and make other financial and community contributions too. Polls across Europe show that 75-80% of those who live near wind farms support them.
Wind energy is also a true European success story: it employs 300,000 people in Europe and contributes €37bn to the EU GDP. Each GW of onshore wind Europe builds supports around 5,000 jobs in planning, manufacturing and installation. The ongoing operation and maintenance of wind farms supports local jobs too.
The wind industry has brought new jobs and investment to shipbuilding areas and coal regions. Regions with steel and chemicals industries are also benefitting. More wind energy will mean more benefits for more communities across Europe.
How can we make the most of wind energy?
Wind is already 15% of Europe’s electricity… but electricity is only a quarter of Europe’s energy.
If we want a cleaner, greener world, we need more wind power and we need to increase the share of electricity in the energy mix.
So here’s what we need to do:
Get more wind energy into the grid by electrifying heating & cooling, transport and industrial processes. Find out more in our report Breaking new ground.
Increase investment in grid infrastructure and electric vehicle charging points.
Improve energy storage solutions to stock renewable energy and store it in the case of overproduction.
The potential for wind energy is enormous!
Visit our About Wind section for more information on wind energy in Europe.
The bone marrow is an organ, that is, a group of cells working together to perform a shared function. Its main function is the production of blood cells: red blood cells, white blood cells and platelets. When a disease affects the bone marrow, the patient usually shows abnormalities in the production of these components. Because the bone marrow is the main manufacturing plant of blood cells, bone marrow biopsy is one of the most commonly ordered hematological investigations; and since the marrow is hidden under a layer of thick bone, special techniques are needed to obtain a sample.
After birth, and as a person ages, the percentage of active “red” marrow decreases; by late childhood and early adulthood it is almost completely replaced by inactive “yellow” marrow. The active marrow becomes limited to the axial bones (especially the sternum or “breastbone”, the ribs and the hip bone) and the proximal ends of the long bones. In other words, the parts near the center of the body.
Bone marrow biopsy is ordered for the following conditions:
- An unexplained anemia, alarmingly decreased platelets or increased white blood cells.
- The diagnosis, staging and monitoring of the treatment of hematological malignancies whether they are leukemias, myelodysplastic disorders, lymphomas or multiple myeloma.
- Fever of an unknown origin.
- Some rare metabolic diseases, such as Niemann-Pick and Gaucher disease, which are related to the storage of fats and glycogen in our bodies.
- Metastatic cancers from elsewhere in the body and staging of other granulomatous diseases, including sarcoidosis and tuberculosis.
Bone marrow biopsy types
There are two main types of bone marrow biopsies: aspiration biopsy and trephine biopsy.
In bone marrow aspiration, the doctor pushes a needle through the thick bone (under local anesthesia) and then aspirates a small amount of the marrow. This procedure is used, for example in some bone cancers, to get a general look at the cellular components of the bone marrow.
Trephine bone marrow biopsy, on the other hand, involves sampling a small piece of the bone together with the marrow underneath, using special needles and technique. It is more accurate than aspiration alone, and it is the only option in cases where bone marrow aspiration can't be performed, as in aplastic anemia (a type of anemia caused by the inability of the bone marrow to produce red cells) and metastatic cancers.
Bone marrow biopsy is usually carried out either in a hospital or an outpatient clinic, and the doctor will usually administer local anesthesia with antianxiety medications. |
The purpose of this article is to show the operation, applications, advantages and other information about ISDN technology. This technology emerged in the 1980s and is still used today, after some improvements. All of these details will be presented in the following texts.
What is ISDN
According to Abbreviationfinder, ISDN is the acronym for Integrated Services Digital Network. It is a service available in digital telephone exchanges which allows access to the internet and is based on the digital exchange of data, where packets are transmitted by multiplexing (the ability to establish several logical connections over one existing physical connection) over twisted-pair conductors.
ISDN technology has been around for some time, having been consolidated between 1984 and 1986. Through the use of suitable equipment, a conventional telephone line is transformed into two 64 Kb/s channels, on which voice and data can be used at the same time, each occupying one channel. It is also possible to use both channels for voice, or both for data. Viewed coarsely, it is as if the telephone line had been transformed into two.
A computer with ISDN can also be connected to another that uses the same technology, an interesting resource for companies that want to directly connect branches with the headquarters, for example.
ISDN technology has a transmission standard that allows signals that travel internally to telephone exchanges to be generated and received in digital format on the user’s computer, without the need for a modem. However, for an ISDN service to be activated on a telephone line, it is necessary to install ISDN equipment at the user’s access location and the telephone exchange must be prepared to provide the ISDN service.
How ISDN equipment works
The bandwidth of a conventional analog line is 4 kHz. On an ISDN digital line, the rate is 128 Kb/s, which means that the 4 kHz analog signal no longer exists, since the exchange interface at the other “end of the line” no longer works with analog signals. The electronic circuits of the telephone exchange carry out equalization and detection of the 128 Kb/s digital signal transmitted from the user’s equipment.
This transmission technique on the digital line is known as “Hybrid with Echo Cancellation”. The user’s equipment receives the telephone cord from the telephone network and provides two or more outputs: one for the telephone set and the other for connection to the computer, usually via serial cable.
When the user’s equipment is informed by the telephone exchange that a telephone call is arriving, or when the user picks up the telephone to make a call, one of the two channels used in the 128 Kb/s transmission automatically switches to carrying data at 64 Kb/s while the user makes the voice call on the channel thus freed. Once the voice call ends, the channel is used again for data transmission at 128 Kb/s. However, it is important to note that the user’s ISDN equipment must support this mechanism (known as call bumping); otherwise the feature may not work and the user will not receive the call.
The first use cases for ISDN technology date to between 1984 and 1986, shortly after the first ISDN specifications were determined. At that time there was no need for data transmission at 128 Kb/s. So what was ISDN developed for? In fact, ISDN was a “solution” to a “problem” that did not yet exist for the vast majority of users. In 1990 the ITU-T (International Telecommunication Union) issued the Px64 specifications for the videophone, whose central idea was to allow the use of video in telephone calls. However, terminal prices were not viable for the vast majority of users, and the exchange of images and audio over a telephone connection was a novelty that interested few people, as if it were some far-off futuristic idea. It was also apparent that a videophone on analog lines would generate higher costs to achieve acceptable quality. ISDN was positioned to solve this problem and make the equipment cheaper.
Shortly thereafter, the internet began to open up to the world. Users who had achieved satisfactory speeds when connecting to BBSs (Bulletin Board Systems, services available to the common user in the period known as the “pre-internet” era) quickly realized that the same efficiency did not exist on the internet, even with 28.8 Kb/s modems, the fastest of the time. The unpreparedness of telephone companies to provide access to the “internet” phenomenon, in addition to the precarious state of the infrastructure of the first ISPs, contributed to this. Even so, the world of the “WWW” was fascinating and unmissable. Given this perception, many began to wonder how to achieve higher and more stable speeds on internet connections. ISDN technology proved interesting for this purpose and came to be used for internet access, displacing its original development objective.
Ways of using ISDN
It is possible to use two forms of communication with ISDN, to be seen below.
Basic access – BRI
The first is the basic access for the home or small-office user: ISDN-BRI (Basic Rate Interface), to which multiple terminal devices can be connected. The basic access connection always makes two channels available, allowing a maximum of two devices or connections to be used simultaneously. It is possible to connect up to 8 devices to the ISDN, but only two will be able to use the line at the same time.
The service is recognized by the MSN (Multiple Subscriber Number), which determines which device is being addressed. ISDN-BRI can also serve as a substitute for traditional telephone access and is composed, as already mentioned, of two data channels (B channels) of 64 Kb/s and a signaling channel of 16 Kb/s (the D channel).
Primary access – PRI
The second form is primary access, ISDN-PRI (Primary Rate Interface), which allows the use of up to 30 channels with a total transmission rate of 2,048 Kb/s. In this case, ISDN is provided directly from the telephone exchange rather than through a conventional telephone line. Primary access enables simultaneous communication on 30 devices, making it useful for medium and large companies and for internet service providers. This type of ISDN also has a D channel, which operates at 64 Kb/s.
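Some illustrative arithmetic on those figures (our addition; the 1 MB file size is an arbitrary example) shows why ISDN felt fast next to the 28.8 Kb/s modems mentioned earlier:

```python
BRI_DATA_BPS = 2 * 64_000                       # two bonded B channels
PRI_DATA_BPS = 30 * 64_000                      # thirty B channels
PRI_TOTAL_BPS = PRI_DATA_BPS + 64_000 + 64_000  # + D channel + framing = 2,048 Kb/s

def seconds_to_download(size_bytes, bitrate_bps):
    return size_bytes * 8 / bitrate_bps

one_megabyte = 1_000_000
print(seconds_to_download(one_megabyte, 28_800))       # ~278 s on a 28.8 Kb/s modem
print(seconds_to_download(one_megabyte, BRI_DATA_BPS)) # 62.5 s on bonded ISDN-BRI
```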
The D channel
Regardless of the type of ISDN used (BRI or PRI), there is a channel called the D channel, also known as the “data channel”, which is responsible for maintaining an 8,000-bit “reserve” and for carrying the information needed by the B channels, such as the data transmission protocol and the type of equipment, as well as information of interest to the telephone company, such as rates and the date and duration of the connection.
By combining the characteristics of the D channel with the appropriate hardware equipment, it becomes possible to “join” the B channels to transmit data more quickly.
In ISDN technology there are basically 4 protocols significant to the user. All of these protocols run on the bearer (B) channels, not on the D channel. They are:
V.110: the V.110 rate-adaptation protocol is a transmission process that has existed since the early days of ISDN. Data is transmitted at up to 38,400 bit/s; the rest of the channel's capacity (up to 64 Kb/s) is occupied by redundant padding packets;
V.120: the successor to V.110, with few differences from its predecessor. The main one is that data is transmitted at up to 54,000 bit/s;
X.75 and T70NL: both are more recent and are able to take full advantage of the B channel's transmission capacity. It was these protocols that allowed ISDN technology to become a viable solution for internet access.
Hospitals, doctors’ surgeries and care homes are trusted to look after people in their most vulnerable state. As such, any medical facility has a responsibility to provide the best possible care while also keeping their patients safe. In addition to general health and safety practices, part of this process will include regular assessments and monitoring, which typically involves electronic equipment.
This electronic equipment may include blood pressure and heart rate monitors, defibrillators, and motorised beds. Every day, almost 1 million patients visit an NHS healthcare facility, many of whom use such equipment as part of their treatment.
These essential pieces of electronic equipment are vital to patient care, but they can also present a very real danger. Because of this, a medical establishment has a legal responsibility to complete tests to ensure the safety of healthcare equipment, and thus, their patients. Along with visual checks and PAT testing, this should include regular EMC testing.
What is EMC Testing?
Electromagnetic compatibility testing assesses the amount of electromagnetic energy that is released by a piece of equipment and how it affects the surrounding electromagnetic environment. While all electronic devices release electromagnetic energy, if the level of interference and energy emitted is too high, it could lead to disturbance and result in malfunction.
Essentially, the purpose of an EMC test is to make sure that a piece of equipment is emitting an appropriate level of electromagnetism, in order to function properly within its environment.
Why is it Needed?
When it comes to healthcare, it is important to carry out regular EMC tests. This is particularly important for two reasons: patient safety and equipment performance.
In terms of patient safety, medical equipment performs vital, life-saving functions. This means that any instance of equipment failure due to electromagnetic interference could lead to disastrous, and ultimately avoidable, consequences.
Similarly, healthcare equipment must be able to function to a high standard. However, this can be affected by electromagnetism, resulting in equipment that is substandard or damaged. When this happens, equipment may provide inaccurate readings and measurements, which has the potential to lead to incorrect diagnosis or treatment.
EMC Testing Equipment
EMC testing was first introduced into British legislation in 1899. Known as ‘The Lighting Clauses Act’, it was created when it was found that electronic cables could interact. While the first EMC equipment was basic, it has been developed into highly sophisticated technology.
Nowadays, companies like MCS Testing Equipment offer numerous pieces of equipment. This includes analysers and receivers, which analyse frequencies to highlight EMC issues; impedance stabilisation networks, for immunity and emission tests; and specialist software for compliance.
Completing EMC Tests
While manufacturers must ensure they comply with electromagnetic compatibility regulations during manufacturing, equipment can develop faults after an extended period of use that result in higher electromagnetic emissions. As such, the equipment that was designed to help patients may stop functioning properly.
For this reason, medical equipment should be tested on a regular basis. This could be measured over a period of months or vary depending on how often it is used.
When completing EMC tests, it is important to seek a qualified technician or engineer, to ensure tests are carried out to the right standard. However, healthcare providers can purchase or hire EMC equipment as required, to allow regular tests to be completed.
Patient safety is about more than reducing human errors, providing correct medication, and maintaining clean premises to reduce the risk of infection. As an important part of patient care, healthcare providers must ensure that all medical equipment is performing to a high standard, as damaged equipment could compromise patient safety.
EMC testing can help to ensure that a piece of equipment is fully functional and safe to use. However, an EMC test alone is not enough and should be carried out in conjunction with regular visual checks, PAT tests and repairs. |
This unit is about motivation. It identifies and applies motivational concepts to understand behavior of humans and animals🧑🤝🧑🐻. It talks about the strengths and weaknesses between theories and the most basic primary needs: physiological, social, and sexual.
Motivation is something that directs a behavior. For example, if you want to get a good grade on the AP Psych exam in May, you are motivated to study 💯 You're probably very familiar with motivation, but it goes deeper than you think and basically exists in every action you do.
Before going through the theories, let's discuss some vocab terms:
Instincts are behaviors that occur unconsciously because they usually just "feel right."
Incentives drive us toward or away from the behavior we want. The incentive could either be a positive stimulus or a negative stimulus, but either way, it impacts our behavior🚶
Intrinsic motivation is when you are doing something for yourself. An example of this would be reading just because you love to read ❤️📖
Extrinsic motivation is when you are doing something for an external factor. Using the above example, if you read just to fulfill a summer assignment ✔️📖, you were extrinsically motivated.
The overjustification effect is when an external factor decreases one's intrinsic motivation to complete a certain task. For example, if you began to learn French on your own time and then came across a really good job offer that requires French, you may now begin to learn French just for the job, rather than yourself💰
Image Courtesy of Sites at Penn State.
High self-efficacy is the belief that someone can complete a task successfully. This usually goes hand in hand with high intrinsic motivation and accepting challenges along the way.
Low self-efficacy is being uncertain that you can master a task and goes hand in hand with low intrinsic motivation. You don't feel as interested in learning the task, so you are unsure if you will be good at it. Having low self-efficacy leads to giving up and avoiding obstacles.
Many different theories about motivation developed over time. Let's discuss them!
Instinct Theory (evolutionary)
This theory has to do with Charles Darwin's principle of natural selection that stated that those that are best adapted to their environments are most likely to mate and survive. Therefore, the motivation in this theory is to survive and we, as well as animals, adapt behaviors that help us live 💕
| Example | Strength of Theory 👍 | Weakness of Theory 👎 |
| --- | --- | --- |
| All babies display innate reflexes like rooting and sucking. | It helps explain similarities due to our ancestral past. | It explains animal behaviors better than human behaviors. |
Drive-reduction Theory (biological)
This theory focuses on how our inner pushes and external pulls interact to drive our behaviors.
We have our needs, drives, and behaviors. Our physiological needs create a state of tension that motivates an organism to satisfy the need through a certain behavior.
By doing this behavior, we should reach homeostasis, which is a steady internal state.
| Example | Strength of Theory 👍 | Weakness of Theory 👎 |
| --- | --- | --- |
| When you need food, you become hungry, and then you cook yourself something to make the feeling of hunger go away. | It explains our motivation to reduce arousal by meeting basic needs such as hunger or thirst. | It doesn't explain why some motivated behaviors increase arousal. |
Image Courtesy of Myers' AP Psychology Textbook 2nd Edition.
Optimal Arousal Theory
The optimal arousal theory focuses on finding the right level of stimulation. An organism seeks out behaviors that actually increase arousal when everything else bores it.
| Example | Strength of Theory 👍 | Weakness of Theory 👎 |
| --- | --- | --- |
| Being bored and getting yourself into trouble just because you needed to find something to do. Another example is "curiosity killed the cat": you just wanna try something new that excites you! | It explains that motivated behavior may increase or decrease arousal. | It doesn't explain our motivation to address our more complex social needs. |
The Yerkes-Dodson law suggests moderate arousal can lead to optimal performance. With this being said, you've probably experienced the law in real life.
If you were ever way too relaxed 😴 when taking an exam or way too stressed 😟, I bet you noticed a decrease in your exam performance. However, if you are moderately aroused, aware and alert, you will likely obtain a higher score.
💡Tip—The Yerkes-Dodson Law is very different from the optimal arousal theory. It focuses more on the relationship between performance and arousal.
Image Courtesy of ResearchGate.
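As a purely schematic illustration (a toy model we've added, not an empirical fit), the inverted-U can be sketched as a bell curve over arousal: performance peaks at a moderate, alert level and drops off at both extremes.

```python
import math

def performance(arousal, optimal=0.5, width=0.2):
    """Schematic inverted-U: a Gaussian over arousal scaled to [0, 1]."""
    return math.exp(-((arousal - optimal) ** 2) / (2 * width ** 2))

for a in (0.1, 0.5, 0.9):  # too relaxed, alert, too stressed
    print(a, round(performance(a), 2))  # 0.14, 1.0, 0.14
```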
Maslow’s Hierarchy of Needs
Maslow came up with a theory based on needs. The first level of needs focuses on fulfilling basic, physiological needs. Once they are met, the focus shifts to more cognitive and abstract needs.
From the bottom to the top, the pyramid reads:
💧🍔Physiological needs (air, food, and water)
🏠Safety (shelter, place to live)
💕Belongingness (love, a connection with someone or something)
😍Self esteem (loving yourself)
🏆Self actualization (achieving any goal you set your mind to).
| Strength of this Theory | Weakness of this Theory |
| --- | --- |
| It incorporates the idea that we have levels of various needs. | The order of needs may change depending on the circumstances of the person. |
The Domain Name System (DNS) server is a server that is specifically used for matching website hostnames (like example.com) to their corresponding Internet Protocol or IP addresses. The DNS server contains a database of public IP addresses and their corresponding domain names. Every device connected to the internet has a unique IP address that helps to identify it, according to the IPv4 or IPv6 protocols. The same goes for the web servers that host websites. For example, the IP address of one CDNetworks server located in Mountain View, California is 126.96.36.199.
DNS servers save us from memorizing such long numeric IP addresses (and even more complex alphanumeric ones in the IPv6 system) by automatically translating the website names we enter into the browser address bar into these numbers, so that the servers can load the right web pages.
Introduction to the Domain Name System
To understand the role of the DNS Server, it is important to know about the Domain Name System. The Domain Name System is essentially a phonebook of the internet. Just like how a phonebook matches individuals to a phone number, the DNS matches a website name to their corresponding IP address.
What is DNS?
The DNS is a system of records of domain names and IP addresses that allows browsers to find the right IP address that corresponds to a hostname URL entered into it. When we try to access a website, we generally type in their domain names, like cdnetworks.com or wired.com or nytimes.com, into the web browser. Web browsers however need to know the exact IP addresses to load content for the website. The DNS is what translates the domain names to the IP addresses so that the resources can be loaded from the website’s server.
Sometimes, websites can have numerous IP addresses corresponding to a single domain name. For example, large sites like Google will have users querying a server from distant parts of the world. The server that a computer from Singapore tries to query will likely be different from the one a different computer from say Toronto will try to reach, even if the site name entered in the browser is the same. This is where DNS caching comes in.
DNS caching is the process of storing DNS data on the DNS records closer to a requesting client to be able to resolve the DNS query earlier. This avoids the problem of additional queries further down the chain and improves web page load times and reduces bandwidth consumption.
The amount of time that the DNS records are stored in DNS cache is called time to live or TTL. This period of time is important as it determines how “fresh” the DNS records are and whether it matches recent updates to IP addresses.
DNS caching can be done at the browser level or at the operating system (OS level).
- Browser DNS Caching
Since web browsers generally store DNS records for a set amount of time, the browser cache is usually the first place checked when a user makes a DNS request. Because the cache lives in the browser itself, fewer steps are involved in checking it and sending the DNS request on to an IP address.
- Operating system (OS) level DNS caching
Once a DNS query leaves an end user’s machine, the next stop where a match is sought is at the operating system level. A process inside the operating system, called the “stub resolver” checks its own DNS cache to see if it has the record. If not, the query is sent outside the local network to the Internet Service Provider (ISP).
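As a small illustration (this assumes the third-party dnspython package and uses example.com as a placeholder), you can inspect the TTL a resolver reports for a record; the value counts down while the record sits in a cache:

```python
import dns.resolver  # third-party package: pip install dnspython

answer = dns.resolver.resolve("example.com", "A")
print(answer.rrset.ttl)             # seconds this record may still be cached
print([r.address for r in answer])  # the resolved IPv4 addresses
```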
How Does a DNS Work?
The DNS is responsible for converting the hostname, what we commonly refer to as the website or web page name, to the IP address. The act of entering the domain name is referred to as a DNS query and the process of finding the corresponding IP address is known as DNS resolution.
DNS queries can be of three types: recursive query, iterative query or non-recursive query.
- Recursive query – These are queries where a DNS server has to respond with the requested resource record. If a record cannot be found, the DNS client has to be shown an error message.
- Iterative query – These are queries for which the DNS client will continue to request a response from multiple DNS servers until the best response is found, or an error or timeout occurs. If the DNS server is unable to find a match for the query, it will refer to a DNS server authoritative for a lower level of the domain namespace. This referral address is then queried by the DNS client and this process continues with additional DNS servers.
- Non-recursive query – these are queries which are resolved by a DNS resolver when the requested resource is available, either due to the server being authoritative or because the resource is already stored in cache.
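In practice, applications hand all of this to a resolver. Here is a minimal sketch using only the Python standard library (example.com is a placeholder); the operating system's stub resolver and any caches along the way handle the recursion:

```python
import socket

def resolve(hostname):
    """Return the unique IP addresses the system resolver reports."""
    infos = socket.getaddrinfo(hostname, None)
    return sorted({info[4][0] for info in infos})  # sockaddr[0] is the address

print(resolve("example.com"))
```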
The Different Types of DNS Server
Once a DNS query is entered, it passes through a few different servers before resolution, without any end user interaction.
- DNS recursor
This is a server designed specifically to receive queries from client machines. It tracks down the DNS record and makes additional requests to meet the DNS queries from the client. The number of requests can be decreased with DNS caching, when the requested resources are returned to the recursor early on in the lookup process.
- Root name server
This server is the first stop beyond the recursor on the way to translating human-friendly hostnames into computer-friendly IP addresses. The root server accepts the recursor's query and, depending on the domain name seen in the query, sends it on to the appropriate TLD nameserver in the next stage.
- Top Level Domain (TLD) nameserver
The TLD nameservers are responsible for maintaining the information about the domain names. For example, they could contain information about websites ending in “.com” or “.org” or country level domains like “www.example.com.uk”, “www.example.com.us” and others. The TLD nameserver will take the query from the root server and point it to the authoritative DNS nameserver associated with the query’s particular domain.
- Authoritative nameserver
In the last step, the authoritative DNS nameserver will return the IP address back to the DNS recursor that can relay it to the client. This authoritative DNS nameserver is the one at the bottom of the lookup process that holds the DNS records. Think of these as the last stop or the final authoritative source of truth in the process.
DNS Lookup vs DNS Resolver
The process by which a DNS server returns a DNS record is called a DNS lookup. It involves the query of the hostname from the web browser to the DNS lookup process on the DNS server and back again. The DNS resolver is the server that deals with the first step in the DNS lookup process and which starts the sequence of steps that end in the URL being translated into the IP address for loading the web pages.
First, the user-entered hostname query travels from the web browser to the internet and is received by the DNS recursive resolver. The recursive DNS server then queries the DNS root server, which responds with the address of the TLD server responsible for storing the domain's records.
The resolver then makes a DNS request to the corresponding domain's TLD server and receives the IP address of the domain nameserver. As a last step, the recursive DNS server queries the domain nameserver and is returned the IP address, which it sends to the web browser. It is after this DNS lookup process is done that the browser can request individual web pages through HTTP requests.
These steps make up a standard DNS lookup process but they can be shortened with DNS caching. DNS caching allows the storage of the DNS lookup information locally on the browser, the operating system or a remote DNS infrastructure, which allows some of the steps to be skipped for faster loading. |
The Maya left a rich historical inheritance in the many ruins scattered throughout the region. Tourists from all over experience first hand a part of history that inexplicably baffles the mind; even today, historians can't seem to agree on the precise cause of the demise of the Maya Indians. There are various tribes that comprise the Mayan community, whose Mayan languages are still spoken, although English is also practiced.
All inhabitants of the Americas are thought to have originally migrated across the Bering Straits when the level of the oceans dropped enough to form a land bridge between Alaska and Siberia. These foraging nomads migrated throughout North America and eventually down through Central and South America.
The archaeological record shows evidence of the first Maya people as early as 1100 BC. These pioneers descended into the Copan Valley in Honduras from either the Guatemala highlands or another nearby mountainous region and made temporary camps in the known Maya region. Early Maya inhabitants hunted local game and developed agricultural subsistence techniques until about 900 BC. Around this time, the first true farmers of the Maya people built permanent residences in the valley (Schele and Freidel 1990: 306-307).
About 4000 BC, these people had spread out over the highland areas of Central America and soon reached a population size where they began to form small settlements and domesticate plants.
Archaeologists are able to date finds and sites of the Mayan civilization using artifacts of ceramic, stone, shells and bone. They also use the Mayans own calendar. The Mayans used a rather complex calendar system. Monumental stone inscriptions were carved using a hieroglyphic script and a method of reckoning the passage of time called the Long Count. The most striking feature of this system is that the Mayans dated events to the exact day.
Archaeologists have devised numerous correlations with our own Gregorian calendar to accurately place any event recorded in these Mayan inscriptions. Devised by three well-known archaeologists, the most accepted interpretation of the Mayan dates is known as the G-M-T correlation. Using these dates, archaeologists have been able to delineate three major periods of Mayan civilization: the Preclassic, Classic and Postclassic periods. For perspective, the flowering of the Mayan civilization corresponds to the later years of the Roman Empire.
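To illustrate how such a correlation is applied (a sketch we've added; it assumes the commonly cited GMT constant of 584,283, which equates Long Count 0.0.0.0.0 with Julian Day Number 584,283):

```python
from datetime import date

GMT_JDN = 584_283           # JDN of Long Count 0.0.0.0.0 (GMT correlation)
JDN_TO_ORDINAL = 1_721_425  # offset between JDNs and Python date ordinals

def long_count_to_gregorian(baktun, katun, tun, uinal, kin):
    """Convert a Long Count date to a proleptic Gregorian date (CE dates only)."""
    # Units: 1 uinal = 20 kin, 1 tun = 18 uinal, 1 katun = 20 tun, 1 baktun = 20 katun.
    days = baktun * 144_000 + katun * 7_200 + tun * 360 + uinal * 20 + kin
    return date.fromordinal(days + GMT_JDN - JDN_TO_ORDINAL)

print(long_count_to_gregorian(13, 0, 0, 0, 0))  # 2012-12-21, the famous baktun turnover
```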
The chronology of the Mayan civilization is straightforward in outline: it started in the Preclassic period, rose to dominance in the Classic period, and declined and disappeared in the Postclassic period.
The beginning of Mayan life in Central America (known as Mesoamerica to archaeologists) occurred around 5000 BC, when wandering nomads from the north found they could settle down and domesticate plants. These early crops consisted of corn, beans and other plants. The domestication of plants required that people stay in one location to tend the fields. Thus were born the first Mayan settlements.
With the growth of settlements and farming, so came innovations to make life easier. Some of the more important inventions include pottery vessels for storage, cooking and serving of food. Because of its weight and fragility, pottery is not often used by nomads. The presence of pottery normally indicates a tendency to long term settlement. Much of what archaeologists know of the early Preclassic period in Belize comes from the Mayan site of Cuello, outside of Orange Walk Town. Radiocarbon dating from a series of buildings and trash dumps (archaeologists love places where people threw their trash) reveal occupation from about 2500 BC. These structures were small buildings with clay platforms and fired clay hearths.
Other items of Preclassic origin that were identified include stone utensils for grinding corn and a fluted stone projectile point. During Preclassic times, corn progressed from a small-cobbed, low-yielding crop to larger-cobbed, high-yielding varieties. The farmers were learning how to maximize their efforts, and passed on what they learned to succeeding generations.
With the improvement in farming, the invention of more sophisticated tools, and the growth in size of settlements, the Maya culture became associated with a civilization with larger cities containing ceremonial centers.
As time progressed, the sites became more numerous and larger. They exhibited more organization, with public buildings, elaborate burials, and jade jewelry. Jade became a spectacular marker of the elite, both in the quantity owned and in the quality of the workmanship.
Near the end of the Preclassic Period, trading flourished as networks formed between the growing settlements. Most of the major ceremonial centers were started about this time.
The Classic Period is the Mayan Golden Age. Mesoamerica became adorned with massive, ornate and brightly colored architecture. Exquisite works of art and advances in astronomy and mathematics are hallmarks of this Period. This was the age of the development of one of the most sophisticated systems of writing ever devised in the Western Hemisphere.
The Classic period began with the carving of the first hieroglyphic dates on Mayan stelae in 250 A.D. and ended six and a half centuries later with the last dates carved into half-finished monuments, as if the artisans walked away in mid-hammer stroke. Most of the greatest ceremonial centers in Mesoamerica - Tikal, Caracol, Palenque - came to their greatest glory during the Classic period. And for some yet unknown reason, all were abandoned or far into decline within a span of a few years near the end of the ninth century.
Much of what archaeologists know of the Mayan civilization comes from archaeological work done on Classic Period sites. Scientists originally constructed a model of Mayan society as a ceremonial center supported by widely spaced subsistence communities. But intense study of the agricultural practices revealed that the Maya used highly sophisticated techniques to feed a dense and growing population surrounding the ceremonial centers.
These practices included terracing of hillsides and river banks. Terracing allowed intense agriculture of land otherwise unsuitable for crops. Using drainage ditches and irrigation, Mayan farmers maintained corn fields and harvested such diverse crops as manioc, sweet potatoes, and beans. Of great importance was the ramon nut. Large underground chambers were constructed to store the ramon nuts for long periods of time. Some archaeologists theorize that these storage chambers were used in time of famine.
The Classic Maya augmented their starch diet of vegetables and nuts with animal protein. The main source of meat came from hunting the abundant white-tailed deer, along with the small brocket deer and two species of wild pig. The Classic Maya also collected turtles and large numbers of freshwater snails.
Emphasis traditionally has been on the large ceremonial centers of the time. But recently, archaeologists have taken a close look at the entire social structure, and have concentrated on the small Mayan settlements and the rural farmers which supported the Mayan Civilization through the production of food. These subsistence farmers lived in dwellings very similar to the Maya of today. Most homes were constructed of perishable material harvested from the forests.
The structure of Mayan society centered around a major ceremonial site. A regional trading system would integrate the products of outlying areas with minor ceremonial sites and eventually with the major ceremonial center. Well developed causeways, called sacbeobs ("white roads" from the plastered surfaces) radiated out from the major sites in all directions toward the minor sites.
The Classic Period chronology has been developed based on the rise, flourishing, and steady decline of the Mayan civilization. Some archaeologists also base these divisions of the period on the influences of major ceremonial centers on all of Mesoamerica and the Mayan civilization as a whole. There is evidence that the Maya utilized terracing and elaborate water management systems during this time; the evidence of terracing suggests that the Maya began to cultivate even the steepest slopes. The spectacular growth of the Early and Middle Classic Period was followed by a sudden collapse in the early 9th century. The collapse signaled a massive depopulation of the interior regions of the area, while sites near water, such as Lamanai, appear to have survived into the Postclassic period.
Archaeologists continually debate over what triggered the rise of the Mayan Civilization. But even greater debate ensues over why this once great civilization collapsed. The period that followed the abandonment of the rainforest centers is known as the Postclassic Period. This Period closes upon the Spanish Conquest in the mid-sixteenth century.
The Post Classic period is characterized by a lack of emphasis on tall pyramids and elaborate structures. Instead, the Maya concentrated on ground-level buildings and created their art on stucco, which quickly erodes. In fact, much less is known of the Maya in the Post Classic Period than in the Classic Period because of the lack of art, artifacts and structures from the Post Classic Period.
Many archaeologists agree that the collapse of the Mayan civilization was triggered by a number of factors. Population was probably one of these. According to researchers, parts of the Mayan region were sustaining nearly 400 people per square mile - a heavy density for an agriculturally based society.
Other factors could have been malnutrition and disease. Studies of human bones and teeth from Late Classic burial mounds have found strong evidence of these factors, including syphilis and other communicable diseases.
The social gulf between the ruling elite and the common people is another factor that archaeologists feel contributed to the decline of the civilization.
Some researchers feel that the breakdown of trade contributed greatly to the collapse. Archeologists believe that "realms" may have been established where outlying districts provided items of trade. These items were brought to a central location for redistribution. These economic links become vulnerable during times of stress and change.
Other scientists believe that climate contributed to the diminishing Mayan population. The Mayans had settled in the lowlands around 8000 BC and began practicing large-scale farming as early as 2000 BC. By the beginning of the medieval climate optimum in AD 500, the population was nearly 14 million, making it one of the largest centers of civilization anywhere. But the thriving Mayan cities began to experience diminished long-term rainfall patterns. Dry conditions began in 760 and, after a 50-year wet period, drought again set in about 860. Another drought followed in 910. The boom-and-bust cycles of rainy and dry periods contributed to eras of both growth and decline. Therefore, some believe that technology, population sizes, and agricultural intensity overwhelmed the land. Yields declined with the dry conditions and these structural incongruities led to ongoing wars between Mayan city-states that eventually contributed to their collapse.
The causes of the Mayan collapse are obviously complex and varied, and not yet well understood. But the consequences of the collapse are clear. Construction of ceremonial centers stopped; the intensive farming methods ceased; the population dropped from an estimated three million to 450,000 in less than a century.
Also, "Nova" has extensive information on the Maya. Click for maps of mayan ruins, hieroglyphs, and more mayan history.
The Maya nation is a homogeneous group of people who have occupied roughly the same territory for thousands of years. They speak some thirty languages that are so similar that linguists believe they all have the same origin, a proto-Mayan language that could be as much as 7,000 years old. Linguists explain how geographical isolation made the original language evolve into an eastern branch subdivided into proto-K'iche and Mam and a western branch subdivided into proto-Q'anjob and proto-Tzeltal, and how the further division of these sub-branches gave rise to the thirty languages spoken today. The in situ evolution of their language implies that they were the original permanent inhabitants of the Maya area and suggests that today's two million Mayas probably share a very ancient common genetic origin.
That is quite different from the warlike Aztec and Inca nations, who invaded their neighbours and absorbed their populations by imposing their language, customs and religion. The Aztecs were a small, ambitious "Chichimec" (savage) tribe from the northwest who migrated into new lands, absorbed new ideas, evolved further and grew powerful enough to impose their language and gods (Huitzilopochtli) on the indigenous people they conquered. It is the story of outsiders becoming the governing elite of pre-existing populations for a relatively short time. The Incas of Cuzco were also a short-lived foreign elite governing a wide variety of pre-existing nations.
The Maya had no centralised political leadership. They developed a common culture by absorbing and developing elements borrowed from their neighbours. The Long Count calendar, writing with glyphs and the basic tenets of their religion can be traced directly to the Olmecs through Izapa. The Olmec civilisation disappeared before the Christian era, but its heritage formed the basis for all other Mesoamerican civilisations, such as the Monte Alban Zapotec, the great Teotihuacan hegemony, the Tula Toltecs and finally the Aztecs.
The Maya were also influenced by Teotihuacan, which controlled the Mexican highlands from the first to the seventh centuries. The Mayan golden age lasted five centuries, from 300 to 800 AD. Then they stopped building temples, declined and fragmented into competing states that were easy prey for invading forces from the north, such as the Toltec, who had been expelled from Tula around the end of the 10th century. The Toltecs became the ruling elite of the Maya in the post classic period. Toltec gods were added to the Maya pantheon, but the Toltecs were absorbed as they learned to speak Yucatec Maya.
The Maya were organised in city states, sometimes co-operating, sometimes fighting each other but they shared the same beliefs and deferred to priests who derived power from their knowledge of astronomy, mathematics and numerology. The Maya were very much aware of the passage of time. They recorded some dates on stelae and probably much more in books that are lost now because fanatical Spanish Catholic priests destroyed them to eradicate "pagan beliefs". Retracing the history of the Maya is like finding the solution of a detective novel for we have to rely on whatever clues we can find in what is left of archaeological sites that the Spanish did not plunder or destroy.
There are many unanswered questions about the Maya but the cause of their decline remains the greatest mystery. Their civilisation was not destroyed by an overwhelming outside force. The Olmec suffered the destruction of San Lorenzo around 900 BC and that of La Venta around 600 BC but no such catastrophe befell the Maya. Similarly, Teotihuacan was destroyed by warfare around 700 and so was Tula around 1000 AD but Maya power disintegrated from within. Many hypotheses have been proposed, overpopulation, famine, epidemics, civil disorder... Some of these factors might have played a role in some places but I tend to think that the common people just stopped believing in the dogma the elites were using to establish their power and justify their excesses. Similarly, the disintegration of the Soviet Empire can largely be explained by the excesses of a corrupt elite and the subsequent disbelief in the supremacy of the communist system by the common people.
There are hundreds of known Maya sites spanning two millennia. It can become difficult to follow, so a Maya Archaeological Sites table is available as a quick reference to where some of the more important sites are located (southern highlands, central lowlands and northern lowlands) and the period with which they are best associated (Preclassic, Classic and Postclassic).
Lens of Time: Building a Butterfly Wing
For more than 100 years, evolutionary biologists have been working to understand the mechanisms and processes that give rise to some of nature’s most complex designs. This is often slow and painstaking work. Take a close look at the intricate detail of a butterfly wing, and you’ll see the enormity of the challenge. Each wing is made up of tens or hundreds of thousands of tiny scales, arranged like pixels in a digital image to produce an astounding array of colorful patterns. But not all colors are created equal. While many are derived from pigments, some, like the iridescent blue of the beautiful blue morpho butterfly, are created by the reflection and refraction of light. These so-called “structural colors” are the result of nano-scale structures on the wing scales—structures smaller than a single wavelength of light—that dictate the colors that ultimately reach our eyes.
To understand how these structural colors develop, evolutionary and developmental biologist Nipam Patel and his team at UC Berkeley are using a novel set of techniques and tools. They have essentially created a way to open a window directly into living butterflies as they develop. And by using time-lapse microscopy, they are able to generate movies from sequences of thousands of still photographs to reveal exactly how the structures that make these colors possible develop. The insights gained from these observations will be critical as the scientists take their research further—to the genetic level—where they plan to follow embryonic cells as they differentiate to create the nanostructures responsible for butterflies’ rainbow displays. |
Where Is The Pine Island Glacier Located?
The Pine Island Glacier, mapped by the United States Geological Survey, is the fastest melting glacier in Antarctica, accounting for 25% of the continent's ice loss. The glacier's meltwater flows into Pine Island Bay in the Amundsen Sea. The ice stream has an extremely remote location, with the nearest inhabited research station at Rothera, about 1,300 km away. Due to this remoteness, most information on the glacier is derived from satellite-based or airborne measurements. The Antarctic Treaty prohibits any nation from claiming the region as its own.
Geography Of The Pine Island Glacier
The Pine Island Glacier drains about 10% of the area of the West Antarctic Ice Sheet. The glacier has the highest net contribution of ice to the sea among the ice drainage basins of the world. The Pine Island Glacier drainage basin occupies an area of 175,000 square km.
What Would Happen If The Pine Island Glacier Melts?
The Pine Island Glacier and the Thwaites Glacier are two of the five largest ice streams in Antarctica. If these glaciers were to melt, they would destabilize the entire West Antarctic Ice Sheet and also affect the East Antarctic Ice Sheet. This melting process would trigger a sea level rise of 1 to 2 meters. Even worse, the Pine Island Glacier and the other ice streams draining into the Amundsen Sea are not protected by large floating ice shelves, so there is no geological barrier to stop the retreat of the ice.
A Future Catastrophe In The Making
According to research reports, the flow of the Pine Island Glacier sped up by 73% from 1974 to the end of 2007. More water is now being added to the sea than is being replaced by snowfall; by the end of 2007, the glacier had a negative mass balance of 46 gigatons per year. It is believed that the rise in sea temperatures triggered by global warming has driven the melting. If the thinning continues, the entire main trunk of the glacier could be afloat within a century. Since the glacier lies near a volcano in the Hudson Mountains, volcanic activity could also play a major role in increasing the glacier's flow in the future.
When European settlers began to arrive in New Zealand in the early 1800s, they needed safe ports and harbours to anchor their ships. They chose to build their settlements where it was safe for the ships bringing people and supplies.
The first landing places were small and roughly built. Many were just areas of mud, or jetties made of planks held up by barrels that led out over the water.
As more ships came, harbours needed to be managed properly, so a system of harbour boards was set up in 1870. Their engineers organised building works.
From small to large ports
Small ports appeared, scattered along the coast and serving their own little communities. These ports grew into larger towns, which became richer as railway lines were built to bring goods from other places. Some ports did better than others – Dunedin, Wellington, Auckland and Lyttelton harbours were busy, while many smaller ports were closed down.
Types of port
New Zealand has three main types of port:
- Major ports: The natural features of Wellington, Auckland and Lyttelton (serving Christchurch) made good harbours because they were sheltered and had deep water.
- River ports: Many ports, such as those at Greymouth, Westport and Wanganui, were built in sheltered river mouths. However, rivers could be shallow or have sandbars, and ships were often wrecked.
- Breakwater ports: Some ports were dangerous because of rough seas. The ports at Timaru, Ōamaru, Napier and New Plymouth were made safe by building huge artificial harbours protected by breakwaters.
The 20th and 21st centuries
Working on wharves was back-breaking, and in the 20th century wharfies (watersiders or longshoremen) held many large protests on the waterfront. Working conditions became better. Cranes and other machinery have made it possible to quickly unload container-loads of cargo.
Recently, the areas around New Zealand’s harbours have been cleaned up, and waterfronts have become places to walk, shop, or have a drink. |
How you can help prepare your child for school
We’ve written recently about the importance of a strong kindergarten or preschool year to help children become ‘school-ready’. It takes a well-rounded curriculum and highly skilled teachers, as well as plenty of love and attention, to really get those brain cells firing.
But a child’s readiness for school isn’t just the job of teachers, and there are several things you can do at home to support your child as they progress towards school.
Support their developing social-emotional skills
A child’s social-emotional skills are the building blocks for the rest of their developmental pyramid. That’s the view of Tom Brien, a teacher at Goodstart Mona Vale.
“It’s my firm belief that if a child is emotionally intelligent and can self-regulate they will more easily pick up the other skills they need.
“A lot of my coaching with parents is about helping their child to recognise the emotions they’re feeling, negotiate with peers and self-regulate in socially acceptable ways.
“These are really valuable skills for young children to have.”
Encourage exploration and play
Children learn naturally through play, and encouraging your child to explore the world around them will open the door to a wealth of opportunities to inquire and learn.
You can encourage this type of play at home by:
- Providing opportunities for your child to play outdoors and in natural environments
- Having things like dress-up clothes and art materials readily available
- Asking open-ended questions about what your child is doing or what they’ve observed
- Introducing new words or perspectives that relate to their play
We have some excellent tips on our blog about supporting play in babies and three to five year olds.
Build on their early literacy and numeracy skills
Early childhood provides fertile ground for developing literacy and numeracy skills, and there are many simple ways you can support your child’s budding abilities at home.
Literacy is about much more than simply reading and writing, and one of the simplest ways to help grow this essential skill set is by reading books to your child.
Books provide a fun and engaging entry point for developing vocabulary, knowledge, creativity, concentration, empathy and imagination.
Likewise, numeracy skills can be developed at home by encouraging things like counting, sorting items into larger or smaller, measuring ingredients during cooking, dividing food into equal shares or setting the table with the right number of utensils and plates.
Your imagination is the only limit!
Establish familiarity with the concept of school
Children feel secure when they know what to expect, so use the year before school to gently introduce the concept of school and build familiarity with the environment, behaviours and routines they’ll encounter.
Some simple tips include:
- Ask your child what they think about going to school and encourage their questions
- Talk about how they’ll get to and from school
- Let them dress up in their school uniform
- Read books about starting school
- Arrange some playdates with other children who’ll be attending the same school
- Practice the morning routine of getting ready and packing lunch
Doing this at home will reinforce what your child’s preschool or kindergarten teacher is doing, and can really help children make a successful transition to school. |
Graphic Organizing: Early American History lesson plan
In collaborative groups, young US historians sort cards (each labeled with a single early American event or issue) according to which of the first four presidents was leading the country at the time. Learners copy the events onto a graphic organizer to support comprehension and retention. I've seen this activity used to distinguish between Federalists and Democratic-Republicans; it would work well anywhere opposing points of view are involved.
- This resource is only available on an unencrypted HTTP website. It should be fine for general use, but don’t use it to share any personally identifiable information.
The world is full of questions and problems. So everyone is always looking for answers and solutions. This perpetual cycle is what makes search algorithms so important. The better the formula for solving a problem is, the faster you’ll be able to find what it is you’re looking for.
In the ordinary (or classical) world, computers do their thing (including standard searches) using bits that can represent either 0 or 1. But in the quantum world, computers use quantum bits (qubits for short) that can represent 0, or 1, or 0 and 1 at the same time through what is known as superposition.
This superposition of states is what gives quantum computing its extraordinary speed. While in this state, an algorithm can effectively probe many entries of a list at once, which means that however big the list of search elements is, the algorithm can complete its search in fewer steps.
Typically, the time it takes to do a standard search is said to be proportional to the number of search elements because in its worst case, the algorithm will have to go through all the elements before it finds exactly what it’s looking for.
But with the algorithm designed over two decades ago by computer scientist Lov Grover, the speed of search became proportional not to the number of elements, but to the square root of the number of elements. It’s referred to as a quadratic speed-up and it’s considered a revolutionary feat because of its impact. And it was made possible by using the concepts of quantum mechanics.
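To see what the quadratic speed-up looks like in practice, here is a minimal classical simulation of Grover's algorithm in Python (a sketch for illustration only, not the trapped-ion implementation discussed below). The oracle flips the sign of the marked entry's amplitude, the diffusion step reflects every amplitude about the mean, and after roughly (π/4)·√N iterations the marked entry dominates the measurement probabilities:

```python
import numpy as np

def grover_search(n_qubits, marked):
    """Classically simulate Grover's algorithm on a state vector."""
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))               # uniform superposition
    iterations = int(np.pi / 4 * np.sqrt(N))         # ~sqrt(N) steps, not N
    for _ in range(iterations):
        state[marked] *= -1                          # oracle: flip marked amplitude
        state = 2 * state.mean() - state             # diffusion: invert about the mean
    return int(np.argmax(state ** 2)), iterations    # most probable outcome

result, steps = grover_search(3, marked=5)
print(result, steps)  # finds index 5 after 2 iterations, versus up to 8 classical checks
```

For three qubits (N = 8), two Grover iterations already make the marked item overwhelmingly likely to be measured, which is the kind of advantage the hardware experiments described below aim to demonstrate.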
Because Grover’s algorithm is a quantum algorithm, it automatically follows that it can only be executed by a quantum computer. And while it only took two years to demonstrate the feasibility of executing Grover’s algorithm on a quantum computer (the earliest one ever built), not much could be done because it only worked using a few qubits. In other words, scaling it up to handle larger and more complex computations was proving to be a challenge.
It was like that in the beginning, and 20 years later, the challenge remained the same. Until a few days ago when researchers from the University of Maryland led by Caroline Figgatt made a startling claim: for the first time ever, they were able to successfully execute Grover’s algorithm on a quantum computer that was scalable.
The team did it by suspending a string of five ytterbium ions in an electromagnetic field. Each ion acted like a magnet that can be oriented up or down by using a laser to turn it from one state to another, or into two states at once. They also used the ions’ repulsive forces to enable the qubits to interact with one another.
Flipping the ions’ state is what they used to store data. Letting the ions interact is what they used to process data. And Grover’s algorithm is what they used to do the quantum computation and show that it does make searching considerably faster. Specifically, their three-qubit quantum computer could search an eight-element list faster than a classical computer checking entries one at a time.
Figgatt’s team has taken a big first step. And it’s heating up the race to build the first functional scalable quantum computer even more. Because aside from the obvious financial benefits that will be gained, there’s also the matter of helping the world solve its most complex problems through quantum computing and possibly the most powerful search algorithm to date.
The team’s work has been published in the Quantum Physics journal under the title “Complete 3-Qubit Grover Search on a Programmable Quantum Computer”. |
All Forms are Created From Five Basic Forms
There are only five basic forms from which all other forms are created. They are the sphere, the cone, the cylinder, the cube, and the doughnut-shaped torus. Parts of these forms combine to create everything we see. Imagine a half cylinder on top of a cube and you have the shape of a mailbox; a half sphere and a cone make a teardrop form; a fir tree is a cone, an oak a half sphere. The cylindrical coffee mug has a half-torus handle.
Values Create Form
Each of these forms has distinctive light and dark value shapes that define them. Spheres are recognized by crescents and ovals. Cones have triangular light and dark value shapes. Cubes and flat surfaces are even blends. Cylinders are stripes. The torus is crescents and stripes. Concave versions of these forms have the same value shapes but without reflected light. (See Shadows/Reflected light.) When you can paint these five forms you can paint all other forms.
A SPHERE is defined by CRESCENTS AND OVALS. Sphere forms are painted with crescent and curved brush strokes. (See Brushes)
CONES are defined by TRIANGULAR values of light and dark. Cones are painted and blended using triangular brush strokes.
CYLINDERS are defined by light and dark value STRIPES. Cylinders are painted with parallel brush strokes.
CUBES and all FLAT surfaces are governed by the same rules. GRADUAL EVEN BLENDS depict a receding flat surface. If there is a flat surface parallel to your canvas, it may be painted with a single color or value. CUBES are various receding flat surfaces. Each surface is a gradual blend. Cubes are painted with parallel brush strokes.
TORUS value shapes combine aspects of two other basic forms. They take the parallel STRIPES of a cylinder for the middle and the CRESCENTS of a sphere for the ends. The torus is painted using crescent and curved brush strokes.
Here you can see that value shapes are stronger than contour lines for the creation of form.
Lighting can be misleading in seeing forms, particularly flat surfaces. Try to see the form first. Then see the lighting on it.
When male fruit flies find a bounty of food, they mark the territory with a pheromone called 9-tricosene. The pheromone entices females to lay their eggs nearby, presumably to give the offspring a full meal and a chance to survive, according to a team of researchers led by Christopher J. Potter of Johns Hopkins University (eLife 2015, DOI: 10.7554/elife.08688). The chemical mark also acts as a dinner-is-ready beacon to other Drosophila melanogaster that aggregate in response to the scent in search of the promised buffet. The team found that 9-tricosene activates a family of odor receptors known as Or7a. These receptors are also activated by several alcohols, aldehydes, and E2-hexenal, a volatile compound that wafts from damaged plants—another food source for the tiny insects—and that in turn also guides egg-laying behavior. The research suggests that a variety of chemically distinct signals activate Or7a receptors triggering a common behavioral response—laying eggs near a promising food source. |
Sugar is an important source of energy for the human body. It is a carbohydrate, which is the most essential fuel for the brain, and it provides the body with the energy needed for various other organs to function. Sucrose, or table sugar, is the main source of sugar in most parts of the world.
For a human body to function correctly, it needs carbohydrates, protein, fat, water, minerals and vitamins. Like other forms of carbohydrates, such as starch, sugar contains 4 kilocalories per gram. Comparatively, 1 gram of protein has 4 kilocalories, 1 gram of fat has 9 kilocalories and 1 gram of alcohol has 7 kilocalories. Sugar also exists in nature - plant sugar, for example, is a product of photosynthesis. Sucrose, the most common type of sugar, is extracted from beet or cane by sugar manufacturers.
Fructose, glucose, lactose and sucrose are the four main types of sugar. Fructose, glucose and sucrose are found in honey, vegetables and fruits. Lactose is found in milk and milk products. Sources of sugar include raw carrot, apple, banana, cherry, grape, orange, dried apricot and honey. Sugar provides a natural, sweet taste when added to food. It also provides bulk and texture to various traditional foods, such as bakery products and jam. Sugar is central to the browning process that gives bread and pastries a pleasant golden color and flavor.
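Because each macronutrient has a fixed energy density, working out the energy in a food is simple arithmetic. Here is a minimal Python sketch using the per-gram values above (the example gram amounts are made up for illustration):

```python
# Energy density per gram, from the figures quoted above.
KCAL_PER_GRAM = {"carbohydrate": 4, "protein": 4, "fat": 9, "alcohol": 7}

def energy_kcal(grams_by_macro):
    """Total energy (kcal) for given grams of each macronutrient."""
    return sum(KCAL_PER_GRAM[macro] * grams for macro, grams in grams_by_macro.items())

# A hypothetical snack: 30 g carbohydrate (including sugar), 5 g protein, 10 g fat
print(energy_kcal({"carbohydrate": 30, "protein": 5, "fat": 10}))  # 230 kcal
```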
Motivate students to learn about important women in American history. This free printable activity shares a nonfiction passage about Eleanor Roosevelt that is customized to two different reading levels for your third- to fifth-grade students. Students will read the informational passage about her time as First Lady and her role in American politics. Then they will answer free-response and multiple-choice questions to check understanding. This free printable worksheet covers a high-interest topic that will be easy to integrate into your Language Arts or Social Studies lesson plans for Women's History Month in March.
A song or songlike poem that tells a story. Most ballads have a regular pattern of rhythm and use simple language and refrains, as well as other kinds of repetition. Ballads usually tell sensational stories of tragedy, adventure, betrayal, revenge, and jealousy. Arising in the Middle Ages, folk ballads were composed by anonymous singers and were passed down orally from generation to generation before they were written down. Literary ballads, on the other hand, are composed and written down by known poets, usually in the style of the older folk ballads.
The typical ballad stanza is a quatrain with the rhyme scheme abcb (although this is by no means universal). The meter is often loosely iambic with four stresses in the first and third lines and three stresses in the second and fourth lines. The number of unstressed syllables in each line may vary. |
From Apollo samples and crater counts, we know that the bulk of volcanism on the moon occurred from 3.9 to 3.1 billion years ago, and we’ve long believed that all volcanic activity shut off around a billion years ago. But now, dozens of newly detected topographic anomalies reveal that volcanic activity didn’t stop on the moon abruptly when we thought it did -- rather, it’s slowed gradually, and it may not even be done yet.
The discovery of these rock deposits suggests that volcanoes were erupting on the moon within the last 100 million years -- during the Cretaceous when dinosaurs roamed the Earth -- making the moon much, much warmer than we thought. It may be time to rewrite the textbooks. The findings were published in Nature Geoscience this week.
These rock deposits, called "irregular mare patches" (IMPs), are characterized by a distinct combination of textures: smooth, rounded, shallow mounds next to patches of rough, blocky terrain. They're the remnants of small basaltic eruptions, and they're scattered all across the volcanic plains. Until recently, these features were considered very rare: only one, named Ina, had been spotted, by Apollo 15 in the 1970s.
Turns out, Ina isn’t a one-off oddity. Using NASA’s Lunar Reconnaissance Orbiter Camera (LROC), researchers from Arizona State University and Westfälische Wilhelms-Universität Münster spotted 70 irregular mare patches on the near side of the moon. These are too small to see from Earth, and in fact, most of the lava flows that make up the dark plains visible to us (including the face of the “man on the moon”) erupted between 3.5 and 1 billion years ago.
Their wide distribution strongly suggests that late-stage volcanic activity was not an anomaly. And their sharp features and the absence of impact craters greater than 20 meters in diameter indicate that at least three of these IMPs formed in the last 100 million years, according to an LROC release.
Ina may be less than 50 million years old, they found, and volcanic activity at another IMP, called Sosigenes, ended only about 18 million years ago. Sosigenes (pictured, right) is only about 300 meters deep, 3 kilometers wide, and 7 kilometers long. You can see how sparse the craters on its lava flows are.
Because these IMPs formed way after the well-established volcanic shutdown a billion years ago, the findings suggest that the interior of the moon is hotter than previously thought. Our cold, dead moon’s still got some heat in it. “The existence and age of the irregular mare patches tell us that the lunar mantle had to remain hot enough to provide magma for the small-volume eruptions that created these unusual young features,” Sarah Braden of ASU says in a NASA release. “Young volcanism indicates possibly more magma, or magma at higher temperatures, or magma at shallower depths, or all of the above,” she tells New Scientist.
Images: NASA/GSFC/Arizona State University |
The term ‘-itis’ in a medical word means ‘inflammation’. Conjunctivitis is inflammation of the covering of your eye and inside your eye lids. This can be caused by an infection or by an allergy.
If you have itchy, red, watery eyes this may be an allergy rather than an infection. If you are not sure if you have an allergy or an infection, ask to talk to your pharmacist. They can ask you a few questions to help decide. You would not want to leave an eye infection untreated; but if it’s an allergy then you need something completely different to treat your eyes. You need to be sure of the cause, so you can use the correct medicine.
For allergy eyes (known as allergic conjunctivitis), symptoms include eyes that are itchy, red and watery.
An infection often starts in one eye and then spreads to the other. With an allergy it is more likely that both eyes are affected at once.
Allergic conjunctivitis can be just one of the symptoms for people with hayfever. If you get hayfever, you are likely to have itchy throat and runny nose as well as the watery, itchy eyes. For some people the symptoms are not so easy to describe and there can be confusion between hayfever and a cold. Do talk to your pharmacist to ensure you are using the best treatments for you or your family.
November through February is usually the period of the highest pollen counts in NZ and is generally considered ‘pollen season’. Wind-pollinated plants like grasses release lots of pollen into the wind and are the most likely cause of an allergy like allergic conjunctivitis. Even during ‘allergy season’ some days may be better than others. Cool, still days will have less pollen and dust blowing around than dry, windy days.
Some allergies are all year round and may be due to:
- household dust mites, tiny creatures that live in every home, mainly in bedrooms, carpets and mattresses, as part of the dust
- mould spores
- chemical scents such as household detergents or perfume
You may also experience short-term allergic conjunctivitis if you get something in your eye, like shampoo.
Self-care and medications
- A cool damp facecloth over the eyes may help. Some people prefer warm instead of cool. Use whatever feels best for you.
- Wear glasses when outdoors to help protect your eyes from the pollen.
- Don’t let the grass in your lawn go to seed. Make sure you cut it regularly.
- Be careful drying your bedding and clothes outside during pollen season.
- For seasonal allergies keep your windows closed and stay inside (as much as possible) on windy days.
- Talk to your pharmacist about the most appropriate treatments for you (or your family). There are a range of drops and tablets that are available without a prescription that your pharmacist can recommend when appropriate.
- There are also a range of medicine available on prescription from your doctor. Your pharmacist can refer you to the doctor if they think you need something stronger.
See your doctor immediately if your eye is very painful or if you are having trouble seeing clearly. There are lots of different causes for eye problems and some require immediate attention to prevent permanent loss of vision.
Q: What is pink eye? What are the symptoms and how can it be treated? Is it contagious?
A: Pink eye is a common name for conjunctivitis. It's an inflammation of the conjunctiva, the membrane on the inside of the eyelid and covering the whites of the eyes.
The most frequent causes of pink eye are bacterial or viral infections and allergies. Other possible causes include irritants, such as chemicals, air pollution or dust, and exposure to certain kinds of light, such as sunlamps.
A red, bloodshot eye is a characteristic of pink eye. The eyelid may swell and the eye may burn or feel irritated. Bacterial conjunctivitis may produce a white or yellowish, thick and goopy discharge. This discharge may stick the eyelids shut during sleep. Fluid discharge from allergic or viral conjunctivitis tends to be clear and thin.
Bacterial and viral conjunctivitis are most definitely contagious. The disease often affects young children and spreads rapidly through touch, through the air at close range, and by contact with something the infected child has contaminated, such as towels or bed linens. Children can also spread it from one eye to the other.
Bacterial and viral conjunctivitis will usually go away without medication, but bacterial conjunctivitis is often treated with topical antibiotic ointment or drops. Preventive practices may help limit the spread of pink eye among children and adults. Frequent hand-washing and not sharing cosmetics, towels, and the like are all recommended for prevention.
These steps are hard to implement among young children, and epidemics often spring up in preschools and day care facilities. Many schools and child-care providers insist that parents keep their child at home until the pink eye goes away.
In the News
- Skin cancer is the most common form of cancer in the United States, taking thousands of lives every year. By the age of 70, one in five Americans will develop skin cancer. Now NASA is helping public health officials track the primary cause of the disease: overexposure to ultraviolet radiation.
- By measuring solar radiation reflected from Earth’s surface and scattered by its atmosphere, the OMI team derives important information about aerosols such as dust and smoke and pollutants like nitrogen and sulfur dioxide.
- The Ozone Monitoring Instrument (OMI) international team received the Pecora Award for its “sustained team innovation and international collaboration to produce daily global satellite data that revolutionized air quality, stratospheric chemistry, and climate research.”
- New Documentary Tells the Remarkable Story of How Scientists Discovered the Deadly Hole in the Ozone – and the Even More Remarkable Story of How the World’s Leaders Came Together to Fix It
- NASA ended the Tropospheric Emission Spectrometer's (TES) almost 14-year career of discovery. TES was the first instrument designed to monitor ozone in the lowest layers of the atmosphere directly from space. Its high-resolution observations led to new measurements of atmospheric gases that have altered our understanding of the Earth system.
- For the first time, scientists have shown through direct satellite observations of the ozone hole that levels of ozone-destroying chlorine are declining, resulting in less ozone depletion.
- A new NASA-led study has solved a puzzle involving the recent rise in atmospheric methane, a potent greenhouse gas, with a new calculation of emissions from global fires. The new study resolves what looked like irreconcilable differences in explanations for the increase.
- New NASA-funded research has devised a way to use satellite measurements of the precursor gases that contribute to ozone formation to differentiate among three different sets of conditions that lead to its production.
- Measurements from satellites this year showed the hole in Earth's ozone layer that forms over Antarctica each September was the smallest observed since 1988, scientists from NASA and NOAA announced.
- NASA Earth Observatory's Image: The map shows a regional picture of sulfur dioxide emissions as detected by the Ozone Monitoring Instrument (OMI) on NASA's Aura spacecraft.
- Using a new satellite-based method, scientists at NASA, Environment and Climate Change Canada, and two universities have located 39 unreported and major human-made sources of toxic sulfur dioxide emissions.
- Flaring of waste natural gas from industrial oil fields in the Northern Hemisphere is a potential source of significant amounts of nitrogen dioxide and black carbon to the Arctic, according to a new NASA study, which features OMI Nitrogen dioxide data.
- Kevin Bowman of NASA, discusses the health and environmental impacts of China's high smog levels will have on the US West Coast.
- Dr. Paul Newman is the chief scientist for atmospheric sciences at NASA Goddard. In this talk he discusses how chlorofluorocarbons were destroying the ozone layer, what policy-makers did about it, and what challenges the ozone layer faces today.
- Dr. Bryan N. Duncan is a deputy project scientist for the Aura Mission at NASA Goddard. In this talk he tells the story of air quality in three cities- Beijing, Los Angeles, and Atlanta.
- Worldwide action to phase out ozone-depleting substances has resulted in remarkable success, according to a new assessment by 300 international scientists. The stratospheric ozone layer, a fragile shield of gas that protects Earth from harmful ultraviolet light, is on track to recovery over the next few decades.
- The ozone layer that shields the earth from cancer-causing ultraviolet rays is showing early signs of thickening after years of depletion, a UN study says. (BBC News, 09.10.2014)
- During the first half of the twentieth century, coal burning at power plants, factories, and homes filled the air over the Midwestern U.S. with pollution...
- The Aura Mission celebrates its tenth year since launch!
- Newly released maps reveal that U.S. air quality has markedly improved over the last decade.
- While some forests emit volatile organic compounds that are involved in ozone pollution, history shows attempts to control smog have a better chance of succeeding by focusing on vehicle emissions.
- NASA's Aura satellite, celebrating its 10th anniversary this year on July 15, has provided vital data about the cause, concentrations and impact of major air pollutants.
- On the 10th anniversary of the launch of NASA's Aura spacecraft, we offer 10 examples of how the satellite has changed our view of dust, pollution, aerosols, and ozone in our atmosphere.
- Aura's Ozone Monitoring Instrument (OMI) images show that the effects of federal and state efforts have left the air far cleaner than it was a decade prior.
- Though Earth's ozone layer has been depleted over the past four decades by chlorofluorocarbons (CFCs) and similar chemical compounds, the changes are expressed differently at the North and South Pole
- After ten years in orbit, the Ozone Monitoring Instrument (OMI) on NASA's Aura satellite has been in orbit sufficiently long to show that people in major U.S. cities are breathing less nitrogen dioxide - a yellow-brown gas that can cause respiratory problems.
- When faced with a complex problem, Aura project scientist and co-lead for the Chemistry Climate Model Anne Douglass instructs herself to think like a scientist.
- New NASA research on natural ozone cycles suggests ozone levels in the lowest part of Earth's atmosphere probably won't be affected much by projected future strengthening of the circulating winds that transport ozone between Earth's two lowest atmospheric layers.
- Power plant emissions of sulfur dioxide -- an atmospheric pollutant with both health and climate impacts -- have increased across India in recent years, according to a new analysis of data from a NASA satellite.
- Anne Douglass is the project scientist for Aura, one of NASA's Earth Observing System's flagship missions.
- Images from Aura's Ozone Monitoring Instrument (OMI) show long tracks of elevated nitrogen dioxide (NO2) levels along certain shipping routes.
- Cold winter weather and burgeoning industrial economies have made for difficult breathing in Asia and the Middle East this January.
- A new NASA-led study finds that when it comes to combating global warming caused by emissions of ozone-forming chemicals, location matters.
- Aura's Education and Public Outreach lead, Ginger Butcher, exhibited the new "Engineer a Satellite" activity in Washington, DC for the 2012 Earth Day event on the National Mall and the US Science and Engineering Festival.
- NASA scientists lead forums at Howard Community College
- A team of scientists have used the Ozone Monitoring Instrument (OMI) on NASA's Aura satellite to confirm major reductions in the levels of a key air pollutant generated by coal power plants in the eastern United States. The pollutant, sulfur dioxide, contributes to the formation of acid rain and can cause serious health problems.
- In March 2011, the Earth Observatory published images of a rare, deep depletion in the ozone layer over the Arctic. The images came from daily observations made by Aura's OMI instrument.
- A NASA-led study has documented an unprecedented depletion of Earth's protective ozone layer above the Arctic last winter and spring caused by an unusually prolonged period of extremely low temperatures in the stratosphere.
- Fires throughout Ontario are generating pollution that is showing up in data from NASA's Aura Satellite in the Great Lakes region.
- Fires raging in central Africa are generating a high amount of pollution that is showing up in data from NASA's Aura Satellite, with the ominous shape of a dark red butterfly in the skies over southern part of the Democratic Republic of the Congo and northern Angola.
- In its early, violent days, the eruption at the Puyehue-Cordón Caulle volcanic complex sent clouds of ash high into the atmosphere.
- NASA's Aura Satellite has provided a view of nitrogen dioxide levels coming from the fires in New Mexico and Arizona.
- Recent pollution levels from the fires in Canada's Northwest Territories do not appear to be as high as they were at the end of June as the fires have come under more control since then.
- Scientists have a good reason to track noctilucent or polar mesospheric clouds: they are a pretty good gauge of even the tiniest changes in the atmosphere. These "night-shining clouds" as they are sometimes called, are thin, wavy ice clouds that form at very high altitudes and reflect sunlight long after the Sun has dropped below the horizon.
- On July 6 this summer, Virginia's Department of Environmental Quality issued the region's first "unhealthy" air alert since 2008.
- Last month when ash from a volcanic eruption in Iceland shut down air traffic over much of Europe, an international network of centers dedicated to this aviation hazard sprang into action.
- The annual ozone hole has started developing over the South Pole, and it appears that it will be comparable to ozone depletions over the past decade.
- Sea ice at the other end of the world has been making headlines in recent years for retreating at a breakneck pace. Satellite measurements show that, on average, Arctic sea ice has decreased by four percent per decade.
- Aura has fulfilled its requirement for a 5 year lifetime and continues to provide high quality science.
- Prior to widespread human settlement and forest clearing, there was no such thing as a fire season in the Amazon Rainforest.
- The U.S. soybean crop is suffering nearly $2 billion in damage a year due to rising surface ozone concentrations harming plants and reducing the crop's yield potential, a NASA-led study has concluded.
- Chinese government regulators had clearer skies and easier breathing in mind in the summer of 2008 when they temporarily shuttered some factories and banished many cars in a pre-Olympic sprint to clean up Beijing's air.
- The Antarctic ozone hole reached its annual maximum on Sept. 12, 2008, stretching over 27 million square kilometers, or 10.5 million square miles. The area of the ozone hole is calculated as an average of the daily areas for Sept. 21-30 from observations from the Ozone Monitoring Instrument (OMI) on NASA's Aura satellite.
- In late April 2008, Kilauea Volcano on Hawaii's big island continued its pattern of increased activity, including elevated seismic tremors and emissions from the volcano's Halema'uma'u vent.
- Weather broadcasts have long been a staple for people planning their day. Now with the help of NASA satellites, researchers are working to broaden daily forecasts to include predictions of air quality, a feat that is becoming reality in some parts of the world.
- The Aura spacecraft currently flies about 15.22 minutes behind Aqua in the A-Train. The Aura Project is proposing to move Aura spacecraft closer to Aqua. Aura would follow Aqua by about 8 minutes along the same track after the move.
- NASA scientists will join researchers from around the world to celebrate the 20th anniversary of the Montreal Protocol, an international treaty designed to reduce the hole in Earth's protective ozone layer.
- For the first time, NASA scientists have used a shrewd spaceborne detective to track the origin and movement of water vapor throughout Earth's atmosphere. This perspective is vital to improve the understanding of Earth's water cycle and its role in weather and climate.
- Pinpointing pollutant sources is an important part of the ongoing battle to improve air quality and to understand its impact on climate. Scientists using NASA data recently tracked the path and distribution of aerosols -- tiny particles suspended in the air -- to link their region of origin and source type with their tendencies to warm or cool the atmosphere.
- Two new NASA-funded studies of ozone in the tropics using NASA satellite data not previously available are giving scientists a fuller understanding of the processes driving ozone chemistry and its impacts on pollution and climate change.
- NASA and National Oceanic and Atmospheric Administration (NOAA) scientists report this year's ozone hole in the polar region of the Southern Hemisphere has broken records for area and depth.
- A NASA and university study of ozone and carbon monoxide pollution in Earth's atmosphere is providing unique insights into the sources of these pollutants and how they are transported around the world.
- Scientists analyzed 25 years of independent ozone observations at different altitudes in Earth's stratosphere.
- Summer in the city can often mean sweltering "bad air days" that threaten the health of the elderly, children and those with respiratory problems. This summer the nation's capitol has been no stranger to such severe air-quality alerts.
06.29.2006 - The Antarctic ozone hole's recovery is running late. According to a new NASA study, the full return of the protective ozone over the South Pole will take nearly 20 years longer than scientists previously expected.
- Since launching in July 2004, Aura has been retrieving information and producing valuable data about the Earth and its atmospheric properties. View the selected top ten discoveries that Aura's instruments have brought us so far.
- Thunderstorms over Tibet provide a main pathway for water vapor and chemicals to travel from the lower atmosphere, where human activity directly affects atmospheric composition, into the stratosphere.
- Sulfur dioxide (SO2) emissions from the eruption of the Anatahan volcano (Mariana Islands) were measured by the Ozone Monitoring Instrument (OMI) on NASA's EOS/Aura satellite.
4.13.2005 - Aura's Ozone Monitoring Instrument (OMI) provided data for color-coded images that focus only on aerosols (particles in the atmosphere) from fires in Alaska.
- Scientists head north to learn about air quality, ozone, and climate change predictions.
- The Ozone Monitoring Instrument (OMI) recorded the Manam volcano eruption on NASA's new Aura satellite.
- The instruments onboard Aura will help scientists monitor pollution production and transport around the world.
- Satellite offers unprecedented precision.
- We share the air we breathe not only with other people but also with the rest of our environment.
- Forty K-12 educators from the United States and France participated in an 11-day NASA-sponsored workshop this past summer aimed at bringing real-life science experiences into the classroom.
- The safety measures taken during the launch of Aura are characteristic of NASA's commitment to safety and mission assurance.
- Aura, a mission dedicated to the health of the Earth's atmosphere, successfully launched today at 6:01:59 a.m. EDT.
07.14.2004 - The next launch attempt will be on Thursday morning, July 15, during a three-minute launch window that opens at 6:01:59 a.m. EDT.
- The launch of NASA's Aura spacecraft has been postponed by at least 24 hours to Sunday, July 11 at 6:01:57 a.m. EDT.
- John Gille: Searching for Patterns in the Clouds, Anne Douglass: Making the World Safe for Blondes, Peter Siegel: Studying the Energy of the Universe
06.29.2004 - A next-generation Earth-observing satellite is scheduled for liftoff on Saturday, July 10 at approximately 6:01:57 a.m. EDT.
- The same gas -- ozone -- that is the main factor in bad air also protects us from the Sun's harmful effects.
- Interviews with Andrea Razzaghi and Pieternel Levelt.
- A mission to understand and protect the air we breathe.
- The story of a molecule and the spacecraft designed to help us understand it.
- On June 19, the launch of the Aura satellite will help scientists understand how atmospheric composition affects the Earth.
- Temperature, humidity, winds and the presence of other chemicals in the atmosphere influence ozone formation, and the presence of ozone, in turn, affects those atmospheric constituents.
- Researchers will brief the press and discuss science goals of the mission at 4 p.m. EDT, May 17 in Montreal.
- Six schools in the Czech Republic received awards recently for their collection of ozone data as part of a GLOBE project.
- GLOBE is an international organization of students and teachers who collect and share data about the health of the environment. |
One of the largest remaining wetland complexes in New Zealand, the Waituna Lagoon and Awarua Wetlands is hugely important for its biological diversity and cultural values.
The lagoon and its surrounding 20,000 hectares of wetlands were among the first areas in New Zealand to be officially recognised as a wetland of international importance. A 3500 hectare section of the wetlands, known as the Waituna Wetland Scientific Reserve, was listed as part of New Zealand’s obligations when signing the Ramsar Wetland Convention, an international convention that promotes the wise or sustainable use of wetlands. The site includes four major wetland types: coastal lagoons, freshwater swamps, extensive peatlands, and estuaries. Each ecosystem is unique and maintained by different ecological processes.
The wetlands provide a vital refuge for rare bird species, including the Southern New Zealand Dotterel (Tūturiwhatu), Marsh Crake (Koitareke), Fernbird (Mātātā) and Australasian Bittern (Matuku). The area is also visited frequently by many different trans-equatorial migrating and wading bird species, attracting rare visitors to New Zealand such as the Siberian Tattler, Greenshank and Sanderling. Most of the migratory waders are present only from October to late March, but some of the more common species are present through the winter as well. Many threatened plants and insects, as well as wildfowl, native fish and trout, also call the area home.
So bring your binoculars and spot rare birds, enjoy the easy walking tracks, explore the margins of the lagoon in a kayak or small boat, or just soak in the amazing and unique sights at this important natural site.
- Grade 5 Module 3: Addition and Subtraction of Fractions In Module 3, students' understanding of addition and subtraction of fractions extends from earlier work with fraction equivalence and decimals...
- Students cut a rectangular shape into 6 equal units. They show three different ways of cutting it.
- Students measure a notebook with centimeters and inches, then discuss the unit difference. |
The earth’s oceans are its circulatory system: they transport physical and thermal energy, moderate temperatures and CO2 levels, and, most importantly, keep the planet habitable. Wave energy is a significant renewable energy resource: water is approximately 1000 times denser than air, so waves carry much higher energy flux densities, enabling high energy extraction from smaller devices.
Wave energy originates in the weather: variations in heat and pressure generate winds that blow across a great fetch of ocean, transferring energy to the water below. Waves can gather and transfer large amounts of energy extremely efficiently.
The map below shows that there are many high wave energy sites located close to high population densities, and the figures represent kilowatts per meter of wave front. |
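To make those figures concrete, here is a minimal sketch using the standard deep-water energy-flux approximation, P ≈ ρg²H²T/(64π) watts per metre of wave front, where H is the significant wave height and T the wave period. The swell values below are illustrative assumptions, not readings from the map.

```python
import math

RHO_SEAWATER = 1025.0  # density of seawater, kg/m^3
G = 9.81               # gravitational acceleration, m/s^2

def wave_power_kw_per_m(height_m, period_s):
    """Deep-water wave energy flux in kW per metre of wave front.

    Uses the standard approximation P = rho * g^2 * H^2 * T / (64 * pi),
    where H is the significant wave height (m) and T the wave period (s).
    """
    watts_per_m = RHO_SEAWATER * G**2 * height_m**2 * period_s / (64 * math.pi)
    return watts_per_m / 1000.0

# Illustrative open-ocean swell: 3 m significant height, 10 s period
print(f"{wave_power_kw_per_m(3.0, 10.0):.0f} kW per metre of wave front")  # ~44
```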
Deep Sea Algae
We're learning about algae that grow in deep sea environments, called mesophotic reefs. Beyond the reach of conventional scuba, mesophotic reefs receive just enough sunlight for algae to grow and photosynthesize. Deep sea corals, other invertebrates, and fishes also inhabit these remote environments.
The Hawaiian archipelago is a unique place to study mesophotic reefs. The gently sloping volcanic islands provide large areas of rocky habitat for deep sea algae and corals to live. Mesophotic reefs are too deep for snorkeling or traditional scuba. Advances in diving technology, submersibles and remotely operated vehicles have allowed researchers access to these areas in recent years—providing the world with new understandings of deep water communities and connectedness across ocean basins.
We catch up with algae experts Dr. Alison Sherwood and Dr. Heather Spalding to learn more about mesophotic reefs and the exciting discoveries they are making of many never-before-seen species of deep sea algae. We visit the plant science lab at UH Mānoa to learn how to identify different types of algae—and the process of classifying and describing a new species. Researchers are using the genetics of deep sea algae to help determine their relationship to local shallow water species as well as to mesophotic reefs in other parts of the world. We take a quick dive into the methods researchers use to extract DNA from algae, copy and clean the DNA, and analyze genetic sequences on the computer.
Then, we check out the algae and plant specimens at the Bishop Museum in downtown Honolulu with collections manager Barbara H. Kennedy. Bishop Museum is the official Herbarium of Hawaiʻi—where samples of new species as well as samples of rare and endangered plants and algae are stored in special water- and fire-proof rooms.
Follow us on social media at voiceoftheseatv.
Watch the archived YouTube Watch Party with recorded researcher comments from our live-chat on Wed, Feb. 5th, 2020.
- University of Hawaiʻi at Mānoa Department of Botany
- College of Charleston Department of Biology
- Bishop Museum
- Botany (plant and algae) collections
- NOAA article on deep sea reefs: What are Mesophotic Coral Ecosystems?
- NOAA article on cruise featured in this episode: Researchers observe coral reef damage and invasive alga in Papahānaumokuākea Marine National Monument
- UH News articles on new algae species:
Learn more about the submersibles used by UH researchers at the Hawaiʻi Undersea Research Laboratory
- Hawaiʻi Sea Grant Ka Pili Kai Ho‘oilo 2019
- Issue “Limu Gifts from the Sea“ |
The Manhattan Project involved one of the largest scientific collaborations ever undertaken. Out of it emerged countless new technologies, going far beyond the harnessing of nuclear fission. The development of early computing benefited enormously from the Manhattan Project’s innovation, especially with the Los Alamos laboratory’s developments in the field both during and after the war.
Prior to the advent of modern, digital computers, complex analog computers were used to perform calculations. Although the word “computer” has come to mean a number of things, analog and digital computers share the same basic task: calculating and manipulating numbers using logical rules. Analog computers have existed for hundreds of years, and include such simple devices as the slide rule.
Analog computers were vital to work at Los Alamos. Enrico Fermi was renowned for his exceptional skills on his German Brunsviga calculator. When physicist Herbert Anderson bought a faster Marchant calculator, Fermi upgraded too, always wanting to be on the cutting edge.
Analog computers were so integral to the Manhattan Project, and so often used, that they frequently broke down. Physicists Nicholas Metropolis and Richard Feynman set up a kind of computer repair shop, taking apart the machines and working on how to fix jams and breakages. When MED officials discovered Metropolis and Feynman’s outfit, they initially shut it down. They soon realized, however, that their services were vital, and the MED allowed Metropolis and Feynman’s hobby to continue unimpeded.
The Project at Los Alamos also used old punch-card style computers produced by IBM. When the machines were first delivered to the lab, the scientists were skeptical. A race was organized between the IBM machines and the hand-operated computers. Although the two initially kept pace, after about a day of work the hand-operators began to fatigue, while the punch card machines kept working. Fermi eventually became so enamored with the machines that they inspired him to explore the world of digital computing.
The Dawn of Digital: ENIAC
The Electronic Numerical Integrator and Computer (ENIAC), funded by the U.S. Army and completed at the University of Pennsylvania in 1945, was the first general-purpose electronic computer. For their money, the U.S. Army Ordnance Corps received a processor that could handle 50,000 instructions per second—an iPhone processor, by contrast, can handle about five billion. However, ENIAC did significantly speed up calculation times—artillery calculations that had previously taken twelve hours on a hand calculator could be done in just thirty seconds. ENIAC was intertwined with nuclear science from the beginning: one of its first real uses was by Edward Teller, who used the machine in his early work on nuclear fusion reactions.
John von Neumann
One of the most important names in the history of computing is John von Neumann, a Hungarian-American polymath and Manhattan Project veteran. Von Neumann joined Princeton’s Institute for Advanced Study (IAS) in 1933, the same year as Albert Einstein. Like many of those initially hired by IAS, von Neumann was a mathematician by training.
During the war, he worked at Los Alamos on the mathematics of explosive shockwaves for the implosion-type Fat Man weapon. He worked with IBM mechanical tabulating machines, tailored for this specific purpose. As he grew familiar with the tabulators, he began to imagine a more general machine, one that could handle far more general mathematical challenges—a computer.
Near the end of the war, von Neumann put together a report on the architecture of such a machine—today, that architecture is still called the von Neumann architecture. His work relied on the thoughts of Alan Turing, a young mathematician at Princeton whose work had defined the limits of computability. His dream of a general machine was already being implemented—in the form of ENIAC.
When von Neumann returned to Princeton after the war, he built the IAS computer, which implemented his von Neumann architecture. Starting in 1945, the IAS computer took six years to build. Meanwhile, the British "Manchester Baby" computer, the first stored program computer, successfully ran its first program in 1948; the Electronic Delay Storage Automatic Calculator (EDSAC) at Cambridge University followed suit in 1949. Once the IAS computer was complete, its basic design was re-implemented in more than twenty different other computers all over the world.
It’s a MANIAC
Von Neumann’s project at Princeton reflected a recent surge of interest in computing and its applications in science, technology, mathematics, and weapons manufacturing. The scientists working at Los Alamos—then in pursuit of nuclear fusion weapons—had every reason to join in. In 1951, a team of scientists, led by Nicholas Metropolis, constructed a computer called the Mathematical and Numerical Integrator and Calculator: MANIAC.
MANIAC was substantially smaller than ENIAC: only six feet high, eight feet wide, and weighing in at half a ton. MANIAC was able to store programs, while ENIAC could not. MANIAC’s design was based on the IAS computer, which had originally also been called MANIAC. It therefore used von Neumann’s architecture, making it one of the ancestors of many modern computers.
Although it eventually was used for a variety of purposes, MANIAC’s first job was to perform the calculations for the hydrogen bomb. When Enrico Fermi casually suggested the idea of building a fusion device in the early ‘40s, Edward Teller became fascinated by the idea of designing and constructing a hydrogen bomb. Polish-American mathematician Stanislaw Ulam soon joined Teller to help build the so-called “Super”. Using a design that Oppenheimer had called “technically sweet,” the two created the Teller-Ulam device, which used a fission reaction to ignite a fusion reaction.
MANIAC, along with the IAS computer and ENIAC, was used to perform the engineering calculations required for building the bomb. It took sixty straight days of processing, all through the summer of 1951. On November 1, 1952, the first full-scale thermonuclear device was tested at Elugelab Island. The “Ivy Mike” test vaporized the entire island, as well as eighty million tons of coral. MANIAC’s calculations had been successful.
Even more Innovation
The advent of computing allowed for major innovation in the realm of simulation. Metropolis led a group that developed the Monte Carlo method, which simulates the results of an experiment by using a broad set of random numbers. It was named for the Monte Carlo casino, where Stanislaw Ulam’s uncle often gambled. First invented during the Manhattan Project, the Monte Carlo method had been used on old analog computers. However, that work was slow and time consuming. By using MANIAC, physicists like Fermi and Teller could perform simulations much faster. This allowed for a better understanding of the behaviors of particles and atoms.
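To give a flavor of the idea, here is a minimal modern sketch of a Monte Carlo calculation: estimating π from random points, a toy stand-in for the weapons and particle simulations run on MANIAC.

```python
import random

def estimate_pi(n_samples=1_000_000):
    """Monte Carlo estimate of pi.

    Draws random points in the unit square and counts the fraction that
    land inside the inscribed quarter circle, whose area is pi/4.
    """
    hits = sum(
        1
        for _ in range(n_samples)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4.0 * hits / n_samples

print(estimate_pi())  # approaches 3.14159... as n_samples grows
```

The estimate improves only as the square root of the sample count, which is why the method became practical precisely when electronic computers made enormous sample counts cheap.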
MANIAC was used for innumerable other experiments and discoveries. In 1953 and 1954, it performed analysis that helped discover the Delta particle, a new sub-atomic particle. George Gamow used it for early research into genetics. MANIAC also contributed to advancements in two-dimensional hydrodynamics, iterative functions, and nuclear cascades. And, in 1956, Mark Wells wrote a program to simulate “Los Alamos Chess,” a bishop-less, six-by-six version of the classic board game. In so doing, MANIAC became the first computer to play, and then beat a human, at a chess-like game.
- Jim Holt, "How the Computers Exploded," New York Review of Books, June 7, 2012.
- Kevin Kelly, "Q&A: Hacker Historian George Dyson Sits Down With Wired’s Kevin Kelly," Wired, Feb. 17, 2012
- Herbert L. Anderson, "Metropolis, Monte Carlo, and the MANIAC," Los Alamos Science, Fall 1986.
- Mike Brewster, "John von Neumann: MANIAC's Father," Business Week, Apr. 7, 2004.
- W. Barksdale Maynard, "Daybreak of the Digital Age," Princeton Alumni Weekly, Apr. 4, 2012.
- The Shelby White and Leon Levy Archives Center. |
Continuous Probability Distributions Study Guide
Introduction to Continuous Probability Distributions
As was the case with discrete distributions, some continuous random variables are of particular interest. In this lesson, we will discuss two of these: the uniform distribution and the normal distribution. The normal distribution is particularly important because many of the methods used in statistics are based on this distribution. The reasons for this will become clearer as we work through the rest of the lessons.
In the first lesson, we learned that a continuous random variable has a set of possible values that is an interval on the number line. It is not possible to assign a probability to each point in the interval and still satisfy the conditions of probability set forth in Lesson 10 for discrete random variables. Instead, the probability distribution of a continuous random variable X is specified by a mathematical function f(x) called the probability density function or just density function. The graph of a density function is a smooth curve. A probability density function (pdf) must satisfy two conditions: (1) f(x) ≥ 0 for all real values of x and (2) the total area under the density curve is equal to 1. The graphs of three density functions are shown in Figure 11.1.
The probability that X lies in any particular interval is shown by the area under the density curve and above the interval. The following three events are frequently encountered: (1) X < a, the event that the random variable X assumes a value less than a; (2) a < X < b, the event that the random variable X assumes a value between a and b; and (3) X > b, the event that the random variable X is greater than b. We say that we are interested in the lower tail probability for (1) and the upper tail probability when using (3). The areas associated with each of these are shown in Figure 11.2.
Notice that the probability that a < X < b may be computed using tail probabilities:
P(a < X < b) = P(X < b) – P(X < a).
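These tail-probability relationships are easy to verify numerically. Here is a minimal sketch using SciPy (an assumption of this example, since the lesson itself uses no software), with a standard normal distribution chosen purely for illustration:

```python
from scipy.stats import norm  # standard normal, chosen only for illustration

a, b = -1.0, 2.0
lower_tail = norm.cdf(a)             # P(X < a)
upper_tail = norm.sf(b)              # P(X > b), the survival function
between = norm.cdf(b) - norm.cdf(a)  # P(a < X < b) from tail probabilities

# The three events partition the number line, so the probabilities sum to 1.
assert abs(lower_tail + between + upper_tail - 1.0) < 1e-12
```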
If the random variable X is equally likely to assume any value in an interval (a, b), then X is a uniform random variable. The pdf is flat and is above the x-axis between a and b, and it is 0 outside of the interval. The height of the curve must be such that the area under the density and above the x-axis is 1. Because this region is a rectangle, the area is the height times the width of the interval, which is b – a. Thus, the height must be 1/(b – a); that is, the pdf of a uniform random variable has the form

f(x) = 1/(b – a), for a < x < b
     = 0, otherwise.
A graph of the pdf is shown in Figure 11.3.
A group of volcanologists (people who study volcanoes) has been monitoring a volcano's seismicity, or the frequency and distribution of underlying earthquakes. Based on these readings, they believe that the volcano will erupt within the next 24 hours, but the eruption is equally likely to occur any time within that period. What is the probability that it will erupt within the next eight hours?
Define X = the time until the eruption of the volcano. X has positive probability over the interval (0, 24) because the volcano will erupt during that time interval. Because the length of the interval is 24 – 0 = 24, the height of the density curve must be 1/24 for the area under the density and above the x-axis to be one. That is, the pdf is

f(x) = 1/24, for 0 < x < 24
     = 0, otherwise.
The probability that the volcano will erupt within the next eight hours is equal to the area under the curve and above the interval (0, 8) as shown in Figure 11.4. This area is (8 – 0) × (1/24) = 8/24 = 1/3.
In the previous example, notice that the area is the same whether we have P(0 < X < 8) or P(0 ≤ X < 8) or P(0 < X ≤ 8) or P(0 ≤ X ≤ 8). Unlike with discrete random variables, the probability is the same for a continuous random variable whether or not the inequality is strict. This also correctly implies that, for continuous random variables, the probability that the random variable equals a specific value is 0.
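To check the volcano example numerically, here is a minimal sketch with SciPy's uniform distribution (again an assumption of this example; `uniform(loc, scale)` spreads probability evenly over the interval (loc, loc + scale)):

```python
from scipy.stats import uniform

# X = time in hours until the eruption, uniform on (0, 24)
X = uniform(loc=0, scale=24)

print(X.cdf(8))             # P(X < 8) = 8/24 = 0.333...
print(X.cdf(8) - X.cdf(0))  # P(0 < X < 8) via tail probabilities: the same area
# Because P(X = 8) is 0 for a continuous X, strict and non-strict
# inequalities give identical probabilities.
```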
The Reflected Glory
THE EARLY Communists never doubted what it was that made them Communists.
"It was the Russian Revolution--the Bolshevik Revolution of November 7, 1917--which created the American Communist movement," the American Communist leader, Charles E. Ruthenberg, proclaimed. "The Communist Party came into existence in the United States, as elsewhere, in response to the ferment caused in the Socialist parties by the Russian Revolution," he wrote on another occasion. And still later, he reiterated, "The movement which crystallized in the Communist Party had its origin and gained its inspiration from the proletarian revolution in Russia."1
American Communism proudly represented the Bolshevik revolution in the United States. It owed its reason for existence to something that happened five thousand miles away.
Before 1917, the revolutionary outlook in Russia was so gloomy and obscure that it offered little for other countries to imitate. Since it was generally assumed that Russia had to catch up with the West before the material and political conditions would be ripe for a socialist revolution, Russia had much more to learn from the West than the West had to learn from Russia. By common consent, the next stage for Russia was supposed to be a "bourgeois-democratic," not a socialist revolution. Marx had thrown out some hints that Russia might not have to go through the traditional stages of capital- |
Nipponosaurus, meaning “Japanese lizard”, is a genus of hadrosaurid dinosaur from the Late Cretaceous Period (late Santonian – early Campanian age). It was discovered in 1934 during the construction of a hospital in Karafuto Prefecture (now Sinegorsk, Sakhalin, Russia). More material belonging to this specimen was recovered in the summer of 1937. Though the quality of bone preservation is poor, the skeleton is about 60% complete. It is still one of the most poorly understood lambeosaurine dinosaurs.
Suzuki and colleagues re-described this dinosaur in 2004, and came to the conclusion that it was roughly 25 feet in length. It is considered a close relative to the well-known North American dinosaur Hypacrosaurus. |
Through millions of years of evolution, animals living in an aquatic environment have diversified to occupy the ecological niches available in the ecosystem. When studying the habitats of these particular organisms, three main areas of the freshwater environment can be distinctly classified.
- The Profundal Region - An area of still water that receives no sunlight and therefore lacks autotrophic organisms. The animals in this zone rely on organic material as a food source, which comes from the more energy-rich areas above the profundal region.
- The Pelagic Region - The pelagic region is the open water found below the surface, and is defined by the light available to it. The pelagic region does not include areas near the shore or the bed.
- The Benthic Region - The benthic region incorporates all of the freshwater environment in contact with land, barring the shallow shore areas. The benthic region can host a large volume of organisms: nutrient- and mineral-rich sediments are available as a food source, and part of the benthic region can occupy the euphotic zone, the area of water where light is available. This provides an ecological niche for autotrophic organisms, which in turn can be a food source for herbivores.
Another distinctive niche for the animal community lies above (epineuston) and below (hyponeuston) the water surface. Epineustic animals obtain food from the surrounding hydrosere vegetation: small animals fall into the water from the plants and are preyed upon.
Below these surface dwellers is a collective of animals called the nekton, which live in the pelagic and profundal regions, though they rise to the surface to feed upon the neuston. Fish are included in this nekton community and play a vital role in these freshwater communities. Some of these fish are only temporary members of the community, as they move between fresh and salt water. Anadromous fish spawn in freshwater, but live much of their lives in salt water. Catadromous fish are the opposite of this, and spend much of their lives in the freshwater community. Either way, the fish present in the environment at any time form the link between the upper and lower layers of the freshwater community.
Previous pages have described how plants are the primary producers of the freshwater community, harnessing new energy from the sun into the environment. The next page looks at some of the animals that rely on these plants in the community, and animals that survive in the depths of the water and along the water shore and bed. |
Trees are the tallest and oldest plants, and the Blean is the largest ancient woodland in Kent. However, it is the smaller plants we would like to celebrate here, as we take a closer look at some of the typical woodland flowers seasonally found beside the paths, rides and amongst the trees. One of the great pleasures of walking through the Blean is seeing carpets of flowers in the spring. However, wild flowers provide so much more, including information!
How old is the Blean?
Bluebells, wood sorrel, wood anemone and yellow archangel rarely grow outside of ancient woodlands (woodland that has existed on a site since 1600 or before).
What lies underground?
Wildflowers tell us about site conditions, soils and the underlying geology. For example, lesser celandines are found in wetter areas, and bluebells and primroses in the brighter coppiced areas. Acidic soils have produced the right conditions for heather and gorse to thrive in Church Woods.
Who needs wildflowers?
Well, apart from their beauty, native wildflowers are an important part of the incredibly rich biodiversity of the woods. They provide food and habitat for a wide range of insects and other creatures, including the nationally rare heath fritillary butterfly that feeds on the common cow-wheat growing on the wide rides and coppiced woodland of the Blean. Bats and birds feed on the insects, and the nightjar, a rare and wonderful bird of southern heaths and open woodland glades, nests in between the clumps of heather in specially created and managed heathland areas.
Flowers we don’t need!
Sadly not all flowers are a welcome sight. A number of vigorous non-native species have escaped gardens or mistakenly been introduced into woodland areas. In parts of the Blean plants such as rhododendron and Spanish bluebells threaten to overwhelm our native species and so reduce biodiversity.
Please enjoy the flowers with your eyes, noses and cameras only! It is for all of the above reasons that it is a criminal offence to pick or dig up native wild flowers without express permission from the owners. |
After some Australian motorists drowned when their cars were swept away by floodwaters in June 2016, university researchers investigated how much force it takes to wash cars away from the road. A 1 tonne vehicle was moved by water 15 centimeters deep flowing at 3.6 km/hr. It was carried away in 60 centimeters of water. A 2.5 tonne vehicle was moved by 45 centimeters of water and began floating in 95 centimeters of water. The cars were moved so easily partly because even shallow water can be deceptively strong, and partly because modern cars are so air-tight that instead of taking on water they get pushed along by it. Even slow-moving water is powerful because water is heavy: each cubic metre weighs 1 tonne. They concluded that vehicles became vulnerable to moving floodwaters once the depth reached the floor of the vehicle. This is consistent with Queensland advice that “Water deeper than the bottom of your car door is enough to float your vehicle away, or splash the engine and cause it to stall”.
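A rough back-of-envelope sketch shows why: treating the car body as an air-tight box, the displaced water and the dynamic push of the current can be estimated directly. All vehicle dimensions below are assumptions for illustration, not figures from the study.

```python
RHO = 1000.0  # density of fresh water, kg/m^3 (1 tonne per cubic metre)

def displaced_mass_kg(length_m, width_m, depth_m, clearance_m=0.2):
    """Mass of water displaced once floodwater rises above the floor pan.

    Idealises the air-tight body as a box sitting above the ground clearance.
    """
    wetted_depth = max(depth_m - clearance_m, 0.0)
    return RHO * length_m * width_m * wetted_depth

def side_force_n(length_m, depth_m, speed_m_s):
    """Dynamic-pressure push on the upstream side: F = 1/2 * rho * v^2 * A."""
    return 0.5 * RHO * speed_m_s**2 * (length_m * depth_m)

# Assumed mid-size car: 4.5 m long, 1.8 m wide, 0.2 m ground clearance
print(displaced_mass_kg(4.5, 1.8, 0.60))  # ~3240 kg of lift in 60 cm of water
print(side_force_n(4.5, 0.15, 1.0))       # ~340 N of push at 15 cm deep, 3.6 km/h
```

Even with these crude assumptions, the water displaced at 60 cm outweighs a 1 tonne car several times over, which is why an air-tight vehicle can float well before the water looks dangerous.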
If a shallow river can be so powerful, a global flood would be an enormous catastrophic disaster. If this happened about 4,350 years ago, surely there would be some evidence of it still visible today. This blogpost is a summary of eight main points that were made by Dr Tasman Walker in a presentation on “Evidence of Noah’s flood in Australia”.
What would we expect to find on earth if there was a global flood as described in Genesis chapters 6-8 in the Bible?
Fractures in the earth’s crust
The two main sources of the water for the flood are described as “all the underground waters erupted from the earth, and the rain fell in mighty torrents from the sky” (Gen. 7:11-12NLT). Subterranean water burst from beneath the earth and there was torrential rain for 40 days. The flood waters rose to cover the highest mountains of the pre-flood world by 8 meters (Gen. 7:17-20). By the way, Mt Everest didn’t exist before the flood because there are sedimentary rocks with marine fossils on its summit.
If underground water was erupting from the earth on a global scale you would expect that the earth’s crust would be fractured. Today major fractures are seen in the earth’s crust around the edges of the continental plates. When these continental tectonic plates move against each other there are earthquakes and volcanic activity. Such geological activity around the Pacific Ocean is called the “ring of fire”.
So we would expect to find fractures in the earth’s crust and we do. These fractures are evidence of Noah’s flood.
Billions of dead things
If a catastrophic flood covered the earth for six months you would expect to find billions of dead things all over the earth. “Everything (except those on the ark) that breathed and lived on dry land died” (Gen. 7:22NLT). Because such a flood would transport huge amounts of sediment across the earth, most of the creatures that drowned would be buried in the sediments. Because such a flood would also transport huge amounts of sediment into the ocean and cause a depletion in oxygen levels, many marine creatures would die and be buried as well.
At Winton in Queensland, there are many well-preserved fossils of dinosaurs and marine creatures. Dinosaur fossils have also been found at Muttaburra (Queensland). These fossil-bearing sediments extend across the Great Artesian Basin into New South Wales, South Australia, and the Northern Territory, while marine fossils are found in other parts of Central Australia.
So we would expect to find billions of dead things (fossils) in sedimentary rocks and we do. These fossils are evidence of Noah’s flood.
Evidence of rapid burial
If a catastrophic flood covered the earth for six months you would expect to find evidence of rapid burial.
At Richmond in Queensland an exceptionally well-preserved marine reptile fossil was discovered in 1990. It’s a plesiosaur that’s over 4 meters long. In order to be preserved so well it must have been buried rapidly. Fossils of land animals are also found in this region, such as the ankylosaur (an armored dinosaur).
So we would expect to find evidence of rapid burial and we do. These fossils of creatures that were buried rapidly are evidence of Noah’s flood.
Sediment covering huge areas
If a catastrophic flood covered the earth for six months you would expect to find evidence of sediment covering huge areas.
Geologists find that layers of sedimentary rocks extend across large areas called sedimentary basins. They can contain coal, oil and gas that’s used as fossil fuels. For example, the Great Artesian Basin and the Sydney Basin. There are also examples in other continents. And there are also offshore sedimentary basins on the continental shelf of countries around the world.
So we would expect to find evidence of sediment covering huge areas and we do. These layers of sedimentary rock across huge areas are evidence of Noah’s flood.
Evidence of raging waters
If a catastrophic flood covered the earth for six months you would expect to find evidence of raging waters. As these flood waters would have been highly energetic, they would have transported material along with the flow.
The Three Sisters rock formation at Katoomba is composed of sandstone, which was laid down by water. This layer extends across the Sydney sedimentary basin. An examination of the sedimentary layers evident in road cuttings shows layers 1-2 meters thick, which indicates that a lot of water was involved in transporting and depositing this sediment. This water must have been continually rising (to enable continual deposition). There are many cross-beds that go at an angle across the strata. They are formed when the sediment layer builds sideways.
Uluru (Ayers Rock) in the Northern Territory is made of sandstone and the layers have been tipped up so they are almost vertical. These strata are visible as parallel lines on Uluru. This shows that there hasn’t been any significant erosion between the deposition of the strata. So there was rapid deposition – one layer was laid upon the other quite quickly. When we look at a geological cross-section through Uluru it is evident that a lot of sandstone has been eroded from above Uluru. The grains that comprise Uluru are angular, poorly sorted (a large range of particle sizes) and well-preserved (not weathered) which is consistent with rapid deposition.
Kata Tjuta (the Olgas) is a group of large, domed rock formations 25km west of Uluru. They are comprised of large boulders (30-50cm in size). These all face in a similar direction, which is the direction of the water flow that transported them to this site. They indicate highly energetic flood waters.
So we would expect to find evidence of raging waters (which transport and deposit lots of sediment) and we do. These cross-beds, parallel sedimentary strata and boulders are evidence of Noah’s flood.
Evidence of massive erosion
If a catastrophic flood covered the earth for six months you would expect to find evidence of massive erosion. After the flood waters peaked and subsided, they would have flowed off the continents and eroded material away as they flowed back into the ocean.
When you stand at Echo Point overlooking the Three Sisters, you see that Jamison valley is cut into a flat plateau. How did it get so flat? As the floodwaters moved across the continent, they eroded the surface flat. That’s how plateaus formed all around Australia and around the world. Jamison valley is much larger than any valley caused by Kedumba River that flows through it (the same is true for the Grand Canyon in USA). Geomorphologists call these overfit valleys – the valley is too big for the river. How did it get to be such a large valley? The valley was carved by a lot of water and not by the current river. As the floodwaters subsided, when hills became exposed, the water carved out large valleys transporting the sediment out of the area.
This is also evident at Carnarvon Gorge in Queensland at the intersection of the Great Artesian Basin and the Bowen Basin. A large valley has been cut into a sandstone plateau that’s capped by basalt. As material has been eroded away, these sedimentary layers originally covered a much larger area.
As a result of such erosion, rivers can flow through mountain ranges rather than around them. For example, the Heavitree Gap in the MacDonnell Ranges near Alice Springs. How did that happen? Many explanations have been proposed, but none of them work. As the floodwaters subsided, the higher parts of the ridge became exposed and the water flowed through the gap between them. As the waters continued to drop, they continued to erode this gap until, when the water had all subsided, the gap remained, and a river flows through it today. It’s called an air gap if it doesn’t go down to the level of the adjacent surface.
So we would expect to find evidence of massive erosion and we do. These large overfit valleys are evidence of Noah’s flood.
Evidence of youthfulness
If a catastrophic flood covered the earth for six months about 4,350 years ago, you would expect to find evidence of youthfulness.
At Kata Tjuta there are a few boulders lying around, but not many. And there is a small apron around it, but not a large one as if it had been eroding for millions of years. And there’s very little debris around the base of Uluru or Kata Tjuta. This indicates that it was eroded recently.
So we would expect to find evidence of youthfulness and we do. The lack of erosional debris is evidence of Noah’s flood.
Worldwide memories of the flood
All of the people of the earth are descendants of the eight people on Noah’s ark. As the global flood occurred about 4,350 years ago, you would expect to find memories of Noah’s flood in the different people groups around the world.
Cultures around the world have flood legends (or stories). For example, the Bundaba Flood Story of the Aboriginals at Broome is given in the appendix. Common features in the stories are that there was a moral cause, people were drowned, there were people saved in a boat, and there was a bird.
So we would expect to find worldwide memories of the flood and we do. These flood stories are evidence of Noah’s flood.
There’s plenty of evidence in Australia of Noah’s flood. Evidence of eight aspects of Noah’s flood show that what we observe is consistent with what the Bible says. This flood is a key to connecting the Bible to the world around us. It explains the sedimentary rocks and the fossils. And it washes away the millions of years that are assumed by evolutionists.
It also helps us understand the world. It makes sense of biblical creation. Death and suffering came after Adam and Eve and not before them because they were a consequence of sin. Whereas according to the idea of evolution, death and suffering over millions of years brought about our existence.
Questions and answers
What is rapid burial?
When animals and fish die today they disintegrate and are recycled; they aren’t fossilized. So, how were the fossils preserved? If creatures are buried quickly, scavengers cannot reach them, and bacterial decay of their bodies is limited. So rapid burial is necessary to produce fossils.
What about continental drift?
Like evolutionists, creationists fit the evidence into their world view. There is a creationist model of how the continents were all together before the flood and they broke apart during the flood (catastrophic plate tectonics). The earth’s mantle (beneath the earth’s crust) can suddenly lose its strength under high temperature and high pressure. So the continental movement could have happened very quickly (continental sprint) during Noah’s flood. In the second half of the flood the ocean basins sank and the continents rose: “Mountains rose and valleys sank to the levels you decreed” (Ps. 104:8NLT).
What does “the earth was divided” in the time of Peleg mean (Gen. 10:25)?
This is just before the tower of Babel when God divided the people into different language groups and they dispersed across the earth (Gen. 11:1-9). This is what we believe it means. It couldn’t be the separation of the continents because if it happened a few hundred years after the flood that would be a huge catastrophe and many people would perish and there is no evidence of this in Scripture.
When was Mt Everest formed?
The earth’s crust moved during the flood. The mountain ranges like Mt Everest were elevated towards the end of the flood. The mountains we see today rose up at this time. The shapes of the mountains were carved by the waters of the flood (and any post-flood ice).
Do we know how high the mountains were before the flood?
No. We know there were mountains before the Flood because the Bible speaks of them (Gen. 7:19-20). But we don’t know how high they were. Some creation geologists speculate that they weren’t as high as those today.
What about the ice age?
It happened after the flood. The flood is the only thing that explains the ice age. It was due to warm waters after the flood caused by the addition of hot subterranean water and by heat from volcanic activity. And large amounts of volcanic dust and aerosols in the atmosphere would have reflected solar radiation back into space causing low atmospheric temperatures. Warm oceans evaporate water, which then moves over the land. Cold air over the continents results in this water precipitating as snow. And the snow accumulates forming ice. Because the ice was not fully melted the following summer, the ice built up from year to year. It has been estimated that the ice accumulated for 500 years after the flood and then retreated to where it is today over another 200 years. But evolutionists don’t have an adequate explanation for the ice age.
What about global warming?
Climatic modelers try to include the ice age, but they don’t include Noah’s flood. They think that the earth’s atmosphere is unstable and that a little change will tip it over the edge. In fact, the earth’s climate is very stable – after the huge climatic disturbance of the global flood, it took about 700 years to come back to equilibrium.
What about the decrease in longevity?
Before the flood people typically lived over 900 years. After the flood this decreased exponentially towards 100 years (David) and then 70 years. It was probably a genetic change and not an environmental one because after the flood Noah lived 350 years (Gen. 9:28) and Shem lived 500 years.
What about the Behemoth described in Job 40?
We believe it was a brachiosaur (sauropod) dinosaur. The size of its tail is one of the reasons. We believe that dinosaurs and people lived together. They were called dragons and other names because “dinosaur” is a modern name.
Appendix: The Bundaba Flood Story
Long, long ago there was a great flood. It happened because some children found the “winking” owl and plucked out all its feathers. The bird flew without wings, into the heavens and showed himself to Ngowungu, the Great Father. Ngowungu became very angry and decided to drown the people.
Later the people saw a small cloud rising which grew bigger and bigger till it spread all over the sky. The thunder began to roll and crash and the people were greatly afraid. With the rain and thunder was a terrible wind which broke great limbs off trees and rooted up others. During this terrible storm there was a noise above the awful crashes of thunder. This noise was coming from the north. The salt water, the sea, came pouring over the ranges from the north. The flood rose higher and higher till all the land was covered except the tops of two or three mountains.
From further west a man and his wives with a dog were battling their way in a canoe when a bird with a leaf in its mouth flew in front of them, showing them the way to Mt. Broome. They eventually reached Mt. Broome and landed there, where some other survivors were.
Then Djabalgari, the great left-handed man incised his little finger and let the blood trickle down into the flood waters. The waters began to go down and eventually disappeared off the country. All other people were drowned.
Acknowledgement: This blogpost was sourced from a presentation by Dr Tas Walker (a geologist) on “Evidence of Noah’s flood in Australia”.
Written, July 2016
In March 2016 the NSW Environment Protection Authority served notice requiring a company to conduct a mandatory environmental audit of its waste oil processing facility near Maitland. This followed a pattern of environmental non-compliance at the facility, including serious breaches involving air emissions and water discharges. The audit of site practices and procedures includes assessment of testing waste products, operation and maintenance of pollution control equipment, bunding and spill management, and potential impacts on groundwater. In this post we carry out an audit of the naturalistic explanation of the origin of life.
In 1999 New Holland published a book, ‘In six days: why 50 scientists choose to believe in creation’. The editor, Dr John Aston, noted in the preface that:
‘Why would educated scientists still believe in creation? Why wouldn’t they prefer to believe in Darwinian evolution or even theistic evolution, where an all-powerful intelligence is seen as directing the evolutionary processes? Could scientists believe that life on earth is probably less than 10,000 years old? How would they deal with the evidence from the fossil record and the ages suggested by the radioactive dating of rocks as millions and billions of years old?’
‘During the past century, the biblical story of Genesis was relegated to the status of a religious myth and it was widely held that only those uneducated in science or scientific methods would seriously believe such a myth. However, my experience in organizing this book, is that there is a growing number of highly educated critically thinking scientists who have serious doubts about evidence for Darwinian evolution and who have chosen to believe in the biblical version of Creation.’
The scientists gave their personal response to the question: ‘Why do you believe in a literal six-day biblical Creation as the origin of life on earth?’ The responses were divided into two categories ‘Science and Origins’ (dealing with the scientific critique of evolution as well as the scientific basis for creation) and ‘Religion and Origins’ (dealing with a more philosophical approach to the question of evolution and creation). My contribution was in the latter section (p.322-327).
There are two main views about the origin of the universe and the origin of life: those based on naturalism and those based on an intelligent Creator. As these events occurred long ago and are not subject to direct observation or experimental tests, both of these perspectives are mainly philosophical beliefs based on certain assumptions about the physical world.
This fact is ignored or distorted in most modern treatments of the topic of origins. For example, the March 1998 issue of National Geographic included an article titled, ‘The rise of life on earth’. The editor of the magazine wrote concerning this article on the origin of life: ‘Science is the study of testable, observable phenomena’, and religious faith is ‘an unshakeable belief in the unseen’. This ‘straw man argument’ diverts the discussion away from the issues of science and logic to the separate topic of science versus religious faith. It also ignores the fact that there are no obvious ‘testable, observable phenomena’ on the origin of life. Furthermore, the language used in the article demonstrates that naturalism also relies on faith in the unseen.
The naturalistic view of origins is that everything that exists can be explained by physical and chemical processes alone. This differs from the view that matter, energy, physical and chemical processes and life were established by a Creator as revealed in the Holy Bible.
Searching for truth
An environmental auditor relies on two main factors: objective evidence and agreed standards. The outcome of each part of an audit depends on comparing the observable evidence against the relevant standard. Of course, environmental standards change in time and space across the world. Similarly, any explanation of origins should be consistent with the body of ‘observable evidence’ and any relevant ‘standards’. This is complicated by the fact that the evidence is viewed today, a long time after the beginning of the universe and life. Also, in a changing world, it is not immediately obvious which standards are relevant. The Bible is the only reliable and consistent source of truth; it is like a fixed frame of reference. Other authorities, such as science and logic, are not sufficient, as they may change in time and space; they are like a changing frame of reference.
The laws of physics and chemistry are examples of the relative standards of science, which change with time as knowledge develops. They were developed under present conditions and assume that the universe already exists. Two of these fundamental laws are that life always comes from earlier life and that mass/energy is conserved. Applying them to the origin of life assumes that all these conditions were true at that time. To say, then, that naturalism explains the origin of life is ‘circular reasoning’, as the outcome is largely determined by the assumptions made. Although these laws may describe the present world, it would be a gross assumption to extrapolate them back to the unobserved initial conditions. Yet this is done frequently by those with a naturalistic viewpoint, without acknowledgement of the uncertainties involved and the limitations of the scientific method.
The assumptions of both naturalism and biblical creation and the principles of the scientific method are stated clearly in W Gitt’s ‘Did God Use Evolution?’ 1993, CLV Christliche Literatur-Verbreitung e. V.
The Bible is a source of ‘absolute’ truth that has stood the test of time much longer than any other document or philosophy. Of course, as in the case of any literature, it requires interpretation as to what is historical and what is metaphorical or symbolic. Besides obvious literary techniques, the most reliable method is to use the whole message of the Bible to interpret any particular passage. Otherwise, an interpretation may not be consistent with the rest of the Bible.
The Bible contains three clear tests for determining whether a belief, teaching or philosophy is true or false. To be true it must pass each of the three tests:
The Jesus test: This test states that, ‘Every spirit that acknowledges that Jesus Christ has come in the flesh is from God, but every spirit that does not acknowledge Jesus is not from God. This is the spirit of the antichrist … This is how we recognize the Spirit of truth and the spirit of falsehood’ (1 Jn. 4:2-6NIV). The question to be answered in this test is: What does it say about Jesus Christ? The Bible teaches that Christ was unique: divine and human, sinless, eternal and the Creator. It is false to deny that Christ was the divine Son of God. Beliefs that fail this test usually claim that Christ was, at best, a great teacher or a prophet. They may even encourage the view that Christ and other events in the Bible are mythical.
The gospel test: The Bible warns about those promoting a different gospel, ‘If anybody is preaching to you a gospel other than what you accepted, let them be under God’s curse!’ (Gal.1:9). The question to be answered in this test is: What is its gospel? In other words: what is the core belief or hope? The Bible says that the root cause of all our problems is that everyone has sinned and fallen short of God’s requirements—resulting in death. The only means of rescue is salvation by faith in Christ. ‘Different gospels’ are those that differ from this. They either add to it or take away from it. There is a warning against adding to or taking away from the words of the Bible (Rev. 22:18-19). Broader aspects of the gospel include the original creation and the ultimate restoration of all things (Rev. 4:11; 21:1-22:6). We need to be careful when applying this test because a ‘different gospel’ may deceive by using words similar to the true gospel but give them different meanings.
The fruit test: Jesus Christ warned, ‘Watch out for false prophets. They come to you in sheep’s clothing, but inwardly they are ferocious wolves. By their fruit you will recognize them’ (Mt. 7:15-20). The question to be answered in this test is: What kind of fruit is evident? In other words, what type of attitudes and behavior does it encourage? Is the divine nature or the sinful nature most evident? The former is characterised by the fruit of the Spirit: love, joy, peace, patience, kindness, goodness, faithfulness, gentleness and self-control. The sinful nature may involve: idolatry, sexual immorality, selfish ambition, pride, hostility, quarrelling and outbursts of anger (Gal. 5:19-23).
These tests will now be used to assess the naturalistic view of origins.
The Jesus test: As naturalism means that nature is all there is, it is associated with atheism. For example, the American Association of Biology Teachers states that: ‘The diversity of life on earth is the outcome of evolution: an unsupervised, impersonal, unpredictable and natural process of temporal descent with genetic modification that is affected by natural selection, chance, historical contingencies and changing environments.’
This view of origins has no need for a Creator or the divine, and so is consistent with a belief that Jesus Christ was only a human being and not divine. Naturalism clearly fails the Jesus test.
The gospel test: As naturalism assumes there is no God, it accepts no absolute standards of ‘right’ and ‘wrong’, and rejects the existence of ‘sin’ in the sense of falling short of God’s standard. Therefore, it teaches that there is no need of a savior. Its gospel is that nature has made itself and the Genesis account of origins is not true. A biblical consequence of this is that if there was no paradise at the beginning as described in Genesis, then there can be no hope for a future paradise (Acts 3:21). In fact, naturalism rejects all the basic biblical truths, such as: creation, the beginning of evil, the need for salvation and the ultimate destiny of human beings. So, naturalism fails the gospel test.
The fruit test: Naturalism supports and is associated with materialism, humanism (humanity is self-sufficient, capable of solving all their difficulties) and pantheism (‘nature’ replaces God). Its acceptance leads to: less value on human life (practices such as abortion and euthanasia become more acceptable; racism is another example from the past); less value on family life (biblical marriage is less important; divorce is more acceptable); less value on morals (truth is now relative, not absolute); and a ‘might is right’ attitude that supports the strong but not the weak (survival of the fittest; a competitive world; compassion involves saving ‘weak genes’). As these are opposite to the values of the Bible, naturalism fails the fruit test.
It is clear from this that the viewpoint of naturalism fails all the three biblical tests for determining what is true. Therefore, it is false and is not consistent with the overall message of the Bible.
Due to the influence of the above philosophies, claims are often made in the name of ‘science’ that go far beyond the available evidence, and some aspects of modern science have become increasingly tenuous and speculative. In fact, the everyday use of the word ‘science’ has changed from dealing with things that are observable and testable to meaning ‘naturalism’ and so includes conjecture and dubious hypotheses.
Although we live in a ‘cause-and-effect’ universe, ultimate causes, such as origins, are outside the realm of reliable science. Science can only reliably deal with the present world; it cannot reliably deal with the past (such as origins) or the future (such as ultimate destinies), as it cannot directly observe these. I believe all scientists should be wary of their assumptions, as these can largely determine their findings. They should also be wary of extrapolations outside the range of observation. The further the extrapolation, the less reliable the prediction. Changes in the assumptions will change the prediction. This applies in particular to boundary conditions, such as those involving initial conditions (or origins). Therefore, scientists can only speculate, imagine and guess about the origin of life.
Dr Hawke is a Senior Environmental Consultant with an electricity supply company in Sydney, Australia. He holds a BSc with first class honors in Physics from the University of Sydney, and PhD in Air Pollution Meteorology from Macquarie University. Over the past 22 years, Dr Hawke has worked as an environmental scientist and environmental consultant for a state government regulatory authority and the electrical power industry. He is also a Certified Environmental Auditor with the Quality Society of Australasia.
Published in 1999
Today there is a national election in Australia. Key election issues include: the economy, jobs, health, education and the environment. The political parties seeking election included the Greens, the Renewable Energy Party, the Animal Justice Party, and the Sustainable Australia Party. This post looks at the foundation of the ethics and morals of the environmental movement.
I gave this message at a conference in 1998. It’s based on the situation over 18 years ago. Although the examples are now historical, most modern examples would be similar in many ways. Many people are still concerned about the natural environment.
Concern for the environment and pollution affects us all: we see and hear about it in the daily news media; it’s taught at all levels of education; it affects all businesses in some way; and governments pass more and more laws about it. In 1996, the first national “State of the Environment” report said that “Australians are among the most environmentally aware people in the world”.
My background is in science (physics and mathematics) and environmental science. I am a certified environmental auditor, who audits environmental management systems for industry and businesses. In this message I will present the results of an audit of the foundations of the environmental movement and of modern science. So we are looking at basic beliefs, values, viewpoints and assumptions. The findings will be compared to the Bible, which I believe is God’s guidebook for humanity.
Environmentalism involves concern for the physical world, such as advocating protection and conservation of the natural environment. It’s a complex and recent subject that has developed over the past 30 years.
Model of aspects of environmentalism
A schematic diagram of aspects of environmentalism provides a framework for this message. The two main aspects of environmentalism are the principles, which are what we believe, and the practices, which are what we do. Our principles (assumptions and values) have a strong influence on our behavior. That’s why they are sometimes called “guiding principles”. This message is focused on the principles that can drive environmentalism.
According to the Brundtland Report, “Our Common Future” (1987), “to achieve the goals of sustainable development, good environment, and decent standards of life for all involves very large changes in attitude”. Where do these attitudes come from? Our minds. If we are consistent, they are the principles that drive our practices. For example, if we believed in the golden rule (treat others as we would like them to treat us), then we would help others. Or, if we are selfish, we may ignore others or exploit them. But other things besides our principles can influence our behavior.
The schematic diagram shows how our assumptions and circumstances can also influence our behavior. For example, in the case of global warming: the principles are our worldview, values and ethics; the science is the mathematical models that predict temperatures and sea levels; the assumptions are those made in the scientific predictions; the circumstances are the technology available and the particular situation in each nation; and the practices are what each nation does in response to this issue.
In environmental auditing we begin by checking compliance with the organisation’s environmental policy because it contains their guiding principles, including philosophy, values, and ethics.
This message is focused on the principles of environmentalism and the assumptions of science. What are they? And, how do they compare with the Bible?
Principles of environmentalism
Environmentalism is based on a viewpoint that nature should be valued and protected. This is a pro-environment/conservation world view. Many world views have been explored in attempts to develop an ethical basis for environmentalism. There is a range of overlapping viewpoints and philosophies within the environmental movement, which can lead to conflicts. The three main categories of principles are based on the three main parts of our world. They are: human-centered, nature-centered, and God-centered.
Nature is our life support system; we depend on it for survival. So people have a self-interest in the preservation of their environments. It’s important because of its impact on people. For example, ozone depletion in the upper atmosphere can increase risk of skin cancer. So we want to protect stratospheric ozone.
The two main ideas in this approach are conservation and preservation. Nature is a resource that needs to be conserved for human needs. So, Government Forestry Services manage forests to maintain productivity. Nature also needs to be protected for the enjoyment of all people. For example, zoos and nature parks.
Sometimes people can have a negative impact on the environment. For example, the exploitation of nature without consideration for sustainability.
This introduces the idea that nature has intrinsic value – it should be preserved unless it conflicts with something of greater value. In this category we will look at two approaches: species centered and ecosystem centered.
The first approach says that species have rights or intrinsic value. For example, animal rights are promoted – as animals have a value of their own, we should seek to minimise our impact on them. This can lead to treating other species as though they were human. Stephen Jay Gould advocates applying the golden rule to nature and the environment: “Do to others as you would have them do to you” (Lk. 6:31NIV). Similar rules exist in other religions as well. This means treating nature as we would want to be treated. But try applying this to an ant! It would be difficult to avoid killing ants as we walk around.
The second approach acts for the good of all nature, not just human interests. This is more holistic as it involves the whole ecosystem/biosphere. This can lead to reverence for nature and wilderness, such as deep ecology. Here all natural things (ecosystems, life, landscape) have an intrinsic right to exist and there is a feeling of being connected with nature. This in turn can lead to Gaia theory (which is named after the ancient Greek earth goddess), where the earth is viewed as a single organism, like a living thing. It claims that evolution is not random, but is directed by Gaia.
These modern ideas are similar to ancient ones where nature has a spirituality. Animism is the belief that all natural objects and the universe possess a soul. And pantheism is the belief that God is not a personality but a force; that the universe exists of itself; that all natural happenings are God (God is everything and everything is God); and that Mother Nature replaces God.
Examples of these principles of environmentalism are given in Appendix 1.
Problems with these principles of environmentalism
Human-centered environmentalism is not sufficient, as it omits much of the ecosphere. So most environmentalists have stopped using this approach.
Nature-centered environmentalism also has limitations, particularly with regard to species rights, sanctity of life and intrinsic value. For example: How can we determine priority between species? Is there a hierarchy of rights? Catastrophes (e.g. fires, droughts, storms, volcanic eruptions, earthquakes) that kill huge numbers of organisms are a part of nature. So nature can be destructive. It does not act as a perfect God, unless you believe in a God who can be evil. As species are interdependent (can be linked by a chain of dependence), this leads to saying that “all aspects of nature have intrinsic value” – but it is impossible to preserve everything. And it doesn’t help to solve day-to-day environmental problems.
We now turn to the Biblical viewpoint of the physical environment (values, principles, truths). We need to realize that the Bible contains basic principles which can be applied to all areas of our life. It contains God’s plans for the natural world (its history and its destiny) and how He intends us to live in it.
We will look at three Biblical principles here: creation, the gospel, and stewardship.
Doctrine of creation
This has two parts: God as creator, and God as sustainer. First, God created everything. “God made the world and everything in it” (Acts 17:24-28). “By faith we understand that the universe was formed at God’s command, so that what is seen was not made out of what was visible” (Heb. 11:3). So God is the sole source of all that exists. “Everything God created is good” (1 Tim. 4:4). Jesus is “the author of life” (Acts 3:15), “He made the universe” (Heb. 1:2). “All things were created through Him and for Him” (Col. 1:16). “God saw all that He had made, and it was very good” (Gen 1:31) – Eden was paradise.
Creation is separate from the Creator. “They exchanged the truth of God for a lie, and worshipped and served created things rather than the Creator” (Rom 1:25).
God owns creation. “The earth is the Lord’s, and everything in it, the world, and all who live in it” (Ps. 24:1).
The awe and beauty of nature. “For since the creation of the world God’s invisible qualities–His eternal power and divine nature–have been clearly seen, being understood from what has been made, so that people are without excuse” (Rom. 1:20).
The relationship between God, people and nature can be summarized as follows. God is infinite and personal. People are finite and personal. Animals, plants and machines are finite and impersonal. So humanity has special value; we share personality with God. We were made in God’s image, and people still have some of God’s image (Gen 9:6). Also, God came to earth as a man. So the Bible says that humans are both a part of nature (though not on the basis of biological unity) and apart from nature (like God). Nature is not our Mother; it is our brother and sister (as we are both created things).
“For this is what the LORD says– He who created the heavens, He is God; He who fashioned and made the earth, He founded it; He did not create it to be empty, but formed it to be inhabited” (Isa 45:18). So, creation has value because God made it and owns it.
Second, God sustains everything.
Jesus – “sustaining all things by His powerful word” (Heb. 1:3). “In Him all things hold together” (Col. 1:17). “in Him we live and move and have our being” (Acts 17:28). “Nothing in all creation is hidden from God’s sight” (Heb. 4:13). So the Bible teaches that God sustains natural processes. The creation is dependent on the Creator for its continuing existence.
This includes the forces that hold things together (such as nuclear forces and gravity). Without Him all things would fly apart! God also cares for birds and vegetation. “Look at the birds of the air; they do not sow or reap or store away in barns, and yet your heavenly Father feeds them. Are you not much more valuable than they?” (Mt. 6:26). “And why do you worry about clothes? See how the flowers of the field grow. They do not labor or spin.” (Mt.6:28). “If that is how God clothes the grass of the field, which is here today and tomorrow is thrown into the fire, will He not much more clothe you – you of little faith?” (Mt. 6:30).
We can view God’s power and presence in nature as being like electricity flowing through a wire. The wire is not the electricity, but it can be the vehicle through which the electricity flows. God is not nature and nature is not God. To think that would be to think like a pantheist, not a Christian. But in this sense, God is in nature.
“Your bodies are temples of the Holy Spirit, who is in you … Therefore, honor God with your bodies” (1 Cor. 6:19-20). Our bodies and senses should be used and appreciated for God. Similarly, all creation has been made by God and He sustains it, therefore, honor God as you interact and appreciate the physical world.
“Do not offer any part of yourself to sin, as an instrument of wickedness, but rather offer yourselves to God, as those who have been brought from death to life; and offer every part of yourself to Him as an instrument of righteousness” (Rom. 6:13). So the body and the physical world can be viewed as an instrument (or tool) which can be used for good or bad. We should honor God in our way of living in the material world – and work out what this means in the various areas of our life.
The gospel is the good news, that addresses the bad news. God created a perfect universe, but because of the fall into sin when Adam and Eve disobeyed God, the universe is now flawed. To fix the situation, God sent Jesus to enable redemption and restoration. Those who accept what Jesus did are promised eternal life in the new heaven and new earth, while those who reject it face eternal punishment.
The fall into sin led to suffering, decay and death (Gen. 3; Rom. 8). Genesis 3 is one of the most important chapters in the Bible. God cursed not only people, but also nature, because of human sin. It explains the problem of evil in our world, in both humanity and in nature. It’s the ultimate cause of environmental problems. We live in a fallen world, different to the original condition of “very good”. Nature is abnormal, and it can be destructive. Environmentalists try to stop death in the environment. The fall explains death.
Now let’s look at redemption and restoration. Christians are seen as being part of a new creation: “Therefore, if anyone is in Christ, the new creation has come: the old has gone, the new is here!” (2 Cor. 5:17). Through Jesus, people can be reconciled to God. The biblical visions of the kingdom of God are visions of people in harmony with nature. The Bible teaches that the effects of the curse on nature will end and nature will be restored to its original splendor (it will be a sinless, deathless paradise, reconciled to God). Nature will also enjoy with Christians the effects of redemption.
“I consider that our present sufferings are not worth comparing with the glory that will be revealed in us. The creation waits in eager expectation for the children of God to be revealed. For the creation was subjected to frustration, not by its own choice, but by the will of the one who subjected it, in hope that the creation itself will be liberated from its bondage to decay and brought into the freedom and glory of the children of God.
We know that the whole creation has been groaning as in the pains of childbirth right up to the present time. Not only so, but we ourselves, who have the firstfruits of the Spirit, groan inwardly as we wait eagerly for our adoption to sonship, the redemption of our bodies. For in this hope we were saved. But hope that is seen is no hope at all. Who hopes for what they already have? But if we hope for what we do not yet have, we wait for it patiently” (Rom. 8:18-25). So all of creation is looking for redemption by God; not by people like us.
Christians share the gospel message with many people, even though they know that probably only a few will respond. Likewise, Christians ought to be willing to care for the created world, even though they know that they can’t bring full restoration.
Our bodies and the physical world will be transformed one day (like Jesus after His resurrection). The restoration will be through Jesus; “to reconcile to Himself (God) all things, whether things on earth or things in heaven” (Col. 1:20). “Heaven must receive Him (Christ) until the time comes for God to restore everything, as He promised long ago through His holy prophets” (Acts 3:21). When God judges the ungodly, the earth will be destroyed by fire and replaced by a new heaven and a new earth (2 Pt. 3:7-13). And we “are looking forward to a new heaven and a new earth, where righteousness dwells”.
God will then live with mankind as in the Garden of Eden, “There will be no more death or mourning or crying or pain, for the old order of things has passed away” (Rev. 21:1-8).
Doctrine of Stewardship
God told Adam and Eve, “Be fruitful and increase in number; fill the earth and subdue it. Rule over the fish in the sea and the birds in the sky and over every living creature that moves on the ground” (Gen 1:28). “Subdue” (“kabash” in Hebrew) means to conquer. “Rule” (“radah” in Hebrew) is generally used to describe the righteous and loving rule of a good and kind king. For example, King Solomon “ruled over all the kingdoms west of the Euphrates River, from Tiphsah to Gaza, and had peace on all sides. During Solomon’s lifetime Judah and Israel, from Dan to Beersheba, lived in safety, everyone under their own vine and under their own fig tree” (1 Ki. 4:24-25).
God told Adam how this rule is to be carried out. “The LORD God took the man and put him in the Garden of Eden to work (“abad” in Hebrew) it and take care (“shamar” in Hebrew) of it” (Gen. 2:15). Elsewhere “abad” is translated “serve” (e.g. “we will serve the Lord”, Josh. 24:15) and “shamar” is translated “keep”, “watch” or “preserve” (e.g. “The Lord bless you and keep you”, Num. 6:24). God keeps His people in such a way as to demonstrate His great love and care. All this was given before the fall of man, so there is no suggestion of evil or exploitation of nature here. So, Adam managed the Garden of Eden. Before the fall there was perfect harmony between humanity and the environment.
As God owns the world, Christians can be seen as His stewards (or managers, a delegated authority). A “steward” is a manager of a household (e.g. Lk. 16:1-9). Peter also used it as a metaphor for believers, “Each of you should use whatever gift you have received to serve others, as faithful stewards of God’s grace in its various forms” (1 Pt. 4:10).
Stewardship means caring for creation as God would. And we are accountable to God. For example, in the Old Testament there was a Sabbath rest for animals and a Sabbath year rest for agricultural land (Ex. 20:10; 23:10-11).
The assumptions of modern science
Science provides a useful method for finding out things about the way the world works. The assumptions and boundaries of “science” largely determine the findings of science. Only theories consistent with these are acceptable to science. We will look at three major assumptions of science.
Doctrine of Naturalism
Science assumes a naturalistic world where the physical universe is all that exists. Nature is all there is. So, everything is explained in terms of mechanical processes. God only exists as an idea in the minds of religious believers. Naturalism is associated with materialism (there is only matter – no unseen world of souls, spirits or deities) and atheism (there is no God). This limits science to naturalistic theories. As science excludes the supernatural (by definition), a model or explanation that incorporates supernatural intervention (e.g. creative intelligence) cannot be called “scientific”. Therefore, “Creation science” is impossible. As a result, science is unable to disprove the spiritual, as whatever it discovers is “natural” by definition.
Doctrine of Evolution
Science assumes an evolving world (mainly because the only alternative is an act of creation by a God). Mutation and selection are assumed to be the driving forces of evolution. As naturalism and evolution are assumptions of science, science cannot be used to prove these.
Examples of these doctrines of science are given in Appendix 2.
Doctrine of Uniformity
Science usually assumes the present is the key to the past and the future. Sometimes there is immense extrapolation into the past (e.g. speculation on the origin of life) and into the future (e.g. speculation on global warming), without proper consideration of assumptions and uncertainty. I call this speculative science. It fails to recognize that many deep questions are unanswered and will probably never be definitively answered, given the limits of science. For example, how was the universe created?
Consequences for Christianity
Once, science was based on what could be observed, repeated and tested. But these assumptions have been added in such a way that anything outside their scope is deemed to be unscientific and false. When applied to situations outside the scope of observational science, this approach renders all other viewpoints false. For example, it means that there is no need to prove that evolution happened. Instead, its proponents simply assert that it happened, with no need for rigorous proof. In this way, science has used evolution to destroy Christianity. This is explained below.
Biblical viewpoint of evolution
Putting the doctrine of evolution (one of the assumptions of modern science) to the test. The Bible contains three clear tests for determining what is true and false:
The Jesus test: Who was Jesus Christ?
“This is how you can recognize the Spirit of God: Every spirit that acknowledges that Jesus Christ has come in the flesh is from God, but every spirit that does not acknowledge Jesus is not from God …. This is how we recognize the Spirit of truth and the spirit of falsehood” (1 Jn. 4:2-6).
The gospel test: Is it a different gospel?
“I am astonished that you are so quickly deserting the one who called you by the grace of Christ and are turning to a different gospel–which is really no gospel at all. Evidently some people are throwing you into confusion and are trying to pervert the gospel of Christ. But even if we or an angel from heaven should preach a gospel other than the one we preached to you, let them be under God’s curse!” (Gal. 1:6-8).
The fruit test: What kind of fruit is evident?
“Watch out for false prophets. They come to you in sheep’s clothing, but inwardly they are ferocious wolves. By their fruit you will recognize them. Do people pick grapes from thornbushes, or figs from thistles? Likewise, every good tree bears good fruit, but a bad tree bears bad fruit. A good tree cannot bear bad fruit, and a bad tree cannot bear good fruit. … Thus, by their fruit you will recognize them” (Mt. 7:15-20).
Here’s how evolution goes in these tests.
The Jesus test
According to evolution, there is no need for a Savior. Jesus was only a human being, not divine. He was not the Creator (as there is no need for one), or the “second Adam”, as there was no Adam who disobeyed God in the first place. So, it fails the Jesus test.
The gospel test
The gospel according to evolution is compared with the gospel according to the Bible in the schematic diagram. This shows they are totally different. And evolution undermines all aspects of the gospel – all basic Biblical truths.
Evolution provides a new creation story, “As a story of creation, the book of Genesis long ago crumbled under the weight of science, notably Darwin’s theory of natural selection“ (Time, 4 Nov. 1996, p80).
If evolution is true, then death and suffering are not the result of sin. “According to Genesis, nature is in essence benign … But according to Darwinism, the evil in nature lies at its very roots, instilled by its creator, natural selection” (Time, 4 Nov. 1996, p81). The biological roots of sin are attributed to impulses that arose by natural selection and that were then inherited as they enhanced the chances of survival and reproduction. This means that sin, death and suffering are an inherent part of nature from the beginning of time.
So there is no need for a Savior and heaven and hell don’t exist. Its message is that “salvation comes through science”.
The fruit test – the fruits of evolution
The idea of evolution supports and is associated with: naturalism, materialism, atheism, humanism (humanity is self-sufficient, capable of solving all its difficulties), and pantheism.
Acceptance of the idea of evolution leads to the following:
Less value on human life (practices such as abortion and euthanasia are more acceptable). Another example from the past is racism (e.g. Australian Aboriginals were considered to be biologically inferior to Europeans. This was justified by biological determinism promoted by evolutionary anthropology).
Less value on family life (marriage is less important; divorce is more acceptable)
Less value on morals (truth is now relative, not absolute).
A “might is right” attitude, which supports the strong, but not the weak (survival of the fittest, a competitive world, compassion involves saving “weak genes”).
These are fruits of the sinful nature, not the divine nature. So the doctrine of evolution is a major cause behind many of the problems in our society.
Results of the tests
So the “doctrine of evolution” fails all three Biblical tests. This means it’s a false doctrine, an idol, the creation story and religion of modern science.
Secular environmentalism represents a new religion (see schematic diagram). By trying to introduce ethics and morals into a world that has discarded the Bible, most environmentalists adopt ethics which are centered on humanity or nature and they follow the idols of: humanism, atheism or pantheism. These are all justified by belief in evolution (which is also an idol). Idolatry is following ideas that replace the Creator God. Although they claim to be wise, such environmentalists are foolish because their actions are based on a lie (a false idea) (Rom. 1:22, 25). Due to the influence of these philosophies, claims are often made in the name of science that go far beyond the available evidence.
But the Bible gives us a God-centered view of the world, it reveals the Creator, and gives us responsibility to care for the creation as God’s stewards. Biblical environmentalism (see schematic diagram) can be based on Biblical principles and assumptions. The principles include: creation, sustenance, the fall (these three show that in some respects, the past is the key to the present), redemption, restoration and stewardship. Besides the natural world, this assumes the supernatural (there is more than the physical world), special creation (which can’t be explained by current laws), and possible catastrophes (so we need to be careful when extrapolating). Let’s care for creation as God’s stewards (or managers).
Examples of principles of environmentalism
The Rio declaration on Environment and development (1992) has 27 principles, including:
Principle 1 “Human beings are at the center of concerns for sustainable development. They are entitled to a healthy and productive life in harmony with nature” – Human centered
Principle 3 “The right to development must be fulfilled so as to equitably meet development and environmental needs of present and future generations” – Human centered
Principle 7 “States shall cooperate in a spirit of global partnership to conserve, protect and restore the health and integrity of the earth’s ecosystem” – Ecosystem centered
Agenda 21 is the program to implement the Rio declaration. It proposes a program for action for sustainable development. Its Preamble says:
“Humanity stands at a defining moment in history. We are confronted with a perpetuation of disparities between and within nations, a worsening of poverty, hunger, ill health and illiteracy, and the continuing deterioration of the ecosystems on which we depend for our well-being. However, integration of environment and development concerns and greater attention to them will lead to the fulfilment of basic needs, improved living standards for all, better protected and managed ecosystems and a safer, more prosperous future” – Human centered
The National “State of the Environment” report says, “Preserving Australia’s biodiversity is important for four reasons”. One of these is Ethics which means that “no species and no generation has the right to remove earth’s resources solely for its own benefit” – Nature centered
The objectives of the NSW EPA include:
“reduce the risks to human health and prevent the degradation of the environment” – Human centered, and “achieve Ecologically Sustainable Development by implementing: the precautionary principle (being cautious), intergenerational equity (protect the environment for future generations)” – Human centered, “conservation of biological diversity & ecological integrity” – Species & ecosystem centered and “improved valuation & pricing of environmental resources” (using economics).
Greenpeace’s philosophy is:
“Ecology teaches us that humankind is not the center of life on the planet. Ecology has taught us that the whole earth is part of our body, and that we must learn to respect it as we respect ourselves. As we feel for ourselves, we must feel for all forms of life – the whales, the seals, the forests, the seas. The tremendous beauty of ecological thought is that it shows us a pathway back to an understanding and an appreciation of life itself – an understanding and appreciation that is imperative to that very way of life” – Ecosystem centered, leading to pantheism.
The Australian Conservation Foundation Mission is loaded with evolutionary assumptions:
“The conservation ethic reveres the enormous sweep and splendor of life, through three million millennia of geological time and its spread into many millions of diverse species and habitats. It is conscious that Homo sapiens is but a relative newcomer. From this perspective, it seeks to approach other species and their environments with humility and without arrogance” – Nature centered, reveres evolution
“It seeks to sustain diverse and active living communities in which non-human life can resume, in comparative tranquillity, the ponderous process of evolution which has been so disrupted and confused by the interruption of man” – Nature centered
“Intrinsic to the ethic is the recognition that human life is an integral part of this slow, inexorable and continuing evolutionary process; that our own adaption results from it and our destiny is tied to its continuance; our genes carry chemical messages shared with many other species now living and with many progenitors extending back to the beginning of life. Consequently, conformity with the conservation ethic confers benefits on humanity in terms of greater efficiency and satisfaction in meeting basic human needs and producing more resilient, supportive and fulfilling communities” – link to Human centered via evolution
“Against these threats, conservation seeks to hold the earth in trust for future generations, both human and non-human” – Ecosystem centered,
The UN Environment Program: “Caring for natural resources and promoting their sustainable use is an essential response of the world community to ensure its own survival and well-being” – Human centered
The Director General of UNESCO: “Unlike modern industrial society, many traditional cultures promote not only the need but the sacred duty for people to live in symbiosis with their natural environment … Our greatest need at the present time is perhaps for a global ethic – transcending all other systems of allegiance and belief – rooted in the consciousness of the interrelatedness and sanctity of all life. Such an ethic would temper humanity’s acquired knowledge and power with wisdom of the kind found at the heart of the most ancient human traditions and cultures – in Taoism and Zen (Buddhist), in the understandings of the Hopi and the Maya Indians, in the Vedas (Hindu scripture) and the Psalms, in the very origins of human culture itself” – Ecosystem centered, leading to pantheism & other religions
“The theme of Theodore Roszak’s book The Voice of the Earth is our relationship to the natural world … He proposes a new relationship to nature, one based on modern science which regards the world as a living organism, a dynamic system with the capacity to self-regulate … possible solutions which Roszak envisions in an ecologically-grounded form of animism … The motivation for change on a planetary level must rise from deep within. This is where we must hear the voice of the earth, as she expresses herself through us as a genuine personal need for a new quality of life. Her voice can bring us in contact with the ecological unconscious, the parts of the soul that we have lost touch with. What are needed are ‘ecological goals that can heal the psyche, psychological values that can heal the planet’” (Habitat May 94, p53) – Pantheism
Examples of the doctrines of science (in the field of ecology)
“Nature in its infinite wisdom gave our animals soft feet so they would be gentle on Australia’s fragile soils” (Habitat, Dec 96, p5).
“Throughout evolution, only two kinds of eyes have ever been invented. One is the vertebrate eye, which works like a single-lens camera; the other is the compound eye of insects and crustaceans” (NA, Winter 97, p34).
“Some animals have evolved to look like other animals or even plants, thereby reducing predator pressure” (NA, Autumn 96, p8).
“We’re the dominant species on the planet; at the top of the food chain; at the top of the evolutionary tree. The way we got there is by being incredibly ruthless and self-centered” (NA, Autumn 96, p47).
“Frogs worldwide have evolved almost 30 different ways of reproducing” (NA, Summer 94-95, p64).
“As a group, spiders have developed an astounding array of techniques to capture and immobilize their prey” (NA, Spring 94, p17).
“The ‘apeman’ – australopithecines – did not die out. We are those apemen, just as living apes are members of the group from which they descended. In the same way, dinosaurs didn’t die out – they are still alive and kicking as birds. All that’s happened is an evolutionary change through time in the shape of the creatures in these long-lasting lineages” (NA, Winter 94, p68).
An academic – Judith Kinnear (Sydney University Gazette, Apr 96, p25)
“The Darwinian model of evolution by natural selection enriches many of my everyday experiences: my walks in the Australian bush make me ponder a unique flora that evolved after the break-up of the Gondwana super-continent; my visits to the zoo reveal the living products of divergent evolution that fall into an orderly pattern; my viewing of a TV program on the appearance of antibiotic resistance in harmless bacteria transforming them into untreatable killers reminds me of the ongoing impact of evolutionary forces”. This shows that the doctrine of evolution is now embedded in our society. Everyone is indoctrinated in it, so that it has become a worldview or paradigm, like a religion.
Written, January 1998; Posted, July 2016
Also see: Recognizing false teachers
During a person’s life, the elastin in a blood vessel will go through an estimated two billion cycles of pulsation. Elastin’s flexibility allows skin to stretch and twist, blood vessels to expand and relax with every heartbeat, and lungs to swell and contract with each breath.
An international team of researchers has discovered the molecular motions of elastin, one of the proteins that gives blood vessels and skin their strength and flexibility. They found a hierarchical structure of scissor-shaped molecules that gives elastin its remarkable properties. Elastin tissues are made up of molecules of a protein called tropoelastin, which are strung together in a chain-like structure. First, they discovered the shape and structure of the tropoelastin molecules. Then they studied the dynamics of the material as it forms large structures that can stretch and rebound. The dynamics turned out to be complex and surprising. “It’s almost like a dance the molecule does, with a scissors twist – like a ballerina doing a dance”. Then, the scissors-like appendages of one molecule naturally lock onto the narrow end of another molecule, like one ballerina riding piggyback on top of the next. This process continues, building up long, chain-like structures. These long chains weave together to produce the flexible tissues that our lives depend on – including skin, lungs, and blood vessels. These structures assemble very rapidly. A key part of the puzzle was the movements of the molecule itself, which the team found were controlled by the structure of key local regions and the overall shape of the protein.
A researcher said, “Studying how these materials fail under extreme conditions yields important insights for the design of new materials that replace those in our body, or for materials that we can use in engineering applications in which durable materials are critical. Designing materials that last for many decades without breaking down is a major engineering challenge that nature has beautifully accomplished, and on which we hope to build”.
So the amazing features of elastin are attributed to nature. Isn’t nature clever! Apparently it has designed and constructed complex materials that last for many decades without breaking down. If it’s more clever than we are, then I think it deserves a capital “N” – Nature.
Maybe they should also research what the Bible says, “Through Him (God the Son, Jesus Christ) all things were made; without Him nothing was made that has been made” (Jn. 1:3NIV).
Written, March 2016
Also see: Evolution is theomorphic
Many gargoyles peer down from medieval churches, cathedrals, houses and town halls in Western Europe. They are usually animals, or people, or hybrid animals/people carved in stone. The animals may be real animals or fantastic beasts. Dogs and lions were the most frequent animals used, while dragons were the most frequent fantasy creatures depicted. The dragons usually had a pair of membranous wings, some legs, a long reptilian tail, a long snout with visible teeth, and a fierce expression.
Before the use of downpipes, waterspouts (which projected out from an upper part of a building or a roof gutter) preserved stonework by diverting the flow of rainwater away from buildings. They prevented rainwater from running down the walls and affecting the foundations. Gargoyles are decorative waterspouts, being carved stone figures with water spouts through their mouths. The figure was often an elongated animal because the length of the gargoyle determines how far water is thrown from the wall. They were usually carved in the form of a grotesque face, figure or frightening creature with wide-opened mouths to enable the water flow.
By the way, the word “gargoyle” shares a common root with the word “gargle”; which comes from “gargouille”, a French word for “throat”.
History of gargoyles
Gargoyles can be traced back thousands of years in Egypt, Italy and Greece. Lion-shaped gargoyles were used by the ancient Egyptians and on Greek temples. During the Roman Empire, lead pipes were added to gargoyles to channel water without eroding the stone. And gargoyle water spouts were found at the ruins of Pompeii.
There are gargoyles on medieval (Middle Ages) church buildings in Western Europe like the Gothic Notre Dame cathedral in Paris. Most were carved out of limestone or marble between the 10th and 15th centuries, during the Romanesque and Gothic periods of architecture and decorative arts. Although the introduction of lead drainpipes in the 16th century removed any practical need for gargoyles, they continued to be used for decorative purposes.
A dragon legend
There is a French legend about St. Romanus delivering the village of Rouen from a monster called Gargouille in about 600 AD. The fierce dragon, which had a long reptilian neck, a slender snout, batlike wings and the ability to breathe fire from its mouth, lived in a cave near the river Seine. The dragon caused much fear and destruction with its fiery breath, spouting water and the devouring of ships and men.
There are multiple versions of the story, but when la Gargouille was burned, its head and neck were so well tempered by the heat of its fiery breath that they would not burn. These were then mounted on the walls of the church (or city wall) to scare off evil spirits and used for protection.
Although this legend may be embellished and have changed over time, it’s interesting that dragon stories occur in many cultures, particularly the Chinese and Japanese.
Is there a link between dragon gargoyles and dinosaurs?
Dragons and dinosaurs
The word “dragon” was used 23 times in the King James Bible (KJB), which was published in 1611, about 400 years ago. It seems as though the original translators of the KJB understood that a “dragon” was a real creature, not one that was legendary or mythical. It appears the “dragons” that were understood to be real animals were probably types of dinosaur which are now extinct. The translators of the KJB didn’t know about dinosaurs as this word was coined 230 years later in 1841. So the word “dragon” is probably an old word for a type of dinosaur.
Could some of the gargoyle dragons have been carved from the dragons that were known in the 1600s?
Ancient images and dinosaurs
There is also a theory that gargoyles were inspired by early findings of dinosaur fossils such as Protoceratops. Adrienne Mayor (“The First Fossil Hunters: Paleontology in Greek and Roman Times”) saw a similarity between some Greek and Roman images and dinosaurs. To explain this she hypothesised that these images were based on fossil bones discovered in those times.
She said that, “The earliest depictions of griffins looked really gnarly and brutish. It looked as if the artist were trying to portray something real rather than mythological. And then … I realized that they looked like modern reconstructions of dinosaurs in a museum”.
However, as she believed that all the dinosaurs died out millions of years ago, she failed to consider the possibility that the griffins could be based on seeing the actual creatures, not just the bones. After all, it takes modern scientists considerable research to reconstruct the appearance of an unknown animal from its bones.
As the story of the gargoyles relates to ancient times, some of it has probably been lost from our history. But they remind us of a time when buildings were more decorative and ornate. Some speculate about the symbolism of gargoyles, but I think they show the skill and creativity of the medieval stone masons.
It is interesting to note the image of a dragon on the Ishtar gate in Babylon which was constructed about 575 BC.
Written, February 2015
Andy, our new grandson, arrived recently. Here are a few things that I am reminded of at a time like this.
Just like you and me, Andy is unique. There is no one else on earth (past, present or future) who is exactly like him. He has a unique genome, which comprises about 3 billion DNA base pairs in each cell of his body. And he grew from a single cell.
God designed and created our world so that, over a period of nine months, the genetic information in a single cell can develop into a child that is ready to be born. It takes a lot of design to build a genome; it’s amazingly complex.
The Bible says that the development of a baby in the womb is an example of God’s power (omnipotence) and skill. King David wrote, “You made all the delicate, inner parts of my body and knit me together in my mother’s womb. Thank you for making me so wonderfully complex! Your workmanship is marvelous—how well I know it. You watched me as I was being formed in utter seclusion, as I was woven together in the dark of the womb. You saw me before I was born” (Ps. 139:13-16NLT). And another psalm says of God the Creator, “You made me; you created me” (Ps. 119:73). Of course God knows all about a baby as it’s growing in the womb (Jer. 1:5; Gal. 1:15).
I think another example of God’s power and skill is life itself. Can anyone explain the origin of life, without referring to God? We see that life always comes from life. Andy’s family tree goes back to Adam and Eve. How did Adam and Eve become alive? The Bible says their life came from God (Gen. 2:7, 22). Only God can create life; scientists can’t manufacture it, they just use it.
Is Andy perfect?
Although Andy is perfect in the eyes of his parents, in two ways he isn’t perfect.
Firstly, like yours and mine, Andy’s genome contains mutations inherited from his parents. When parents reproduce, they make a copy of their genome and pass this to their child. From time to time, mistakes occur (called “mutations”), and the next generation does not have a perfect copy of the original genome. Each new generation carries all the mutations of previous generations plus its own, so mutations accumulate from generation to generation (as sketched below). This means that the human genome is degenerating genetically over time due to the accumulation of mutations. To minimize the risk of deformed offspring that can result from shared mutations between genetically close parents, marriage is usually prohibited between close relatives. In fact, such limitations needed to be imposed after about 26 generations of copying the human genome, which was in the time of Moses’ children (Lev. 18:6-16; 20:11, 17; Dt. 27:22).
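To make the accumulation argument concrete, here is a minimal sketch (my illustration, not from the original post) in which each generation simply adds a fixed number of new mutations to the inherited total. The per-generation figure of 100 is an assumed round number for illustration, not a measured value.

```python
# Minimal sketch of mutation accumulation across generations.
# Assumption (illustration only): each generation adds a fixed number
# of new copying errors on top of the mutations it inherited.

NEW_MUTATIONS_PER_GENERATION = 100  # assumed round number, not a measurement

def accumulated_mutations(generations: int) -> int:
    """Total mutation load carried after the given number of generations."""
    total = 0
    for _ in range(generations):
        total += NEW_MUTATIONS_PER_GENERATION  # inherited load plus this generation's errors
    return total

for g in (1, 10, 26):
    print(f"After {g} generations: about {accumulated_mutations(g)} accumulated mutations")
```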
Secondly, just like you and I, Andy has a sinful nature. This means that he will have a natural tendency to misbehave. The Bible says that we are all sinners by nature (Rom. 3:23; Eph. 2:1-3). Even when Andy tries to do the right thing, it will be elusive (Rom. 7:14-20). This attitude affects our mind, will and emotions in particular (Jer. 17:9). But according to Andy’s “Beginner’s Bible”, “Jesus knew that he had to die for the sins of all people. It was part of God’s plan. When it was time, Jesus died on the cross for our sins.”
Now we can look forward to seeing Andy grow and develop from a baby to a child, to an adolescent, and then to a man, the way God has planned.
Written, February 2015
A pantheistic creation story
“Big History” is a modern origins story that is being developed online for school students and is supported by Bill Gates, one of the world’s richest men. The objective is to develop a framework for learning about anything and everything that includes a deeper awareness of our past. It claims to tell a story that our children need to know. But what will they learn?
History of the universe
“Big History” is based on eight “threshold moments” when the universe increased in complexity:
- Origin of the universe billions of years ago as explained by the big bang theory.
- Stars light up from the remnants of the exploding gases.
- New chemical elements form when stars die and create new types of atoms.
- Rocky planets such as the earth and the solar system form around stars.
- Molecules combine to form single celled living organisms that evolve into multi-celled organisms.
- Human beings appear as a consequence of evolutionary processes.
- After the last ice age about 10,000 years ago, humans develop agriculture.
- The modern revolution, characterized by the use of fossil fuels and global communication.
Although “Big History” is based on humanity’s current scientific and historical understanding, the following questions come to mind.
Although Big History states these as facts, it assumes that:
- The origin and development of the universe can be explained by the laws of science.
- The early universe was simple.
- The universe has become more complex with time.
Once these are assumed, the rest follows as a consequence. So the course is teaching that these assumptions are facts. Instead they are scientifically unprovable.
As history trumps science when dealing with the past, the Bible surpasses science with regard to the origin of the universe and the nature of the early universe. In particular, the assumption of the uniformity of scientific laws isn’t valid past the creation of the universe about 6,000 years ago. So history shows the first two assumptions are false.
The assumption that “The universe has become more complex with time”, goes against common sense and the law of cause and effect. How can an inanimate object gain increasing complexity and increasing information by using the laws of science alone and not outside intelligence? How can an animal or plant gain increasing complexity and increasing information by using the laws of science alone and not outside intelligence?
This secular origins story is based on miracles that can’t be explained by science. Yet it claims to be scientific!
Before the beginning of time it assumes there was nothing – no time, mass, energy or space. After the beginning of time a tiny particle smaller than an atom appears that contains everything in today’s universe. This means that something appears from nothing, which is not allowed in the laws of physics! In a science where there is no place for miracles, this is certainly a miracle. How was all the mass and energy within the universe created out of absolutely nothing using only the physical forces within the universe? How could the universe create itself?
According to Big History, the universe can create complexity and information. This happens as a series of stages, each of which produces something utterly new. Each of these threshold moments is a miracle as the increase in complexity and information cannot be explained by the laws of science. How could the universe increase complexity and information by itself?
How does the universe create complexity despite the second law of thermodynamics, which says that “The general tendency of the universe is to move from order and structure to lack of order and lack of structure”? This is never explained. There is just a statement that it can create complexity, but with great difficulty! So the universe builds itself, which is pantheism. The universe is god. So intelligence and intent are attributed to inanimate objects or concepts, such as “Life interjects” and “DNA learns”.
It is interesting to note that the example of increasing complexity used in Big History was our modern society and not a biological example such as the DNA molecule or the human mind.
Big History is guilty of circular reasoning. It states assumptions and ideas as facts. Its key findings are based on its assumptions and presuppositions. For example, it reveals how complexity slowly evolved. But this is also an assumption.
Big History is speculative. It says, “We can imagine the early universe breaking up into billions of clouds”.
Big History is a product of the secular paradigm or worldview that has rejected the God of the Bible.
Big History is certainly ambitious. It has big assumptions, big miracles, big extrapolation and big imagination. Although it claims to be big on history, it actually includes little recorded history.
As “the story of how the universe creates complexity” and the “unifying story that gives a sense of the whole of history”, it is a pantheistic creation story that replaces the Bible.
Do our children need to know this? How much better if they knew the Bible and accepted its message as the unifying story that gives a sense of the whole of history?
Written, October 2013 |
Marcescence is the retention of dead plant organs that normally are shed. It is most obvious in deciduous trees that retain leaves through the winter. Several trees normally have marcescent leaves such as oak (Quercus), beech (Fagus) and hornbeam (Carpinus), or marcescent stipules as in some but not all species of willows (Salix). Marcescent leaves of pin oak (Quercus palustris) complete development of their abscission layer in the spring. The base of the petiole remains alive over the winter. Many other trees may have marcescent leaves in seasons where an early freeze kills the leaves before the abscission layer develops or completes development. Diseases or pests can also kill leaves before they can develop an abscission layer.
Marcescent leaves may be retained indefinitely and do not break off until mechanical forces (wind for instance) cause the dry and brittle petioles to snap.
Many palms form a skirt-like or shuttlecock-like crown of marcescent leaves under new growth that may persist for years before being shed. In some species only juveniles retain dead leaves, and marcescence in palms is considered a primitive trait.
The term marcescent is also used in mycology to describe a mushroom which (unlike most species, described as "putrescent") can dry out, but later revive and continue to disperse spores. Genus Marasmius is well known for this feature, which was considered taxonomically important by Elias Magnus Fries in his 1838 classification of the fungi.
One possible advantage of marcescent leaves is that they may deter feeding of large herbivores, such as deer and moose, which normally eat the twigs and their nutritious buds. Dead, dry leaves make the twigs less nutritious and less palatable.
Marcescent leaves may protect some species from water stress or temperature stress. For example, in tropical alpine environments a wide variety of plants in different plant families and different parts of the world have evolved a growth form known as the caulescent rosette, characterized by evergreen rosettes growing above marcescent leaves. Examples of plants for which the marcescent leaves have been confirmed to improve survival, help water balance, or protect the plant from cold injury are Espeletia schultzii and Espeletia timotensis, both from the Andes.
The litter-trapping marcescent leaf crowns of Dypsis palms accumulate detritus that enhances their nutrient supply. By the same token, palms with marcescent leaf bases are also more susceptible to epiphytic parasites like figs, which may completely engulf and strangle the palms.
- Berkley, Earl E. 1931. Marcescent leaves of certain species of Quercus. Botanical Gazette 92: 85-93.
- George W. Argus. "88. Salix planifolia Pursh". Flora of North America 7.
- Hoshaw, R.W. and Guard, A.T. 1949. "Abscission of marcescent leaves of Quercus palustris and Q. coccinea". Botanical Gazette 110: 587–593.
- Addicott, Fredrick T. (1982). Abscission. University of California Press. p. 51. ISBN 978-0-520-04288-9.
- L.B. Holm-Nielsen, ed. (1989). Tropical Forests: Botanical Dynamics, Speciation & Diversity. Academic Press. p. 161. ISBN 978-0-08-098445-2.
- Dowe, John (2010). Australian Palms: Biogeography, Ecology and Systematics. Csiro Publishing. p. 160. ISBN 978-0-643-10185-2.
- Dransfield, John; Uhl, Natalie W. (2008). Genera Palmarum: the evolution and classification of palms. Kew Pub. p. 294. ISBN 978-1-84246-182-2. (“marcescent in immature” – descriptive of several spp.)
- Moore, Harold Emery; Uhl, Natalie W. (1982). Major Trends of Evolution in Palms. New York Botanical Garden. p. 69.
- See introduction to Roy E. Halling "A revision of Collybia s.l. in the northeastern United States & adjacent Canada" Inst. of Syst. Botany, The New York Botanical Garden, Bronx, NY 10458-5126
- E. M. Fries Epicrisis systematis mycologici (1838) Uppsala: Typographia Academica
- Svendsen, Claus R. 2001. Effects of marcescent leaves on winter browsing by large herbivores in northern temperate deciduous forests. Alces 37(2): 475-482.
- Goldstein, G. and Meinzer, F. 1983. Influence of insulating dead leaves and low temperatures on water balance in an Andean giant rosette plant. Plant, Cell & Environment 6: 649-656.
- Smith, Alan P. 1979. Function of dead leaves in Espeletia schultzii (Compositae), an Andean caulescent rosette species. Biotropica 11: 43-47.
- Bramwell, David; Caujapé-Castells, Juli (2011-07-21). The Biology of Island Floras. Cambridge University Press. p. 189. ISBN 978-1-139-49780-0.
- Kramer, Gregory T. (2011). "Palm Tree Susceptibility to Hemi-Epiphytic Parasitism by Ficus". (M.S. thesis). University of Florida.
- Narration and video about marcescence
This page explains what first ionisation energy is, and then looks at the way it varies around the Periodic Table - across periods and down groups. It assumes that you know about simple atomic orbitals, and can write electronic structures for simple atoms. You will find a link at the bottom of the page to a similar description of successive ionisation energies (second, third and so on).
Important! If you aren't reasonably happy about atomic orbitals and electronic structures you should follow these links before you go any further.
Defining first ionisation energy
The first ionisation energy is the energy required to remove one mole of the most loosely held electrons from one mole of gaseous atoms to produce 1 mole of gaseous ions each with a charge of 1+.
This is more easily seen in symbol terms.
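The equation itself appears to have been lost in transfer; the standard symbol-form definition, which the surrounding text describes, is:

$$\mathrm{X(g)} \longrightarrow \mathrm{X^{+}(g)} + e^{-}$$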
It is the energy needed to carry out this change per mole of X.
Worried about moles? Don't be! For now, just take it as a measure of a particular amount of a substance. It isn't worth worrying about at the moment.
Things to notice about the equation
The state symbols - (g) - are essential. When you are talking about ionisation energies, everything must be present in the gas state.
Ionisation energies are measured in kJ mol-1 (kilojoules per mole). They vary in size from 381 (which you would consider very low) up to 2370 (which is very high).
All elements have a first ionisation energy - even atoms which don't form positive ions in test tubes. The reason that helium (1st I.E. = 2370 kJ mol-1) doesn't normally form a positive ion is because of the huge amount of energy that would be needed to remove one of its electrons.
Patterns of first ionisation energies in the Periodic Table
The first 20 elements
First ionisation energy shows periodicity. That means that it varies in a repetitive way as you move through the Periodic Table. For example, look at the pattern from Li to Ne, and then compare it with the identical pattern from Na to Ar.
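A graph presumably appeared here in the original page. As a stand-in, this minimal Python sketch (my addition) plots rounded literature values of the first ionisation energies, in kJ mol-1, for the first 20 elements, so the repeating Li-Ne and Na-Ar patterns are visible. The numbers come from standard reference tables, not from this page, and differ slightly from the rounded figures quoted in the text.

```python
import matplotlib.pyplot as plt

# Approximate first ionisation energies in kJ mol^-1 for elements 1-20,
# taken from standard reference tables (rounded).
elements = ["H", "He", "Li", "Be", "B", "C", "N", "O", "F", "Ne",
            "Na", "Mg", "Al", "Si", "P", "S", "Cl", "Ar", "K", "Ca"]
energies = [1312, 2372, 520, 899, 801, 1086, 1402, 1314, 1681, 2081,
            496, 738, 578, 786, 1012, 1000, 1251, 1521, 419, 590]

plt.plot(range(1, 21), energies, marker="o")
plt.xticks(range(1, 21), elements)
plt.xlabel("Element (in atomic number order)")
plt.ylabel("First ionisation energy / kJ mol$^{-1}$")
plt.title("Periodicity of first ionisation energy")
plt.show()
```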
These variations in first ionisation energy can all be explained in terms of the structures of the atoms involved.
Factors affecting the size of ionisation energy
Ionisation energy is a measure of the energy needed to pull a particular electron away from the attraction of the nucleus. A high value of ionisation energy shows a high attraction between the electron and the nucleus.
The size of that attraction will be governed by:
The charge on the nucleus.
The more protons there are in the nucleus, the more positively charged the nucleus is, and the more strongly electrons are attracted to it.
The distance of the electron from the nucleus.
Attraction falls off very rapidly with distance. An electron close to the nucleus will be much more strongly attracted than one further away.
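The page gives no formula for this, but the standard quantitative statement is Coulomb's law, with the nuclear charge replaced by the net (screened) charge discussed below:

$$F \;=\; \frac{1}{4\pi\varepsilon_{0}}\,\frac{Z_{\mathrm{eff}}\,e^{2}}{r^{2}}$$

Doubling the distance $r$ cuts the attractive force by a factor of four, which is why distance matters so much.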
The number of electrons between the outer electrons and the nucleus.
Consider a sodium atom, with the electronic structure 2,8,1. (There's no reason why you can't use this notation if it's useful!)
If the outer electron looks in towards the nucleus, it doesn't see the nucleus sharply. Between it and the nucleus there are the two layers of electrons in the first and second levels. The 11 protons in the sodium's nucleus have their effect cut down by the 10 inner electrons. The outer electron therefore only feels a net pull of approximately 1+ from the centre. This lessening of the pull of the nucleus by inner electrons is known as screening or shielding.
Warning! Electrons don't, of course, "look in" towards the nucleus - and they don't "see" anything either! But there's no reason why you can't imagine it in these terms if it helps you to visualise what's happening. Just don't use these terms in an exam! You may get an examiner who is upset by this sort of loose language.
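To put the screening idea into rough numbers, here is a crude sketch. It assumes, as the text does, that each inner electron cancels exactly one unit of nuclear charge; real screening is only partial, and Slater's rules give a better estimate, so treat this as a teaching aid only:

```python
def net_pull(protons, inner_electrons):
    """Very rough net charge felt by an outer electron, assuming each
    inner electron cancels exactly one unit of nuclear charge."""
    return protons - inner_electrons

# Sodium (2,8,1): 11 protons screened by 10 inner electrons.
print("Na:", net_pull(11, 10))  # 1 -> the outer electron feels roughly 1+

# Lithium (2,1): 3 protons screened by the two 1s2 electrons.
print("Li:", net_pull(3, 2))    # 1 -> also roughly 1+, but at a smaller distance
```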
Whether the electron is on its own in an orbital or paired with another electron.
Two electrons in the same orbital experience a bit of repulsion from each other. This offsets the attraction of the nucleus, so that paired electrons are removed rather more easily than you might expect.
Explaining the pattern in the first few elements
Hydrogen has an electronic structure of 1s1. It is a very small atom, and the single electron is close to the nucleus and therefore strongly attracted. There are no electrons screening it from the nucleus and so the ionisation energy is high (1310 kJ mol-1).
Helium has a structure 1s2. The electron is being removed from the same orbital as in hydrogen's case. It is close to the nucleus and unscreened. The value of the ionisation energy (2370 kJ mol-1) is much higher than hydrogen, because the nucleus now has 2 protons attracting the electrons instead of 1.
Lithium is 1s22s1. Its outer electron is in the second energy level, much more distant from the nucleus. You might argue that that would be offset by the additional proton in the nucleus, but the electron doesn't feel the full pull of the nucleus - it is screened by the 1s2 electrons.
You can think of the electron as feeling a net 1+ pull from the centre (3 protons offset by the two 1s2 electrons).
If you compare lithium with hydrogen (instead of with helium), the hydrogen's electron also feels a 1+ pull from the nucleus, but the distance is much greater with lithium. Lithium's first ionisation energy drops to 519 kJ mol-1 whereas hydrogen's is 1310 kJ mol-1.
The patterns in periods 2 and 3
Talking through the next 17 atoms one at a time would take ages. We can do it much more neatly by explaining the main trends in these periods, and then accounting for the exceptions to these trends.
The first thing to realise is that the patterns in the two periods are identical - the difference being that the ionisation energies in period 3 are all lower than those in period 2.
Explaining the general trend across periods 2 and 3
The general trend is for ionisation energies to increase across a period.
In the whole of period 2, the outer electrons are in 2-level orbitals - 2s or 2p. These are all the same sort of distances from the nucleus, and are screened by the same 1s2 electrons.
The major difference is the increasing number of protons in the nucleus as you go from lithium to neon. That causes greater attraction between the nucleus and the electrons and so increases the ionisation energies. In fact the increasing nuclear charge also drags the outer electrons in closer to the nucleus. That increases ionisation energies still more as you go across the period.
Note: Factors affecting atomic radius are covered on a separate page.
In period 3, the trend is exactly the same. This time, all the electrons being removed are in the third level and are screened by the 1s22s22p6 electrons. They all have the same sort of environment, but there is an increasing nuclear charge.
Why the drop between groups 2 and 3 (Be-B and Mg-Al)?
The explanation lies with the structures of boron and aluminium. The outer electron is removed more easily from these atoms than the general trend in their period would suggest.
You might expect the boron value to be more than the beryllium value because of the extra proton. Offsetting that is the fact that boron's outer electron is in a 2p orbital rather than a 2s. 2p orbitals have a slightly higher energy than the 2s orbital, and the electron is, on average, to be found further from the nucleus. This has two effects: the increased distance results in a reduced attraction, and the 2p electron is partially screened by the 2s2 electrons as well as by the inner 1s2 electrons. Together these outweigh the effect of the extra proton.
The explanation for the drop between magnesium and aluminium is the same, except that everything is happening at the 3-level rather than the 2-level.
The 3p electron in aluminium is slightly more distant from the nucleus than the 3s, and partially screened by the 3s2 electrons as well as the inner electrons. Both of these factors offset the effect of the extra proton.
Warning! You might possibly come across a text book which describes the drop between group 2 and group 3 by saying that a full s2 orbital is in some way especially stable and that makes the electron more difficult to remove. In other words, that the fluctuation is because the group 2 value for ionisation energy is abnormally high. This is quite simply wrong! The reason for the fluctuation is because the group 3 value is lower than you might expect for the reasons we've looked at.
Why the drop between groups 5 and 6 (N-O and P-S)?
Once again, you might expect the ionisation energy of the group 6 element to be higher than that of group 5 because of the extra proton. What is offsetting it this time?
The screening is identical (from the 1s2 and, to some extent, from the 2s2 electrons), and the electron is being removed from an identical orbital.
The difference is that in the oxygen case the electron being removed is one of the 2px2 pair. The repulsion between the two electrons in the same orbital means that the electron is easier to remove than it would otherwise be.
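Comparing the electronic structures shows where the difference lies:

nitrogen: 1s22s22px12py12pz1 - the three 2p electrons are all unpaired
oxygen: 1s22s22px22py12pz1 - the electron removed comes from the paired 2px orbital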
The drop in ionisation energy at sulphur is accounted for in the same way.
Note: After oxygen or sulphur, the ionisation energies of the next two elements increase because of the additional protons. Everything else is the same - the type of orbital that the new electron is going into, the screening, and the fact that it is pairing up with an existing electron.
Students sometimes wonder why the next ionisation energies don't fall because of the repulsion caused by the electrons pairing up, in the same way it falls between, say, nitrogen and oxygen.
Between nitrogen and oxygen, the pairing up is a new factor, and the repulsion outweighs the effect of the extra proton. But between oxygen and fluorine the pairing up isn't a new factor, and the only difference in this case is the extra proton. So relative to oxygen, the ionisation energy of fluorine is greater. And, similarly, the ionisation energy of neon is greater still.
Trends in ionisation energy down a group
As you go down a group in the Periodic Table ionisation energies generally fall. You have already seen evidence of this in the fact that the ionisation energies in period 3 are all less than those in period 2.
Taking Group 1 as a typical example:
Why is the sodium value less than that of lithium?
There are 11 protons in a sodium atom but only 3 in a lithium atom, so the nuclear charge is much greater. You might have expected a much larger ionisation energy in sodium, but offsetting the nuclear charge is a greater distance from the nucleus and more screening.
Lithium's outer electron is in the second level, and only has the 1s2 electrons to screen it. The 2s1 electron feels the pull of 3 protons screened by 2 electrons - a net pull from the centre of 1+.
The sodium's outer electron is in the third level, and is screened from the 11 protons in the nucleus by a total of 10 inner electrons. The 3s1 electron also feels a net pull of 1+ from the centre of the atom. In other words, the effect of the extra protons is compensated for by the effect of the extra screening electrons. The only factor left is the extra distance between the outer electron and the nucleus in sodium's case. That lowers the ionisation energy.
Similar explanations hold as you go down the rest of this group - or, indeed, any other group.
Trends in ionisation energy in a transition series
If you look at the first ionisation energies of the elements in the first transition series (Sc to Zn), apart from zinc at the end, they are all much the same.
All of these elements have an electronic structure [Ar]3dn4s2 (or 4s1 in the cases of chromium and copper). The electron being lost always comes from the 4s orbital.
Note: The 4s orbital has a higher energy than the 3d in the transition elements. That means that it is a 4s electron which is lost from the atom when it forms an ion. It also means that the 3d orbitals are slightly closer to the nucleus than the 4s - and so offer some screening.
Confusingly, this is inconsistent with what we say when we use the Aufbau Principle to work out the electronic structures of atoms.
I have discussed this in detail in the page about the order of filling 3d and 4s orbitals.
If you are a teacher or a very confident student then you might like to follow this link.
If you aren't so confident, or are coming at this for the first time, I suggest that you ignore it. Remember that the Aufbau Principle (which uses the assumption that the 3d orbitals fill after the 4s) is just a useful way of working out the structures of atoms, but that in real transition metal atoms the 4s is actually the outer, higher energy orbital.
As you go from one atom to the next in the series, the number of protons in the nucleus increases, but so also does the number of 3d electrons. The 3d electrons have some screening effect, and the extra proton and the extra 3d electron more or less cancel each other out as far as attraction from the centre of the atom is concerned.
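The same rough cancellation can be put into numbers (a sketch only; it assumes each 3d electron screens the 4s electron by a full unit, which overstates the real effect, and chromium and copper are left out because of their 4s1 structures):

```python
# (element, atomic number, number of 3d electrons) for [Ar]3dn4s2 atoms.
series = [("Sc", 21, 1), ("Ti", 22, 2), ("V", 23, 3), ("Mn", 25, 5),
          ("Fe", 26, 6), ("Co", 27, 7), ("Ni", 28, 8)]

for element, protons, d_electrons in series:
    # 18 argon-core electrons plus the 3d electrons screen the 4s electron.
    pull = protons - 18 - d_electrons
    print(element, pull)  # roughly 2+ every time - hence the similar values
```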
The rise at zinc is easy to explain.
In each case, the electron is coming from the same orbital, with identical screening, but the zinc has one extra proton in the nucleus and so the attraction is greater. There will be a degree of repulsion between the paired up electrons in the 4s orbital, but in this case it obviously isn't enough to outweigh the effect of the extra proton.
Note: This is actually very similar to the increase from, say, sodium to magnesium in the third period. In that case, the outer electronic structure is going from 3s1 to 3s2. Despite the pairing-up of the electrons, the ionisation energy increases because of the extra proton in the nucleus. The repulsion between the 3s electrons obviously isn't enough to outweigh this either.
I don't know why the repulsion between the paired electrons matters less for electrons in s orbitals than in p orbitals (I don't even know whether you can make that generalisation!). I suspect that it has to do with orbital shape and possibly the greater penetration of s electrons towards the nucleus, but I haven't been able to find any reference to this anywhere. In fact, I haven't been able to find anyone who even mentions repulsion in the context of paired s electrons!
If you have any hard information on this, could you contact me via the address on the about this site page.
Ionisation energies and reactivity
The lower the ionisation energy, the more easily this change happens:
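X(g) → X+(g) + e-  (the same change as in the definition above)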
You can explain the increase in reactivity of the Group 1 metals (Li, Na, K, Rb, Cs) as you go down the group in terms of the fall in ionisation energy. Whatever these metals react with, they have to form positive ions in the process, and so the lower the ionisation energy, the more easily those ions will form.
The danger with this approach is that the formation of the positive ion is only one stage in a multi-step process.
For example, you wouldn't be starting with gaseous atoms; nor would you end up with gaseous positive ions - you would end up with ions in a solid or in solution. The energy changes in these processes also vary from element to element. Ideally you need to consider the whole picture and not just one small part of it.
However, the ionisation energies of the elements are going to be major contributing factors towards the activation energy of the reactions. Remember that activation energy is the minimum energy needed before a reaction will take place. The lower the activation energy, the faster the reaction will be - irrespective of what the overall energy changes in the reaction are.
The fall in ionisation energy as you go down a group will lead to lower activation energies and therefore faster reactions.
Note: You will find a page discussing this in more detail in the inorganic section of this site dealing with the reactions of Group 2 metals with water.
© Jim Clark 2000 (last modified August 2016) |
This exhibition presents an overview of the Mexican Revolution as a historic event in which individuals, groups, and social classes pursued diverse goals to achieve political, economic, and social change. It also highlights several definitive political and military moments during the Revolution, as well as the people who witnessed and shaped it. The Mexican Revolution brought deep changes to Mexican life. Its legacies included an improved government and greater political tolerance. In the years after the Revolution, Mexico enacted agrarian reform, increased benefits for the working classes, and reformed education and healthcare nationwide. |
The Dangers of Dehydration
Most people who suffer from dehydration are not aware they are becoming dehydrated until the effects have reached a serious level. With dehydration, more water moves out of your cells and out of your body than you are taking in. It can have sudden and severe effects, but it can also be a gradual, chronic problem with far-reaching health consequences. This article discusses the consequences and prevention of both acute and chronic dehydration.
The percentage of the human body made up of water ranges from 50-75%. The average for an adult is 50-65%, whereas infants are closer to 75%. Adult muscles normally hold around 60-70% water and the brain 75-85%; the blood and lungs are closer to 90%. Knowing these numbers makes it easier to understand the significance of proper hydration to your health. Chronic dehydration can contribute to high blood pressure, kidney damage, headaches, poor circulation and respiratory disease. Water is crucial for proper digestion, liver and kidney detoxification and waste removal.
Common ways we lose water every day are through water vapor in our breath when we exhale and water in our sweat, urine and stool. If we do not take water in at the same pace it is lost, dehydration will occur. Urine should be light or straw colored; with dehydration it is often dark and cloudy in appearance.
Conditions That Can Cause Dehydration
- Fever, vomiting and diarrhea – These conditions increase the need for water intake.
- Heat exposure – Hot weather and humidity rapidly increase the amount of fluids lost.
- Exercise – Any activity that makes you sweat requires extra water intake. It is best to drink water before, during and after exercise.
- Diseases such as diabetes – The kidneys excrete excess blood glucose in the urine, drawing extra water out of the body with it.
- Inability to seek appropriate water and food – Food can contribute about 20% of total water intake.
- Significant injuries to skin – burns, sores or severe skin diseases/infections
Mild to Moderate Dehydration Symptoms
- Dry or Sticky Mouth
- Sleepiness or Tiredness – children less active than usual
- Few or no tears when crying
- Dry Skin
- Dizziness or Lightheadedness
- Low or no urine output: urine looks dark yellow
- Increased Thirst
- Swollen Tongue
Severe Dehydration Symptoms
- Lethargy or Coma
- Extreme fussiness or sleepiness in infants & children; irritability & confusion in adults
- Very dry mouth, skin and mucous membranes
- Lack of Sweating
- Sunken Eyes
- Shriveled and dry skin that doesn’t “bounce back” when pinched
- Low Blood Pressure
- Rapid Heart Beat
- Rapid Breathing
- Delirium and unconsciousness (extreme cases)
Call Your Doctor If A Dehydrated Person Experiences The Following
- Vomiting more than a day
- Fever over 101° F/38° C
- Diarrhea for more than 2 days
- Unexpected / Unintended Weight Loss
- Decreased Urine Production
Take A Dehydrated Person To The E.R. Immediately If These Situations Occur
- Fever Over 103° F/39° C
- Difficulty Breathing
- Chest or Abdominal Pains
- No Urine in 12 Hours
Many people have a tendency to drink beverages that can be mildly dehydrating, such as coffee, colas and other drinks containing caffeine. Alcohol consumption also increases dehydration. Thirst can often be mistaken for hunger. High-protein diets can also increase the tendency toward dehydration. Remember to increase your water intake before, during and after exercising.
How Much Water is the Right Amount to Drink?
There is no perfect answer as it depends greatly on your general health and activity level. The old standard has been about eight 8-ounce glasses of water a day. The Institute of Medicine has determined that men should drink about 13 cups (3 liters) and women about 9 cups (2.2 liters) of fluids a day.
In order to prevent dehydration and maintain the proper amounts of fluids to keep your body healthy try to make water your primary beverage. It can be helpful to drink a glass of water with each meal and one between each meal. Also, remember to drink water before, during and after exercising. During warm weather it will also be important to increase your water intake from your normal levels.
It is possible to drink too much water, although this is rare. When the kidneys can’t excrete the excess water, a condition called hyponatremia can occur: the mineral/electrolyte content of the blood becomes diluted, resulting in low sodium levels in the blood.
If you have noticed signs of dehydration or have concerns about the proper amount of water intake, you should consult with a health care provider.
The Doctors at Coon Rapids Chiropractic Office can answer your questions about proper hydration.
Coon Rapids Chiropractic Office |
In a culture that emphasizes youth, and where too many children do not spend enough time with grandparents, it is important in the coming year to include honoring the elderly as one of your teaching objectives. This need not be difficult, and it can fit into ongoing curriculum and instruction in a variety of ways.
The three essential components are to get students to think about the elderly, create interaction with the elderly, and then foster reflection about the experience.
Getting Them Started
Children in fourth grade and under benefit from seeing pictures of older adults involved in various activities as a starting point. As students look at the photos, ask them what the people are doing, who they know who looks like this and does things like this, and what is special about older people. In fifth grade and up, it can be effective to start with words such as "senior citizen," "grandparent," "older adult," and "elderly." Have students in groups generate lists of words in association with these words, and share them. Ideally, post them for all to see.
You can probably envision a number of next steps from just this start. The goal is to challenge stereotypes about older adults and to help students understand their importance. While it would be another form of stereotyping to encourage students to think that all seniors are wise, it will not hurt to introduce them to this possibility, and other positive ones. You might have students read books about senior citizens and/or grandparents. You might have them look up information about aging as part of a science or health class. Biographies of leaders, inventors, composers, writers, and others who were productive and prominent into their senior years can be used.
Creating an Interaction
Even young students can be helped to interview senior members of their families, and these interviews can be designed with increasing complexity. Having older students work together to develop lists of questions that they share can be a powerful learning experience. Senior citizens can also be invited into the classroom, with preparatory assignments and questions generated in advance. Many communities have retired seniors who can talk about their careers; many houses of worship have networks of seniors who can talk about history they have experienced. And many assisted living and senior housing facilities would be very pleased to engage some of their residents with schools and students, either at the school site or at their own sites.
Depending on the subject area and one's learning goals, students can create a written product, a play, a video, an interview, a song, graphs and charts, a poster, or other artistic portrayals. Ideally, these could be shared with other students, parents, community residents, etc. Be sure thank-you notes are written, where appropriate (not emailed!).
Fostering Reflection
Following the direct encounter (and ideally, more than one), have students reflect on what they have learned from their preparation, their encounter, and their presentation. Use the occasion to help build their affective vocabulary. As before, the format of the reflection can be whatever would fit into your curriculum.
An unnamed source in the writings of the Apocrypha said, "Dishonor not the old; we shall be numbered among them." It's good advice. To that, I would add the old adage, Respect your elders -- and learn from them. |
The longest of the cranial nerves, the vagus nerve is the autobahn between what scientists have referred to as the “two brains”: the one in your head and the other in your gastrointestinal tract. The nerve is key for telling you the tank is full and to put the fork down, because it helps transmit biochemical signals from the stomach to the most primitive part of the brain, the brainstem.
But in this animal study, researchers may have found a greater purpose behind this complex circuitry involving the vagus nerve. This “gut-brain axis” may help you remember where you ate by directing signals to another part of the brain, the hippocampus, the memory center.
Following our stomach
The scientists believe that this gut instinct, this connection between spatial awareness and food, is likely a neurobiological mechanism that dates back ages to when the definition of fast food was a herd of deer running away from the nomadic hunters who tracked them.
Back then especially, it would be critical for the gut to work with the brain like a Waze or Google Maps navigation app, said Scott Kanoski, an assistant professor of biological sciences at USC Dornsife and corresponding author of the paper. Those wandering early humans could remember a site where they had found and collected food and return repeatedly for more. |