text stringlengths 199-648k | id stringlengths 47-47 | dump stringclasses 1 value | url stringlengths 14-419 | file_path stringlengths 139-140 | language stringclasses 1 value | language_score float64 0.65-1 | token_count int64 50-235k | score float64 2.52-5.34 | int_score int64 3-5 |
---|---|---|---|---|---|---|---|---|---|
Foods High in Iodine
Iodine is a chemical element essential for the production of thyroid hormones that regulate growth and metabolism. Diets deficient in iodine increase the risk of retarded brain development in children (cretinism), mental slowness, high cholesterol, lethargy, weight gain, and goiter: a swelling of the thyroid gland in the neck.
What foods are naturally high in iodine? Iodine is a component of almost every living plant and animal. No standard measurements of iodine in food exist because iodine concentrations vary across the world. In general, foods from the sea contain the most iodine, followed by animal foods, and then plant foods. Of all foods, seaweed, like kelp, is the most famous and reliable source of natural iodine, but egg and dairy products can also be good sources.
How much iodine do I need? In your entire lifetime you will need less than a teaspoon of iodine to ensure good health; however, your body cannot store iodine, so you have to eat a little bit every day. You only need 150 micrograms (or one-thousandth of a teaspoon) to meet your daily requirement.
If iodine is in most plant and animal foods, how can anyone be deficient? According to the World Health Organization, iodine deficiencies existed in 54 countries as of 2003.
There is no exact answer as to why iodine deficiencies occur, however, two theories exist:
- People live in a part of the world with low levels of iodine in the soil or sea.
- People eat high amounts of refined foods that lose their iodine content during refinement. Refined sugar, for example, has no iodine.
Some countries, like the U.S., show risk from excess iodine intake, which suggests overconsumption of foods fortified with iodine, like salt.
Beware: too much iodine can be bad for you. Overconsumption of iodine can be toxic and just as damaging as a deficiency. As little as 1,000 micrograms of iodine in a day causes irritations like burning of the mouth and throat, nausea, vomiting, stomach ache, and even coma. Like under-consumption, too much iodine prevents proper production of thyroid hormones, leading to goiter.
I don’t eat salt, meat, or seaweed; where can I get iodine? Your options are to consider supplements, buy foods enriched in iodine, or ensure that the plant foods you consume come from parts of the world where the soil is rich in iodine.
| <urn:uuid:6f9c2d62-2f3f-4376-95cc-8b4c79a95bca> | CC-MAIN-2016-26 | http://foodhighestniodine.blogspot.com/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397797.77/warc/CC-MAIN-20160624154957-00147-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.951312 | 503 | 2.921875 | 3 |
Restoration Director of Michigan Chris May at The Nature Conservancy’s Erie Marsh Preserve
The time has finally come for Erie Marsh, a 2,217-acre expanse of coastal wetlands containing eleven percent of Southeastern Michigan’s remaining marshlands. The Nature Conservancy is currently collaborating with the Erie Shooting and Fishing Club, Ducks Unlimited, and the Michigan Department of Natural Resources to launch the first phase of its multi-year plan to reconnect Erie Marsh Preserve to Maumee Bay and Lake Erie. The plan was made possible through a $2.6 million grant from the National Oceanic and Atmospheric Administration’s Great Lakes Restoration Initiative. The project is planned to start in December with one goal in mind: to improve the overall health and productivity of the wetlands within the bay.
One of the biggest struggles Erie Marsh faces is its proximity to urbanized cities like Detroit and Toledo. Erie Marsh endures uncontrollable factors from the cities, such as sedimentation, nutrient inputs that alter water quality, and hardening of shorelines. However, measures have been put in place to address the factors that can be controlled and to improve and restore Erie Marsh.
An integral player in the restoration project is the Erie Shooting and Fishing Club. The club donated the property to the Conservancy in 1978, while retaining hunting and fishing rights. The club’s directors and members have remained involved and conscious of problems on the marsh such as invasive species control and wetland erosion.
During the 1940s, the Erie Shooting and Fishing Club constructed multiple dikes around the marsh to control the flow of water from the bay into the wetlands. As the years progressed, dikes were built covering more than 1,000 acres of the marsh. The dikes were a controversial issue at the time: although they provided control over the amount of water flowing into the wetlands, they also segregated Erie Marsh from Maumee Bay. Because the dikes have degraded since the 1940s, part of the restoration project is to build a sufficient passageway for native aquatic species to enter and exit the marsh. As Chris May simply puts it, “diked wetlands that have been segregated from natural waters for decades can still be reconnected and be beneficial for native aquatic species.”
A contributor to the Erie Marsh restoration project is our very own Restoration Director of Michigan, Chris May. Chris and the Conservancy plan to launch the first phase of the restoration project in December. This phase consists of constructing a large fish passageway structure, with two four-foot-diameter openings that lead into a 258-acre open-water area on the south end.
The restoration project consists of constructing a large double-dike distribution canal connecting the management units of the marsh. Within the canal, water is able to rise to a level above the management units, meaning it can be distributed by gravity versus electricity or fuel-powered pumps. Future plans on the marsh include extending the water distribution to the preserve’s northern end.
Pumping will be needed occasionally, and the Conservancy plans on replacing the diesel driven pump and purchasing a modern pump that will flow up to 12,000 gallons per minute and will pump in both directions. This makes it useful for conservation staff to independently manage specific units throughout the marsh.
The restoration project also focuses on the control of invasive and native species. Invasive species (aquatic and terrestrial) cause problems at the marsh. In particular, Phragmites (common reed) poses a harmful threat to both land and water. Originally from Asia, Phragmites can grow up to fifteen feet tall and tends to create monocultures that block out sunlight and put the survival of native species at risk.
“In my opinion, Phragmites is the most devastating invasive species in the entire Great Lakes area,” says May. “Plenty of research shows that common reed reduces the diversity of native fish, birds, insects, and crustaceans. The thatches decompose much more slowly, which actually raises the elevation of the marsh and can completely change the local hydrology.”
An effective method to control the Phragmites is to spray it with herbicides, remove the standing dead material, and then flood the area with at least three feet of water. The future pumps and shored up dikes will allow staff to control and manage flood affected areas without causing damage elsewhere. May is currently collaborating with local, state, and national partners to develop a state-wide management plan to control Phragmites.
Without the help of our partners, the restoration project would not have been able to overcome all of the obstacles Erie Marsh faces. Working together, communicating with one goal in mind: the comeback of Erie Marsh.
| <urn:uuid:c3e73eb6-ec1d-462c-9ea1-3cc057ef10ec> | CC-MAIN-2016-26 | http://www.nature.org/ourinitiatives/regions/northamerica/unitedstates/michigan/explore/restoration-of-the-erie-marsh.xml | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402699.36/warc/CC-MAIN-20160624155002-00014-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.94592 | 975 | 2.9375 | 3 |
Learn a little about the amazing Great Wall of China with your children, and tie in your learning with our activity suggestions, printables and stories.
Great Wall Of Friendship
A mix between a craft and an activity, this "Great Wall" encourages children to think about what they value in their friends and family.
Facts about the Great Wall of China
The Great Wall of China was built over 2000 years ago and is the longest man-made structure ever built. It is an instantly recognisable structure which many people are familiar with, though often know little about. We have gathered below some of the most interesting facts about the Great Wall including when the wall was built, how long the wall is, and if it is visible from the moon.
How long is the Great Wall of China?
The Great Wall of China is approximately 6,000 kilometres long. However, if you were to measure all the individual structures and changes to the wall made over the centuries, it is believed the final measurement would total over 50,000 kilometres!
When was the Great Wall of China built?
The Great Wall was originally built over 2000 years ago, around 221 BC. Most of the current Great Wall was built during the Ming dynasty (between 1368 and 1644).
History of the Great Wall of China
It is thought that the earliest wall was built under the rule of Emperor Qin, who successfully unified parts of China around 221 BC. Previously, individual states had built their own wall defences, but now Emperor Qin sought to connect the walls to provide defences against northern invaders. He ordered the building of the "Wan Li Chang Cheng", as it was known in China. This translates as "the ten thousand li Great Wall". A "li" is a Chinese unit of length; 2 li are equal to 1 km.
Read (or print) the famous story Meng Jiangnu Weeps.
Most of the original wall no longer exists. Over the centuries that followed each dynasty did more work to maintain and develop the wall. The Ming dynasty (1368-1644) carried out a major rebuilding project extending the Great Wall, which resulted in a 6000 kilometre wall which is what is mainly in evidence today.
More Pictures of the Great Wall of China
Can you imagine how difficult it was to build the Great Wall of China? Look at the terrain that it covers! How did the workmen transport the stone? What techniques were used to build on such steep hillsides?
This photo, taken in the early morning, shows how beautiful and astounding the Great Wall can look, and what a marvellous feat of building it was.
Why was the Great Wall of China built?
It is believed that the main purpose of the Great Wall was to protect China from invasion or attack by northern tribes (such as the Mongols).
The Great Wall of China from Space (or from the Moon)
It is actually a myth that the Great Wall can be seen from the moon. However, it can be seen from space, and images have been returned from low-level space vehicles including the space shuttle. The photo on the left is from NASA, and clearly shows parts of the wall.
Who built the Great Wall of China?
The original wall was ordered by the Emperor Qin over 2000 years ago. The wall was constructed by labourers comprising soldiers, common people and criminals. The wall was built of different materials over the centuries. The earliest wall was largely made of compacted earth, surrounded by local stone. Much use was made of local material to keep costs down and enable building to continue quickly. The later Ming wall was largely made of brick. It is estimated that up to 1 million people died while constructing the Great Wall!
How tall is the Great Wall of China?
In places the Great Wall is 25 feet tall. It ranges from 15 to 30 feet wide.
Map of the Great Wall of China
This is a map of The Great Wall of China as it was in the Ming Dynasty (1368-1644) when the wall was rebuilt and extended. Most of the Ming Dynasty wall can still be seen today.
How was the Great Wall of China defended?
The Great Wall included a series of watch towers and forts which could house soldiers, grain and weapons. Beacons could enable the passing of messages quickly along the wall. Special weapons were developed to enable the wall to be defended against attack, replicas of which are on display on the modern day wall. At one time it is thought that up to 1 million soldiers were stationed along the length of the wall!
How long did it take to build the Great Wall of China?
The Great Wall was built over many years. It is believed the original Great Wall was built over a period of approximately 20 years. The Great Wall which is mainly in evidence today was actually built during the Ming dynasty, over a period of around 200 years.
When was the Great Wall of China finished?
The original Great Wall was extended and developed until the rule of the Ming Dynasty. When the Ming rulers were overthrown in 1644, no further work was done on the Wall until recent years in attempts to preserve parts of the structure.
Today many tourists visit the Great Wall of China, and walk along it. You can see some in the photo above.
How many people did it take to build the Great Wall of China?
Many thousands of people were involved in the building of the wall. From records it appears that 300,000 soldiers and 500,000 common people were involved in constructing the original Great Wall under Emperor Qin. Many people lost their lives during this work and archaeologists have discovered many human remains buried under sections of the wall.
| <urn:uuid:6e263d5b-5961-40a2-a410-7ba6fb03db2f> | CC-MAIN-2016-26 | http://www.activityvillage.co.uk/the-great-wall-of-china | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783404405.88/warc/CC-MAIN-20160624155004-00162-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.979998 | 1,155 | 2.953125 | 3 |
The Topic of This Month Vol.22 No.7(No.257)
Cryptosporidium parvum is an enteric protozoon, one of the coccidian parasites. Its infection results from oral ingestion of oocysts (measuring 4.5-5.4 × 4.2-5.0 µm) excreted in patients' stools. The number of oocysts excreted by one patient may reach as high as 10^10. In principle, the clinical symptoms are nonhemorrhagic watery diarrhea, abdominal cramps, and appetite loss (see p. 161 of this issue). There is no known effective drug for treatment of cryptosporidiosis; nevertheless, most normal people recover spontaneously. In immunosuppressed patients, however, the infection tends to become chronic, and severe cases can turn fatal (see p. 162 of this issue). Infection of the gallbladder, bile duct, and respiratory organs is known in immunosuppressed patients. Although C. parvum alone used to be regarded as causing infection among immunologically healthy individuals, C. meleagridis (avian origin) has recently been found in human infection by nucleotide sequence analysis (see p. 163 of this issue). It seems possible that some new pathogenic species could be found in the future. Apart from this, infections of AIDS patients with C. baileyi or C. muris have been reported.
Giardia lamblia (syn. G. duodenalis or G. intestinalis) is also a protozoon, one of the flagellates. Its trophozoite has such morphological characteristics as four pairs of flagella and a ventral disc. Only asexual reproduction is known so far. The cysts, 5-8 × 8-12 µm in size, are excreted in patients' stools. Infection is acquired upon oral ingestion of the cysts. Infection occurs in the duodenum and upper small intestine, and sometimes spreads to the bile duct and gallbladder. The main symptoms are nonhemorrhagic diarrhea, abdominal cramps, and steatorrhea (see pp. 161 and 162 of this issue). Metronidazole is the treatment most often applied.
Cryptosporidiosis and giardiasis have been classified into the category IV notifiable infectious diseases in the National Epidemiological Surveillance of Infectious Diseases (NESID) under the Law Concerning the Prevention of Infectious Diseases and Medical Care for Patients of Infections (the Infectious Diseases Control Law) enacted in April 1999. Physicians who suspect the illness from clinical symptoms must notify the governor of such patients through the nearby health center within seven days after the illness has been confirmed by etiological diagnosis.
Only 13 cryptosporidiosis patients were notified between the enactment of the Infectious Diseases Control Law and the 2nd week of June 2001 (Table 1). Eight of them were imported cases, and the main region of acquiring infection has been the Indian subcontinent. However, as many as 209 giardiasis cases were notified during the same period. The incidence by month and by region of acquiring infection is shown in Fig. 1. The estimated areas of acquiring infection were overseas for 93 cases (including India for 38 cases and Thailand for 17 cases), within this country for 84 cases, and unknown for 32 cases. No seasonality was seen in the occurrence of domestic cases. The age distribution by sex of patients in Japan (Fig. 2) shows that most notified patients were adults, with a peak at 20-34 years of age followed by a lower peak in the 50s. Patients aged less than 20 numbered only seven. There were more male than female patients, at a ratio of 3:1. Reports from other countries indicate that giardiasis is prevalent among children; a recent surveillance performed in the USA shows the highest peak in children aged 0-5 years, followed by a somewhat lower peak in adults aged 31-40 years. This indicates that the adult group has possible exposure to infected children (CDC, MMWR, Vol. 49, SS07, 2000). Given these results, it was pointed out that parasite examinations in Japan cover a relatively large number of particular risk groups, such as travelers to developing countries. To capture the true trend of incidence, pathogen surveillance must be expanded to cover all diarrheal cases.
With regard to the methods used for patients' fecal tests for protozoa, tests for cryptosporidiosis employed methods that enhance detection efficiency, including fluorescent antibody staining (Table 1). On the other hand, for about 75% of tests for giardiasis, conventional light microscopy of stool specimens is still in use; use of cyst concentration or staining has been limited to approximately 25%. For both protozoa, fluorescent antibody reagent kits have already been developed, which have contributed to improving detection efficiency (health insurance does not apply to such kits, though). Regarding protozoan detection methods, the Protozoological Analytical Manual (Cryptosporidium and other enteric protozoa) has been distributed by the National Institute of Infectious Diseases, presenting test procedures in accordance with each laboratory's capability.
Those protozoan parasites are transmitted via drinking water or food, or in some cases by contact infection. They have been placed under the category IV notifiable diseases because the occurrence of patients must be detected as promptly as possible to prevent large-scale outbreaks of infection mediated by drinking water. Waterborne cryptosporidiosis outbreaks have often been reported in the USA and UK since the mid-1980s. In the 1993 incident in Milwaukee, Wisconsin, more than 400,000 citizens were infected. A similar trend has been seen in giardiasis; as many as 42 waterborne outbreaks were reported in the USA between 1965 and 1980. In Japan, outbreaks of waterborne cryptosporidiosis occurred in a building complex accommodating a number of business clients in Hiratsuka City, Kanagawa Prefecture in 1994 (see IASR, Vol. 15, No. 11) and, from drinking water, in Ogose Town, Saitama Prefecture in 1996 (see IASR, Vol. 17, No. 9). In the latter outbreak, about 70% (8,812 people) of the town population were infected.
The Ministry of Health and Welfare (MHW; at that time), taking this situation seriously, organized a study group for urgent control of Cryptosporidium and other enteric protozoa in drinking water in August 1996 (reorganized in August 1997). The study group compiled a tentative guideline for Cryptosporidium control in drinking water (Notice No. 248 by Water Supply Division, Health Service Bureau, MHW October 1997) to present preventive and emergency measures to water utilities and prefectures (partly amended in June 1998). Further, cryptosporidiosis and giardiasis were placed under the category IV notifiable diseases when the Infectious Diseases Control Law was enacted in order to intensify the patient surveillance.
In addition to the waterborne outbreaks, an outbreak involving nine patients who took part in experimental animal infection is known (the Proceedings of the 7th Annual Meeting of Association of Animal Protozoiases, April 1993); there have been only few reports on sporadic cases (see p. 162 of this issue).
Contamination of water with Cryptosporidium or other pathogenic protozoa poses a serious problem because of the difficulty of disinfection or removal once contamination occurs. The clearance efficiency of conventional water treatment can be expected to be 99.9% for Cryptosporidium and 99.99% for Giardia. (Oo)cysts of protozoa, particularly of Cryptosporidium, are highly resistant to chlorine; therefore, chlorine disinfection is not feasible. If source water or drinking water, including well water, contains protozoan parasites, a report to the Ministry of Health, Labour and Welfare is requested according to the manual for health risk management of drinking water (Notice No. 162 by Water Supply Division, Health Service Bureau, MHW, April 10, 1997). In line with this request, contamination of 29 rivers in 13 prefectures was reported between April 1999 and June 2000 (see p. 164 of this issue).
In addition, contamination of swimming pools with Cryptosporidium (accidents from fecal contamination) has captured the attention of Europe and USA (see p. 171 of this issue). Because of the inefficacy of chlorine disinfection, Cryptosporidium contamination of swimming pools might lead to outbreaks of infection. Such general hygienic management as maintenance of environmental sanitary conditions and limitation of use of recreational water facilities by diarrheal cases must be intensified all over Japan.
| <urn:uuid:b67cd3e1-f73b-4b51-9f9a-3892547f50b4> | CC-MAIN-2016-26 | http://idsc.nih.go.jp/iasr/22/257/tpc257.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396100.16/warc/CC-MAIN-20160624154956-00052-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.949813 | 1,819 | 2.671875 | 3 |
Could it be endometriosis?
Susan would have to take a day off work almost every month because of painful periods. She also had recurring diarrhoea. Then in her 30s she found it difficult to get pregnant.
“I’d been diagnosed with irritable bowel syndrome and told I was one of the unlucky girls who had painful periods,” Susan, now 43, says. “Finally after struggling to get pregnant, I was given a laparoscopy and all was revealed – I had what my doctor said was ‘stage 4’ endometriosis which is pretty severe.”
Susan, who went on to conceive two babies but not without difficulty, is one of the 10 percent of women who have endometriosis, a common gynaecological condition, often linked with pain and infertility.
What is endometriosis?
Endometriosis occurs when tissue from the inner lining of the uterus, called endometrium, grows outside the uterus – usually in areas like the ovaries, fallopian tubes, and ligaments that support the uterus as well as sometimes on organs including the bladder, bowel, vagina, and cervix.
This misplaced tissue can develop into growths or lesions which respond to the menstrual cycle in the same way that the tissue of the uterine lining does – that is, each month the tissue thickens.
But outside the uterus, often the tissue is unable to separate itself and shed from the organ it’s adhered to. When this happens each month it can create scarring, adhesions and inflammation which can go on to cause pain, infertility and bowel problems.
Endometriosis is often categorised by health professionals in stages, starting at Stage 1 which has relatively few implants of endometrium outside the uterus, through to Stage 4 which has extensive implants as well as many adhesions.
Even today with the widespread knowledge of this condition, it’s believed many cases remain undiagnosed, written off as just “painful periods”.
Research has found that the average delay between the onset of symptoms and diagnosis is still six to 10 years. That’s a long time, considering that the condition can worsen over the years without treatment for some women and further hamper their future plans to conceive.
Symptoms to investigate
- period pain – immediately before and during the period
- pain during or after sexual intercourse
- abdominal, back and/or pelvic pain
- pain with opening bowels, passing wind or urinating
- abdominal pain at the time of ovulation
- bowel or bladder symptoms, including bleeding, constipation and diarrhoea, increase in urinary frequency
- mood changes
- premenstrual symptoms
- heavy bleeding, with or without clots
- irregular bleeding with or without a regular cycle
- premenstrual spotting
How does endometriosis affect fertility?
Treatment options for endometriosis
- Surgery: this involves removing the sites of endometriosis from the areas outside the uterus.
- Hormonal management: these remedies can suppress the growth of endometrial cells and is often used in milder cases.
- Natural remedies: some women claim to have experienced some relief of symptoms from herbal and other natural medicines.
- Pain management: as this is often the most pervasive symptom, many women get relief from analgesics and anti-inflammatory medicines and other remedies like hot water bottles, exercise, relaxation, controlled breathing and positive thinking to alleviate pain.
Related fertility health articles
- Age and fertility
- Ectopic pregnancy
- Egg timer fertility test
- Pain during ovulation
- Why can't we get pregnant?
This article was written by Fiona Baker for Kidspot, Australia's best family health resource. Sources include Jean Hailes for Women’s Health endometriosis website.
Last revised: Wednesday, 2 November 2011
This article contains general information only and is not intended to replace advice from a qualified health professional.
| <urn:uuid:a0f695b1-1f93-49ed-a27f-c5c874771851> | CC-MAIN-2016-26 | http://www.kidspot.com.au/familyhealth/Pregnancy-Health-Could-it-be-endometriosis+6365+184+article.htm | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396106.71/warc/CC-MAIN-20160624154956-00150-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.957541 | 943 | 2.734375 | 3 |
Extended Environments Markup Language - EEML
For a full discussion of EEML, please refer to http://www.eeml.org.
Extended Environments Markup Language (EEML) is a protocol for sharing real time sensor data between remote responsive environments, both physical and virtual. It can be used to facilitate direct connections between any two environments; it can also be used to facilitate many-to-many connections as implemented by the web service Pachube, which enables people to tag and share real time sensor data from objects, devices and spaces around the world. Further information is available at http://www.eeml.org/.
Possible end-users range from construction managers, large-building occupants and architects to electronics manufacturers and interactive artists and designers.
EEML supports installations, buildings, devices and events that collect environmental data and enables people to share this resource in realtime either within their own organisations or with the world as a whole via an internet connection or mobile network access. It can enable buildings to "talk", sharing remote environmental sensor data across the network in order to make local decisions based on wider, global perspectives. The EEML protocol supports datastream sources that respond to and exchange data with other installations, buildings, devices and events through data stream tagging. (This user-configurability allows people who use Pachube to identify their datastreams to others who can then search for data streams that they want to use).
EEML is a markup language that describes the data output of sensors and actuators, often in an architectural context but also in interactive environments, interface devices and even Second Life. Crucially, EEML supports the addition of context or "meta-data" about where the data came from. This is meaningful both to machines and humans when searching for data streams that they particularly need without knowing the exact details of the source. It is also important for those wishing to make spontaneous or previously unplanned connections between data streams from different sources with common contexts. The source that EEML is designed to support is data from sensors and devices deployed in the environment. The term "environment" encompasses both the physical world of, for example your home or studio as well as the virtual world of, for example Second Life.
EEML can be used along side well-established XML formats for data interchange such as Industry Foundation Classes in the construction industry, developed by the International Alliance for Interoperability where IFC2x has gained acceptance as one of the formats for Building Information Modeling or BIM. Crucially, using EEML, sensor data from buildings can be mapped onto building components in realtime and exchanged with simulation software and facilities management software thus extending the benefits of BIM to the post-occupancy phase.
EEML is designed to be extensible to support on-going development of types of responsive environments not envisaged at the start of development.
Sample EEML document:
<?xml version="1.0" encoding="UTF-8"?>
<eeml xmlns="http://www.eeml.org/xsd/005"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://www.eeml.org/xsd/005 http://www.eeml.org/xsd/005/005.xsd"
      version="5">
  <environment updated="2007-05-04T18:13:51.0Z" creator="http://www.haque.co.uk" id="1">
    <title>A Room Somewhere</title>
    <feed>http://www.pachube.com/feeds/1.xml</feed>
    <status>frozen</status>
    <description>This is a room somewhere</description>
    <icon>http://www.roomsomewhere/icon.png</icon>
    <website>http://www.roomsomewhere/</website>
    <email>myemail@roomsomewhere</email>
    <location exposure="indoor" domain="physical" disposition="fixed">
      <name>My Room</name>
      <lat>32.4</lat>
      <lon>22.7</lon>
      <ele>0.2</ele>
    </location>
    <data id="0">
      <tag>temperature</tag>
      <value minValue="23.0" maxValue="48.0">36.2</value>
      <unit symbol="C" type="derivedSI">Celsius</unit>
    </data>
    <data id="1">
      <tag>blush</tag>
      <tag>redness</tag>
      <tag>embarrassment</tag>
      <value minValue="0.0" maxValue="100.0">84.0</value>
      <unit type="contextDependentUnits">blushesPerHour</unit>
    </data>
    <data id="2">
      <tag>length</tag>
      <tag>distance</tag>
      <tag>extension</tag>
      <value minValue="0.0">12.3</value>
      <unit symbol="m" type="basicSI">meter</unit>
    </data>
  </environment>
</eeml>
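For readers who want to see how a document like the sample above might be consumed programmatically, the short Python sketch below parses it with the standard library's ElementTree module and prints each datastream's tags, current value and unit. This example is not part of the original EEML material; it is only a minimal illustration, and the local file name eeml_sample.xml is an assumption.

# Minimal sketch (not from the EEML spec): read the sample EEML document above
# and print each environment's datastreams. Assumes the XML has been saved
# locally as "eeml_sample.xml" (a hypothetical file name).
import xml.etree.ElementTree as ET

NS = {"eeml": "http://www.eeml.org/xsd/005"}  # default namespace used in the sample

root = ET.parse("eeml_sample.xml").getroot()

for env in root.findall("eeml:environment", NS):
    title = env.findtext("eeml:title", default="(untitled)", namespaces=NS)
    print(f"Environment {env.get('id')}: {title}")
    for data in env.findall("eeml:data", NS):
        tags = [tag.text for tag in data.findall("eeml:tag", NS)]
        value = data.find("eeml:value", NS)
        unit = data.find("eeml:unit", NS)
        print(f"  datastream {data.get('id')}: tags={tags}, "
              f"value={value.text if value is not None else '?'}"
              f" {unit.text if unit is not None else ''}")

Run against the sample, this would list the "temperature", "blush" and "length" datastreams of the environment titled "A Room Somewhere".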
| <urn:uuid:82d6026e-54d8-4e8e-8167-d703bd53bb04> | CC-MAIN-2016-26 | http://www.haque.co.uk/eeml.php | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402479.21/warc/CC-MAIN-20160624155002-00172-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.819316 | 1,091 | 2.90625 | 3 |
A. The hemlock woolly adelgid (Adelges tsugae) has been a pest in the United States since 1924. This insect, a native of Asia, is a serious pest of hemlocks from the Smoky Mountains to southern New England. Hemlock forests are severely impacted because it is very difficult to get complete coverage of the foliage using aerial applications of insecticides. In a landscape setting, and with proper equipment, treatment is much more effective. Trees of all ages are attacked and insects are more devastating on those that are under environmental or cultural stress.
Woolly adelgids are sucking insects that slow or prevent tree growth and cause needles to discolor and to drop prematurely. The adelgids prefer the new twig growth, feeding on sap and injecting their saliva. The damage first appears as a light green to yellow discoloration, followed by premature needle drop, branch dieback and general loss of vigor. The symptoms will begin at the base of the tree and gradually move to the top.
Trees can die from an adelgid attack, but it may be four years to six years, depending on size, stress and location of the tree. Dense stands of hemlock trees may be more difficult to manage because of incomplete coverage of pesticides and the movement of the adelgid to neighboring trees.
It is important to continually monitor susceptible hemlock species within a landscape setting. Control can be achieved with oils and insecticidal soaps. The most vulnerable stage of the woolly adelgid is the immature crawler stage (nymphs) that moves out of the protection of the woolly secretion. These immature insects are exposed on the new growth from June to October, at which time they begin to secrete the white protective wax. The 1 percent horticultural or summer oils are very effective when used between August and September.
It is important to completely cover the top and bottom of the needles and branches, and avoid using the materials if the temperatures are above 90 degrees and there is 90 percent humidity. Lower humidity and sunny days are ideal for quicker drying on the plant surface and reducing the potential for a phytotoxic response (burning of plant). Heavier oils, referred to as dormant oils, are used as the name implies, during the dormant part of the season when leaves are off the trees. Dormant oils are applied in the spring, prior to budbreak (March/April) at the rate of 3 percent to 5 percent. Fall applications of dormant oils may interfere with twig development and the breakdown process of the oil.
Oils have many positive attributes and are important in an integrated pest control strategy, but there are some precautions when using them:
-- Avoid using with fungicides.
-- Do not apply when foliage is wet or rain is expected.
-- Do not apply when humidity is greater than 90 percent and temperatures are higher than 90 degrees.
-- Do not apply to drought-stressed plants.
-- Do not apply to sensitive plants (read the label).
-- Do not apply after buds are open.
-- Do not apply after leaf drop.
-- Do not spray on conifers after November or December because it will remove the protective waxy substance.
-- Oils can change the blue color of certain plants to green, but the new growth will be blue.
-- As with any material, take the necessary safety precautions as printed on the label.
Woolly adelgid can be effectively controlled in a home landscape. It is important to monitor for the pest and apply materials when the plants are most vulnerable. Remember to use the lighter horticultural oils in the summer and heavier dormant oils before growth in the spring.
Bill Hlubik's "Plant Talk" column appears every Thursday in The Star-Ledger. Bill is a professor and agricultural and resource management agent for Rutgers Cooperative Extension-The New Jersey Agricultural Experiment Station, Rutgers University. He is also a host of the "If Plants Could Talk" television series on NJN Public Television. Send your garden inquiries to Plant Talk, The Star-Ledger, 1 Star Ledger Plaza, Newark, N.J. 07102-1200 or e-mail them here.
| <urn:uuid:45151be5-d9f4-4016-b0c6-ccc42d913689> | CC-MAIN-2016-26 | http://www.nj.com/homegarden/garden/index.ssf/2008/08/use_oils_to_control_hemlock_wo.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395346.6/warc/CC-MAIN-20160624154955-00159-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.928268 | 874 | 3.71875 | 4 |
Howard Norman is the author of the highly regarded novel THE NORTHERN LIGHTS (1987). In Norman’s second novel THE BIRD ARTIST, Fabian Vas lives in the remote village of Witless Bay, Newfoundland. As the narrator of the novel, the reader is presented with the matter-of-fact world that Vas inhabits. Because of the harshness of the environment, there is a toughness required of the citizens of Witless Bay. The terrain punishes anyone who is weak of body and/or of spirit. The novel’s action takes place in 1911. Vas does his best to make a living by drawing birds. He has talent, but there are so few people willing to pay money for his art. Vas, though, is more than an illustrator of birds. He admits to having murdered Botho August, the lighthouse keeper. To better understand what drove Vas to commit murder, he relates what previously has transpired in the community of Witless Bay.
Norman writes with a sure hand. There is no pretension in his approach to the story. The characters he has created are gruff, no-nonsense types of people. Margaret Handle is a hard-drinking woman who speaks her mind and passionately loves Fabian Vas. She also has an affair with the lighthouse keeper. Vas’s mother Alaric is another strong-willed female who takes up with the lighthouse keeper when her husband is away for the summer. In this cruel environment, the people seem to act upon their passions without much regard for the consequences. Jealousy and revenge torment Vas as well as other characters in the novel. THE BIRD ARTIST is a wonderfully vital creation. Men and women struggle against a brutal locale, against one another, and against themselves in order to survive with some degree of dignity.
| <urn:uuid:3ccbdf30-a70c-4422-8ac8-399e64e0fb1e> | CC-MAIN-2016-26 | http://www.enotes.com/topics/bird-artist | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395346.6/warc/CC-MAIN-20160624154955-00120-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.975566 | 372 | 2.640625 | 3 |
In 2010, the first fuel economy standards were created, requiring cars and light-duty trucks (like pickups) of model years 2012 through 2016 to average 35.5 miles per gallon. In 2011, new standards were set for heavy-duty vehicles like semi trucks, garbage trucks and buses.
Now, federal regulators are in the middle of setting fuel economy standards for cars for model years 2017 through 2025. Those proposed standards call for a fleet-wide average of 54.5 miles per gallon by 2025.
Together, the two sets of rules for cars will reduce American oil consumption by 2.2 million barrels a day by 2025 – more than our daily 2010 oil imports from the entire Persian Gulf. They’ll also cut greenhouse gas pollution by more than 6 billion metric tons, and will save owners more than $4000 over the life of their new car. The heavy-duty vehicle regulations will additionally reduce our oil consumption and greenhouse gas pollution.
Status: The first set of rules for cars and the rules for heavy-duty trucks are final. The second set of rules for cars has been proposed, and EPA just finished holding public hearings and accepting comments about them; they have not been finalized yet. But – opponents have sued to stop the first standards. The DC Circuit Court will hear arguments in the case at the end of this month.
Oil and Gas New Source Performance Standards and Hazardous Air Pollutant Standards
EPA has proposed new emissions limits for oil and natural gas production and distribution facilities. EPA has recognized oil and natural gas production as a major source of toxic air pollution since 1992, and added natural gas transmission and storage as a major source of pollution in 1998. Since that time, technological advances have led to more and different types of oil and gas exploration, especially shale gas production through the controversial process called fracking – but the last time national clean air standards for these processes were updated was 1985 in one instance and 1999 in another. One expert called the current standards “limited, inadequate and out of date.”
Status: EPA issued a draft rule last summer, and finished accepting comments on it at the end of November. They could issue a final rule as soon as this spring.
Greenhouse Gas Standards
EPA is expected to propose standards to limit greenhouse gas emissions from new or renovated power plants. Under a 2010 court settlement, EPA was supposed to propose the rules last summer, but it missed that deadline.
Adding to the problem is a huge lawsuit that challenges whether EPA has the authority to regulate greenhouse gases at all. After a 2007 decision by the Supreme Court, EPA had to determine whether greenhouse gases posed a threat to human health and welfare. It found that they did.
Status: The court will hear arguments at the end of this month and could rule any time after that. If they rule against EPA, the greenhouse gas standards and the fuel standards already in place will be in serious jeopardy. If they rule for EPA, then EPA can continue its work to reduce carbon pollution from the power and other sectors.
Photo credit: ffffound
| <urn:uuid:11ef0b74-940b-4e04-9232-3bd7d39f345e> | CC-MAIN-2016-26 | http://www.care2.com/causes/regulations-every-mother-should-love.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392069.78/warc/CC-MAIN-20160624154952-00089-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.961207 | 630 | 2.953125 | 3 |
Primary Personality Disorders
by Stuart Sorensen – RMN
There are many different types of personality disorder. These can be broadly categorised into two discrete types called primary and secondary personality disorders (Lyttle J. 1992). Secondary personality disorders are essentially neurotic in nature and are generally more distressing for the sufferer than for those around them. Primary personality disorders tend to be most distressing for the people associated with sufferers. These are often termed antisocial or psychopathic personality disorders and it is this group of disorders which is the focus of this handout.
There are three classifications of primary personality disorder:
Let’s begin by examining the formation or aetiology of primary personality disorder.
Mainstream psychological theory divides human behavior into two broad categories called adaptive and maladaptive. These can also be described as functional and dysfunctional. Simply put adaptive behavior works and is not generally disruptive either for the protagonist or those around them. Maladaptive behavior can be extremely disruptive and tends to be distressing.
Both patterns of behavior, adaptive and maladaptive are learned by trial and error. If we grow up in a society which rewards adaptive behavior we learn to behave in adaptive ways. On the other hand if our upbringing is characterized by manipulation, emotional blackmail, violence or a host of other maladaptive behaviors then those are what we learn. Incidentally it is often a mistake to assume that primary personality disorders are automatically the result of ‘bad parenting’.
People can be influenced by a wide range of sub-cultures during their formative years and learn their social skills from a wide variety of sources including friends, social culture and the media. As a rule knowledge of the aetiology of personality disorder is a useful diagnostic and preventative tool but it is usually unhelpful as a basis of blame attribution. Making parents feel guilty after the damage has been done helps nobody and can cause resentments which de-rail the therapeutic process. It is often useful to share this information with parents in order to prevent the formation of personality disorders in their children but not once the personality has become fixed. Also, it must be said, parenting is often completely irrelevant.
Put simply, people’s personalities are shaped by their experiences. If we grow up in a loving environment where we are encouraged to feel safe and to explore our world without fear of condemnation we develop into confident people with high self-esteem. If on the other hand we are not valued as children and not taught the value of others we grow up with poor self-esteem and little concern for those around us.
Whatever our upbringing and personality type it is generally accepted that the personality ‘fixes’ during the third decade of life (the twenties). After this time it is difficult and arguably impossible to alter a personality in any meaningful way. In some cases people with a milder form of primary personality disorder can be helped to behave more adaptively but not to actually change their personality. Research has demonstrated that even this limited degree of success can only be achieved with long term therapeutic intervention lasting one year or more in a dedicated therapeutic community. Attempting to ‘treat’ primary personality disorders in any other type of environment tends to create disruptions, jeopardizes other patients in many cases and serves little or no useful purpose. Medium or high-grade primary personality disorders do not appear to be amenable to change at all after this age.
The ICD-10 is the diagnostic reference book for mental and behavioral disorders and is accepted throughout Europe. It describes primary personality disorder as Dissocial personality disorder (World Health Organization – 1992) and lists the traits of this personality disorder as follows:
"(a) callous unconcern for the feelings of others;
(b) gross and persistent attitude of irresponsibility and disregard for social norms, rules and obligations;
(c) incapacity to maintain enduring relationships, though having no difficulty in establishing them;
(d) very low tolerance to frustration and a low threshold for discharge of aggression, including violence;
(e) incapacity to experience guilt or to profit from experience, particularly punishment;
(f) marked proneness to blame others, or to offer plausible rationalizations, for the behavior that has brought the patient into conflict with society."
In order to make the diagnosis of Dissocial Personality Disorder at least three of these traits must be present and enduring over time. Let’s look at how these personality traits interact to create the pattern of behavior typical of this disorder.
Callous unconcern for the feelings of others can be defined as lack of conscience and comes from the inability to empathize with others. This effectively removes the normal social barriers associated with respect for other people. The dissocial personality disordered person will quite literally ride roughshod over anyone in order to get what they want and will be incapable of feeling any remorse or even understanding right and wrong in the normal way. Hence the characteristic gross and persistent attitude of irresponsibility and disregard for social norms, rules and obligations. It also explains the incapacity to maintain enduring relationships although their typically charming front means that they have no difficulty in establishing them.
These people crave stimulation and are easily bored. This is why they have a very low tolerance to frustration which combined with their inability to empathize explains their low threshold for discharge of aggression, including violence.
Dissocial personality disorders are characterized by marked proneness to blame others, or to offer plausible rationalizations…. To put it another way they do not generally accept responsibility for their misconduct which is another reason why those around them tend to suffer. Their plausibility often results in innocent bystanders being blamed for offences in which they had no part and friendships can be destroyed. Dissocials are particularly dangerous with regard to vulnerable people such as the disabled or mentally ill who are often unable to recognize or withstand their behavior.
Finally their incapacity to experience guilt or to profit from experience, particularly punishment is the reason for their resistance to treatment. This is partly because they experience stimuli less intensely than other people do. Put another way they feel pain less and don’t experience any emotions as intensely either. That’s why they’re so easily bored and in part explains their need to engineer dramatic (and often extremely disruptive) situations. Such situations may be said to ‘punctuate the emptiness’ caused by their high stimuli threshold.
Many people are drawn into what has come to be known as the savior fantasy in relation to these people and will patiently endure a range of unpleasant circumstances in an attempt to put them back on ‘the right track’. An excellent source of information about the sort of strategies used by dissocials in these situations is GAMES PEOPLE PLAY (Berne E. – 1964).
So what can we do?
In any behavioral disorder it is vital to draw firm and consistent boundaries. This is very different from the usual stance people take when dealing with others. As a rule in our society 'no' tends to mean 'no - unless you can persuade me otherwise'. With psychopaths 'no' must be absolute. And it must be consistently maintained throughout the team.
Psychopaths tend to play one person off against another and will use your friends and colleagues to emotionally blackmail you by gaining their support with plausible explanations for their behavior. Typically they will explain how hard they are trying and how difficult it is to cope with their problems - particularly when that callous nurse (you) won't give them any slack. Then comes the trump card: 'How can anyone expect me to get better when the nurse (you) treats me so unfairly?' Students are vulnerable because they are not yet used to this sort of manipulation and regularly get hurt emotionally by strategies such as these.
The same is true of the patient's parents and associates, which is why they often visit the ward to verbally attack the staff. These people often complain officially about staff. Be aware that these people generally are doing precisely what they believe to be right and are only fighting against the perceived injustice the psychopathic patient has persuaded them of. Incidentally this is why nurses on psychiatric wards are so insistent that the approach is consistent and that the rationale is well documented. Psychopaths are dangerous people to the inexperienced.
Perhaps the greatest skill in dealing with dis-social personalities is assertiveness. See the related handout in this series. Assertiveness skills help you keep boundaries and avoid the manipulation and emotional blackmail.
Relatives find it extremely difficult to understand and deal appropriately with psychopathic family members. This is understandable and certainly not a reason to dismiss or otherwise under-value them or their experience. Just as you had no knowledge of psychopathy before you began your training - neither can they be expected to. They are generally reasonable people faced with a bewildering situation and doing the best they can. It is often possible to help them by teaching assertiveness but don't call it that - most parents and relatives prefer to think of it in the popular guise of 'tough love'. The message is the same. Essentially it's important to help them understand their personal rights and also to accept that the psychopath is an adult. However bizarre their relatives' behavior may be, however destructive or offensive it is the psychopath's own responsibility. Relatives have no need to feel responsible. Incidentally they don't need to run around after the psychopath either although that is often extremely hard for relatives to hear and the message often fails to get through at all.
Those who do take this message on board often find that the psychopathic relative will eventually learn to leave them alone but this may result in total separation. This is no different from a bereavement resulting from death of a loved on. For that reason it is inappropriate to try to 'force' the relative into an assertive position. The resulting separation may be too hard for them to cope with. It is enough to help them recognize the issues. Anything further must remain their own choice.
You, however, have a professional responsibility and, excluding personal acquaintances and relatives, have a duty to maintain a professional distance. This is not simply an archaic instruction which has no basis in reality. This is a vital part of your care, not only for the psychopathic patient but also for the other vulnerable patients in your charge. The therapeutic relationship involves many difficult things which have nothing to do with 'ordinary' life outside hospital. Psychiatric nurses simply cannot afford to let psychopathic patients manipulate them.
This short handout will not make you an expert but it will help you keep yourself emotionally secure. It will also help you to protect the vulnerable mentally ill patients in your care. Please feel free to discuss any or all of the issues raised with your mentor on the ward. Enjoy your placement.
Berne, E. (1964). Games People Play.
Lyttle, J. (1992).
World Health Organization (1992). The ICD-10 Classification of Mental and Behavioural Disorders.
Compliments of Stuart Sorensen – RMN
Copyright © Patty Fleener, M.S.W. All rights reserved.
| <urn:uuid:279a2a91-0004-43f5-998b-cb3f71b6156a> | CC-MAIN-2016-26 | http://www.mental-health-today.com/articles/pd.htm | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783408840.13/warc/CC-MAIN-20160624155008-00063-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.956714 | 2,261 | 3.234375 | 3 |
Flag of Australia
The flag of Australia is a national flag. In 1901, Australia became a single country, instead of six separate colonies. A competition was held to design a new flag for the new country. The winning flag has a blue background, the Union Jack, and six stars. Five stars are in the shape of the constellation the Southern Cross, the other is the Commonwealth Star. The flag has been used from 1903, but did not become the official flag of Australia until 1953.
Competition
On 29 April 1901, some private companies, with the support of the Australian Government, set up a competition to design a new flag for Australia. The prize money was ₤200, including ₤75 from a magazine and ₤50 from a tobacco company. There were 32,823 designs entered into the competition. These were put on show in Melbourne at the Royal Exhibition Building. The judges chosen to pick the best flag, drawn from the army, navy, merchant navy, and pilot service, were:
- Captain Clare of HMS Protector
- Captain Edie, Superintendent of Navigation, Sydney
- Captain J.A. Mitchell, from the Victorian Pilot service
- Lieutenant Thompson, from HMS Katoomba
- Mr.J.W. Evans, Member of the House of Representatives from Tasmania.
The judges took eight days to choose a winning flag. Five nearly identical entries were selected as joint winners, and the prize was shared between their designers:
- Mrs. Annie Dorrington, an artist, of Perth
- Egbert John Nuttall, an architect, of Melbourne
- Ivor Evans, a 14 year old school boy, of Melbourne
- Leslie J. Hawkins, an apprentice optician, of Sydney
- William Stevens, a ship's officer, of Auckland, New Zealand.
The winning flag was flown from the top of the Exhibition Building. The Prime Minister Edmund Barton wrote to the Governor-General to get approval from the King for the new flag. This was officially announced on February 20, 1903.
The flag
Some of the winning flags had different numbers of points on the stars of the Southern Cross. This was simplified to seven points for the four largest stars and five for the small one. The Commonwealth Star had six points for the six states. This was the design in 1903. In 1912, an extra point was added to the Commonwealth Star for the Territories of Australia.
In 1934, the government published how the Australian flag should be made. The flag of the United Kingdom, the Union Jack, was still the official flag in Australia until 1954. By law, only the Australian government was allowed to fly the blue Australian flag. Everyone else who wanted to use an Australian flag used the red one, called the Red Ensign. On 15 March 1941, the Prime Minister of Australia, Robert Menzies, gave permission for the blue flag to be used on public buildings, schools and by the public. Australian ships would use the Red Ensign.
In 1954, the Australian Flag became the official Australian National Flag. The Australian flag is the most important flag flown in Australia, and must be given more importance than flags from other countries. When it is raised or lowered people should face the flag and not talk. It should only be used properly, for example, it can not be used to cover a table or a seat, and it can not be allowed to touch the ground.
Other Australian flags[change | change source]
The Royal Australian Navy has had its own flag from 1 March 1967. This looks like the national flag but has a white background with blue stars, called the Australian White Ensign. The Royal Australian Air Force got its own flag in 1949. This was changed in 1981 to feature a red kangaroo.
References[change | change source]
- Castles, Ian (1989). Year Book Australia 1989. Canberra: Australian Bureau of Statistics. pp. 35-37. ISSN 0312-4746. http://www.abs.gov.au/AUSSTATS/[email protected]/Previousproducts/1301.0Feature%20Article11989?opendocument&tabname=Summary&prodno=1301.0&issue=1989.
- Foley, Carol A. (1996). The Australian Flag: Colonial Relic or Contemporary Icon. The Federation Press. ISBN 1862871884.
- "Flags". Australian Encyclopaedia IV. (1958). Angus and Robertsom. 95-96.
- History of the Australian national flag (Part 4). Flagspot. Accessed 6 November 2008.
|
<urn:uuid:9af309a7-88ed-42c0-85f3-78a07b53bbac>
|
CC-MAIN-2016-26
|
https://simple.wikipedia.org/wiki/Flag_of_Australia
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783400031.51/warc/CC-MAIN-20160624155000-00130-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.93365 | 929 | 3.21875 | 3 |
James A. Ramage, Andrea S. Watkins. Kentucky Rising: Democracy, Slavery, and Culture from the Early Republic to the Civil War. Lexington: University Press of Kentucky, 2011. 480 pp. $40.00 (cloth), ISBN 978-0-8131-3440-6.
Reviewed by Timothy Jenness (Lincoln Memorial University)
Published on H-CivWar (April, 2013)
Commissioned by Hugh F. Dubrulle
Henry Clay’s Kentucky and the Evolution of Trans-Appalachian America
Kentucky statesman Henry Clay has been lauded by many historians as one of America’s leading nationalists during the Republic’s early years. Indeed, before fellow Kentuckian Abraham Lincoln’s political ascendancy, Clay’s presence on the national stage had been permanently etched onto the nation’s historical scrolls. Though he was repeatedly denied the presidency, nineteenth-century historians John W. Barber and Henry Howe claimed that the Kentuckian “stands with the nation as one whose affections were filled with the idea of the glory and welfare of the American republic.” For James A. Ramage and Andrea S. Watkins, the “national legend” that arose around his name is both the starting point and one of the themes of their recently published collaboration, Kentucky Rising (p. 1). Focusing on such topics as Clay’s influence, patriotism, Kentuckians’ martial spirit and engagement in the democratic process, and class fraternization, Ramage and Watkins argue that between the American Revolution and Civil War, Kentuckians were driven by “the optimistic and hopeful dreams of a rising globally oriented society” (p. 16). These dreams coexisted with a belief system that viewed slavery as a necessary evil; one based on Clay’s claim that slavery’s status within Kentucky’s borders should be determined by Kentuckians. According to the authors, this was “the lodestar that guided Kentucky through the Civil War” (p. 17).
Kentucky’s status as a border slave state and role as a key participant in the Ohio River Valley’s growing economy make it an important subject for historical analysis. Ramage and Watkins’s book is a credible addition to the historiography on the Bluegrass State that includes works by Stephen Aron, John B. Boles, Robert V. Remini, and more recently, Harry S. Laver, Christopher Phillips, and Anne E. Marshall. These historians have offered tremendous insight into the important role Kentucky played in nineteenth-century America. In their fifteen thematic chapters, two of which focus on Clay and three on the Civil War, Ramage and Watkins consider a variety of topics that they believe illustrate the ways in which Kentuckians’ lifestyle and priorities reflected their patriotism, self-sacrifice, and belief in liberty, all of which, they argue, pervaded the commonwealth’s antebellum heritage. Other topics include art and architecture, the War of 1812 and Mexican-American War, entertainment and culture, religion, women, science, slavery, and, of course, politics.
The book’s first two chapters introduce the concrete and mythical nature of Clay’s role in Kentucky’s and America’s history as pivotal to the development of both. For the authors, Clay in many ways was the personification of Kentucky’s leading role in white Americans’ effort to build a new democratic nation whose frontier was in a constant state of transition. His “American System” of protective tariffs, a national bank, and internal improvements dovetailed nicely with antebellum Lexington’s status as the West’s leading manufacturing center (p. 17). According to Ramage and Watkins, Clay’s economic plans grew out of and “mirrored the hopes of [hemp] manufacturers and investors” in the town (p. 26). Late in life, his standing as a national hero was acknowledged by most contemporaries who, by his death in 1852, were well aware of his Union-saving compromises. Though the authors break little new ground in their assessment of Clay’s leadership, they correctly point out that most Kentuckians agreed with his attitude toward slavery. In Clay’s view, slavery was evil but necessary to maintain public safety for all, unless freed slaves were educated or deported from the United States. More important, it was up to the state to determine the institution’s status within its borders. His membership in the American Colonization Society was a reflection of his belief that such a policy would provide blacks with the opportunities to which they were entitled. In explaining their decision to focus so much initial attention on the statesman, Ramage and Watkins argue that his spirit and national ideals lived on in the national public memory. Clay and Kentucky, they suggest, were synonymous and were remembered together as “champions of the Union” (p. 56). In subsequent years, Kentucky would adhere to his views on slavery, states’ rights, opposition to emancipation, and the enlistment of black troops, although George Prentice, the editor of the Louisville Daily Journal, believed that Clay would never have betrayed the Union.
The authors’ title, Kentucky Rising, suggests their belief that the commonwealth played a leading role in the nation’s development. In that context, chapter 3 focuses on the growth of art and architecture in the state. As the center of the arts community during the first quarter of the nineteenth century, Lexington attracted some of the region’s finest artists. Painter Matthew Harris Jouett distinguished himself in portraits and miniatures, John James Audubon focused on the sketching of birds, Joel Tanner Hart carved sculptures of Cassius M. Clay and Andrew Jackson, and Benjamin Latrobe became a prominent architect. Ramage and Watkins argue that the state’s growth in population, turnpikes, and later railroads “facilitated the exchange of ideas and materials” (p. 79). In so doing, they link the growth of Kentucky’s native culture to the nation’s development.
Chapters 4 and 6 explore the relationship between politics, economics, and culture. Kentucky’s budding democracy reflected not only the importance citizens placed on participation in the public arena, as witnessed by voter turnouts that frequently surpassed 70 percent, but also the degree to which Kentuckians’ behavior paralleled that of their fellow Americans. Arguing that the measure of a state’s influence was met by its ability to produce leaders, Ramage and Watkins point out that “national political leaders frequently looked to Kentucky for qualified men to lead the nation” (pp. 84-85). In this regard, though, one wonders about the degree to which Kentucky was really unique given the plethora of leaders Virginia and Massachusetts provided during the same period. Nonetheless, Clay’s American System, they argue, benefited everyone, especially after the Louisiana Purchase permanently opened the Mississippi River. The changes wrought by steamboats, for instance, enabled politicians and entertainers to bring their agendas and performances to people on the frontier. For “garrulous and socially minded” Kentuckians who took advantage of the changing environment, class fraternization was a natural byproduct (p. 129). As early as 1817, Kentuckians attended theatrical and music performances where exposure to great composers, such as Beethoven and Mozart, further broadened their horizons and enabled them to brush shoulders with other social classes. As the authors put it, settlers came to Kentucky for a better life, but they continued to identify with the East from where they came and to participate “in European and global culture” (p. 147).
In chapters on the War of 1812 and Mexican-American War, Ramage and Watkins delve into the nature of Kentuckians’ famous martial spirit, a legacy of military tradition, they argue, that can be traced back to the commonwealth’s frontier era. The courage and élan of Kentucky’s militia in such engagements as the Battle of the Thames during the War of 1812 and the Battle of Buena Vista during the Mexican-American War served to solidify Kentucky’s reputation despite disparaging remarks by General Andrew Jackson after the Battle of New Orleans in 1815. The authors argue that many Kentuckians supported the Mexican conflict, in part, because it was the desire of younger men “to gain personal honor by identifying with the Kentucky tradition of patriotism and honor through military service” (p. 171). It should be pointed out, however, that similar attitudes could be found in other southern states. The martial tradition was alive and well across the South throughout the antebellum period.
Ramage and Watkins contend that religion played a central role in Kentuckians’ lives. By the early nineteenth century, the Second Great Awakening had engulfed the commonwealth, converting many people and diversifying American Protestantism in the process. The authors point out that the revivalism split the Presbyterians even as it benefited the Baptists and Methodists because they tended to be more democratic than other denominations. In Kentucky, blacks and whites usually worshiped together, although they were segregated with blacks sitting in the back of the church or the balcony. Ramage and Watkins argue that the growth of black Baptist churches was a direct result of the tremendous growth in black membership in predominantly white churches. Church membership, they aver, enabled both blacks and white women to step out from underneath the patriarchal domination of white men.
Two chapters on science seek to support the authors’ emphasis on Kentucky’s leadership during the nation’s early years. The creation of Transylvania University, for example, reflected Kentuckians’ “forward-thinking vision,” in part, because it was the first university west of the Appalachians (p. 193). In Ramage and Watkins’s view, this vision was consistent with the same positive outlook Clay had for national development. After a medical school opened in Louisville in the 1830s, Kentucky took the lead in medical and scientific research, as medical professors sought innovations in smallpox vaccination, surgical anesthesia, and quinine treatment for fevers.
The book’s final five chapters focus on slavery and the Civil War. Kentucky’s version of the institution, like that in other slave states, was influenced by both location and economy. As in other border states, slavery did not dominate Kentucky’s labor force because cotton, rice, and sugar were not profitable. Many bondsmen resided on small farms where they tended to live and work alongside their masters cultivating hemp, the state’s most important cash crop. Kentucky chattels endured a variety of abuses similar to those faced by their brethren farther south. Despite strong antislavery sentiment, Kentuckians passionately defended their labor system. Unlike much of the slaveholding South where, beginning in the 1830s, people came to see slavery as a “positive good,” Kentuckians continued to view the institution as a “necessary evil” until the Civil War. According to Ramage and Watkins, this enabled them to hold conflicting views and created other challenges. Such a view, they argue, essentially led to political inertia because “it prevented a change in attitude regarding slavery that led to definitive action” (pp. 258-259). Colonization was a popular cause in the 1830s and 1840s but white citizens opposed immediate emancipation because of the potential havoc it would bring to the economy. Conservative opponents of slavery preferred gradual emancipation, but such leaders as Cassius M. Clay, Henry’s cousin, failed to gain many converts to the cause. Fears of a race war also hindered the effectiveness of emancipationists’ arguments. Divisions among the state’s white citizenry doomed emancipation until the Thirteenth Amendment was ratified in 1865.
Kentucky’s location on the border enhanced its importance to the Union war effort, particularly in 1861 when the attack on Fort Sumter threw the Ohio River Valley into turmoil. Like the rest of the Upper South, it was hopelessly divided--forty thousand men fought for the Confederacy and more than one hundred thousand wore Union blue. President Lincoln handled his native state carefully during the war’s early months, yet as the conflict progressed, it became a stated goal of Union commanders to win Kentuckians’ hearts. They embarked on a “pacification program” that Ramage and Watkins argue had the opposite effect among many Kentuckians. Efforts to stamp out political dissent, even after Kentucky had clearly decided to remain loyal, incensed many people. The Emancipation Proclamation and enlistment of black troops made it appear to Kentuckians that Confederate charges were right--Lincoln was indeed an abolitionist. The authors assert that there were four reasons why Kentuckians opposed emancipation: racial prejudice, their belief that they had already decided the issue during the 1849 Constitutional Convention, black enlistment offended white Kentuckians’ honor and manhood, and the Union’s use of black troops made Kentuckians look like cowards. After the war, such views would help to make Kentucky one of the staunchest defenders of the “Lost Cause.”
Kentucky Rising offers a readable survey of the commonwealth’s role in the formative years of the American Republic. Nonetheless, several criticisms should be noted. The dearth of consulted manuscript sources is puzzling, particularly given the rich collections at such repositories as the Filson Historical Society. More generally, the book is a bit too celebratory and tends to overly elevate Kentucky’s contributions to the nation. One wonders, for instance, how the state could be both “quite provincial” and able to think “globally in terms of the world market” (p. 141). The authors seem to suggest that Kentuckians’ martial tradition was stronger or more pervasive than in the rest of the South, an argument that is not particularly convincing. There are sweeping statements that appear designed to address Kentucky’s distinctiveness but, in fact, do quite the opposite; for instance: “Slave owners in Kentucky often viewed themselves as benevolent masters who provided for their slaves from birth to death” (p. 242). While true, such a view was typical of most slaveholders across the South and not one that was unique to Kentucky. Furthermore, in discussing the black community, Ramage and Watkins fail to address adequately slavery’s regional variations within the commonwealth. Did differences in the institution exist between the Ohio River counties and the counties along the Tennessee line? What about disparities between urban and rural areas? Interestingly, they note that historians have often debated the “humanity” of Kentucky’s slave system vis-à-vis the rest of the South, yet no answer is really provided. Finally, the authors claim that their state’s “deep commitment to democracy was manifest in the mass public meetings of protest or support that were so powerful” in the nation’s early years (p. 337). Again, Kentucky’s experience was relatively typical as Americans everywhere began to more fully understand and appreciate the political framework created by the Founding Fathers. Despite these criticisms, Kentucky Rising affords those interested in Kentucky history an enjoyable read.
1. John W. Barber and Henry Howe, The Loyal West in the Times of the Rebellion; also, Before and Since: Being an Encyclopedia and Panorama of the Western States, Pacific States and Territories of the Union (Cincinnati: F. A. Howe, 1865), 101.
2. See Stephen Aron, How the West Was Lost: The Transformation of Kentucky from Daniel Boone to Henry Clay (Baltimore: Johns Hopkins University Press, 1996); John B. Boles, Religion in Antebellum Kentucky (Lexington: University Press of Kentucky, 1976); Robert V. Remini, Henry Clay: Statesman for the Union (New York: W. W. Norton, 1991); Harry S. Laver, Citizens More Than Soldiers: The Kentucky Militia and Society in the Early Republic (Lincoln: University of Nebraska Press, 2007); Christopher Phillips, “‘The Chrysalis State’: Slavery, Confederate Identity, and the Creation of the Border South,” in Inside the Confederate Nation: Essays in Honor of Emory M. Thomas, ed. Lesley J. Gordon and John C. Inscoe (Baton Rouge: Louisiana State University Press, 2005); and Anne E. Marshall, Creating a Confederate Kentucky: The Lost Cause and Civil War Memory in a Border State (Chapel Hill: University of North Carolina Press, 2010).
3. Vice Presidents Richard M. Johnson and John C. Breckinridge, Senators Henry Clay and John J. Crittenden, Speaker of the House Linn Boyd, and President Zachary Taylor, not to mention Abraham Lincoln and Jefferson Davis, were either born in Kentucky or grew up there.
4. See Nathan O. Hatch, The Democratization of American Christianity (New Haven and London: Yale University Press, 1989) for an excellent analysis of the links between democracy and Christianity in the early nineteenth century.
If there is additional discussion of this review, you may access it through the network, at: https://networks.h-net.org/h-civwar.
Timothy Jenness. Review of Ramage, James A.; Watkins, Andrea S., Kentucky Rising: Democracy, Slavery, and Culture from the Early Republic to the Civil War.
H-CivWar, H-Net Reviews.
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License.
|
<urn:uuid:b945e45f-97b0-4829-9e83-aaad631afa92>
|
CC-MAIN-2016-26
|
https://www.h-net.org/reviews/showrev.php?id=35030
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395992.75/warc/CC-MAIN-20160624154955-00032-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.948473 | 3,689 | 2.984375 | 3 |
1967: Gus Grissom, Ed White and Roger Chaffee are killed on the launch pad when a flash fire engulfs their command module during testing for the first Apollo-Saturn mission. They are the first U.S. astronauts to die in a spacecraft.
The command module, built by North American Aviation, was the prototype for those that would eventually accompany the lunar landers to the moon. Designated CM-012 by NASA, the module was a lot larger than those flown during the Mercury and Gemini programs, and was the first designed for the Saturn 1B booster.
For a time, the mission called AS-204 had two flight plans. AS-204A, manned by Gus Grissom, Edward White, and Roger Chaffee,* was "to verify spacecraft crew operations and CSM subsystems performance for an earth-orbit mission of up to 14 days' duration and to verify the launch vehicle subsystems performance in preparation for subsequent operational Saturn IB missions." The flight would be in the last quarter of 1966 from Launch Complex 34 at Cape Kennedy. AS-204B, on the other hand, would be an unmanned mission with the same objectives (except for crew operations), to be flown only if spacecraft and launch vehicle had not qualified for manned flights. And there were doubts. Gas ingestion in the service module propulsion system in AS-201 and the resulting erratic firing had caused some misgivings, although these had been somewhat allayed by AS-202.50
The fire that killed the Apollo 1 crew on a test launch pad spread so rapidly because the capsule was pressurized with pure O2 but at slightly more than atmospheric pressure, instead of the ⅓ normal pressure that would be used in a mission.
BTW, it was not known as Apollo One until after the deaths of the astronauts. It was actually a test capsule that was never meant to fly in space. After their deaths, it was decided to call it Apollo One to make it seem something other than a test mishap.
|
<urn:uuid:566a8da8-3a75-4730-9637-4dcd665bb547>
|
CC-MAIN-2016-26
|
http://www.abovetopsecret.com/forum/thread656026/pg1
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396106.25/warc/CC-MAIN-20160624154956-00174-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.969937 | 461 | 3.40625 | 3 |
Nevidjon and Erickson, in their article "The Nursing Shortage: Solutions for the Short and Long Term" write, "Many reasons explain the continual decrease in enrollment in basic nursing programs. First and foremost is the fact that women have many choices today when selecting a post high school education and career."
However, a variety of career choices for women have been noted almost since the beginning of organized nursing. At the ninth annual convention (June 5-7,1906) of the Nurses Associated Alumnae (later renamed the American Nurses Association) the president, Annie Damer, in her presidential address stated "but now when women teachers, women ministers, and doctors and lawyers, are all successfully entering upon their life's work after a system of preparation different from ours, yet equally equipped for it, while their home life has been controlled by themselves, can we say that our schools are sending out women of greater intelligence or skill, of higher moral character attainment?" (American Nurses Association, 1976, p. 343). This statement suggests women were being prepared in a number of professions back in 1906.
At the twenty-sixth Convention of the National League for Nursing Education (Later renamed the National League for Nursing) Katherine Olmstead, Executive Secretary, Central Council for Nursing Education, reported "we know that twenty-five years ago there were exactly two occupations for women, nursing and teaching. At the last census in 1910 we found that there were three hundred occupations in which women were engaged, and since the war that number has been more than tripled" (Olmstead, 1920/1991, p. 178). This citation, too, indicates women have long had a variety of careers from which to choose, a variety of choices that may pull nurses away from nursing and into other professions, thus aggravating nursing shortages.
Additionally nursing shortages have been aggravated by ignoring a large and important source of potential nursing recruits, namely persons of the male gender. This is in spite of numerous reports that have encouraged a concerted effort to also recruit men into nursing. Back in 1949 the Committee on the Function of Nursing wrote, "the number of potentially competent male nurses is very large and recruitment efforts should therefore be directed at them"(pp. 24-25). It is interesting, and unfortunate, to note that just prior to this study, a very large group of women were paid to go through nursing programs, while men, who were R.N.'s, were drafted into the military, but not permitted to serve as nurses.
In 1970 the National Commission for the Study of Nursing and Nursing Education recommended "...recruitment of men into nursing be fostered through modification of the sex-linked occupational image of the profession by the national and state organizations, and the adoption of specific policies and goals to increase the percentage of males entering nursing preparatory programs by those institutions that offer them" (p. 141).
Even more recently, the Secretary's Commission on Nursing (1988) recommended the "establishment of a national campaign to promote the image of men in nursing and the idea of nursing as an attractive career option for men..." (p. 44). One response to this study was to feature Miss America in Seventeen Magazine.
Please, before nursing destroys itself, stop regretting that nurses today have other career choices, and rather follow the advice of these previous studies that have recommended men be recruited into nursing. Why does nursing keep resisting the recruitment and retention of men in nursing? Making nursing an attractive profession to both men and women can help to alleviate the nursing shortage and prevent us from self-destruction.
Bruce Wilson, PhD, RN, C
Department of Nursing
University of Texas - Pan American
American Nurses Association. (1976). One strong voice: The story of the American Nurses Association. Kansas City, MO: Author.
Committee on the Function of Nursing. (1949). Program for the nursing profession. New York: MacMillan.
National Commission for the Study of Nursing and Nursing Education. (1970). An abstract for action. Jerome Lysaught, Director. New York: McGraw-Hill.
Olmstead, K. (1920/1991). The recruiting of student nurses. Reprinted in N. Birnbach & S. Lewenson (Eds.), First words: Selected addresses from the National League for Nursing 1984 - 1993. New York: National League for Nursing.
Secretary's Commission on Nursing. (1988). Final Report Volume 1. Washington, D.C.: Department of Health and Human Services.
|
<urn:uuid:0d31c1c4-def0-4c53-93fc-7fc07fb15e6f>
|
CC-MAIN-2016-26
|
http://nursingworld.org/MainMenuCategories/ANAMarketplace/ANAPeriodicals/OJIN/LetterstotheEditor/BruceWilsonLetter.aspx
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397749.89/warc/CC-MAIN-20160624154957-00170-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.966511 | 924 | 2.671875 | 3 |
Sprawling study by noted English historian Gilbert (A History of the Twentieth Century, 1999, etc.) celebrates hundreds of men and women who saved Jewish lives during the years of the Shoah.
These “Righteous Among the Nations,” as Yad Vashem honors them, were comparatively rare in WWII–era Europe, where homegrown fascists, nationalists, criminals, and ordinary people with scores to settle visited murder upon the Jews or stood by as it was committed en masse. Gilbert gathers some truly remarkable stories of the brave deeds of the Righteous: poor Polish farmers, for instance, who hid Jewish families under barn floors or in attics; Italian priests and nuns who disguised refugees as monks and novices (as in Assisi, where one hiding place was “the only convent in the world with a kosher kitchen”); British prisoners of war who smuggled Jews scheduled for annihilation into their own camps, keeping them fed and hidden for months at a time at grave risk to their own safety. These stories are marvelous moral lessons, of course, and it may seem churlish to complain about Gilbert’s approach to relating those exemplary deeds, which, sad to say, is eminently respectful but not especially interesting. He piles anecdote atop anecdote with little discrimination and even less commentary, save at the very end, when he briefly considers the various motives the Righteous may have had in doing their good deeds: hatred of the Nazis, religious devotion, simple human decency, and so on. In the end, the catalogue-like narrative is just a little numbing and more than a little repetitive; it would have been useful to have fewer stories with more consideration of what they mean.
Less memorable than other studies of the subject.
|
<urn:uuid:0831279b-0cd8-4ec4-a845-4f6735e34ff2>
|
CC-MAIN-2016-26
|
https://www.kirkusreviews.com/book-reviews/martin-gilbert/the-righteous/print/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396959.83/warc/CC-MAIN-20160624154956-00164-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.964224 | 350 | 2.859375 | 3 |
Astronomers discover wind patterns on Titan that recur over tens of thousands of years.
A Moment of Science looks at the many different moons in our solar system.
Could Titan support life? Scientists hope to learn more about Titan's atmosphere to find out!
The study of nitrogen compounds found in Titan's atmosphere may help scientists understand how life originally formed on our own planet.
Looking for a seaside vacation spot? Consider Saturn's largest moon, Titan. There's no chance of getting sunburned, but you'll want to take a heavy parka...
|
<urn:uuid:4ec6768e-456f-4a07-b445-fbb2ef0f8543>
|
CC-MAIN-2016-26
|
http://indianapublicmedia.org/amomentofscience/tag/titan/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395039.24/warc/CC-MAIN-20160624154955-00192-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.91521 | 115 | 2.65625 | 3 |
But what if files could be hidden within the complex digital code of a photographic image? A family snapshot, for example, could contain secret information and even a trained eye wouldn't know the difference.
That ability to hide files within another file, called steganography, is here thanks to a number of software programs now on the market. The emerging science of detecting such files - steganalysis - is getting a boost from the Midwest Forensics Resource Center at the U.S. Department of Energy's Ames Laboratory and a pair of Iowa State University researchers.
Electronic images, such as jpeg files, provide the perfect "cover" because they're very common - a single computer can contain thousands of jpeg images and they can be posted on Web sites or e-mailed anywhere. Steganographic, or stego, techniques allow users to embed a secret file, or payload, by shifting the color values just slightly to account for the "bits" of data being hidden. The payload files can be almost anything from illegal financial transactions and the proverbial off-shore account information to sleeper cell communications or child pornography.
"We're taking very simple stego techniques and trying to find statistical measures that we can use to distinguish an innocent image from one that has hidden data," said Clifford Bergman, ISU math professor and researcher on the project. "One of the reasons we're focusing on images is there's lots of 'room' within a digital image to hide data. You can fiddle with them quite a bit and visually a person can't see the difference."
"At the simplest level, consider a black and white photo - each pixel has a grayscale value between zero (black) and 255 (white)," said Jennifer Davidson, ISU math professor and the other investigator on the project. "So the data file for that photo is one long string of those grayscale numbers that represent each pixel."
Encrypted payload files can be represented by a string of zeros and ones. To embed the payload file, the stego program compares the payload file's string of zeros and ones to the string of pixel values in the image file. The stego program then changes the image's pixel values so that an even pixel value represents a zero in the payload string and an odd pixel value represents a one. The person receiving the stego image then looks at the even-odd string of pixel values to reconstruct the payload's data string of zeros and ones, which can then be decrypted to retrieve the secret file.
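To make the parity scheme concrete, here is a minimal Python sketch of the embedding and extraction steps described above. It operates on a bare list of grayscale values rather than a real image file (reading and writing an actual image, for example with the Pillow library, is left out), and the function and variable names are invented for this illustration; they are not taken from any of the stego programs the researchers studied.

def embed(pixels, payload_bits):
    # Make each pixel's parity carry one payload bit:
    # even value = 0, odd value = 1.
    if len(payload_bits) > len(pixels):
        raise ValueError("payload does not fit in the cover image")
    stego = list(pixels)
    for i, bit in enumerate(payload_bits):
        if stego[i] % 2 != bit:
            # Nudge the grayscale value by one so its parity matches the bit,
            # staying inside the 0-255 range.
            stego[i] += 1 if stego[i] < 255 else -1
    return stego

def extract(stego_pixels, n_bits):
    # Read the payload back out of the pixel parities.
    return [p % 2 for p in stego_pixels[:n_bits]]

cover = [120, 121, 122, 123, 124, 125, 126, 127]
secret = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed(cover, secret)          # [121, 122, 123, 123, 124, 126, 127, 128]
assert extract(stego, len(secret)) == secret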
"Visually, you won't see any difference between the before and after photo," Davidson said, "because the shift in pixel value is so minor. However, it will change the statistical properties of the pixel values of the image and that's what we're studying."
Given the vast number of potential images to review and the variety and complexity of the embedding algorithms used, developing a quick and easy technique to review and detect images that contain hidden files is vital. Bergman and Davidson are utilizing a pattern recognition system called an artificial neural net, or ANN, to distinguish between innocent images and stego images.
Training the ANN involved obtaining a database of 1,300 "clean" original images from a colleague, Ed Delp, at Purdue University. These images were then altered in eight different ways using different stego embedding techniques - involving sophisticated transform techniques between the spatial and wavelet domains - to create a database of over 10,000 images.
Once trained, the ANN can then apply its rules to new candidate images and classify them as either innocent or stego images.
"The ANN establishes kind of a threshold value," Bergman said. "If it falls above the threshold, it's suspicious.
"If you can detect there's something there, and better yet, what method was used to embed it, you could extract the encrypted data," Bergman continued. "But then you're faced with a whole new problem of decrypting the data ... and there are ciphers out there that are essentially impossible to solve using current methods."
In preliminary tests, the ANN was able to identify 92 percent of the stego images and flagged only 10 percent of the innocent images, and the researchers hope those results will get even better. An investigator with the Iowa Department of Criminal Investigation is currently field-testing the program to help evaluate its usefulness and a graphical user interface is being developed to make the program more user friendly.
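As a rough illustration of the kind of statistical measure involved (not the researchers' actual features, training data, or neural net), the toy Python detector below computes a single histogram statistic that naive parity embedding tends to disturb and flags an image when that statistic crosses a threshold; the 0.9 cutoff is arbitrary and purely illustrative.

from collections import Counter

def pair_evenness(pixels):
    # After parity embedding, the grayscale values 2k and 2k+1 tend to
    # become almost equally frequent. Score that tendency in [0, 1].
    hist = Counter(pixels)
    scores = []
    for k in range(128):
        a, b = hist[2 * k], hist[2 * k + 1]
        if a + b:
            scores.append(1 - abs(a - b) / (a + b))
    return sum(scores) / len(scores) if scores else 0.0

def looks_suspicious(pixels, threshold=0.9):
    # Stand-in for the neural net's decision: above the threshold,
    # the image is flagged for closer inspection.
    return pair_evenness(pixels) > threshold

A real steganalysis tool would feed many such measures, computed in both the spatial and wavelet domains, into a trained classifier rather than rely on any one of them.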
"Hopefully we can come up with algorithms that are strong enough and the statistics are convincing enough for forensic scientists to use in a court of law," Bergman said, "so they can say, 'There's clearly something suspicious here,' similar to the way they use DNA evidence to establish a link between the defendant and the crime."
The project is funded by the Midwest Forensics Resource Center. The MFRC, operated by Ames Laboratory, provides research and support services to crime laboratories and forensic scientists throughout the Midwest.
Ames Laboratory is operated for the Department of Energy by Iowa State University. The Lab conducts research into various areas of national concern, including energy resources, high-speed computer design, environmental cleanup and restoration, and the synthesis and study of new materials.
|
<urn:uuid:103b99af-17af-45a5-88b2-954a00d84d94>
|
CC-MAIN-2016-26
|
http://www.eurekalert.org/pub_releases/2006-05/dl-fcf052406.php
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783404826.94/warc/CC-MAIN-20160624155004-00170-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.934695 | 1,069 | 3.140625 | 3 |
This group of 36 nuclear exporting states was established in 1971 with the purpose of maintaining a "trigger list" of (1) source or special fissionable materials, and (2) equipment or materials especially designed or prepared for the processing, use, or production of special fissionable materials. Additionally, the committee has identified certain dual-use technologies as requiring safeguarding when they are supplied to non-nuclear weapon states to be used for nuclear purposes. These include explosives, centrifuge components, and special materials. The Zangger Committee, named after its first chairman Claude Zangger of Switzerland, is an informal arrangement and its decisions are not legally binding upon its members. As of 2008 there were 36 members of the Zangger Committee: Argentina, Australia, Austria, Belgium, Bulgaria, Canada, People's Republic of China, Croatia, Czech Republic, Denmark, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Japan, South Korea, Luxembourg, Netherlands, Norway, Poland, Portugal, Romania, Russia, Slovakia, South Africa, Spain, Sweden, Switzerland, Turkey, Ukraine, United Kingdom and the United States.
Zone of application
The zone of application of a nuclear-weapon-free zone generally means the whole of the "territories" of the contracting parties within the defined region. Defining where the zone is applicable has often been a subject of difficult negotiations.
Zone of Peace, Freedom and Neutrality (ZOPFAN)
In November 1971, foreign ministers from ASEAN member states—Brunei Darussalam, Cambodia, Indonesia, Laos, Malaysia, Myanmar, Philippines, Singapore, Thailand and Vietnam—met and adopted the "ZOPFAN vision" in the Kuala Lumpur Declaration to establish the Zone of Peace, Freedom and Neutrality in Southeast Asia. The declaration states that ASEAN nations "are determined to exert initially necessary efforts to secure the recognition of, and respect for, Southeast Asia as a zone of peace, freedom and neutrality, free from any form or manner of interference by outside Powers”.
|
<urn:uuid:ccb83ac3-1942-48a6-9ce8-ce68b6887cd2>
|
CC-MAIN-2016-26
|
http://www.ctbto.org/glossary/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392159.3/warc/CC-MAIN-20160624154952-00020-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.923079 | 422 | 2.96875 | 3 |
The Encrypted File System (EFS) provides JFS2 filesystem-level encryption through individual keystores. This allows files to be encrypted in order to protect confidential data from attackers with physical access to the computer. User authentication and access control lists can protect files from unauthorized access while the operating system is running; however, it's easy to circumvent the access control lists if an attacker gains physical access to the computer.
One solution is to store the files in encrypted form on the disks of the computer. In EFS, a key is associated with each user. These keys are stored in a cryptographically protected keystore, and upon successful login the user's keys are loaded into the kernel and associated with the process credentials. When the process needs to open an EFS-protected file, the system tests the credentials. If the system finds a key matching the file protection, the process is able to decrypt the file key and the file content. The cryptographic information is kept in the extended attributes of each file. EFS uses extended attribute Version 2, and each file is encrypted before being written to disk. The files are decrypted when they are read from the disk into memory, so the file data kept in memory is in clear format. The data is decrypted only once, which is a major advantage. When another user requires access to the file, their security credentials are verified before they are granted access to the data, even though the file data is already in memory and in clear format. If the user is not entitled to access the file, access is refused. File encryption does not eliminate the role of traditional access permissions, but it does add more granularity and flexibility.
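As a purely conceptual model of the design just described (and not the actual AIX implementation), the short Python sketch below captures the wrapped-file-key idea: each file gets its own key, that key is encrypted once per authorized user and stored alongside the file as a simulated extended attribute, and a process can only open the file if it holds a user key that unwraps the file key. Fernet symmetric keys stand in for the RSA user keys and AES file keys that EFS really uses, the class and names are invented for the example, and the third-party cryptography package is required.

from cryptography.fernet import Fernet, InvalidToken

class ToyEFSFile:
    def __init__(self, plaintext, authorized_user_keys):
        # One fresh key per file; the content is encrypted once with it.
        file_key = Fernet.generate_key()
        self.ciphertext = Fernet(file_key).encrypt(plaintext)
        # Simulated extended attributes: the file key wrapped under
        # each authorized user's own key.
        self.wrapped_keys = {
            uid: Fernet(user_key).encrypt(file_key)
            for uid, user_key in authorized_user_keys.items()
        }

    def open(self, uid, user_key):
        # Credential check: is there a wrapped key for this user, and does
        # the presented user key actually unwrap it?
        try:
            file_key = Fernet(user_key).decrypt(self.wrapped_keys[uid])
        except (KeyError, InvalidToken):
            raise PermissionError("no matching key for this user")
        return Fernet(file_key).decrypt(self.ciphertext)

user_keys = {"salesman": Fernet.generate_key(), "financeman": Fernet.generate_key()}
report = ToyEFSFile(b"confidential data", {"salesman": user_keys["salesman"]})
report.open("salesman", user_keys["salesman"])        # returns the plaintext
# report.open("financeman", user_keys["financeman"])  # raises PermissionError

Granting another user access to an existing file, as the efsmgr examples later in this article do, amounts to wrapping the same file key under that user's key as well.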
In order to be able to create and use the EFS-enabled file system on a system, the following prerequisites must be met:
- Install the CryptoLite in C (CliC) cryptographic library.
- Enable the RBAC.
- Enable the system to use the EFS file system.
How is AIX EFS different from others available in the market?
AIX® EFS encryption is applied at the file system level. Each file is protected with a unique file key, and the design protects data even against a malicious root user.
Frequently used commands
The efsenable command activates the EFS capability on a system. It creates the EFS administration keystore, the user keystore, and the security group keystore. A keystore is a key repository that contains EFS security information. The access key to the EFS administration keystore is stored in the user keystore and the security group keystore.

The efsenable command creates the /var/efs directory. The /etc/security/user and /etc/security/group files are updated with new EFS attributes on execution of this command.
The efskeymgr command is dedicated to all key management operations needed by EFS. The initial password of a user keystore is the user login password. Group keystores and admin keystores are not protected by a password but by an access key. Access keys are stored inside all user keystores that belong to the group.

When you open a keystore (at login or explicitly with the efskeymgr command), the private keys contained in this keystore are pushed to the kernel and associated with the process. If access keys are found in the keystore, the corresponding keystores are also opened and their keys are automatically pushed into the kernel as well.
The efsmgr command is dedicated to managing file encryption and decryption inside EFS. Encrypted files can only be created on EFS-enabled JFS2 file systems. Inheritance is set on the file system or the directory where the file is being created using this command. When inheritance is set on a directory, all new files created in this directory are encrypted by default. The cipher used to encrypt files is the inherited cipher. New directories also inherit the same cipher. If inheritance is disabled on a subdirectory, the new files created in this subdirectory will not be encrypted.

Setting or removing inheritance on a directory or a file system has no effect on existing files. The efsmgr command must be used explicitly to encrypt or decrypt files.
Let's take the scenario of a company that has three departments, namely sales, marketing, and finance. These three departments share the same AIX machine to store their confidential content. If EFS is not enabled, the potential for data being exposed between the three departments is extremely high. See Listing 1 below to learn how to turn this threat-prone machine into a safe location to store data.
To enable EFS on AIX, type the following:
Listing 1. EFS enablement in AIX
# efsenable -a
Enter password to protect your initial keystore:
Enter the same password again:
Enter the following to see the directories created to facilitate EFS:
# cd /var/efs
# ls
efs_admin  efsenabled  groups  users
All of the EFS capabilities should now be enabled.
You are now going to create a separate file system for each of the three departments. The creation of an EFS is similar to the creation of a normal file system. The only difference is that you have to use the Version 2 extended attribute format (EA2) and enable the efs=yes attribute.
Listing 2 illustrates how to create an encrypted file system through the System Management Interface Tool (SMIT):
Listing 2. EFS creation through SMIT
Add an Enhanced Journaled File System

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                                   [Entry Fields]
  Volume group name                                 rootvg
  SIZE of file system
          Unit Size                                 Megabytes      +
*         Number of units                                          #
* MOUNT POINT                                      [/sales]
  Mount AUTOMATICALLY at system restart?            no             +
  PERMISSIONS                                       read/write     +
  Mount OPTIONS                                                    +
  Block Size (bytes)                                4096           +
  Logical Volume for Log                                           +
  Inline Log size (MBytes)                                         #
  Extended Attribute Format                                        +
  ENABLE Quota Management?                          no             +
  Enable EFS?                                       yes            +
  Allow internal snapshots?                         no             +
You can also create the same file system through the command line, as shown here in Listing 3:
Listing 3. EFS creation through the command line
# crfs -v jfs2 -g rootvg -m /sales -a size=100M -a efs=yes
# crfs -v jfs2 -g rootvg -m /marketing -a size=100M -a efs=yes
# crfs -v jfs2 -g rootvg -m /finance -a size=100M -a efs=yes
You have now successfully created three separate file systems for these three departments.
Creating keystores for users and groups
In order to handle and maintain these individual file systems, you need to create three different users and create a keystore for each of them (see Listing 4). (A keystore for a user is created when a password is set for that user.)
Listing 4. Creation of users
# mkuser salesman
# passwd salesman
# mkuser marketingman
# passwd marketingman
# mkuser financeman
# passwd financeman
This creates three separate keystores for these three users in the /var/efs/users directory (see Listing 5).
Listing 5. Keystore location for users
# pwd
/var/efs/users
# ls
.lock  salesman  marketingman  financeman  root
You can also create keystores for the groups with EFS (see Listing 6).
Listing 6. Keystore creation for groups
# efskeymgr -C finance
# pwd
/var/efs/groups
# ls
.lock  finance  security
Creation of a keystore for a group requires at least one user under it.
Creating EFS directories and setting properties
This section shows how you can create encrypted files and directories in the EFS file system and manipulate their properties. In order to create EFS directories, you need the EFS file system to be mounted (see Listing 7).
Listing 7. Creating the EFS directory
# mount /finance
# cd /finance
# mkdir yearlyreport
# efsmgr -E yearlyreport
# efsmgr -L yearlyreport
EFS inheritance is set with algorithm: AES_128_CBC
The yearlyreport directory is now set for inheritance. It indicates that a file or directory inherits both the property of encryption and all encryption parameters from its parent directory.
There are various options with the efsmgr command that let you set the type of cipher to be used on this directory, enable and disable inheritance, and add or remove users and groups from the EFS access list of this directory.
Encrypting individual files
In order to carry out any EFS-related activity, you need to load the keystore. If you try to create a file inside this encrypted directory without having access to the keystore that protects it, the following will result:
# cd yearlyreport
# ls
# touch apr_report
touch: 0652-046 Cannot create apr_report.
This happens when you don't have the keystore loaded to perform the EFS activity (see Listing 8).
Listing 8. Loading EFS keystore to the shell
# efskeymgr -o ksh
financeman's EFS password:
# touch apr_report
Now that you have loaded the keystore, any information that is added to this file is encrypted at the file system level (see Listing 9).
Listing 9. Encrypted file in EFS
# ls -U apr_report
-rw-r--r--e    1 financeman system          0 Nov 28 06:14 apr_report
The "e" flag set on this file means that it is encrypted, and no one other than the owner, who possesses the keystore, can access and read its content (see Listing 10).
Listing 10. Listing encrypted file attributes
# efsmgr -l apr_report
EFS File information:
 Algorithm: AES_128_CBC
List of keys that can open the file:
 Key #1:
  Algorithm       : RSA_1024
  Who             : uid 0
  Key fingerprint : 4b6c5f5f:63cb8c6f:752b37c3:6bc818e1:7b4961f9
With the different flags available with the efsmgr command, you can change the cipher and other attributes of the file. If you want to create a file that does not come under any encrypted directory, then you need to use the following option to encrypt such standalone files (see Listing 11):
Listing 11. Encrypting a single file
# cd /finance
# touch companylist
# ls -U
total 16
-rw-r--r---    1 root     system          8 Nov 28 06:21 companylist
drwxr-xr-x-    2 root     system        256 Nov 28 05:52 lost+found
drwxr-xr-xe    2 root     system        256 Nov 28 06:14 yearlyreport
# efsmgr -c AES_192_ECB -e companylist
# ls -U companylist
-rw-r--r--e    1 root     system          8 Nov 28 06:24 companylist
Facilitating the access of other users for your files
You have now seen that each department has created a separate file system and has a keystore to guard it. If a person from finance needs to access the encrypted files from sales, you need to be able to grant him or her permission to do so (see Listing 12 and Listing 13).
Listing 12. vi output when the file is encrypted
# vi sales_report
~
~
~
~
~
~
~
~
"sales_report" Security authentication is denied.
Listing 13. Passing keystore access to another user
# efskeymgr -k user/salesman -s user/financeman
This command now sends the access key of the "salesman" user to the "financeman" user.
If you try to edit a file owned by salesman, you can read and access the content in its plain format, as you now possess the keystore of the user who created the file (see Listing 14).
Listing 14. vi output after receiving keystore access
# vi sales_report
Sales report for this financial year
~
~
~
~
~
~
~
~
"sales_report" [Read only] 1 line, 36 characters
Granting and revoking access to individual files
Instead of sending the complete access key to another user, you can also set access permissions on individual files residing on EFS.
Now suppose you have a file, /marketing/strategy.txt, in the /marketing file system, and you wish to give the "salesman" user and the "finance" group access to it. To accomplish this task, review Listing 15 and Listing 16.
Listing 15. Granting access to an user
# efsmgr -l strategy.txt
EFS File information:
 Algorithm: AES_128_CBC
List of keys that can open the file:
 Key #1:
  Algorithm       : RSA_1024
  Who             : uid 0
  Key fingerprint : 4b6c5f5f:63cb8c6f:752b37c3:6bc818e1:7b4961f9
# efsmgr -a strategy.txt -u salesman
# efsmgr -l strategy.txt
EFS File information:
 Algorithm: AES_128_CBC
List of keys that can open the file:
 Key #1:
  Algorithm       : RSA_1024
  Who             : uid 0
  Key fingerprint : 4b6c5f5f:63cb8c6f:752b37c3:6bc818e1:7b4961f9
 Key #2:
  Algorithm       : RSA_1024
  Who             : uid 204
  Key fingerprint : f91b5a79:53bdd7f1:58987a33:f5701a38:99145b24
Listing 16. Granting access to a group
# efsmgr -a strategy.txt -g finance
# efsmgr -l strategy.txt
EFS File information:
 Algorithm: AES_128_CBC
List of keys that can open the file:
 Key #1:
  Algorithm       : RSA_1024
  Who             : uid 0
  Key fingerprint : 4b6c5f5f:63cb8c6f:752b37c3:6bc818e1:7b4961f9
 Key #2:
  Algorithm       : RSA_1024
  Who             : uid 204
  Key fingerprint : f91b5a79:53bdd7f1:58987a33:f5701a38:99145b24
 Key #3:
  Algorithm       : RSA_1024
  Who             : gid 201
  Key fingerprint : 8cb65011:2a42e9f0:91f7b712:20e36bb7:5eb0db0a
If you need to revoke the access that was provided to the "finance" group, then use the "-r" flag with the efsmgr command, as shown in Listing 17 below.
Listing 17. Revoking access to a group
# efsmgr -r strategy.txt -g finance
# efsmgr -l strategy.txt
EFS File information:
 Algorithm: AES_128_CBC
List of keys that can open the file:
 Key #1:
  Algorithm       : RSA_1024
  Who             : uid 0
  Key fingerprint : 4b6c5f5f:63cb8c6f:752b37c3:6bc818e1:7b4961f9
 Key #2:
  Algorithm       : RSA_1024
  Who             : uid 204
  Key fingerprint : f91b5a79:53bdd7f1:58987a33:f5701a38:99145b24
For a complete list of flags and options for the EFS commands, see the Resources section.
EFS is a powerful feature introduced with AIX 6.1 that helps you encrypt and safeguard your data. This article provided basic information to help you enable EFS on AIX 6.1 machines. You learned how to create encrypted files and directories and how to change ciphers and inheritance through commands, and you examined a use-case scenario detailing the configuration and usage of EFS.
- Information for System p: Visit this this site for additional information.
- Check out the following IBM® Redbooks®:
- The AIX 6 Advanced Security Features, Introduction and Configuration Guide—highlights and explains the security features enhancements on AIX 6.1.
- The AIX 5L Version 5.2 Security Supplement—you can use this document as an additional source for security information.
- AIX and UNIX: The AIX and UNIX developerWorks zone provides a wealth of information relating to all aspects of AIX systems administration and expanding your UNIX skills.
- New to AIX and UNIX?: Visit the "New to AIX and UNIX" page to learn more about AIX and UNIX.
- AIX Wikis: Discover a collaborative environment for technical information related to AIX.
- Search the AIX and UNIX library by topic:
- developerWorks technical events and webcasts: Stay current with developerWorks technical events and webcasts.
- Podcasts: Tune in and catch up with IBM technical experts.
Get products and technologies
- IBM trial software: Build your next development project with software for download directly from developerWorks.
- Participate in the developerWorks blogs and get involved in the developerWorks community.
- Participate in the AIX and UNIX and PowerVM forums.
|
<urn:uuid:fe877990-d3de-4e3b-ba99-69bb12c7aa05>
|
CC-MAIN-2016-26
|
http://www.ibm.com/developerworks/aix/library/au-efs/index.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398869.97/warc/CC-MAIN-20160624154958-00147-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.844783 | 3,781 | 2.921875 | 3 |
The United States is not the only country in the world who has, or is, facing fiscal problems. Our neighbor to the north faced some severe fiscal problems during the mid-1990s when they had high budget deficits and when the Mexican peso crisis seemed to make Canada the next focus of international worries. To help illuminate Canada's solution to their fiscal problems, the American Enterprise Institute and the Macdonald-Laurier Institute hosted an event featuring a panel of Canadian officials and later remarks by former Finance Minister and later Prime Minister of Canada Paul Martin.
The event was designed to showcase what Canada went through and what lessons can be learned to help the United States with our own fiscal problems. Canada's problems in the mid-1990s in some ways resembled ours: Canada ran deficits during good times, which grew larger during bad times, and its debt was spiraling out of control. Canada was downgraded and The Wall Street Journal referred to Canada as "an honorary member of the Third World." In addition, the Canadian government was spending a whopping 36 percent of its budget on interest on the debt, which was the largest single expenditure in the budget.
Prime Minister Martin said that coming in as the finance minister in 1993, he recognized the precarious situation Canada's finances were in. He feared that a financial crisis would strike somewhere in the world and spread to them and argued that had they not acted, the 1997 Asian financial crisis "would have done us in." Politicians recognized the need for a deficit reduction plan that everyone had to take part in. If too many sacred cows were left untouched, if some groups got a better deal, it would ruin the sense of shared sacrifice. At the same time, because of the severity of the cuts that were needed, politicians feared that they would not be re-elected if they went through with them. As the story goes, the cuts happened, and the voters rewarded lawmakers by re-electing them.
Martin argued that to sell deficit reduction to the public, it had to be justified more than in terms of pleasing the markets; rather, deficit reduction needed to be framed in terms of how it would improve citizens' everyday lives. For Martin, this was relatively easy because of the huge amount of interest spending that was crowding out other priorities in the budget.
In addition, he said that all interested parties needed to engage in the debate, describing televised roundtables where different interest groups would debate the relevant issues. The lesson from these roundtables for Canadian citizens was that tough choices needed to be made, but that there were no perfect solutions.
Prime Minister Martin added that having to do multiple rounds of deficit reduction was risky -- do it poorly the first time and it is harder to do the second time. This gives further credence to arguments that going big in a deficit reduction package increases the chances of success. He said that lawmakers have to be open about the scope of the problem and the shared sacrifice needed. He warned us about our fiscal cliff, noted that both parties ran away from Bowles-Simpson, and said that our deficit is a big problem that we have to address. Finally, he stated that, as in Canada, if we do not work to address the problem before it gets even worse, it will be harder to protect health-care spending and investments in our future.
Lawmakers would do well to take some of the advice that Prime Minister Martin laid out at this event.
|
<urn:uuid:9a1513ba-b7a4-47f2-a77a-0cc00e07b9b5>
|
CC-MAIN-2016-26
|
http://crfb.org/blogs/lessons-canada
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402479.21/warc/CC-MAIN-20160624155002-00173-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.984356 | 692 | 2.59375 | 3 |
Core Value of Amyloids
Fibrils show therapeutic effect in mice
For decades, experts in brain disorders have fixated on the toxic properties of amyloids, the generic name for misfolded proteins stuck together in a particularly indestructible way. After all, amyloid plaques in the brain, formed by the protein named amyloid β, are the original defining pathological feature of Alzheimer's disease (AD). A counterintuitive new view is emerging in which some amyloids may wield a potential for doing good—and to a far greater extent than anyone suspected, according to the latest evidence from a mouse model of multiple sclerosis (MS) published 3 April in Science Translational Medicine.
In the new study, researchers saw remarkable therapeutic results using only amyloid-forming "cores" from a dozen different proteins. When injected with the smallest protein pieces that can reliably form amyloid fibers, mice disabled by an MS-like neuroinflammatory disorder could walk again (Kurnellas et al., 2013).
The paralysis returned with a vengeance when the treatments ended. That suggests that an entire category of sticky amyloid-forming cores may be active biological agents with therapeutic potential in MS and other neuroinflammatory diseases, say Lawrence Steinman, a neuroimmunologist at Stanford, and his co-authors. (A related story at AlzForum addresses the implications of the findings for AD.)
"This is an important piece of science," says Terrence Town, a neurobiologist at the Zilkha Neurogenetic Institute at the University of Southern California in Los Angeles, who was not involved in the study. "This paper is interesting because it shows what might be pathological in one disease can be beneficial and therapeutic in another." Indeed, the findings pull together several lines of research in MS, immunology, and amyloids in unexpected ways, say researchers contacted for this article. Scientists from different fields are finding that amyloid-forming proteins can assemble into many varying structures, some of which may be extremely neurotoxic and others that may be benign or even beneficial. "This paper takes that idea to the next level," Town says.
Following the trail to amyloid in MS
All proteins contain short segments that could, in theory, aggregate to form amyloid fibrils, but few proteins can indulge their dark sides, says Ulrich Hansmann, a biophysical chemist at the University of Oklahoma in Norman. In a normally folded protein, the crucial short sticky segments that can form the core (or “spine”) of amyloids are usually tucked away too deep to be tempted to aggregate with other such regions in neighboring proteins. When exposed by damage or misfolding of the protein, though, the amyloid segments may irresistibly huddle together side by side in an interlocking pleated strip called a beta sheet. The amyloids can assume various shapes and sizes from there, based on the other parts of the protein. Even with the most nefarious amyloids implicated in human diseases, scientists do not know which structural versions might be toxic nor how they are causing disease (Eisenberg and Jucker, 2012).
In the case of MS, one of the amyloid-forming proteins investigated by the Stanford group—αB-crystallin (HspB5), which belongs to the small heat shock protein family—first appeared on the radar 2 decades ago. Heat shock proteins are also called "chaperones," because they help to prevent partially folded or assembled proteins from aggregating in damaging amyloid clumps. The full-length αB-crystallin protein has advanced to early phase II clinical testing in Europe as a potential therapy for MS, based on studies that suggest it might silence autoimmune attacks against the myelin sheath. The potential therapeutic is under development by Delta Crystallon BV in Leiden, The Netherlands. "This study adds another layer of beneficial activity of the protein," says Hans van Noort, a biochemist at the drug company, of the new Stanford work. "We're working with the entire protein, but this shows the ability of bits and pieces of it to promote resolution of the problem in the brain."
Paradoxically, van Noort first identified αB-crystallin as a potential trigger of destructive inflammatory activity in MS nearly 20 years ago (van Noort et al., 1995) after taking a fresh look at tissues from a brain bank to ask what might be provoking an immune attack. Working with brain samples from individuals with MS lesions and from others without neurological disease, he and his co-authors separated the proteins from the myelin-forming cells into different test tubes. They then added white blood cells—extracted from people with or without MS—to each tube and looked for a reaction. It turned out that αB-crystallin, which was isolated only from the MS lesions, stimulated a dramatic response in both sets of white blood cells.
Until then, the scientific interest in αB-crystallin had been mostly the domain of evolutionary biologists, van Noort says. The 10-member crystallin family originally included all the proteins found in the eye lens of most vertebrates, from fish to people. They are named by molecular weight, starting with alpha for the biggest ones.
In MS, the obscure heat-shock protein appears to accumulate in myelin-forming cells before immune cells infiltrate the brain, van Noort says. “It is therefore not a secondary response to inflammation associated with an MS lesion. Instead, it triggers the formation of an MS lesion." He remains convinced that αB-crystallin is the main target of the immune attack in MS (van Noort et al., 2010).
Why would the immune system start picking a fight with this protein? The trigger might be an infection with Epstein-Barr virus (EBV), the only known link between MS and an infectious agent, van Noort speculates. It turns out that B cells spit out αB-crystallin when infected with EBV, perhaps inadvertently teaching T cells to fight both the virus and the protective protein (van Sechel et al., 1999). In the continuous immune surveillance of the brain, "hundreds of thousands of T cells are rummaging around, testing the waters," van Noort says. A problem in the brain, particularly in the myelin-forming cells (one of the few other cell types that produce αB-crystallin under stress), may lead to a buildup of the same protein that some T cells were programmed to attack in concert with EBV. Under siege, myelin-forming cells may spew out even more of the protein—attempting to protect the cellular machinery, block cell death, and recruit microglia as cellular bodyguards, but instead further inflaming T cells in a vicious cycle.
"If you want to stop the MS lesion from developing and stop T cells in the brain from causing all this trouble, you don’t need to suppress the entire immune system," van Noort says. "The only thing you need to address is the T cell reaction against this single protein." He and his colleagues are pursuing a strategy of building tolerance to the protein using small doses designed to reprogram errant T cells.
Steinman was an enthusiastic reviewer of van Noort's 1995 paper, but the Stanford scientist began studying αB-crystallin only after genomic and proteomic studies documented its abundance in MS lesions and its absence in normal brain tissue. Steinman, a co-author on some of those profiling studies, wondered about a potentially protective role for the protein. "In response to inflammatory injury, the brain doesn’t roll over and play dead," he says. "We've heard brains can't reproduce neurons or bounce back from injury. That's not true. The brain produces guardian molecules that counter [inflammatory damage]."
In a first round of experiments more than 6 years ago, Steinman and his colleagues showed that mice missing the gene for αB-crystallin developed worse experimental autoimmune encephalomyelitis (EAE) (see "Animal Arsenal") than did their normal counterparts. Injections of the protein into peripheral circulation made both types of mice better, reducing the severity of paralytic disease and attenuating inflammation in the brain (Ousman et al., 2007). Absence of αB-crystallin also aggravated disease in experimental models of stroke and brain trauma, they found, while injections of it appeared therapeutic, suggesting a more general anti-inflammatory role for the small heat shock protein.
Now the question was: How were injections having a therapeutic effect in the EAE mice? How was αB-crystallin affecting the immune system? In the Steinman lab, neuroimmunologist Michael Kurnellas and biochemist Jonathan Rothbard picked up the investigation. First, they mixed αB-crystallin with blood samples from people with MS, rheumatoid arthritis, or amyloidosis (a rare condition of protein buildup in organs), as well as from mice with EAE. The heat shock protein selectively trapped a broad, common set of inflammatory molecules and effectively disarmed them. At slightly higher temperatures, such as those found at a site of inflammation, the quantities of the trapped inflammatory molecules doubled or tripled (Rothbard et al., 2012). The data were consistent with the molecule's chaperone function.
The unexpected idea that the molecule might be functioning in an amyloid state first dawned on the members of the Steinman team when a Japanese group reported that a larger related crystalline protein in the eye lens worked as a chaperone only when it could form amyloid fibers. The amyloid-forming fragments of the full-size heat shock protein were both sufficient and essential, the other paper showed, for its ability to bind potentially harmful molecules (Tanaka et al., 2008). When the Steinman team tested αB-crystallin to learn if the same was true in their system, the answer was yes. They reported last September that a 20-amino-acid-long snippet of the protein that contains the amyloid-forming section was as potent as the full protein in reducing paralysis in EAE mice (Kurnellas et al., 2012). "When we altered one amino acid, we disrupted formation of the amyloid fibril, and there was no therapeutic activity," Kurnellas says. "We were excited that the amyloid itself could be therapeutic."
In a parallel project, another research team in the Steinman lab reported last August on the anti-inflammatory and potentially therapeutic properties in EAE mice of amyloid-forming pieces of amyloid β (Grant et al., 2012; “Split Personality”).
Amyloid versus amyloid
Even with the hints of therapeutic potential, the researchers remain concerned about possible toxic properties of the amyloid pieces. Some smaller amyloid-forming fragments known as oligomers have a newly discovered ability to form a cylinder shape that can pierce cell membranes and are thought to be among the most neurotoxic amyloid structures, Steinman says (Laganowsky et al., 2012). For the new study in Science Translational Medicine, the investigators wanted to strip down the peptides to the minimum string of amino acids needed to assemble amyloid fibrils to test in the EAE mice.
Fortunately, structural biologists have been making rapid progress in sorting out the atomic details of disease-related amyloid-forming proteins of various sizes and shapes (Eisenberg and Jucker, 2012). The Steinman team turned to the amylome, a database of all the proteins with regions that can zip together in the amyloid spine configuration. The resource contains information about experimentally solved and computationally predicted regions compiled by David Eisenberg, a structural biologist at the University of California, Los Angeles, and his colleagues (Goldschmidt et al., 2010).
The Steinman team tested 18 amyloid-forming sequences, each six amino acids long—including those from αB-crystallin, amyloid β A4, tau, major prion protein, amylin, serum amyloid P, and insulin B chain. First, Rothbard and Kurnellas checked that each peptide sequence actually assembled into amyloid structures in solution by using a dye that scatters light differently when bound by an amyloid fiber. Based on atomic structural data from others, the researchers believe the hexamer peptide strands in the test tubes likely fuse together in the side-by-side interlocking and layered beta sheet strips, six amino acids wide, that form the core amyloid spine (Sawaya et al., 2007). In these experiments, the dynamic fiber assemblies seemed to reach a steady-state length of about 20 to 25 strands for each paired beta sheet on average, with some dropping off and others joining in the zipper-Velcro-like assembly. "They don't form infinite ribbons," Rothbard says.
The team then injected each batch of different amyloid cores into the abdominal cavities of EAE mice, where the proteins moved into the bloodstream and circulated throughout the body. The animals’ symptoms improved, usually moving from complete hind limb paralysis to hind limb weakness or tail weakness. "It doesn't act immediately," Rothbard says. "It takes two to three injections. Once you stop the injections, the symptoms come back a day or two later." The researchers found no additional evidence of toxicity in the major organs of the mice.
The researchers also checked the chaperone activity of two representative hexamer peptide fibrils in a routine test with damaged insulin molecules. The small fibers prevented the insulin proteins from aggregating, but the amyloids could not undo those that had already knit themselves together in a larger misshapen amyloid structure. The results showed a correlation between the chaperone function and protection in mice by the amyloid fibers. "One amyloid can reduce another amyloid," Rothbard says.
At this stage, the researchers only have hints about how the short amyloid peptides are working. The fibrils seem to subdue the immune system by reducing levels of circulating inflammatory molecules, but multiple mechanisms may be at work, the authors and other researchers say. "They definitely have an effect on the immune system," Kurnellas says. "We're not even sure the fibrils are getting into the brain. The therapeutic effect may be an indirect result of what is being targeted in the periphery."
In further tests using blood plasma from three people with MS, the scientists observed that amyloid fibers of tau protein removed most of the same innate and adaptive immune proteins as had αB-crystallin in the experiments described in their September 2012 paper. In mice, the tau amyloid injections also reduced levels of cytokines, particularly interleukin 6, which stimulates additional pro-inflammatory mediators.
The Stanford team is now preparing a follow-up paper for publication that probes the actions and mechanisms of the small amyloid fibers in more detail, including their effect on white blood cells. For now, the long-held view of amyloids as unrelenting villains in neurodegenerative diseases makes their investigations about a potentially therapeutic role of amyloids a hard sell among their colleagues, the researchers say.
"If judges at the supreme court of chemistry were adjudicating the guilt or innocence of amyloids sitting as the accused in the dock, then they can't rule that this suspect—amyloid—is guilty," Steinman says. "There are too many extenuating circumstances that show this molecule is a good player and innocent." Those who remain undecided can count on reviewing additional evidence before passing a verdict.
Correction (3 May 2013)
The final paragraph has been modified to clarify its meaning.
Key open questions
- What are the mechanisms underlying the therapeutic effect of the amyloid fibrils in EAE mice? What cell types are involved?
- Are the injected amyloid fibrils reaching the brains and spinal cord of mice?
- What effect do naturally occurring amyloid fibrils in the brains of people with MS have on the immune system?
- Do the amyloids formed from the short hexapeptides have any toxicities?
|
<urn:uuid:f4764870-8a69-435d-9aa6-a52fdd3747b3>
|
CC-MAIN-2016-26
|
http://www.msdiscovery.org/news/new_findings/5839-core-value-amyloids
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399117.38/warc/CC-MAIN-20160624154959-00099-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.940664 | 3,395 | 2.859375 | 3 |
The Persian and Hellenistic Periods
Michael D. Coogan
The conquest of Babylon by the Persian king Cyrus the Great in 539 BCE brought significant changes. In keeping with his policy of respecting the various deities worshiped throughout the empire, a decree by Cyrus in 538 (see Ezra 1.1–4; 6.1–5 ) authorized the rebuilding of the Temple in Jerusalem and the return of the Temple vessels captured by Nebuchadnezzar. In addition, Cyrus allowed any of the exiles who wished to return to Judah to do so. Within the exilic community in Babylon the anonymous prophet known as “Second Isaiah” (Isa 40–55 ) strongly supported Cyrus and urged the exiles to return to Judah. Although historical sources are few and not always easy to interpret, it appears that only a small minority of the exiles and their descendants returned to Judah, most choosing to remain in Babylonia. This latter group became the nucleus of a large and highly significant Jewish Diaspora community (Jews of “the dispersion,” that is, living outside Palestine), which strongly influenced the development of Judaism and Jewish culture during the following centuries.
Despite the decree of Cyrus, the Temple in Jerusalem was not rebuilt until 520–515 BCE. The reasons for the delay were various. Persian control over the western territories may actually have been tentative until after the Persians conquered Egypt in 525. The economy of Yehud (the name by which the Persian province of Judah was known) was weak, and there appears to have been friction between the population that had remained in the land and the small but powerful group who returned from exile with the authorization and financial backing of the Persian king. Conflicts with the neighboring territories of Samaria and Geshur and Ammon in Transjordan also complicated the situation. Within the Bible the prophetic books of Haggai and Zechariah and portions of Ezra 1–6 refer to this period, but these sources have to be read and interpreted critically, for they are neither consistent with one another nor easy to understand on their own terms. At least during the early part of Persian rule the governors of Judah appear to have been prominent Jews from the Diaspora community, one of whom, Zerubbabel, was actually a member of the Davidic royal family. The province of Yehud itself was very small, consisting of Jerusalem and the territory surrounding it within a radius of about 24–32 km (15–20 mi).
Once the Temple was rebuilt, it became the nucleus of the restored community, and consequently a focus of conflict (Isa 56–66; Malachi). The high priestly family, which had also returned from the Diaspora, became very powerful, and at least on occasion was in conflict with the governor appointed by the Persian king. Although the details are often not clear, there appears to have been continuing conflict during the fifth century between those Jews whose ancestors had been in exile and those whose ancestors had remained in the land. Those who returned from the Diaspora styled themselves the “children of the exile” and referred rather contemptuously to the rest as “people of the land,” as though their very status as Jews was in question. In fact, the question of the limits of the community was one of the most contentious issues of the period, reflected both in the controversy over mixed marriages between Jewish men and ethnically foreign women (Ezra 10; Neh 13 ) and also in conflicts within the Jewish community over who had the right to claim the traditional identity as descendants of “Abraham” and “Israel” (see Isa 63.16 and more generally “Third Isaiah,” Isa 56–66 ). Although the conflicts between various contending groups in early Persian period Yehud are largely cast in religious terms, there is no question that they were also in part socioeconomic (see Neh 5 ). All of these conflicts and efforts toward redefinition of the community, however, took place within the reality of Persian imperial control. Thus it is not by accident that the two most prominent figures involved in various reforms of mid‐fifth century Yehud, Ezra and Nehemiah, were Diaspora Jews of high standing, carrying out tasks that had been specifically authorized by the Persian kings.
Because this was a period of self‐conscious reconstruction, it was also a time of immense literary activity, as traditional materials were collected, revised, and edited, and new works composed. Although much of the Pentateuch may have existed in various forms during the time of the monarchy, it was probably reworked during the Persian period into something close to its final form. Indeed, some have suggested that this revision may have been undertaken under the sponsorship of the Persian government, reflecting Persia's interest in achieving stability throughout its empire by means of religious and legal reforms in the provinces. Although a history of Israel and Judah known as the Deuteronomistic History (Deuteronomy through 2 Kings) had been composed during the latter years of the monarchy and updated during the exile, a new version of that history, 1–2 Chronicles, was prepared during the Persian period (ca. 350 BCE). It clearly reflects the concerns of the postexilic community, focusing almost exclusively on the history of Judah and giving particular emphasis to the institution of the Temple. The books of Ezra and Nehemiah interpret events from the decree of Cyrus in 538 until the late fifth century.
In addition to the prophetic books composed at this time (Isa 56–66 , Haggai, Zechariah, Malachi, and perhaps Joel), there is evidence that the texts of older prophets were also edited and reinterpreted. Psalmody had been an important element of worship at the First Temple, but appears to have taken on an even more significant role in the Second Temple. Although the expansion and revision of the book of Psalms may have continued until well into the Hellenistic or even Roman period, an important shaping of the psalter, perhaps including its division into five “books,” was part of Persian period activity. Wisdom writing, too, flourished during this time. The book of Job, parts of the book of Proverbs, and perhaps Ecclesiastes were likely composed then.
|
<urn:uuid:bf77d7e7-ff2c-4777-85c3-2410720fb53f>
|
CC-MAIN-2016-26
|
http://oxfordbiblicalstudies.com/article/book/obso-9780195288803/obso-9780195288803-chapter-12
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396106.71/warc/CC-MAIN-20160624154956-00016-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.98412 | 1,283 | 4.09375 | 4 |
Reading Fluency:
→ What is reading fluency?
→ Why is reading fluency so important?
→ How can we tell if a student is having problems with fluency?
→ How do we teach reading fluency?
→ How can I find out more about reading fluency?
Reading Coaches:
→ What is a reading coach?
→ What do reading coaches do?
→ What skills will a reading coach need?
→ How can I find out more about being a reading coach?
Differentiated Instruction:
→ What is "differentiated instruction"?
→ Does this mean that we are returning to the days of tracking students, with "high" groups and "low" groups of students?
→ How can a teacher manage to provide differentiated instruction? What do the other students do when the teacher is working with one small group?
→ How can I find out more about how to differentiate instruction?
Reading fluency is the ability to read text with appropriate speed and accuracy. Fluent readers also read with good expression.
Reading fluency has been identified by the National Reading Panel (2000; www.nationalreadingpanel.org) as one of the five critical components of reading, along with phonemic awareness, phonics and decoding, vocabulary, and comprehension. The contribution of comprehension and vocabulary knowledge to skillful reading has long been understood by researchers and practitioners. Discussions about the importance of phonemic awareness and phonics have been continuing for decades, and an increasing body of evidence strongly underscores the fundamental roles of these two elements. However, the focus on the value of fluency is relatively new.
Fluency is now understood to be a unique and fundamental component of skilled, proficient reading because of its close link to comprehension and motivation. Students who struggle with fluency, even if their phonemic awareness skills and vocabulary knowledge are strong, and even if they have good word analysis, phonics and decoding skills, will most likely have difficulty understanding what they have read. These students will also be much less likely to read for pleasure and enjoyment.
If a student is struggling with reading, we must check to see if fluency is contributing to their difficulty, rather than just focusing in on helping improve that student's comprehension skills. As Joe Torgesen has stated: "There is no comprehension strategy that compensates for difficulty reading words accurately & fluently."
There are 3 different roles for fluency assessments: screening, diagnosis, and progress monitoring.
Screening: Screening assessments are used to FIND those students who may be having problems in reading. The "gold standard" of screening tools all use some kind of fluency measure (accuracy + speed/rate) such as The Reading Fluency Benchmark Assessor (RFBA) or DIBELS. The assessment of oral reading fluency by listening to a one minute sample of oral reading from unpracticed, grade level text, has been shown to predict overall reading ability with a high degree of accuracy, especially in the primary grades. Oral reading fluency scores of words correct per minute (wcpm) can be compared to benchmark norms to determine if a student may need assistance in reading. The Hasbrouck & Tindal norms were developed for this purpose.
Hasbrouck, J., & Tindal. G. (2005). Oral Reading Fluency Norms Grades 1-8. Table summarized from Behavioral Research & Teaching (2005, January). Oral Reading Fluency: 90 Years of Assessment (BRT Technical Report No. 33), Eugene, OR: Author. http://www.brtprojects.org.
Hasbrouck, J., & Tindal, G. A. (2006) Oral reading fluency norms: A valuable assessment tool for reading teachers. The Reading Teacher, 59(7), 636-644.
Diagnosis: Once it has been determined that a student is having problems with reading, it is important to determine "why"? What is contributing to or causing these problems? Diagnostic assessments are used by teachers to determine a student's strengths and needs in the five key areas of phonemic awareness, phonics, fluency, vocabulary, and comprehension. Diagnostic assessments of fluency are similar to the screening assessments, except now the one minute assessments of oral reading fluency are conducted in instructional level text rather than grade level text. For example, if a 5th grade student is reading at about the 3rd grade level, we would assess his fluency using unpracticed passages of 3rd grade text. That score can then be compared to benchmark scores of 3rd graders to determine if that student's fluency is on track for their level of reading development. (NOTE: a 5th grader who is reading at the 3rd grade level will clearly need a serious reading intervention that will likely include some fluency practice. It is also possible that diagnostic assessments will indicate that the cause of this student's reading problems are primarily in the areas of phonics/decoding or even phonemic awareness.)
To diagnose phonics and decoding concerns, you may want to use a tool like the Quick Phonics Screener (QPS), developed by Dr. Jan Hasbrouck. The QPS is an informal, individually administered diagnostic assessment. Teachers can use the results to plan students' instructional or intervention programs in basic word reading and decoding skills and to monitor students' progress as their phonics skills develop. The QPS is available at www.readnaturally.com.
Progress monitoring: Fluency measures are also used to help determine if a student's SKILLS ARE IMPROVING in an instruction or intervention program. Weekly or bimonthly one-minute assessments of oral reading fluency, using unpracticed passages at a student's instructional or goal level, can be used by a teacher to make decisions about the effectiveness of an instructional program.
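Since screening, diagnosis, and progress monitoring all come down to the same words-correct-per-minute arithmetic, a rough sketch may help make the bookkeeping concrete. This is purely illustrative: the function names are invented, and the 110 wcpm benchmark is a placeholder rather than a value taken from the Hasbrouck & Tindal norms table.

# Illustrative only: the 110 wcpm benchmark is a placeholder, not a figure
# from the Hasbrouck & Tindal (2005) norms table.

def words_correct_per_minute(total_words_read, errors, seconds=60.0):
    """Score a timed oral reading sample as words correct per minute (wcpm)."""
    correct = max(total_words_read - errors, 0)
    return correct * 60.0 / seconds

def needs_follow_up(wcpm, benchmark_wcpm):
    """Screening decision: flag the student if the score falls below the benchmark."""
    return wcpm < benchmark_wcpm

def average_gain(progress_scores):
    """Progress monitoring: average change in wcpm between consecutive checks."""
    changes = [later - earlier for earlier, later in zip(progress_scores, progress_scores[1:])]
    return sum(changes) / len(changes)

# A one-minute read of 112 words with 7 errors, screened against a hypothetical 110 wcpm benchmark.
score = words_correct_per_minute(112, 7)
print(score, needs_follow_up(score, 110))   # 105.0 True
print(average_gain([58, 61, 63, 67, 70]))   # 3.0 wcpm gained per check

The same wcpm figure serves all three purposes; what changes is only the level of the passage it is taken from and the benchmark or goal it is compared against.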
Researchers have identified three ways to improve students' reading fluency: teacher modeling, repeated reading, and progress monitoring. The Read Naturally strategy has combined these three components:
1. A student reads a challenging piece of text aloud and records the words correct per minute score on a graph.
2. The student then reads along, aloud, while that same piece of text is read aloud by a narrator (on an audio tape, CD, computer, or read by a teacher or tutor). The purpose of this step is to build the student's accuracy in reading the text. It will typically take about 3 readings of the text to develop sufficient comfort and accuracy with each reading, continuing until a predetermined goal is met (usually about 30 to 40 words above the original reading in Step #1). Students will often need to do about 4 or 5 practice readings to meet their goal (a small sketch of this goal arithmetic appears below).
3. The teacher listens to the student read the text to determine if the fluency goal has been met.
4. The student gets to graph this new, successful score on the graph in a second color.
5. Additional activities can be added to these steps, including an oral or written retell, answering questions about the passage, etc.
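The goal-setting in Step 2 is simple arithmetic, sketched below. The 30-40 wcpm increment comes from the steps above; the sample scores and function names are made up for illustration.

def fluency_goal(cold_read_wcpm, increment=35):
    """Goal for the final timed read: the unpracticed (Step 1) score plus roughly 30-40 wcpm."""
    return cold_read_wcpm + increment

def goal_met(timed_read_wcpm, goal):
    return timed_read_wcpm >= goal

cold = 62                              # Step 1: unpracticed "cold" read, recorded on the graph
goal = fluency_goal(cold)              # predetermined goal, here 97 wcpm
practice = [70, 81, 90, 96, 99]        # Step 2: read-alongs and practice readings
final = practice[-1]                   # Step 3: the teacher listens to a final timed read
print(goal, goal_met(final, goal))     # 97 True, so the new score is graphed in a second color (Step 4)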
A Focus on Fluency is a free publication available through the Pacific Resources for Education and Learning (www.prel.org).
Developing Fluent Readers (white paper)
A reading coach can be defined as: "an experienced teacher who has a strong knowledge base in reading and experience providing effective reading instruction to students, especially struggling readers. In addition, a reading coach has been trained to work effectively with peer colleagues to help them improve their students' reading outcomes and receives support in the school for providing coaching." (Hasbrouck & Denton, 2005).
Many people think that the primary role of a reading coach is to watch a teacher teach a reading lesson and then provide feedback to that teacher, including making suggestions for how to improve the lesson.
This is certainly something that reading coaches can do. It may even be the centerpiece of their coaching efforts, but…coaching is much more complex and involved than this.
In order to observe and provide feedback to a teacher, the coach first has to establish a professional relationship with that teacher. Given that the role of "reading coach" is so new to most schools, the role itself needs to be introduced to the teachers and administrator. Decisions will need to be made about several issues: What services will the coach be providing? How will the coach be evaluated by the principal/supervisor? How will issues of confidentiality be handled? How will the coach find the time to provide coaching services to colleagues?
Coaches who are skillful and experienced reading teachers will often need to learn several new skills to become equally skillful coaches. These skills include:
establishing a professional, collaborative relationship with colleagues (trust building/"entry");
managing professional time;
communicating effectively with colleagues, parents, and administrators, especially when discussing emotionally challenging topics;
working effectively with a team to address student or school concerns;
collecting and analyzing data for problem-solving and coaching (conducting interviews, observations, and assessments);
providing specific feedback to a teacher for improving instructional skills and strategies;
designing and conducting professional inservice trainings;
helping provide systems-level consultation to address school-wide or district-wide concerns.
The Reading Coach: A How-to Manual for Success by Jan Hasbrouck, Ph.D. and Carolyn Denton, Ph.D. published by Sopris West (www.sopriswest.com)
This term means different things to different people. In general, when educators talk about differentiating instruction they mean planning lessons and providing instruction and practice activities that are appropriate for each student's individual background and skill levels. It suggests that at least some instruction would be provided to small groups of students.
Creating permanent, homogeneous groups of students based on their academic ability has been shown to be an ineffective way to differentiate instruction. The small groups should instead be flexible and re-formed from time to time to allow groupings of students for different reasons and sometimes even to pair students at different skill levels.
This is an important question that must be addressed if a teacher is going to be successful with differentiating instruction. A key place to start is to rethink how the classroom is organized and managed so that a teacher can work with the whole class but also have time to work with small groups. To get started, an instructional schedule needs to be developed to map out blocks of whole-class instruction and a few 20-25 minute blocks where the teacher can teach smaller groups of students. Teachers should also think about creating a list of jobs for students (and training them how to do the jobs) so students can manage their time while the teacher is busy teaching. A system for managing paperwork, supplies, and learning centers also needs to be developed.
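As one concrete way to picture such a schedule, here is a hypothetical sketch of a small-group rotation plan. The group names, the 25-minute block length, and the independent jobs are invented placeholders, not a prescribed model.

from itertools import cycle

groups = ["Group A", "Group B", "Group C"]
independent_jobs = ["partner reading", "word work", "listening center"]

def build_rotation(blocks=3, minutes=25):
    """For each block, one group meets with the teacher while the others do assigned jobs."""
    jobs = cycle(independent_jobs)
    schedule = []
    for block in range(blocks):
        with_teacher = groups[block % len(groups)]
        others = {g: next(jobs) for g in groups if g != with_teacher}
        schedule.append({"block": f"{minutes}-minute block {block + 1}",
                         "teacher group": with_teacher,
                         "independent work": others})
    return schedule

for entry in build_rotation():
    print(entry)

Rotating which group meets with the teacher in each block, while the others work through assigned jobs, is one simple way to keep the groups flexible rather than fixed.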
|
<urn:uuid:abe82c31-82b5-4e22-8c0b-8960e2c73a3c>
|
CC-MAIN-2016-26
|
http://www.jhasbrouck.com/q_a.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393442.26/warc/CC-MAIN-20160624154953-00174-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.946989 | 2,279 | 4.125 | 4 |
What does LOSA mean in Sports?
This page is about the meanings of the acronym/abbreviation/shorthand LOSA in general and in Sports terminology in particular.
Lake Ontario Steelhead Association
What does LOSA mean?
- The nuraghe Losa is a complex prehistoric building in the shape of a tholos tomb. Its central structure has a triangular shape. On the west side, a turreted wall is linked to it. The whole built complex is surrounded by a wider wall, which encloses the settlement of the original village of huts and other additional buildings constructed in the late-Punic, imperial Roman, late Roman and high Middle Ages periods.
|
<urn:uuid:bef16711-7250-4b12-818e-f8e41a3e7c70>
|
CC-MAIN-2016-26
|
http://www.abbreviations.com/term/299420
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396459.32/warc/CC-MAIN-20160624154956-00098-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.933924 | 166 | 2.765625 | 3 |
Gerard N. Magliocca. Andrew Jackson and the Constitution: The Rise and Fall of Generational Regimes. Lawrence: University Press of Kansas, 2007. xi + 186 pp. $29.99 (cloth), ISBN 978-0-7006-1509-4.
Reviewed by Matthew Warshauer (Department of History, Central Connecticut State University)
Published on H-Law (July, 2008)
In Andrew Jackson and the Constitution, Gerard N. Magliocca, associate professor of law at Indiana University, has written a little book with a lot of big ideas. In only 129 pages of text, he covers everything from the growth and challenges of Supreme Court- and president-influenced constitutional doctrine, to its relation to race during the Jacksonian period, to the link between the rise of abolitionism and Indian rights, to the meaning of judicial review and the Fourteenth Amendment, and much in between. Without question, Professor Magliocca is a learned and broad thinker. His ideas will cause readers to think, and that is what any good book should do. Yet at the same time, the big ideas are sometimes not as fully engaged as some readers might like. This is most certainly due to the limited size of the book, and I wondered why it was not expanded to provide an even deeper consideration of the many ideas that are presented.
Readers should note that Magliocca's is not the first book titled Andrew Jackson and the Constitution. The original was written by Francis Norene Ahl in 1939. The title, however, is the only thing that these books have in common. Whereas Ahl provided a rather cursory overview of Jackson's actions as president and, for the most part, concluded that his dominant personality forced change that reflected "a wholesale contempt for the law," Magliocca asserts that the seventh president's actions were far more reflective of a generational response to the federal government's growing power. The key catalysts for the Jacksonian generation's actions, or what Magliocca might more properly define as reactions, were chief justice John Marshall's 1819 McCulloch v. Maryland decision in which he acknowledged the constitutionality of the Bank of the United States, and the Worcester v. Georgia decision (1832) in which he attempted to provide a safeguard against government action regarding the Cherokees.
Before going further, it is important to understand that Magliocca's work falls within the realm of what one might call "serious" legal scholarship. I do not mean this in a trite way, insinuating that other legal scholarship is not serious. The readers of H-Law will surely grasp the historiographical and analytical significance of what Magliocca is attempting in offering a broad, and at times sweeping, discussion of constitutionalism and the seminal importance of particular cases as they relate to political movements and change. The author even acknowledges in the conclusion that one of his goals is to "offer some fresh ideas about our constitutional past" and to try "to sketch out fruitful lines of research for others to pursue" (p. 129). Thus, in many ways, Magliocca is attempting to spark debate and considered thought, rather than derive "conclusions." One of the rubs in his approach, however, is that those who are interested in constitutional ideas and how they intersect with politics, especially the rise of political parties, may find themselves scratching their heads at some of the broad ideas offered and the how the specificity of Magliocca's legal focus relates to the bigger movements of politics in the Jacksonian period. As one who skirts the boundaries of both constitutional and political history, I believe that this book will spark the interest of those readers who fall more within the constitutional/legal camp of scholars.
That said, Magliocca's broad approach is matched with a broad theory regarding the nature of what he calls "constitutional generations," or "generational cohorts," offering that "The cycle in constitutional law is fueled by the fact that each generation goes through a unique set of collective experiences that sets its views apart from its predecessors" (p. 2). He follows by asserting that "This claim finds support in the literature on 'generational cohorts,' which explains that people who come of age at the same time tend to view political and social issues in the same way throughout their lives" (p. 3). Magliocca sums up that "The main point that flows from this temporal analysis is that the friction from the regular clashes between 'constitutional generations' is the primary force shaping the first principles of the Republic" (p. 3). At its most basic level, Magliocca's thesis is that one generation's actions spark the next generation's reaction, and hence the original generation "carries the seeds of its own destruction, as its very success eventually triggers a backlash" (p. 7).
Whereas Magliocca's contention, that action sparks reaction, is fundamentally sound, some may take issue with his rather clear-cut generational depictions. How long, for example, does a generation last? When does it start? How do we define a clear end? What happens when those of the same age group, those who essentially "come of age" at the same time, do not "view political and social issues in the same way throughout their lives"? Do they still belong to the same generation? What if one group harkens back to an earlier generation for legitimacy? What if both groups claim a previous generation's legitimacy? Surely this was the case in the early years of the republic, when Federalists and Republicans divided so steadfastly, and angrily, over the meaning of the Revolution and what it should foster for the future of the young nation.
I do not mean that social and political movements can never be defined by generations. Tom Brokaw properly defined those who lived through World War II as the "greatest generation." In this instance, one can see that a collective experience shaped the beliefs and intentions of an entire group, a generation. I am not, however, certain that such a clear point of demarcation, such as Pearl Harbor or the Nazi threat, can be determined for those in the Jacksonian period. Magliocca pinpoints the beginning of the Jacksonian generation in the combination of the Panic of 1819 and John Marshall's McCulloch decision, referring to it as a "lighted match" that provoked a crisis leading to "the demise of his constitutional generation" (p. 9). "Marshall and his allies were now on the defensive, confronted by a new movement dedicated to a major revision of constitutional principles" (p. 11).
Though there is no question that Jackson and many of his followers were opposed to McCulloch (Jackson made that abundantly clear in his Bank Veto message), it does not follow that the beginning of this opposition materialized in the immediate aftermath of the Court's decision or even with the burgeoning of the Jackson coalition in 1824, or in 1828. In fact, it is hard to say at the outset what the new Jacksonian Democratic Party represented, beyond very general calls for "reform" and a dedicated opposition to John Quincy Adams and Henry Clay. One of Jackson's greatest difficulties was in satisfying all of the elements of his party. Clay believed that very problem would be the president's downfall if he vetoed the Bank bill. To be sure, the inconclusive nature of Jackson's coalition is certainly not lost on Magliocca, who acknowledges that, "In Congress, the situation was muddled at best. People calling themselves Democrats were a majority, but the president could not rely on them because there was no agreement on what being a Democrat meant" (p. 21). I would argue, then, that the logical extension of this is to be very careful in attempting to define such a clear-cut Jacksonian generational regime.
Political historians will also consider additional factors in the rise of the second American party system, other than focusing almost solely on McCulloch. Magliocca, for example, makes no mention of the Missouri crisis, which was very much a catalyst for Martin Van Buren, arguably the architect of the new party system. Additionally, in his broad overview of the period and McCulloch's place in it, the author lumps together some rather complex issues, asserting, for example, that the Bank represented a "contest over whether America's future rested with the commercial world exemplified by the Bank or with the simple agrarian life that evoked the Minutemen of the Revolution" (p. 11). Such a statement is too general. Jackson, and many in his party, were not simple, rustic, yeoman farmers. They were every bit as capitalistic as were the opposing Whigs. The issue was about power and access to wealth.
Finally, the point of generational regimes and Jackson as a leader who forced change intersects a major issue of debate among Jacksonian scholars: to what extent did Jackson set a clearly defined agenda and carry it through as a forceful, modern president who shaped legislation, expanded the veto power, and ushered in a new era of understanding regarding the power of the chief executive? Magliocca's focus on generational regimes clearly depicts Jackson as a clear-minded agent of change who led his cohorts and shaped the veto into a powerful weapon.
The other major focal point for Magliocca is the Worcester decision, the ensuing Jacksonian generation's onslaught against Native Americans, and the corresponding destruction of rights for all nonwhites. Magliocca draws insightful parallels between the treatment of the Cherokee, the Dred Scott decision, and the writing of the Fourteenth Amendment. He similarly makes direct connections, as have other scholars, between the abolitionists' concern for the plight of slaves and their outrage over the fate of Native Americans. Magliocca ultimately concludes that through their reaction to the Worcester decision, the passage of the Removal Act in 1830, and what followed, "the Jacksonian generation was largely defined by the struggle over Cherokee rights" (p. 103).
Though there is no question about the racial proclivities of Jackson and his followers, political historians will take issue with the argument that the Cherokee issue "defined" the Jacksonian generation. They look more consistently to the Bank War, and some include the Nullification Crisis and Jackson's response to it as a major episode in the perpetuity of the Union and the nature of the Constitution. Interestingly, with Magliocca's narrow focus on Supreme Court decisions and the specifics of legal history, nullification and the serious constitutional issues tied to it barely appear in the book. Historians from a variety of fields will also debate Magliocca's statement that "there was a big difference between the way Tribes were treated before the 1830s and after" (p. 13). His meaning here is, again, in terms of strict legal/governmental policy, but there is no mistaking the fact that Native Americans were robbed of their lands and treated as less than whites long before Andrew Jackson came on the scene.
Though I have certainly taken issue with some of what Magliocca argues, his book has caused me to think and formulate ideas in regard to his points. I have benefited from that. One of the items that was of particular note, and which transcends both legal and political history, is the author's assertion regarding how precedents are created. He describes a sort of haphazard growth of constitutionalism, rather than a legalistic crafting. More often than not, changes were far more reactionary than forward-thinking in constitutional terms: "Many of the constitutional principles that are now considered fundamental began as nothing more than offshoots of a generational conflict. The great engine of legal creativity is the primal desire to win. As a result, leaders caught up in the emotions unleashed by a fight for power often reached for unorthodox solutions to attract support. Innovations introduced in the heat of battle often became pillars of constitutional order over time" (p. 47). I agree whole-heartedly with this analysis and have documented it rather clearly when discussing the precedent that Jackson created when he declared martial law in New Orleans in 1814-15 and what the Congress did with that action in the 1840s. The Democrats pushed hard to essentially legalize Jackson's use of emergency powers without much consideration of who might utilize those powers in the future or how they might impact the Constitution.
One of the other ideas that I found intriguing was the constitutional importance of William Henry Harrison's premature death. Magliocca argues persuasively that had Harrison lived he would have signed a new Bank bill into law and thus offered the Roger Taney-led, and Jacksonian-generationally influenced, Supreme Court an opportunity to overrule McCulloch. When John Tyler, really a Democrat, vetoed the Whig-sponsored Bank bills, he robbed the Court of such an opportunity. Magliocca concluded that "The replacement of Harrison by Tyler is a prime example of the role chance plays in constitutional politics" (p. 81).
All in all, Andrew Jackson and the Constitution is, as I have stated, a small book with a lot of big ideas. Gerard Magliocca is to be commended for offering both constitutional/legal and political historians something to contemplate.
1. Francis Norene Ahl, Andrew Jackson and the Constitution (Boston: Christopher Publishing House, 1929), 22.
2. Matthew Warshauer, Andrew Jackson and the Politics of Martial Law: Nationalism, Civil Liberties and Partisanship (Knoxville: University of Tennessee Press, 2006).
If there is additional discussion of this review, you may access it through the network, at: https://networks.h-net.org/h-law.
Matthew Warshauer. Review of Magliocca, Gerard N., Andrew Jackson and the Constitution: The Rise and Fall of Generational Regimes.
H-Law, H-Net Reviews.
Copyright © 2008 by H-Net, all rights reserved. H-Net permits the redistribution and reprinting of this work for nonprofit, educational purposes, with full and accurate attribution to the author, web location, date of publication, originating list, and H-Net: Humanities & Social Sciences Online. For any other proposed use, contact the Reviews editorial staff at [email protected].
|
<urn:uuid:9342453e-9ebd-47f8-812f-fa63ec10fa24>
|
CC-MAIN-2016-26
|
http://www.h-net.org/reviews/showrev.php?id=14701
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399428.8/warc/CC-MAIN-20160624154959-00101-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.959962 | 2,933 | 2.890625 | 3 |
The selection of this volcanic butte 20 miles from Boise City as the Initial Point for a survey of the Idaho Territory in 1867 was due to its isolated prominence and to the fact that it was far enough west that the meridian would extend northward through the narrow panhandle of the territory all the way to the Canadian border. Today the mound is due south of the town of Meridian, and is topped with a viewing platform reachable via a rough dirt road. Due to the prominence and isolation of the location, it is often vandalized.
The Initial Point viewing platform was built in 1962 by a consortium that included the BLM, the Idaho Society of Professional Engineers, and the county. A marble column was also constructed and inscribed with a text describing the history of the Initial Point. Over the following years the column and other parts of the monument, including the official brass discs, were defaced, shot up, spray painted, or stolen. In 1990, the site was cleaned up and re-dedicated, but by 1996 the new plaques were stolen and the site shot up again. The current Initial Point disc was set into the concreted floor of the platform by the BLM in 2008.
The Initial Point butte rises 170 feet above the surrounding plain.
The access road to the butte lines up with the western baseline.
The latest marker in the concrete pad, from 2008.
The name of the butte is Initial Point.
The Initial Point was used to survey all of Idaho.
|
<urn:uuid:4c42f932-ba35-41f8-a9ca-bfec5c5ef1a8>
|
CC-MAIN-2016-26
|
http://www.clui.org/section/boise-meridian
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396455.95/warc/CC-MAIN-20160624154956-00130-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.972939 | 306 | 3.03125 | 3 |
Soka Gakkai International
Buddhism in Action for Peace
The Soka Gakkai International (SGI) is a lay Buddhist organization upholding the tradition that originated with Shakyamuni (Gautama Buddha) and developed as it was inherited by India’s Nagarjuna and Vasubandhu, China’s T’ien-t’ai and Miao-lo and Japan’s Dengyo and Nichiren.
The specific Buddhist tradition embraced by the SGI is based on the Mahayana scriptures and the Lotus Sutra in particular. The SGI is engaged in faith practices and activities in society that correspond with the compassionate spirit of the Lotus Sutra in the contemporary world.
The founder of Buddhism, Shakyamuni, was born some 2,500 years ago to the royal family of an area in what is now Nepal. Shakyamuni observed the sufferings of aging, sickness and death and, although he was then young and healthy himself, perceived that they were unavoidable aspects of human life. He renounced secular life and embarked on a quest for a true philosophy that would elucidate the meaning of life for all people.
Shakyamuni studied both traditional teachings and new teachings of his time but was not satisfied. He practiced meditation and contemplated deeply upon the root cause of suffering and a way to overcome it. Through this, he awakened to the eternal and universal law permeating the universe and the lives of each and every individual. This Law (Dharma) to which Shakyamuni awakened is the essence of Buddhism.
Shakyamuni realized that people were suffering due to ignorance of the sanctity of their own lives and to self-centeredness arising from attachment to elusive desires and destructive egotism. He taught that by awakening to the universal Law one could release oneself from the smaller self and manifest one’s pure state of life. He explained that this was the most dignified and essential quality needed in order to live fully human lives.
In other words, his aim was the revival of human vitality and the awakening of unsurpassed dignity in individuals’ lives so that they could unlock their boundless potential through activating their inner wisdom.
Shakyamuni also stressed that an awareness of the dignity of one’s own life should lead to respect for the dignity and value of the lives of others.
Following Shakyamuni’s death, his teachings, at the core of which were always compassion and wisdom, were compiled into various sutras, which became the basis for the establishment of a system of doctrines and schools of Buddhism.
The Mahayana Buddhist movement about 500 years after Shakyamuni’s time constituted a kind of Buddhist Renaissance, during which many new sutras were compiled, the Lotus Sutra being one of them.
The Lotus Sutra describes Shakyamuni’s vow made in the distant past to elevate the life state of all living beings to that which he had attained. It states that this vow was fulfilled in teaching the Lotus Sutra. The Lotus Sutra repeatedly calls for acts of compassion in order to inherit and actualize Shakyamuni’s eternal hope.
The Lotus Sutra is a great literary work in the form of a dialectic that takes place between Shakyamuni and his disciples. Through these dialogues, we learn that all people possess the life condition of the Buddha and the Buddha’s wisdom. The Sutra also clarifies the path to enlightenment for all people. Secondly, it clarifies that the teachings in the Lotus Sutra represent the foundational teaching of all Buddhas. Thirdly, it teaches that at times when people have fallen into suffering, disbelief and worry, the teachings of the Lotus Sutra should be shared among the people as it will provide hope, courage and security. The Lotus Sutra expresses the essential wish to attain unshakable happiness for oneself and all others and reveals Shakyamuni’s core teaching of how to lead people to overcome the root cause of suffering.
Learning from this sutra, Nagarjuna, Vasubandhu, T’ien-t’ai, Miao-lo and Dengyo devoted themselves to enabling people to reveal their unlimited potential within their respective cultural contexts.
The Lotus Sutra has been transmitted and embraced down the centuries across numerous cultures. In India, Nagarjuna and Vasubandhu widely propagated the ideas and teachings of Mahayana Buddhism and the Lotus Sutra. In East Asia, in the sixth and eighth centuries respectively, T’ien-t’ai and Miao-lo from China wrote about the superiority of the Lotus Sutra over various other sutras. In the ninth century, Dengyo introduced their teachings to Japan and worked to promote widely the concept of enlightenment of all people, as expounded in the Lotus Sutra.
Through this, the teachings of the Lotus Sutra and Shakyamuni’s true intent became clarified and universalized, gaining a multilayered richness.
Nichiren, who lived during a time of great conflict and upheaval in 13th-century Japan, empathized greatly with the suffering of the people and searched for a way to overcome suffering.
His intention was to become a true disciple of Shakyamuni, who taught Buddhism as a way to realize the genuine happiness and dignity of all people. Through his studies of the Buddhist sutras and his predecessors’ commentaries, he realized that it is the Lotus Sutra that enables the infinite potential of all people to flourish and permeate throughout society.
Strongly determined to actualize a harmonious society, Nichiren worked to establish true happiness and dignity for humanity. Although he suffered oppression and persecution from those in power who adhered to what he saw as mistaken beliefs about Buddhism, Nichiren risked his life to encourage and revitalize the people, just as the Lotus Sutra taught. Through this process, he established the practice of chanting Nam-myoho-renge-kyo, inscribing as the object of devotion a mandala known as the Gohonzon. Nichiren established a concrete practice for attaining Buddhahood based on the essential teaching of the Lotus Sutra.
Nichiren’s guiding principle throughout his life was to uphold human dignity as a spiritual backbone for human society toward the creation of a peaceful world where people can enjoy fulfilling lives.
This process continues an effort—ongoing since Shakyamuni’s times—to overcome the deep-seated and destructive nature of egotism that erodes human life and society. Today, the members of the SGI, based on the teachings of Nichiren, have inherited this mission. Their task is, in short, the realization of a new humanism—the pursuit of happiness for both self and others, where trust, value creation and harmony are key.
Through their daily practice, people are able to challenge various obstacles and, through the process of chanting, reflect deeply on themselves and draw forth hope and a spirit of challenge and courage. They are also able to develop a sense of values firmly grounded in humanity and construct a rich personality. SGI Buddhists call this process of inner-motivated change, “human revolution.”
The practice of Nichiren Buddhism concerns itself with realizing one’s inherent potential and fulfilling one’s responsibility to the fullest, whether it be in the home, community or workplace. It is also about proactively contributing to finding a solution to the various problems facing the world. SGI members are committed to promoting the importance of peace and the ideal of respecting the dignity of life and human rights through various activities, such as through holding exhibitions about the threat of nuclear weapons or humanitarian relief activities. The SGI is also working to raise awareness of environmental issues confronting the planet.
The SGI is an organization dedicated to revitalizing this legacy of Buddhist humanism, at the core of which are belief in the Buddha nature and compassionate action to reveal that nature. This is a legacy inherited from Shakyamuni and passed down by Nichiren.
Regarding it as the essence of Buddhism, the SGI aims to transmit this tradition and spirit in contemporary society and onward into the future.
|
<urn:uuid:487e841a-68a4-4e02-82f1-fa21fa1c6dda>
|
CC-MAIN-2016-26
|
http://www.sgi.org/about-us/buddhist-lineage/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395346.6/warc/CC-MAIN-20160624154955-00089-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.956113 | 1,921 | 2.53125 | 3 |
Dang. Here's one which is very similar, but doesn't exactly match the ICC:
dishonest creationist textbook:
Critics of the fossil succession argument point out that what is true of animals is also true of plants. For example, flowering plants appear suddenly in the early Cretaceous period, 145-125 million years ago. This rapid appearance is sometimes called the angiosperm big bloom. “The origin of the angiosperms remains unclear,” writes one team of researchers. “Angiosperms appear rather suddenly in the fossil record…with no obvious ancestors for a period of 80-90 million years before their appearance.”[10] This contradiction was so perplexing that Darwin himself referred to it as “an abominable mystery.”[11] As a result, critics say the pattern of fossil appearance does not support Darwin’s picture of a gradually branching tree.
Index of creationist BS:
In the Cambrian explosion, all major animal groups appear together in the fossil record fully formed instead of branching from a common ancestor, thus contradicting the evolutionary tree of life.
Wells, Jonathan, 2000. Icons of Evolution, Washington DC: Regnery, pp. 40-45
1. The Cambrian explosion does not show all groups appearing together fully formed. It shows some animal groups (and no plant, fungus, or microbe groups) appearing over many millions of years, in forms very different, for the most part, from the forms that are seen today.
2. During the Cambrian, there was the first appearance of hard parts, such as shells and teeth, in animals. The lack of readily fossilizable parts before then ensures that the fossil record would be very incomplete in the Precambrian. The old age of the Precambrian era contributes to a scarcity of fossils.
3. The Precambrian fossils that have been found are consistent with a branching pattern and inconsistent with a sudden Cambrian origin. For example, bacteria appear well before multicellular organisms, and there are fossils giving evidence of transitionals leading to halkierids and arthropods.
4. Genetic evidence also shows a branching pattern in the Precambrian, indicating, for example, that plants diverged from a common ancestor before fungi diverged from animals.
|
<urn:uuid:ea5b0d8b-583d-421d-82c7-818ef69a5ee3>
|
CC-MAIN-2016-26
|
http://www.antievolution.org/cgi-bin/ikonboard/ikonboard.cgi?act=SP;f=14;t=5133;p=67616
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397562.76/warc/CC-MAIN-20160624154957-00190-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.91411 | 501 | 3.46875 | 3 |
The low-light photographic performance of smartphones could soon get a significant boost, thanks to the development of a new type of color filter. Created by an engineer at the University of Utah, the new filter is said to let in three times more light than conventional filters, resulting in brighter and sharper images with better color reproduction.
Most digital cameras, with exceptions such as the Fujifilm X-Trans and Sigma Foveon cameras, use a Bayer filter to help capture color information. These filters sit over the image sensor and filter light into a mosaic pattern of red, blue and green on a pixel level, before "demosaicing" it into a final image with full color information. However, this absorptive color-filter array is said to be inefficient, as it prevents 50 to 70 percent of light from ever reaching the sensor.
As such, we’ve recently seen developments like Panasonic's Micro Color Splitter which aim to address the problem and get more light to the image sensor. The latest development comes from Computer Engineering professor Rajesh Menon of the University of Utah, who has created a new transparent diffractive-filter array, which lets in three times more light than its Bayer alternatives.
The new transparent filter measures just a micron thick (100 times thinner than a human hair) and consists of a wafer of glass with precisely-designed microscopic ridges etched on one side. This bends the light in certain ways as it passes, and creates a series of at least 25 new codes or color patterns which are, in turn, read by software.
Because three times more light reaches the sensor, and the filter is producing more color information (25 or more codes compared to the traditional red, green or blue), this is said to result in brighter images with more accurate color representation, and virtually no digital grain.
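To make the computational step concrete, here is a minimal sketch of the general idea of recovering a pixel's color from more coded measurements than the usual three. Everything specific in it is an assumption for illustration: the 25×3 mixing matrix, the noise level and the plain least-squares solve are invented, and the reconstruction described in the Optica paper uses far more sophisticated computational optics than this.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sensing model: the diffractive filter turns the scene's R, G, B
# content at a pixel into 25 coded intensity measurements (one per "code").
A = rng.uniform(0.0, 1.0, size=(25, 3))        # made-up mixing matrix

true_rgb = np.array([0.8, 0.3, 0.1])           # scene color at one pixel
measurements = A @ true_rgb                    # what the sensor records
measurements += rng.normal(0, 0.01, size=25)   # a little sensor noise

# Software step: estimate R, G, B from the 25 codes by least squares.
rgb_hat, *_ = np.linalg.lstsq(A, measurements, rcond=None)
print(np.round(rgb_hat, 3))                    # close to the true color
```

The point of the toy example is simply that, with 25 or more well-chosen codes per pixel instead of one filtered value, the software has an overdetermined system to invert, which is what allows both the extra light and the improved color estimates.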
While the filter could be used for any kind of digital camera, Menon is developing it specifically for smartphone cameras where low-light performance is a big issue. He thinks the first commercial products to use this new filter could be out within the next three years. He also sees industrial applications such as for robots, security cameras and drones. For example, this type of filter could allow self-driving cars to better decipher objects on the road at night.
The paper "Ultra-high-sensitivity color imaging via a transparent diffractive-filter array and computational optics" was recently published in the journal Optica.
Source: University of Utah
|
<urn:uuid:a3a8aa95-8311-4b36-b555-5f765756ff20>
|
CC-MAIN-2016-26
|
http://www.gizmag.com/camera-filter-boosts-low-light-performance/40131/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396222.11/warc/CC-MAIN-20160624154956-00006-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.936572 | 502 | 3.546875 | 4 |
When it comes to “free energy,” most scientists like to shove it under the pseudoscience category, because they can’t seem to grasp the idea that free energy is real. Conventional scientists have a hard time accepting the idea that a “free energy” device, such as the Quantum Energy Generator (QEG), can produce limitless energy because it violates the laws of thermodynamics.
Why “Free Energy” Machines Can Defy the Laws of Thermodynamics
In broad terms, the laws of thermodynamics are a set of scientific laws that describe the transport of heat and work in thermodynamic processes. In other words, they describe energy transformation, or the process of changing energy from one form to another. The laws of thermodynamics don't support a "free energy" generator because these laws are based on a closed system.
Most systems in the Universe are open systems. These open systems are able to defy the countervailing force of entropy and thus they can defy the laws of thermodynamics. Most conventional scientists don't understand this concept, because they have been studying how energy works in a closed system. Furthermore, their big egos, ignorance and arrogance are preventing them from discovering many great secrets of the Universe. These old scientific ways of thinking are coming to an end due to the fact that they are becoming more irrelevant as the new paradigm emerges.
As published at Blue-Science.org.
Most theorists envision the Universe in its entirety as a closed system; meaning it is self-contained, energetically finite and tends toward a state of equilibrium (maximum entropy and disorder). Many of our technologies mirror this flawed principle in that they are designed to intake a finite energy source (petroleum for example) and inefficiently dissipate it in exchange for work (always resulting in a COP < 1.0).
Open systems on the other hand operate much differently. Take for example technologies such as windmills, solar cells, or water wheels. They take their energy from environmental sources that can be considered infinite for all practical purposes, and can therefore operate at COP > 1.0.
Why “Free Energy” Devices are Now Coming Out
The old scientific paradigm doesn’t support quantum energy device, antigravity, time travel and superluminal technology, because it has become a religion misdirected by greedy and wealthy elites. Any scientific and technological developments that threaten the elites’ controlling empire are taken out of the public domain and labeled as classified in the name of “national security.” Fortunately for us, the elites are losing control of the information war, causing their “brainwashing” psychological operations to lose their effects.
Due to the collapse of the old paradigm and the elites losing the ability to silence free energy technology, in the next few years we should start seeing more technologies that will defy many conventional scientific laws, ushering a new age in which we can free ourselves from the control of the oil cartel. One of these technologies is the Quantum Energy Generator. Some free energy activists are already claiming that this “free energy” machine works.
What is the Quantum Energy Generator and Its benefits?
The Quantum Energy Generator (QEG) is based on the works of Nikola Tesla, a scientist who is famous by name yet whose work remains little known, and who patented one of the first "free energy" generators over 100 years ago. According to the Fix the World Organization, the QEG uses less than 1 kilowatt (kW) of power but can produce up to 10 kW of power without relying on gasoline.
The QEG is an open source free energy technology. Its build manual is being given freely to the people of the world. Its power comes from harnessing the energy of frequency, resonance and vibration; therefore, it doesn’t need fossil fuel to operate. Many free energy supporters throughout the world are building and testing the QEG. Some of them are located in Morocco, Taiwan, South Africa, Canada and the United States.
If the Quantum Energy Generator is embraced by the people and is used to power many businesses throughout the world, it will dramatically reduce the cost of goods and services, because the price of goods and services depends heavily on energy. It will also make us less reliant on fossil fuel, which will help reduce environmental pollution.
How to Build the Quantum Energy Generator
For an open source document on how to build the Quantum Energy Generator or for more information about it, visit this page.
Here are two videos about the Quantum Energy Generator:
QEG Resonance in Morocco – OPC: Aouchtam
Hope Moore Presents the QEG at Rising Earth Symposium
|
<urn:uuid:921e823d-5273-4193-b49e-27d80c23d891>
|
CC-MAIN-2016-26
|
http://energyfanatics.com/2014/05/27/quantum-energy-generator-working-free-energy-machine/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398216.41/warc/CC-MAIN-20160624154958-00189-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.924465 | 1,059 | 2.609375 | 3 |
I am a bit confused... What is the connection between the loop of Henle and the vasa recta? Doesn't the vasa recta take away the NaCl, and probably needs to take water away as well? It's weird that it both supplies the blood and drains the blood...
Where is the sense in that? What kind of role do the solutes and water play in this? Why are the loop of Henle and the vasa recta collaborating?
I would appreciate any sensible information
On the descending loop the vasa recta removes NaCl and urea from the loop of Henle whilst donating water back (about 20% of water reabsorption takes place here). This creates a high Na+ concentration where it's needed, in the medulla of the kidney, for the collecting ducts later.
On the ascending loop very little water is allowed back in because the walls are impermeable, so the high concentration of solutes in the loop is maintained despite the increasing concentration of water in the interstitial space around it. However, it also has many active channels that take up sodium, potassium and other ions, increasing the concentration of the surrounding medulla even further. It is also important to note that, because water cannot follow these ions out, the fluid inside the tubule can become quite dilute, and it is through this mechanism that we control the dilution of urine.
Hope this helps
Hehe, I kinda still don't understand. I thought the descending loop of Henle is permeable to water and loses water, and the water probably goes into the vasa recta? So how would it be possible that the vasa recta then loses the salt? Or does the loop of Henle donate the water to the medulla, so the vasa recta that has the salt follows the low osmolality of the medulla? It is very tricky...
Sorry I was a little unclear on the first bit! As the solution descends the loop of Henle it loses water to the vasa recta, causing its concentration to rise to meet the very hypertonic osmolarity of the interstitial fluids in the renal medulla (so it gets concentrated and stays that way).
In the ascending loop the membrane is impermeable so water cannot get in, but there is an active reabsorption of ions like Na+, K+ and Cl- meaning that the solution gets steadily more dilute towards the distal convoluted tubule.
Important points here: medulla always has a high osmolarity. Vasa recta keeps salts often due to an active process rather than just diffusion.
|
<urn:uuid:be2ea1d5-dd79-4513-8951-dfccaaa2d5eb>
|
CC-MAIN-2016-26
|
http://www.biology-online.org/biology-forum/post-131514.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395679.92/warc/CC-MAIN-20160624154955-00141-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.946904 | 566 | 2.84375 | 3 |
"Signature" was a widely used term in medical literature up to the 17th century. The shape and colour of plants and their parts was believed to reveal their medical properties, as a "signature" prepared by Divine Providence in order to lead Man to his physical remedies. Heart-shaped leaves showed that the plant was good for heart diseases.
Finnegans Wake is studded with signatures, or stickers, tags, labels, pointers, more or less disguised references, which can lead the reader to quotes, passages, and books beyond Joyce’s own text. However, these extraneous texts sometimes appear to relate to each other, at least as neighbouring books on our shelves should, thus contributing their external impulses to the dynamics of the original text. This transtextual effect is far easier to observe in the Wake than in Ulysses but I hold that the same method can be detected in Ulysses with the help of sufficient Wake training. The following observation may serve as a typical example of signatures in Finnegans Wake (68/18ff).
In an Italian cluster, preceded by "sfidare" (‘defy’) and thrice "tease fido" = ti sfido (‘I defy you’) one line above, and followed by "Bissavolo" on the next line (meaning ‘Great-Great-Grandfather’, thus dating at least 4 generations back), the highlighted invocation "Angealousmei!" (68/18), Angel, Angelus mine, is obviously matched by "Tawfulsdreck!" (68/22: ‘Devilshit’), a similar invocation, invoking not only the enhanced contrary of an angel but also Professor Teufelsdröckh[1] from Thomas Carlyle’s Sartor Resartus, a quite revolutionary and revolting satirical pamphlet of 1831.
The evident symmetry of the two signatures suggests an equally important literary background for "Angealousmei!", preferably revolutionary, age-matched, and at least as evocative to well-read readers. These conditions are met, not in English but in Italian literature, by Giacomo Leopardi's revolutionary poem "Ad Angelo Mai" of 1820, written "When he had found Cicero's books on the Republic".
Angelo Mai, 1782-1854, a Jesuit philologist, Librarian of the Ambrosiana (Milan) and the Vaticana, appointed Cardinal in 1837, discovered and edited a number of Greek and Latin manuscripts. Giacomo Leopardi, 1798-1837, was one of the precursors of the Italian revolution, the Risorgimento, and his poem 'To Angelo Mai', hailed as the clarion call of the Risorgimento, evokes an ideal Italy as in Roman times, in stark contrast with the miserable political realities of his own time.
Leopardi attempted to publish his collected works in Naples in 1835, but the Bourbon Government suppressed the last two of the planned four volumes, and the pages that had been printed ended up as pulp. A similar disaster later struck Joyce's Dubliners, which were burned, as celebrated in his polemical poem "Gas from a Burner" (in turn inspired by a similar poem on a similar disaster by James Charles Lever, an earlier Dublin writer who also ended up in Trieste).
Conclusion: Joyce confronts his readers with signatures, indicating "thereby hangs a tale". These stickers or labels are marked by their prominent position in the text, and by further signals, in this example, exclamation points. Their function is identical with that of the signal for "appended document" in current computer programs: readers may open it if they obtain the key, that is, if they find out that Carlyle's Sartor Resartus is indicated by "Tawfulsdreck", and that Leopardi's Ad Angelo Mai is indicated by "Angealousmei". They will be rewarded with a discourse resulting from the parallel properties of both texts. But Joyce is not an easy author, and his Angelo Mai has been so carefully hidden that even the Italian translation missed it completely - despite the great popularity of Leopardi's poems among senior literate Italians who still carry them about in their pockets in suitable miniature volumes (Le Poesie, Barbèra, Florence 1900).
[1] Teufelsdröckh's friend who provides the information for Sartor Resartus is "Herr Hofrath Heuschreck" (‘Grasshopper’): a precursor of "The Ondt and the Gracehoper"?
|
<urn:uuid:6059f920-5548-4def-bf89-d87026ef0b9e>
|
CC-MAIN-2016-26
|
http://www.joycefoundation.ch/An%20Occasional/Isler1.htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397873.63/warc/CC-MAIN-20160624154957-00015-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.948148 | 975 | 2.78125 | 3 |
The style in which Rushdie presents his work helps to bring out its thematic development. I think that Rushdie's grasp of historiography is driven by expressions of the subjective: he tells the history of the Indian subcontinent before, during, and after Partition through the individual perception of Saleem. Through this narrative, Rushdie utilizes historiographical elements, for the assembling of history is done through an individual voice. Saleem is aware of his meta-fictional and meta-historical condition, for he understands the unique powers he holds as a child of midnight and as a historian of this "tryst with destiny."

Yet this voice is fraught with errors. Saleem makes many assertions that are contradicted by the record of historical development. This form of "errata" is exactly what Rushdie seeks to create. In the end, all historiography is flawed in what it includes and what it excludes. There can be no super-historical record, no overarching voice that claims complete and totalizing authority. This is where Rushdie's claims about historiography are most powerful. Rushdie does not intend his work to become a historical collection of data, yet in the process he raises questions about how such collections are assembled in the first place.

In the final analysis, there is significant question as to how history is collected and assessed. Just like Saleem, who has errors in his retelling yet knows it is the only retelling out there, we, as participants in historiography, must live with our own limitations, seeking to bring other voices into the discourse to enlighten and enhance our own. It is in this that Rushdie's work becomes powerful and almost transcendent in a condition where contingency is the limiting factor for all.
|
<urn:uuid:a74b2251-a5de-45a3-a17d-d3b5f9334c14>
|
CC-MAIN-2016-26
|
http://www.enotes.com/homework-help/how-does-salman-rushdie-present-histogriografic-239825
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399425.79/warc/CC-MAIN-20160624154959-00106-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.960738 | 391 | 2.5625 | 3 |
A Sustainable Future?
The story of Midwest farm life is reflected in the evolution of its farmhouses, many built in the 1800s, some surviving into the twenty-first century. Who can say whether the social and economic factors that brought these homes to the vast prairies will ever again welcome a single home on 160 acres? Certainly the current structure of agriculture puts significant pressure on that smallness, yet a continuing movement for "sustainable agriculture" seeks to restore a place for it.
Farming Then and Now
In the 1800s each farmer grew enough food each year to feed three to five people. By 1995, each farmer was feeding 128 people per year. In the 1800s, 90 percent of the population lived on farms; today it is around one percent. Over the same period, farm size has increased, and though the average farm in 1995 was just 469 acres, 20 percent of all farms were over 500 acres.1 And the trend has continued to accelerate.
One by one, farmers have retired or given up farming. Outside investors or neighbors have bought out those vacated farms. Homesteads now sit abandoned. Farmers of the larger tracts are faced with tearing down these old houses and plowing across their yards. Occasionally the homestead's windbreak is left, testament to the family that once lived on this acreage; it now remains as a ridge of shade inside an expanse of corn or soybeans.
Aldo Leopold, writing in the 1920s - 1940s, was perhaps one of the first voices calling for a different approach to agriculture. In his essay, The Land Ethic, he writes,
"Quit thinking about decent land use as solely an economic problem. Examine each question in terms of what is ethically and esthetically right, as well as what is economically expedient. A thing is right when it tends to preserve the integrity, stability and beauty of the biotic community. It is wrong when it tends otherwise."2
Leopold noted in another essay that modern agriculture was tearing up soil, increasing the yield of the soil, and once again changing the face of the prairie, faster than ecosystems could keep up.
In the 1970s, organic farming was in its infancy. The Rodale Institute had been publishing a magazine called Organic Gardening, but it was now beginning to look beyond the garden. Organic management systems originally helped cut the costs of modern soil "inputs," yet early leaders were also in touch with the environment in a deeper way.
They believed in themselves, their labor, and the value of their product. Meanwhile consumers were beginning to seek out organic foods for some of the same reasons that farmers sought out organic farming practices.3
Defying Definition: Sustainable Agriculture
Where does the term "sustainable agriculture come from, and what does it mean? In 1972, one of the first uses of the word sustainable in an environmental context appeared in the British Journal, Ecologist. In a special issue called A Blueprint for Survival, it stated, "the principal defect of the industrial way of life with its ethos of expansion is that it is not sustainable We can be certain that sooner or later it will end "
Dana Jackson, co-founder of the Land Institute in Salina, Kansas, was part of an early group of leaders seeking to define alternatives in energy and farm practices. Jackson remembers people speaking of permanent agriculture (farming that did not destroy the soil, water, or people) and regenerative agriculture (farming that helped restore the land).
In the Upper Midwest, new farm and land management organizations began forming in the late 1970s and early 1980s, in part as a response to the farm crisis of 1982, and in part because of the environmental ethic that was rising. Opinions differed greatly about the key elements of "sustainability."
Some believed it meant saving the soil from chemical pollution or erosion. Others thought it meant keeping the farmer on the land through higher prices and parity. One definition held that sustainable agriculture is farming that does not erode its own base of soil, water, farmers, or children willing to farm.
Most agree, however, that sustainable agriculture encompasses
Because of these differing points of view, the phrase sustainable agriculture has defied formal definition and still does. Whatever definition current leaders in the sustainable agriculture movement adopt, it is clear that they do not mean "sustaining" agriculture in its current bigger-is-better form.
New Pressures on Farming:
The 160-acre farmstead may not be completely a thing of the past, but most pressures in the countryside seek to destroy it. Contract farming is growing rapidly as farmers, hoping for a degree of economic security, increasingly agree to contracts with meatpackers to produce beef, pork, and chicken. Agribusiness buys from the farmer wholesale and sells back to the farmer at retail. Markets enlarge and globalize. Farm business mergers create huge concentrations of control over specific markets. In all this, there continues an illusion of agricultural efficiency that may now be reaching its limits (though the limits of the earth under us are still seldom incorporated in the pricing of food in America).
And there are other pressures. Many European nations and Japan are declaring that genetically modified organisms will not be sold in their markets. Although it is complex and expensive, farmers are beginning to imagine a dual-track of marketing that will allow them to identify the genetically modified crops. Some say that genetic modification is simply the laboratory doing faster what nature has always done. But some American eaters, along with the Europeans and Japanese, are wondering whether they want to continue to be part of the experiment.
We may never see real rejuvenation of the rural countryside, the re-building of homes on 160 or 300 acre parcels, and more common farming of vegetables or meat intended for local residents to eat. For now, farming is still a major activity on the expanse of Midwest prairie that held the houses and fostered the lifestyles explored in Death of the Dream. Although vast areas now seem quite literally empty, those farmers that stay in the countryside must show great creativity.
Creative options now include "community supported agriculture," where urban eaters buy shares of a farmer's take, usually of vegetables, and "congregationally supported agriculture," where church members provide markets for freshly raised meats, homemade cheeses, or homegrown eggs.
Slowly, eater by eater, we may take back the system of farming in the Midwest, claiming the soil as nature's gift and re-claiming our connection to our own food supply. It will not be easy. The percentage of Americans now farming has dropped about as low as it can go. The forces that vacated the countryside continue today, yet rays of hope shine across the kitchen tables of many who are thinking seriously about food quality and the way we eat. For quality food, we need farmers on the land.
1 Economic Research Service, USDA, Washington, DC
2 Aldo Leopold, essay, The Land Ethic
3 Conversation with Carmen Fernholz, organic farmer, Madison, Minnesota
|
<urn:uuid:dca97116-67aa-40be-8ece-d4c81ca146bc>
|
CC-MAIN-2016-26
|
http://www.pbs.org/ktca/farmhouses/sustainable_future.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395560.69/warc/CC-MAIN-20160624154955-00034-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.958889 | 1,490 | 3.453125 | 3 |
actin, a protein abundantly present in many cells, especially muscle cells, that significantly contributes to the cell's structure and motility. Actin can very quickly assemble into long polymer rods called microfilaments. These microfilaments have a variety of roles—they form part of the cell's cytoskeleton, they interact with myosin to permit movement of the cell, and they pinch the cell into two during cell division. In muscle contraction, filaments of actin and myosin alternately unlink and chemically link in a sliding action. The energy for this reaction is supplied by adenosine triphosphate.
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
|
<urn:uuid:4c752004-6b75-4850-951f-439d92260fca>
|
CC-MAIN-2016-26
|
http://www.factmonster.com/encyclopedia/science/actin.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396455.95/warc/CC-MAIN-20160624154956-00150-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.945961 | 150 | 3.53125 | 4 |
Age Group: Adults, Seniors & Teens
4/26/2014 • 2 p.m. - 4 p.m. • West Charleston Library
Room: Lecture Hall
Balancing Body Systems through Iridology and Nutrition
- What Iridology is
- Signs to observe in the iris and signs on the body that relate to specific bodily symptoms and imbalances. For example: fatigue, weight gain, falling hair, poor circulation, toe fungus, and cold hands and feet are all signs of thyroid deficiency.
- Signs to observe in the iris and on the body for deficiencies in bodily organs and systems including the bowels, lungs, skin, kidneys, circulatory, and lymph systems.
- Specific foods and nutrients necessary for organ and system regeneration.
- Herbs, poultices, baths, and natural remedies traditionally used to bring vitality back to tired inflamed organs.
- How to test the pH in the body and alkalinize the acids that cause illness.
- How to test for Candida (yeast/fungus) in the body.
- How to test for hypo or hyper thyroid function.
Ellen Tart-Jensen, Ph.D., D.Sc., CCII, has been an iridologist, nutritionist, author, and herbalist for 25 years. She studied for two years at the Prasura Health Clinic in Switzerland and spent five years training and working with Dr. Bernard Jensen (known as the Father of Iridology). She has authored several books, including Techniques in Iris Analysis. Ellen now teaches natural healing methods, nutrition, and iridology throughout the world.
Free and open to the public. For more information call 702-507-3964.
|
<urn:uuid:9725b28a-bfaf-42cd-9d59-ca2b2f39342b>
|
CC-MAIN-2016-26
|
http://lvccld.org/events/event.cfm?nID=2864
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00139-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.899272 | 361 | 2.546875 | 3 |
The National Marine Fisheries Services' Recent Release Of Loggerhead Sea Turtles
For over 100 million years Sea Turtles have roamed the oceans, providing a vital link in marine and shoreline ecosystems. From leatherbacks to loggerheads, six of the seven species of sea turtles are threatened or endangered at the hand of humans. Sadly, the fact is that they face many dangers as they travel the seas — including accidental capture and entanglement in fishing gear (also known as bycatch), the loss of nesting and feeding sites to coastal development, intentional hunting (poaching), and ocean pollution.
NOAA's National Marine Fisheries Service has worked in conjunction with numerous partners to identify and resolve many of the man-made threats to sea turtles and their survival. From commercial shrimping trawlers to the University of Texas, researchers are devising workable solutions to the threat of entanglement in nets and the effects of pollutants on the turtles themselves.
On a recent sunny morning, Ben Higgins returned 28 loggerhead turtles to the coastline of Florida for release. These 3-year-old turtles were hatched on Brevard County beaches and have been used in several research trials furthering our understanding of man's impact on the species.
For more information on Sea Turtles and how you can get involved in saving these gentle creatures go to:
|
<urn:uuid:4e88c775-74d2-4ada-90ba-c29836b2a685>
|
CC-MAIN-2016-26
|
http://wfit.org/post/national-marine-fisheries-services-recent-release-loggerhead-sea-turtles
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397562.76/warc/CC-MAIN-20160624154957-00193-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.908676 | 269 | 3.734375 | 4 |
In the past, there was some debate as to whether the brochs were a native idea or whether they were introduced by foreign invaders in much the same way as the Normans introduced the motte-and-bailey castle to help them control a hostile countryside. But now it is generally believed that they were an entirely native development from the local roundhouse tradition. Circular houses had been a distinctive feature of the British landscape for well over a thousand years before brochs appeared, at least since the Early Bronze Age, which began early in the second millennium BC.
In the lowland parts of Scotland, in the south and east, most of these buildings were made out of timber and consequently have not survived except as crop marks in aerial photographs. These show a ring of posts, which—as we know from excavated examples—would have formed the framework for a wattle-and-daub wall supporting a thatched conical roof. In the uplands of northern and western Scotland, timber was a scarce and very valuable resource so stone was used instead. Thousands of these have survived in various states of preservation.
Bu Broch, Orkney
It was John Hedges’ rescue dig at the site of Bu in Orkney that first suggested the connection. Under a mound of loose stone in a farmer’s field overlooking the Bay of Navershaw just east of Stromness, Hedges uncovered a round building 19.5 metres in diameter with a solid outer wall some 5.2 metres thick. Although the thickness of the wall seemed to indicate the building was a broch, excavations revealed that it had been modified during the course of its occupation and that the outer 1½ metres or so had been added later, meaning that originally it would not have supported much weight. Today it survives to a height of about 1½ metres and there is little to suggest it was every any taller (although the site may have been robbed for building material after it was abandoned).
Inside was a more or less circular area covering about 70 square metres, with a flagstone floor, a central hearth, a stone-lined tank and a stone cupboard. It was partitioned around the perimeter by upright slabs, and these areas were used for storage and sleeping. It is quite possible that the occupants shared their dwelling with their livestock—a fairly common practice in the farming world—which would have kept the inside quite cosy (if somewhat pungent). Four radiocarbon dates have been published from Bu, ranging from around 800 to 600 BC.
Even simple roundhouses such as Bu would have been imposing structures, with conical roofs rising as much as 5 metres above the walls. The later thickening of the walls was most likely done in order to make the walls even higher and increase the dramatic effect. Where we have evidence, the roundhouses appear to have stood alone, without any outbuildings or other houses.
In the second half of the first millennium BC, roundhouses began to get more complex and one of the best examples of this development is the site of Crosskirk in Caithness, which was excavated by Horace Fairhurst from 1966-72. Radiocarbon evidence suggests that it was occupied for nearly a thousand years, from the eighth century BC until the first couple of centuries AD. As was the case at Bu, the outer wall of the main structure is quite substantial—nearly 6 metres thick—but once again excavation showed that it had been thickened by the addition of an outer casing. In addition, the core of the wall was made up of earth and rubble, and could never have supported the weight of a sizable tower. The excavator suggests a maximum height of about 4.5 metres (or about one third the height of Mousa).
However, Crosskirk did have a number of features that are associated with brochs and not found in your basic roundhouse. For example, there was a small ‘guard room’ opening off the entrance passage and a larger cell within the thickness of the wall that contained a set of steps. Also, Crosskirk was enclosed by an outer rampart and ditch and the area was filled with secondary structures. Ian Armit calls this type of building a Complex Roundhouse.
The site of Howe is located just east of Stromness in Orkney, in a field overlooking the Bay of Ireland and quite close to the Loch of Stenness. Excavations, conducted from 1978-82 by John Hedges, revealed that the site had had a complex history. Originally the site of a Neolithic tomb, it lay abandoned for over two thousand years until the eighth century BC, when Iron Age domestic occupation began.
At some point in the fourth or third century BC, the site was levelled and the old tomb chamber was converted into a souterrain (a type of underground storeroom). The souterrain was capped with a thick layer of clay and a large roundhouse was built on top. This building seems to have collapsed at some point and not much of it has survived. It was replaced by a structure known in the reports as Broch 1—again, despite the fact that there is no evidence that it was ever very high. As was the case at Crosskirk, the main building was surrounded by a stone rampart and ditch, with a number of secondary buildings in the enclosed space. Little has survived of the internal arrangements but there were guard cells off the entrance passage and two intramural staircases. However, these weakened the walls, which were only some 3½-4 metres thick, to such an extent that they never could have supported the weight of a tower. In fact, like its predecessor, Broch 1 collapsed.
It would seem that the occupants survived because the building was immediately rebuilt. The new version, known as Broch 2, had a much thicker outer wall, ca. 5½ metres. Even so, it too appears to have come tumbling down—this time during the actual construction. The people were nothing if not persistent and started all over again. The entrance passage was 5.5 metres long—with a sill and doorjambs about 1.5 metres from the inner end, but no guardrooms. The floor of the broch was largely cleared out sometime later in the Iron Age and many features, such as the hearth, have all but disappeared.
The interior was subdivided by upright slabs into three main areas and has a very similar layout to Bu, which is located nearby. In the centre is a near circular area about 4 metres in diameter. To the right of the entrance is a passage about 2 metres wide that runs about two-thirds of the way around the perimeter. The remaining space is taken up by three two-storeyed cupboards divided by partitions. There was an intramural cell about 1.3 metres above the floor on the west side that contained a set of stairs. The walls are only some 4 metres high, and there was no evidence for an upper floor—neither a scarcement nor traces of wooden posts.
From Roundhouse to Broch
It is clear from the evidence that, far from being a response to the specific threat of Roman invasion,
|
<urn:uuid:0d1ed3b7-b9e7-491e-93cd-cfd229efd6f0>
|
CC-MAIN-2016-26
|
http://www.odysseyadventures.ca/articles/brochs/brochs_roundhouse.htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395620.9/warc/CC-MAIN-20160624154955-00169-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.987025 | 1,471 | 4.03125 | 4 |
Hi, I have this question. How do I solve it? Thank you.
Quote:
The weight of a box of cereal has a normal distribution with mean = 340g and standard deviation = 5g.
(c) 30 boxes of this cereal are selected at random for weighing. Find the probability that the sample variance is more than 36.69.
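One standard approach (assuming the 30 boxes are independent and S² is the usual unbiased sample variance) is to use the chi-square distribution of the scaled sample variance:

\[
\frac{(n-1)S^2}{\sigma^2} \sim \chi^2_{n-1}, \qquad n = 30,\ \sigma^2 = 25,
\]
\[
P\left(S^2 > 36.69\right) = P\left(\chi^2_{29} > \frac{29 \times 36.69}{25}\right) = P\left(\chi^2_{29} > 42.56\right) \approx 0.05 .
\]

The last step is read off a chi-square table: 42.557 is the upper 5% point of the chi-square distribution with 29 degrees of freedom, so the required probability is about 0.05.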
|
<urn:uuid:78b61a11-f685-4801-b71f-84e86b795c2c>
|
CC-MAIN-2016-26
|
http://mathhelpforum.com/advanced-statistics/54645-normal-distribution-question-print.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395346.72/warc/CC-MAIN-20160624154955-00155-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.94281 | 72 | 3.21875 | 3 |
Under clear skies, the wind-swept waters of Great Bay are a deep, muted blue, and it makes the shimmering marsh grass look so much greener.
"Iridescent," is how Brian Braudis first describes it. "Make that luminescent. Luminescent is a better word."
Either way, the grass is greener at the Edwin B. Forsythe National Wildlife Refuge, the vibrant green being just one smudge on God’s palette. The egrets glow white, blanched by salt marshes and sun. The wings of the red-winged blackbird pop like fluorescent epaulets; a red only matched by the beak of the American oystercatcher. There is no black darker than the neck of black skimmer.
All that color, overlooked by so many.
The Forsythe refuge is the largest shoreline preserve in the East. It covers 47,000 acres, spread from Bay Head to Atlantic City. Bits and pieces are found over 44 miles of Parkway exits.
It begins just south of Brick, where upper and lower branches of the Metedeconk River spill into Bay Head Harbor. It continues, in spots, down both sides of the Barnegat Bay, where the marshes are fed by the Toms River and both prongs of the Forked River. It dominates the western shores of lower Barnegat Bay and Little Egg Harbor, and is shaped like a cup that contains the Great Bay.
From the Parkway, the best view is between Exits 50 and 48, just south of Bass River. The bridge over the Mullica River skies above the preserve, and the river meanders east, zigzagging through grassy marshes to Great Bay.
"Almost all of the preserve is east of Route 9," said Braudis, who runs the federal preserve. "So about 80 percent of it is salt marsh."
Ecologists liken salt marshes to tropical rainforests in terms of life support for bugs, fish and birds. The preserve is a landing and lodging strip in the Atlantic Flyway for hundreds of species of migratory birds. The American black duck, the snow goose, the Atlantic brant. The bald eagle comes through, and peregrine falcons nest there. The osprey has made a comeback.
The lower, larger part of the preserve was protected way back in 1939 and was called Brigantine National Wildlife Refuge. The upper portion, called Barnegat was added in 1967. In 1984, they were merged and renamed to honor Forsythe, a South Jersey congressman and conservationist.
In the 71 years since Brigantine was set aside, bays and barrier islands from Brick to Atlantic City have changed dramatically. For one thing, the Parkway came through. The vacation home buildup of the barrier islands grew exponentially. Inland, it was retirement homes. Gambling came to Atlantic City.
The bay-man days of old, of duck hunters and their sneakboxes (boats), of oystermen and clammers and their rakes, of muskrat trappers and skiff builders, was over.
The refuge was, well, the last refuge; a safe haven for the fish and game, and the humans they attract. There are access roads all off Route 9 leading to places with old-time names: Eno’s Pond, West Creek Dock, Graveling Point, Scotts Landing, and the Holgate section at the end of Long Beach Island. These are the places to hike, fish and watch birds in unspoiled surroundings.
The most popular part of the refuge is Wildlife Drive, a mile south of Smithville on Route 9. The drive is an elevated 8-mile loop into the bay, where bird-watchers like Sandra Keller spend hours with their spotting scopes. She travels across the state from Barrington "to see the shore birds."
"We’re an urban refuge," Braudis said, "which can be good, because we have a great connection to the community."
There is a "Friends" group, which helped finance a new boardwalk overlook, and other volunteers who do things like bird counts. The boardwalk looks out across Great Bay to Atlantic City.
In late afternoon, the sun glints off the glass and metal gaming towers, lighting up the skyline like some pinkish Oz.
Pinkish. Not a real color.
More of Mark DiIonno's Jersey Shore Diary:
|
<urn:uuid:a5652e61-75ba-489d-b0a3-6c34110f70c2>
|
CC-MAIN-2016-26
|
http://blog.nj.com/njv_mark_diionno/2010/08/forsythe_national_wildlife_ref.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403502.46/warc/CC-MAIN-20160624155003-00149-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.949184 | 928 | 2.78125 | 3 |
With vast shale gas reserves, will China move from coal-to-chemicals processes to gas-to-chemicals processes going forward? Emerson’s Douglas Morris, a member of the alternative energy industry team explores this question in today’s guest post.
We’ve had a number of posts where shale gas is discussed and this week is no different. An interesting article about a startup company was highlighted in the Wall Street Journal this week, which discussed a unique process for producing ethylene from natural gas.
The article, "A New Use for Shale Gas," discusses the company Siluria Technologies and its lower-temperature process for converting natural gas into ethylene. This company certainly has the potential to change the market if it can demonstrate the technology at its planned pilot facility.
China has the third largest coal reserves in the world and is using this resource to produce both power and chemicals. (I’ve written before and discussed how China is leading the world in chemical production from coal). For shale gas, China has the world’s largest reserves.
This past year, the country has started some hydraulic fracturing (fracking) to release gas deposits. But it is currently limited in how much it can produce because it does not have enough water to support fracking on a large scale; large portions of its reserves are in the arid parts of central China. As far as coal-to-chemicals is concerned, the country also finds itself water-limited for these plants.
Looking into the future, though, if a process like Siluria's or another low-cost olefin pathway is commercialized, China is well placed to take advantage because of its vast shale gas reserves. Will this make China move from a large coal-to-chemicals industry to a gas-to-chemicals industry instead?
|
<urn:uuid:b0f5e0bd-c1f8-4c6e-8fd7-66289b248e5b>
|
CC-MAIN-2016-26
|
http://www.emersonprocessxperts.com/2012/09/the-impact-of-shale-gas-on-olefins/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395166.84/warc/CC-MAIN-20160624154955-00074-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.948894 | 376 | 2.734375 | 3 |
Breaking the Cycle of Poverty
Reducing the disparities in children's achievement will require reaching beyond the educational system.
A child who comes to school malnourished, from a poor household, having a mother with less than a high school education, or a parent whose primary language is not English is much more likely than a classmate without those factors to have academic and behavioral problems later on.
That means that radically improving children’s chances for success requires reaching beyond the education system.
As Valerie E. Lee and David T. Burkham write in their 2002 book Inequality at the Starting Gate, “We should expect schools to increase achievement for all students, regardless of race, income, class, and prior achievement.” But, they add, it is unreasonable to expect schools to completely eliminate any large pre-existing inequalities, especially if the schools themselves are “underfunded and overchallenged.”
And where children live in the United States further affects the challenges they’re likely to face. Compared with a youngster in Massachusetts, for example, an infant born in Mississippi is 49 percent more likely to have a low birth weight, slightly over twice as likely to live in a poor household, and 56 percent more likely to live in a family where neither parent has a postsecondary degree. He or she is also less likely to have health insurance or working parents.
As the statistics on the following pages make clear, education does not exist in a vacuum. Rather, broader social policies may be needed to address issues of changing demographics, health care, concentrated poverty, and an economy increasingly stratified by wealth.
“Equal opportunity,” Richard Rothstein, a research associate at the Washington-based Economic Policy Institute, argues, “requires a full menu of social, economic, and educational reforms: in employment policy, health care, housing, and civil rights enforcement, as well as in schools.”
There are 73 million children in the United States, from birth through age 18. About four in 10—28.4 million—live in families with annual earnings of $40,000 or less, about twice the poverty level for a family of four, according to the National Center for Children in Poverty at Columbia University. Just over 18 percent live in families earning less than $20,000 annually.
More than six in 10 black and Latino children, and nearly six in 10 children of immigrant parents, live in low-income households.
While chances exist at every level of education—early-childhood, K-12, and postsecondary—to help break the cycle of poverty, a recent volume by the Washington-based Brookings Institution suggests that too often schools perpetuate rather than reduce class differences. That’s in part because children from low- income families generally attend schools that by any measure—school resources, student achievement, qualified teachers—lag behind those of their more affluent peers.
Vol. 26, Issue 17, Pages 20-22, 24, 26-27. Quality Counts is produced with support from the Pew Center on the States.
|
<urn:uuid:a3a002d1-b8a2-41fd-98c0-42a9710275ad>
|
CC-MAIN-2016-26
|
http://www.edweek.org/ew/articles/2007/01/04/17wellbeing.h26.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399425.79/warc/CC-MAIN-20160624154959-00078-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.956105 | 636 | 3.59375 | 4 |
Biochip implantation - When humans get tagged
By Renjith VP, SiliconIndia | Thursday, 25 November 2010, 17:03 Hrs
Microchip implants and mind control related to cybernetics were discussed as far back as 1948, in a book by Norbert Wiener. From then till now, theories have been formulated and materialized into the real, tangible entity of the biochip. GeneChip, one of the first commercial biochips, contained thousands of individual DNA sensors for use in sensing defects (or, to put it technically, single nucleotide polymorphisms) in tumor suppressor genes and genes related to breast cancer. But the biochip gained wide approval as a device which can be installed inside pet animals by injection through a small hypodermic needle, making it easy for owners to track them down.
While biochips promised immense help in the field of medical diagnosis, it was tarnished with much negative publicity as it was projected as a device which is inserted inside human to track his actions and haunt him down. Now we really don't like being followed, do we? EPIC's Hoofnagle once said the technology carries the same privacy concerns as a national ID card. "Human identification systems are tools that have historically been used for social control," he said. Hoofnagle also expressed concern that the biochips might be "spoofed," allowing anyone to access data on the chip or monitor people without them knowing it. "It sounds like it's an easy technology to invade," he said. So what about bio chips is really concerning us?
When it comes to the use of biochips on humans, things work a little differently. The chip is implanted in a way that allows it to bind with your DNA. Many government agencies have been working with biochips which can be used for identification purposes. While we may think of this as an invasion of privacy, we should also look at the positive side of the technology. It would be of great use in finding missing children: if the technology goes as far as an implant at birth, those who have been kidnapped or gone missing could be easily found. This type of implantable chip is being researched by defense departments in India and abroad in hopes of being used for soldiers, to monitor their location and relay health information if a soldier gets wounded in battle. It would be a great way to relay medical data about what the doctors may be dealing with before the patient ever gets to the hospital. Not only that, a biochip would make it easier to find that wounded soldier.
But there are certain areas which always lack definite explanations. You cannot put a value on human life, and you cannot limit a person's identity. Cloning humans raises questions about our morality, and similarly we find it unsettling to be 'tagged' by some minute chip. Whatever lies in the future for biochips, their implantation in humans still pricks our conscience.
|
<urn:uuid:8a5d4862-1e54-4a10-8066-300aa93e7f30>
|
CC-MAIN-2016-26
|
http://www.siliconindia.com/shownews/Biochip_implantation__When_humans_get_tagged-nid-74531-cid-2.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00199-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.9635 | 611 | 2.609375 | 3 |
I want to create two users in Linux, but I want each user to be unable to access the other user's data. I also want the data stored in a separate location, so that if the OS gets corrupted, only the OS is lost and the data survives for future use, because it is stored somewhere else. How is this possible?
If you want to give someone root privileges without giving them full access to the system, you may want to look at sudo. The config file is located in /etc/sudoers.
Here is a website explaining how to configure sudo: linuxhelp.net
This will allow you to define exactly what access and privileges the user can have. Also you should look at user, group, and world permissions with chmod.
Adding users to a group is done in /etc/group
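As a rough illustration of the permissions side (the user names, the mount point and the use of a Python script here are assumptions for the example; the same steps can be run directly in a root shell with useradd, chmod and chown):

```python
import os
import subprocess

USERS = ("alice", "bob")     # hypothetical user names
DATA_ROOT = "/mnt/data"      # hypothetical mount point of a separate data partition/disk

# Create each user with a home directory, then make the home private (mode 700)
# so the other user cannot read it. This must run as root.
for user in USERS:
    subprocess.run(["useradd", "-m", user], check=True)
    os.chmod(f"/home/{user}", 0o700)

# Keep important data on a separately mounted partition so reinstalling the OS
# partition leaves it untouched; give each user a private directory there too.
for user in USERS:
    private_dir = os.path.join(DATA_ROOT, user)
    os.makedirs(private_dir, exist_ok=True)
    subprocess.run(["chown", f"{user}:{user}", private_dir], check=True)
    os.chmod(private_dir, 0o700)
```

The key idea is just mode 700 on each user's directories plus keeping the data on its own partition or disk.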
Best of luck!
|
<urn:uuid:c7e8ecf3-ca11-4245-88b1-52bc62991742>
|
CC-MAIN-2016-26
|
http://superuser.com/questions/367883/create-a-user-account-under-the-root-in-linux
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396887.54/warc/CC-MAIN-20160624154956-00172-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.9395 | 309 | 2.671875 | 3 |
Special Issue "Analytical Chemistry of Water"
A special issue of Water (ISSN 2073-4441).
Deadline for manuscript submissions: closed (31 May 2013)
Prof. Dr. Maria Filomena Camões
Department of Chemistry and Biochemistry Faculdade de Ciências de Lisboa Universidade de Lisboa C8, Campo Grande, 1749-016 Lisbon, Portugal
Phone: +352 21 7500008
Interests: electroanalytical and environmental chemistry: ionic solutions, pH and acidity; potentiometric analysis; ion chromatography; seawater, coastal waters and low ionic strength aqueous solutions; air-water interfaces and exchanges; metrology in analytical chemistry
Water, H2O, the most abundant substance on the Earth's surface and the only one naturally present in all three physical states (solid, liquid and gas), moves continuously through the hydrological cycle and covers more than 70% of that surface, which is why the planet is known as the Blue Planet. Water is a decisive climate regulator. Water is the main component of the human body and is essential for all forms of life. Water is used in almost every industrial process. More than 97% of it is salty and forms the oceans. The remaining less than 3% is fresh water, but most of that is frozen or, although liquid, trapped as ground water. Only 0.014% is readily accessible in lakes, streams and rivers. Water dissolves, to a smaller or greater extent, most substances it comes into contact with, and is often described as a universal solvent. What is usually called "water" is in fact some aqueous mixture, solution or suspension. This makes it difficult to find water with a degree of purity adequate for most of the uses Man needs it for. Although drinking water is not necessarily the most demanding use in terms of purity, it is the one people seem most sensitive to. The variety and concentration of chemical species in aquatic systems can be quite diverse, presenting a challenge in terms of both purification strategies and quality control. Water plays an important role in the world economy, and its quality is regulated by national and international legislation.
This special issue will compile review articles and recent research focusing on a selection of representative topics pertaining to the analytical chemistry of aqueous solutions, some of which are emerging issues.
Dr. Maria Filomena Camões
Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. Papers will be published continuously (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.
Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are refereed through a peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Water is an international peer-reviewed Open Access monthly journal published by MDPI.
|
<urn:uuid:6e7b9145-904b-489c-af22-ce2d0ed0e415>
|
CC-MAIN-2016-26
|
http://www.mdpi.com/journal/water/special_issues/analytical-chemistry
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394987.40/warc/CC-MAIN-20160624154954-00062-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.926646 | 689 | 2.90625 | 3 |
Seeing that many did take in hand to set in order a narration of the matters that have been fully assured among us, as they did deliver to us, who from the beginning became eye-witnesses, and officers of the Word...
I found this handy infographic on wikipedia. It shows how 35% of Luke is unique to Luke, 20% of Matthew is unique to Matthew, and only 3% of Mark is unique to Mark. I'd say that makes the claim "the majority of the text is copied" not incorrect.
Relationship between Synoptic Gospels
I'm not sure where you heard otherwise. The book was built up in 3 phases, probably by many different authors who were probably part of an early Johannine community.
PS: (and sorry now for what is such a huge post), but I'm curious as to what you mean by morality stuff in the NT that wasn't there before. Because, at least according to Christian tradition
whether you were good or bad is irrelevant, you're damned in the afterlife if you didn't believe that Jesus Christ came as God incarnate to sacrifice himself for your sins.
Christianity has some interesting ideas, e.g. turn the other cheek to violence and let him without sin cast the first stone, but it is interesting how for example, the first has never been used properly in history (and now people say it is some sort of metaphorical thing or some other excuse), and the second is supposedly an addition to the main text.
That is the idea of the synoptic gospels.... that being three different accounts of the same events written some years apart
Which brings up the idea of the Q source, which is nothing more than speculation... though very possible... nothing provable
Forgive and you will be forgiven, show mercy and you will be shown mercy, yet if you do not show said attributes you will be shown none?
But is the "Golden Rule" all that unique to Christianity? I'd say it certainly wasn't something morally new that Christianity introduced, in fact, people say it is there in some form or another in almost every religion.
Q is indeed speculation,
I don't speak for Christianity... I only know what it should be...
do onto others as you want done onto you... how does that apply when one takes upon himself what he doesn't want done to him or others... and still does not retaliate?
I'm not talking about the Quran.
My apologies. Perhaps I should've used the word "Christian Bible" in there instead of "Christianity". My point being, the Golden Rule wasn't something new that Christianity invented. And it fits perfectly in your scenario: "Do unto others, as you would have them do to you", not "Do unto others what they are doing to you".
"Turn the other cheek"- as in non-resistance to the point of facilitating violence or oppression (someone strikes you, give them the other cheek to strike, someone takes your coat, give them your coat as well, etc.), is a somewhat disturbing philosophy even in theory if you ask me,
|
<urn:uuid:c4b1288c-7349-4971-8d5e-6323ef9f1a11>
|
CC-MAIN-2016-26
|
http://www.abovetopsecret.com/forum/thread969963/pg10
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396222.11/warc/CC-MAIN-20160624154956-00177-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.96962 | 688 | 2.625 | 3 |
||To determine the effects of 3 cutting and 3 seedbed treatments
on white ash regeneration.
||expected to continue
||Bartlett Experimental Forest, NH. Topography is nearly level.
Soil is black with organic matter to a depth of 3"; this is
underlain by a mottled sandy layer. Drainage is moderate to
poor, but standing water is never found there during the growing
season, even after a heavy rain. Since white ash in New Hampshire
does best on moist sites, the area is a reasonably favorable
one for this species. There were no ash seed trees in the study
area and nearby vicinity at the beginning of the study. Before
treatments the area was an immature stand about 30 yrs old with
stems ranging from 3" to 12" d.b.h. Red maple sprouts dominated
the overstory, among which were scattered black cherry, paper birch,
white pine, and balsam fir trees. Undergrowth consisted of a
rather light cover of ferns, balsam fir seedlings, wild-raisin
(Viburnum cassinoides), and other low plants.
||3 0.25-acre plots within 1 compartment.
|Likelihood of Locating Study Areas:
||Compartment cut: removal of trees >2.5"d.b.h. which removed
40%, 20%, and 0% of the basal area of the 3 plots.
Sowings for 0.083-acre subplots:
1) sowing with white ash seeds followed by a scarification with
a Rich fire tool.
2) scarification followed by sowing.
3) sowing on undisturbed litter.
||Seedling estimates; For each estimate, two random strips of
14 0.001-acres each across each subplot were tallied; this amounted
to 33% of the treated areas. Strip locations used in 1959 and
1960 were not the same: 1959 and 1960.
Soil Moisture; Only the upper 3" of soil were sampled. Each
sample was taken separately adjacent to each of two seedlings
in each of the 9 understory-overstory treatment combinations
-- a total of 18 samples per week.
|Variables and Sampling Frequency:
||Numbers of seedlings were estimated at the end of the first
and second growing seasons: 1959 and 1960.
1960: 1- and 2-year old seedlings were counted separately: 1960.
Heights were measured (to the nearest 1/20") on the 405 competitive
understory seedlings: 1960 and 1961. Also in 1961, stem diameters
1/2 " above the root collar were measured to the nearest 1/64
" with a micrometer.
Soil moisture, as a percentage of oven-dry weight, was determined
from samples taken weekly in summer of 1961 in each understory-overstory treatment combination.
The plots and subplots were relocated, the overstory tallied,
and all white ash seedlings/saplings (>4.5 ' tall) were recorded
by d.b.h. class: 1992.
||data on tally sheets.
summarized on paper.
|Global Change Research Applications:
||Studies of Ecosystem Processes
|Publications and Reports:
||Leak, William B. 1963. effects of seedbed, overstory, and
understory on white ash regeneration in New Hampshire. Res.
Paper NE-2. Upper Darby, PA: U.S. Department of Agriculture,
Forest Service, Northeastern Forest Experiment Station.
Leak, William B. 1993. Effects of seedbed and overstory on
White Ash regeneration: a 34-year record. Field Note. Durham,
NH: U.S. Department of Agriculture, Forest Service, Northeastern
Forest Experiment Station.
||William Leak, USDA Forest Service, P.O. Box 640, Durham NH
03824. (603) 868-7655
|
<urn:uuid:821ae31b-289e-4373-a9fb-4c702d5301d6>
|
CC-MAIN-2016-26
|
http://www.fs.fed.us/ne/global/ltedb/catalogs/cat20.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397562.76/warc/CC-MAIN-20160624154957-00007-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.888513 | 847 | 2.703125 | 3 |
PatientPlus articles are written by UK doctors and are based on research evidence, UK and European Guidelines. They are designed for health professionals to use, so you may find the language more technical than the condition leaflets.
Synonyms: acetaminophen poisoning
Paracetamol is widely available and has been around since the 1950s. It is widely prescribed and cheap to buy over-the-counter, making it a common drug taken in overdose. It is a very useful analgesic (alone or in combination) and also is an antipyretic. It is normally found as a 500 mg tablet but it is often combined with other active ingredients in various preparations.
In the UK it is the most common agent of intentional self-harm. Between 2000-2008 there were 90-155 deaths from paracetamol poisoning every year. In addition, there are deaths resulting from paracetamol compounds. It is the most common cause of acute liver failure (ALF).
To reduce the incidence of paracetamol overdose, legislation was passed in the UK in 1998 to limit the number of tablets that could be bought in one purchase: 16 tablets at present (up to 32 tablets in pharmacies). Furthermore, paracetamol was supplied in blister packs making obtaining the actual tablets take longer.
It is important to remember that, when used at therapeutic levels, paracetamol is usually safe and effective. However, taking 4 g per day (or slightly more) for a few days has been known to result in hepatotoxicity.
Paracetamol overdose may occur intentionally and accidentally - the latter due to the high number of combination products available over-the-counter. There are also frequent cases of accidental poisoning in children.
Based on the dose of paracetamol ingested (mg/kg body weight):
- Less than 150 mg/kg - unlikely.
- More than 250 mg/kg - likely.
- More than 12 g total - potentially fatal.
Yet paracetamol can cause serious or fatal adverse effects at around 150 mg/kg for many adults. There is considerable interpatient variability which depends on age, health and substances taken with the paracetamol.
The level is higher for young children.
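As a rough illustration of the thresholds above, the sketch below converts a number of 500 mg tablets and a body weight into mg/kg and classifies the ingestion against the quoted figures. It is a teaching aid only, not a clinical tool; the function name and the example numbers are invented for illustration.

```python
def classify_paracetamol_ingestion(tablets_500mg, body_weight_kg):
    """Classify an acute ingestion against the rough thresholds listed above."""
    total_mg = tablets_500mg * 500
    dose_mg_per_kg = total_mg / body_weight_kg

    if total_mg > 12_000:            # more than 12 g total - potentially fatal
        risk = "potentially fatal"
    elif dose_mg_per_kg > 250:       # more than 250 mg/kg - toxicity likely
        risk = "toxicity likely"
    elif dose_mg_per_kg < 150:       # less than 150 mg/kg - toxicity unlikely
        risk = "toxicity unlikely"
    else:                            # 150-250 mg/kg - intermediate
        risk = "intermediate - depends on age, health and co-ingestants"
    return dose_mg_per_kg, risk

# Example: 24 x 500 mg tablets in a 70 kg adult is about 171 mg/kg.
print(classify_paracetamol_ingestion(24, 70))
```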
There is a theoretical argument for increased risk with enzyme induction or low glutathione reserves. There are case reports of people with chronic alcoholism developing liver failure after relatively small overdoses, or even therapeutic doses, of paracetamol. However, close examination of these case reports reveals some inconsistencies, and it is unclear that they provide substantial evidence supporting the hypothesis.
After taken orally, paracetamol is well absorbed from the stomach and small intestine. It reaches a peak plasma concentration in one hour but this may be 30 minutes if taken in liquid or rapidly absorbed form. It is mainly inactivated by the liver by conjugation leading to two metabolites; glucuronide or sulfate. It is then renally excreted through urine.
- When taken in overdose the liver conjugation becomes inundated, causing paracetamol to be metabolised by an alternative pathway.
- This results in a toxic metabolite, N-acetyl-p-benzoquinone imine (NAPQI), which is itself inactivated by glutathione, rapidly preventing any harm.
- When glutathione stores are depleted to less than approximately 30%, NAPQI reacts with nucleophilic aspects of the cell, leading to necrosis. Necrosis occurs in the liver and in the kidney tubules.
Toxicity is increased in patients with induction of the P450 system through drugs such as rifampicin, phenobarbital, phenytoin, carbamazepine and alcohol. This also occurs in patients with low glutathione reserves, as a product of:
- Genetic variation.
- HIV-positive status.
- Alcohol-related or other liver disease.
Paediatric patients (under the age of 5 years) seem to fare better after paracetamol poisoning, perhaps due to a greater capacity to conjugate with sulfate, enhanced detoxification of NAPQI or greater glutathione stores. However, it should not be assumed that treatment in children should be different than for adults, since no controlled studies have supported any alternative paediatric therapy.
- Commonly, patients are asymptomatic for the first 24 hours or have nonspecific abdominal symptoms (such as nausea and vomiting).
- Hepatic necrosis begins to develop after 24 hours (elevated transaminases, right upper quadrant pain and jaundice) and can progress to acute liver failure.
- Patients may also develop:
- Renal failure - usually occurs around day three.
- Lactic acidosis.
- Number of tablets, formulation, any concomitant tablets (include herbal remedies, since substances such as St John's wort are enzyme inducers).
- Time of overdose.
- Suicide risk - was a note left?
- Any alcohol taken (acute alcohol ingestion will inhibit liver enzymes and may reduce the production of the toxin NAPQI, whereas chronic alcoholism may increase it).
- Usually there is very little to find, until the patient develops ALF.
- If ALF develops, the following may be seen: jaundice, hepatic flap, encephalopathy and tender hepatomegaly.
- Paracetamol level: take paracetamol level four hours post-ingestion, or as soon as the patient arrives if:
- Time of overdose is greater than four hours.
- Staggered overdose (in staggered overdoses, the level is not interpretable except to confirm ingestion).
- U&E, creatinine - to look for renal failure and have a baseline.
- LFTs: may be normal if the patient presents early but may rise to ALT >1000 IU/L. This is the enzyme level taken to indicate hepatotoxicity.
- Glucose: hypoglycaemia is common in hepatic necrosis and capillary blood glucose should be checked hourly.
- Clotting screen: prothrombin time is the best indicator of severity of liver failure and the INR should be checked 12-hourly.
- Arterial blood gas; acidosis can occur at a very early stage, even when the patient is asymptomatic. It is seen in up to 10% of patients with ALF.
- FBC and salicylate levels are not routinely required.
The Medicines and Healthcare products Regulatory Agency (MHRA) changed the guidelines on management of paracetamol overdose in September 2012. These are much simplified and include an updated, single line nomogram.
It should be noted that this nomogram is ultra-conservative and that there is lack of consensus internationally on the management of paracetamol overdose.
All patients who have a timed plasma paracetamol level plotted on or above the line drawn between 100 mg/L at 4 hours and 15 mg/L at 15 hours after ingestion, should receive acetylcysteine. This is regardless of any risk factors they may have for hepatotoxicity.
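For illustration only, the treatment line can be interpolated between those two points. On the nomogram the concentration axis is logarithmic, so the sketch below assumes a log-linear (exponential) decline from 100 mg/L at 4 hours to 15 mg/L at 15 hours; it is a sketch of the arithmetic, not a substitute for the published nomogram or for Toxbase® advice.

```python
import math

def treatment_line(hours_since_ingestion):
    """Approximate treatment-line value (mg/L), assuming a log-linear decline
    from 100 mg/L at 4 hours to 15 mg/L at 15 hours."""
    if not 4 <= hours_since_ingestion <= 15:
        raise ValueError("This line applies between 4 and 15 hours only")
    # Straight line on a log(concentration) versus time plot.
    slope = (math.log(15) - math.log(100)) / (15 - 4)
    return math.exp(math.log(100) + slope * (hours_since_ingestion - 4))

def needs_acetylcysteine(level_mg_per_l, hours_since_ingestion):
    """True if the timed level plots on or above the treatment line."""
    return level_mg_per_l >= treatment_line(hours_since_ingestion)

# Example: a level of 60 mg/L at 8 hours is above the line (about 50 mg/L), so treat.
print(treatment_line(8), needs_acetylcysteine(60, 8))
```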
If there is any doubt about the timing of the ingestion (including a staggered overdose over one hour or more), acetylcysteine should be given without delay. There is no need to refer to the treatment nomogram.
Paracetamol poisoning linked to modified-release paracetamol, intravenous paracetamol, massive paracetamol doses (>1 g/kg) and multiple-drug overdose should be discussed with a toxicology expert whenever possible.
Refer to ICU if there is fulminant liver failure; refer those treated with N-acetylcysteine (NAC) to the medical team and all para-suicides to the psychiatric team.
N-acetylcysteine (NAC) treatment
NAC is believed to work by a number of protective mechanisms. It acts as a precursor for glutathione, promoting normal conjugation of any remaining paracetamol, and also supplies thiols that function as antioxidants. It is virtually 100% effective in preventing liver damage when given within eight hours of ingestion. After eight hours, efficacy decreases sharply.
The initial dose of acetylcysteine should be given as an infusion over 60 minutes. This should reduce the number of dose-related adverse effects. The infusion should be in 5% glucose, with 0.9% sodium chloride as an alternative. There are now no specific contra-indications to acetylcysteine use. Even if there is a previously reported reaction, the benefits of treatment outweigh the risks.
Specific weight-related dosing tables are available to guide the health professional. Children receive the same doses and treatment as adults but with a reduced quantity of intravenous fluid, as fluid overload is a potential risk.
A full treatment course comprises three consecutive doses, administered sequentially, with no break between infusions.
Treatment usually continues for the duration once NAC is started, regardless of any plasma levels. This usually takes 24 hours. NAC may be stopped if started before an appropriate paracetamol level is done, if the level is below the treatment line (when the nomogram is valid) and the patient has normal LFTs and is asymptomatic. NAC is usually continued if blood tests are still significantly abnormal after the first course. The dose depends on local protocols but is often at the rate of the third (last given) bag.
Prior to discharge it is sensible to re-check the INR, renal tests and LFTs. Patients should be advised to return if vomiting occurs after discharge.
The treatment of patients presenting more than 24 hours after ingestion is controversial. Management is detailed on Toxbase® and is similar to presentation between 8 and 24 hours after the overdose.
- Measure INR, creatinine, ALT and venous blood acid/base balance or bicarbonate.
- If any of these is abnormal discuss with your nearest National Poisons Information Centre (0870 600 6266).
- The patient is on long-term treatment with enzyme inducers - eg, carbamazepine, phenobarbital, phenytoin, primidone, rifampicin, St John's wort.
- The patient regularly consumes alcohol in excess.
- The patient has pre-existing liver disease.
- The patient is likely to be glutathione-depleted - eg, eating disorders, cystic fibrosis, HIV infection.
NB: the plasma paracetamol concentration >24 hours after overdose is likely to be below the limit of detection, even after substantial overdose. A measurable paracetamol concentration more than 24 hours after ingestion either indicates a very large overdose, or suggests a mistake in time of ingestion, or a staggered overdose. A full course of antidotal therapy should normally be given to patients in whom paracetamol is detected.
Paracetamol overdose during pregnancy
Paracetamol is the most common drug taken in overdose during pregnancy. The resulting toxic metabolites can cross the placenta and lead to hepatocellular necrosis of maternal and fetal liver cells.
NAC can bind the toxic metabolites in the mother and fetal circulation as it crosses the placenta. NAC appears to be safe during pregnancy and therefore should be administered.
Criteria for referral to a specialist unit
- Encephalopathy or raised intracranial pressure (ICP). Signs of CNS oedema include BP >160/90 mm Hg (sustained) or brief rises (systolic >200 mm Hg), bradycardia, decerebrate posture, extensor spasms, and poor pupil responses. ICP monitoring can help.
- INR >2.0 at or before 48 hours or >3.5 at or before 72 hours (so measure INR every 12 hours). Peak elevation occurs around 72-96 hours. LFTs are not good markers of hepatocyte death.
- Renal impairment (creatinine >200 μmol/L). Monitor urine flow and daily U&E and serum creatinine (use haemodialysis if >400 μmol/L).
- Blood pH <7.3 (lactic acidosis results in tissue hypoxia).
- Systolic BP <80 mm Hg despite adequate fluid resuscitation.
- Metabolic acidosis (pH <7.3 or bicarbonate <18 mmol/L).
King's College Hospital criteria for liver transplantation in paracetamol-induced acute liver failure. List for transplantation if (a simple coded check of these criteria follows the list):
- Arterial pH <7.3 or arterial lactate >3.0 mmol/L after adequate fluid resuscitation; OR
- If all three of the following occur in a 24-hour period:
- Creatinine >300 μmol/L.
- PT >100 seconds (INR >6.5).
- Grade III/IV encephalopathy.
- Arterial lactate >3.5 mmol/L after early fluid resuscitation.
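The sketch below simply restates those thresholds as a boolean check; the function name and example values are illustrative, and it is not a clinical decision tool.

```python
def meets_kings_college_criteria(arterial_ph, lactate_mmol_l,
                                 creatinine_umol_l, inr,
                                 encephalopathy_grade):
    """Return True if the paracetamol-specific listing criteria above are met.

    The text distinguishes lactate after adequate versus early fluid
    resuscitation; a single lactate value is used here for simplicity."""
    # pH < 7.3, or lactate > 3.0 mmol/L after adequate fluid resuscitation.
    if arterial_ph < 7.3 or lactate_mmol_l > 3.0:
        return True
    # All three of the following within a 24-hour period.
    if creatinine_umol_l > 300 and inr > 6.5 and encephalopathy_grade >= 3:
        return True
    # Lactate > 3.5 mmol/L after early fluid resuscitation.
    return lactate_mmol_l > 3.5

# Example: an arterial pH of 7.25 alone meets the criteria.
print(meets_kings_college_criteria(7.25, 2.0, 150, 2.0, 1))
```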
The mortality from severe liver failure is <5% with good supportive care.
Although liver transplantation only has a limited application, patients must be identified as early as possible, preferably on the second day. Current data indicate a poor prognosis if:
- An arterial pH <7.30 (hydrogen ion concentration >50 nmol/L) on or after day two following overdose (found in ~70% of cases with a poor prognosis).
- A combination of a prothrombin time of more than 100 seconds (INR >6.5), plasma creatinine >300 μmol/L and grade 3 or 4 hepatic encephalopathy (only a 17% survival rate).
- An increase in prothrombin time between day three and day four after overdose.
Liver transplantation is probably contra-indicated in patients with severe hypotension, severe cerebral oedema and serious infection.
Further reading & references
- Hawton K et al; Impact of different pack sizes of paracetamol in the United Kingdom and Ireland on intentional overdoses: a comparative study. Biomed central (2011)
- Bateman DN; Limiting paracetamol pack size: has it worked in the UK? Clin Toxicol (Phila). 2009 Jul;47(6):536-41.
- Treating paracetamol overdose with intravenous acetylcysteine: new guidance; Medicines and Healthcare products Regulatory Agency (Sept 2012)
- Acetylcysteine 200 mg/ml injection for infusion; Medicines and Healthcare products Regulatory Agency (archived content)
- Wilkes JM, Clark LE, Herrera JL; Acetaminophen overdose in pregnancy. South Med J. 2005 Nov;98(11):1118-22.
- Dargan PI, Jones AL; Acetaminophen poisoning: an update for the intensivist. Crit Care. 2002 Apr;6(2):108-10. Epub 2002 Mar 14.
Disclaimer: This article is for information only and should not be used for the diagnosis or treatment of medical conditions. EMIS has used all reasonable care in compiling the information but make no warranty as to its accuracy. Consult a doctor or other health care professional for diagnosis and treatment of medical conditions. For details see our conditions.
Dr Hayley Willacy
Dr Roger Henderson
Dr Adrian Bonsall
|
<urn:uuid:3e428cd1-6928-4c0d-8712-c0e11dfa5639>
|
CC-MAIN-2016-26
|
http://patient.info/doctor/paracetamol-poisoning
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393463.1/warc/CC-MAIN-20160624154953-00189-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.900992 | 3,275 | 2.9375 | 3 |
With her second child growing larger by the day, Liz is experiencing the tyranny of her pregnancy. Her belly seems impossibly huge to her. Easy sleep is a distant memory now that she must contend with tens of pounds of extra girth. With fiery heartburn following every meal, she feels as if she is subsisting on a diet of small volcanoes.
But Liz is not just any late-term mother-to-be. She is also a neuroscientist studying the changes that occur in a mother's brain—in fact, she co-authored this article. Although it will not relieve her indigestion, she derives some comfort from a new and growing body of research that is revealing the marked and generally positive alterations that accrue to a mother's brain.
Because the maternal brain emerges gradually, the construction site it becomes in the interim can cause some problems for its owner. Some mothers complain of fuzzy-headedness, and certain data even show minor brain shrinkage during pregnancy. But the compensations are great. Research suggests that motherhood enhances certain types of cognition, improves resistance to stress and sharpens some kinds of memory. On the face of it, the fact that the nervous system manages to transform a new mother from a self-centered organism into an other-focused caregiver is actually quite impressive. All it takes is for new neurons to sprout, certain brain structures to blossom in size and waves of powerful hormones to batter the pregnant woman's physiology. The result is a different and in some ways better brain—or at least one capable of juggling the challenges of everyday life while maintaining a laserlike focus on the baby.
A Sensory Trigger
A baby does what he can to attract and hold his mother's attention. A young son's distinctive cry, his unique scent and the way he curls his fingers around his mother's are just a handful of the sensations that shower down on her highly sensitized nervous system. The infant creates a rich environment that stimulates the mother, pushing her brain into a higher gear.
Of all the senses, smell—olfaction—plays the largest role in reproduction. Females rely on their sense of smell from the very beginning to help them select their mates all the way through to the weaning of their young, during which scents act as a form of communication between mother and child. An extreme example of the power of smell is known as the Bruce effect, a phenomenon in which certain scents induce abortions in pregnant rodents. If a female's mate disappears after conception and an interloper starts hanging around, the new male's smell will inhibit the production of key hormones, causing the female's pregnancy to abort. Otherwise, chances are high that the interloper would end up killing and eating the pups, thereby obtaining a high-protein meal and removing a rival's genes in the bargain. In a kind of “Sophie's choice” for rodents, the female is basically making a cold calculation—better to lose the young as embryos than as pups.
Because of our limited ability to peer into human brains, rodents help us approximate the changes that are taking place inside mothers such as Liz. What we have seen so far is that the mammalian brain possesses a dramatic ability to shape-shift when life demands it. During a rat's pregnancy, for example, we know that the olfactory system starts churning out new neurons. The theory is that the extra neurons allow moms to become more adept at processing the cues hidden in infant odors. Indeed, mothers distinguish themselves quite obviously in how they react to smells. Whereas virgin female rats find the odors of infants noisome, once they become pregnant those smells attract them. Human mothers also demonstrate these effects, as psychologist Alison Fleming of the University of Toronto Mississauga and her colleagues reported. They found that mothers are much more likely to rate their infants' odors as pleasant, as compared with nonmothers.
To transform women's perceptions of smells, the olfactory system may rely on a region known as the medial amygdala, suggests neurobiologist Michael Numan of Boston College and his colleagues. This brain area could be acting as a hub for the olfactory system, with information arriving here to be processed for emotional content. The olfactory tweaks may aid in solidifying the mother-child bond by making babies' odors alluring. Before she had her first child, Liz had avoided the smells of children, even those to whom she was related. But with the birth of her son, she discovered she had no problem stuffing her nose into his diaper to determine if he needed a change.
Caution and Courage
If Liz devoted all her attention to her infant, however, both mother and child would perish. A mother rat that stays safely in the nest with its offspring also dooms them to death from hunger and thirst. Mothers of both species must find ways to resolve the competing demands on their time. In other words, women are not the only members of the animal kingdom who find themselves juggling the duties of a working mom.
To allow a rat mother to toggle between caring for its young and heading out to find food, an area of the midbrain called the periaqueductal gray (PAG) acts as a circuit breaker. In 2010 researchers at the University of São Paulo proposed that the PAG weighs the balance between eating and acting maternally by evaluating input from the brain's limbic system, a set of structures that governs survival-type behaviors. No exact parallel to the PAG's toggle function in rats has been identified in humans yet, but much has been made of a mother's superhuman ability to multitask, perhaps reflecting a similar adaptation.
When a mother ventures into the world, she puts her vulnerable baby at risk. But she may be more attuned to potential threats, perhaps even exaggerating them, suggests research at the Health Sciences Federal University of Porto Alegre in Brazil. Researchers there have shown significant alterations in the architecture of dendrites in the medial nucleus of the amygdala, which in addition to its important role in the olfactory system also controls defensiveness and avoidance behavior. Indeed, when Liz shops she scans the stores for risks to her baby, avoiding the creepy guy by the magazines or the sketchy teens by the vending machines.
Although overall Liz is more cautious, she is also probably much bolder in the face of a threat than she was before becoming a mother. Psychologist Jennifer Wartella, now at Virginia Commonwealth University, has found that, compared with virgins, mother rats exposed to a stressful open-field maze were less likely to freeze in place, explored more readily and appeared to experience less fear (that is, Wartella saw fewer switched-on neurons in the amygdala). With its fear response in check, a rat mom may be able to forage more efficiently and return to its nest and vulnerable offspring more quickly.
Helping a mother navigate the world is her improved ability to decipher the clues in the environment. Recently our student Kelly Rafferty and her colleagues at our lab have been investigating a mother's ability to plan ahead. They allowed mother and virgin rats to forage in an unfamiliar maze that contained water. The rats were then returned to their home cages, some of which contained a water bottle and some of which did not. Subsequently they were moved back to the maze containing water. The mother rats assigned to the waterless home cage spent more time near the maze's water sources and drank more water, as compared with both mothers with full access to water and virgin females. After accounting for potential differences in the rats' thirst, the neuroscientists concluded that the mothers appear to anticipate a future environment and plan for it.
As the previous experiments demonstrated, mother rats seem to excel at tasks that require enhanced attention. Behavioral neuroscientist Kelly Lambert of Randolph-Macon College and her colleagues have collected other evidence of sharp-witted mothers. In 2009 they showed that when it comes to identifying which cue among several signals food, mother rats perform best. And work by Amy Au and Tommy Bilinski, then working in our lab, has begun to identify the rats' strengthened ability to deduce the meanings of symbols. The researchers designed experiments where a rat in an environment learns to associate, say, a triangle or a set of wavy lines with a food reward. After being moved to a new environment, lactating females transferred their knowledge from the old setting to the new one better than virgin females did, again suggesting a heightened attention to detail.
A human mother's brain undergoes a striking structural metamorphosis, too. In 2010 using magnetic resonance imaging studies, neuroscientist Pilyoung Kim, now at the National Institute of Mental Health, and her colleagues found significant increases in gray matter in mothers' brains in the weeks and months after they give birth. Gray matter, which got its name from the color of unmyelinated axons, is a layer of tissue packed with neurons. The growth the scientists saw was particularly visible in the midbrain, parietal lobes and prefrontal cortex—all areas involved in infant care. The mothers with the biggest increase in gray matter volume also reported the more positive perception of their babies.
As the time of delivery nears, powerful hormones swing into action. Although the most obvious players are oxytocin, which stimulates uterine contractions and milk letdown, and prolactin, which instigates milk production, other hormones trigger changes inside the brain, too. Neuroanatomists at the Victor Segalen Bordeaux 2 University in France have observed a dramatic structural remodeling of the hypothalamus, a brain region that acts as a major regulator of the hormones associated with basic emotional behaviors such as fighting and sex. Neurons in a part of the hypothalamus known as the medial preoptic area, or mPOA, grow bigger and become more active. Indeed, lesions of the mPOA can eliminate maternal behavior.
Meanwhile the hypothalamus ramps up the feelings of pleasure a mother receives. Robert S. Bridges of the Tufts Cummings School of Veterinary Medicine and his colleagues found different concentrations of opioid receptors in female rats depending on whether the rodent was a virgin, pregnant or lactating. But the phenomenon fades with experience. Females that go through several pregnancies show a decline in sensitivity to their own opioids, much like addicts who require ever greater doses of a drug to get high.
The drug analogy, by the way, is not spurious. Animals may in fact be engaging in maternal behavior simply because it feels good. Many human mothers report a very pleasurable feeling as they breastfeed their infants. After pups attach to a female rat's nipple, the mom receives a “hit” of reinforcing opiate. But the rat's body imposes a natural limit. As the pups continue to suckle, the mother's core body temperature rises. Soon enough the mother begins to feel uncomfortable and moves away. Later, desiring another burst of opiates, the rat comes back to the nest, the pups reattach and the cycle begins again.
As an added benefit, maternal hormones may well make the brain more resilient. In 2010 neurobiologist Teresa Morales Guzmán of the National Autonomous University of Mexico showed that the brain of a lactating female is more resistant to the effects of a neurotoxin. The hormones of pregnancy appear to construct a neural shield that protects a mother from damage that otherwise might compromise a rat's ability to care for its young.
The continuous ebb and flow of steroid hormones prompts brain cells to grow many tiny protrusions. Somewhat similar in appearance to thorns on the stem of a rose, these nubs are called dendritic spines. They add surface area to an existing neuron, allowing for more synaptic contact and therefore better information processing. Such spines can grow on a neuron after hormonal stimulation as well as after repeated bouts of stimulation from nearby connecting neurons.
Our lab has built on previous findings from the Rockefeller University showing that dendritic spine densities in the hippocampus increased in concert with the hormonal changes of a female rat's estrus cycle, which is similar to the human menstrual cycle. Best known for its role in memory, the hippocampus also supports maternal behavior. Even after just a few hours of elevated estrogen, the growth was dramatic.
But we learned that the spines are not caused simply by the presence of estrogen. We tested three groups—late-pregnancy females, females treated with a drug that mimics late-pregnancy hormones and females that had recently begun lactating—and saw that all three showed significant increases in dendritic spine concentrations. Unlike the other two groups, lactating females have very low levels of estrogen. We believe that although a mother's hormones initiate spine growth, the process is maintained by the many stimuli a child generates.
With such a thorough remodeling in progress, it is no wonder that many women complain of “pregnancy brain.” The collateral damage of these changes might include an occasionally faulty memory. Human moms experience postpartum memory deficits, too, as work by clinical psychologist J. Galen Buckwalter of the University of Southern California and his colleagues suggests. They found that on cognitive tests of memory for words and numbers, pregnant women and new mothers fared worse than nonpregnant women of about the same age. Their performance on tasks unrelated to child care seemed to suffer.
For the most part, though, the finished product will more than make up for the hiccups a mother may experience as her brain restructures itself. Producing an offspring requires a mother to jeopardize her own health, safety and survival, so her behavioral system kicks in to protect and defend that investment. With the landscape of her brain buffeted by the hormones of pregnancy and pressures of motherhood, she emerges more efficient and geared for survival.
For Liz, the compensation for the downsides of motherhood comes not just from science but also from the heart. By the time we finished writing this article, she had given birth to a healthy baby girl. All the neurobiology in the world pales in comparison to that blissful, ineffable bond that exists between a mother and her baby. Science may explain the maternal brain, but the real marvel—especially when you are gently tucking the blanket around your baby's chin as she sleeps in your arms—might simply be the beauty of a new child's existence.
|
<urn:uuid:f987b311-1759-4e15-809d-983a5b2ed431>
|
CC-MAIN-2016-26
|
http://www.scientificamerican.com/article/maternal-mentality-2012-10-23/?mobileFormat=true
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395620.56/warc/CC-MAIN-20160624154955-00090-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.96117 | 2,952 | 3.109375 | 3 |
The number pad keys are arranged in four columns and five rows. In our
first lesson we will cover the 4-5-6-+ row.
When at rest the fingers of the typist's right hand are positioned, lightly,
on the 4-5-6-+ keys.
The right index finger will control the 4 key.
The right middle finger will control the 5 key.
The right ring finger will control the 6 key.
The right little finger will control the + key.
The 0 is controlled by the right thumb.
IF = index finger, MF = middle finger, RF = ring finger, LF = little finger
The 5 key often has a small raised bump on its top, a tactile aid for touch-typists.
The locations of all the other keys on the number pad are learnt in relation
to this home key, so the touch-typist must be able to locate the home key.
Using the raised bump on the 5 key as a guide, see if you can correctly
place your fingers on the 4-5-6-+ keys without looking at the keyboard.
Before you start the exercises make sure you are sitting up straight, your feet
flat on the floor. Keep your elbows close to your body, your wrists straight and
your forearms level.
When you are ready to begin, select an exercise and strike the key requested.
Try not to look at the keyboard. It will be difficult at first but as the exercise
progresses you will find it becomes easier and your fingers will begin to move without
you consciously deciding which finger is associated with which key.
You may find it helpful to quietly say the name of the key as you strike
Don't let your mistakes cause you to lose heart, touch typing is a skill that
can be learnt by practice.
Repeat each exercise at least three times.
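If you practise at a computer anyway, a small script can stand in for a drill by calling out random keys from the 4-5-6-+ group and checking what you strike. This is only an illustrative sketch, not part of the lesson; it reads a whole line at a time, so press Enter after each key.

```python
import random

HOME_ROW = ["4", "5", "6", "+"]   # keys covered in this lesson

def drill(rounds=20):
    correct = 0
    for _ in range(rounds):
        target = random.choice(HOME_ROW)
        answer = input(f"Strike the {target} key (then Enter): ").strip()
        if answer == target:
            correct += 1
        else:
            print(f"  Expected {target}, got {answer or 'nothing'}")
    print(f"Score: {correct}/{rounds}")

if __name__ == "__main__":
    drill()   # repeat the drill at least three times, as suggested above
```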
|
<urn:uuid:09f99811-a054-4542-aacd-b21baddcd7a3>
|
CC-MAIN-2016-26
|
http://www.ababasoft.com/typing/typing_lesson05.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396459.32/warc/CC-MAIN-20160624154956-00041-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.867242 | 685 | 2.71875 | 3 |
The environment is a subject of grave human concern and controversy all over the world. Over the past few decades the world environment has changed greatly, and many of these changes have had destructive effects on natural living beings and, to a great extent, on human livelihoods. For decades we have been constantly warned of the scarcity of environmental resources that are crucial for the survival of modern civilized society. Among the most important of these resources are water, petroleum and minerals, and the majority of social, economic and individual concerns about environmental resources center on these primordial products of the earth. Naturally, scientific interest has been drawn to this growing issue; awareness of and initiatives for protecting environmental resources have become vitally important, and for people in countries facing resource crises they have become a priority. The environmental science and management discipline concerned with protecting environmental resources and responding to this crisis is termed environmental resource management.
The significance of environmental resource management
Environmental resources matter for all natural life and for modern civilized human living. Crisis and scarcity in resources such as water can drive many species from the face of the earth and, at the same time, endanger human living conditions in many respects. Recent statistics show that many of the world's biggest rivers and streams are now threatened by rising levels of pollution, and a great many nations face health care disasters because of polluted water. Three factors in particular, falling groundwater levels, scarcity of drinking water and, most important of all, water pollution, make the protection and enrichment of the world's water resources a priority.
After water, natural energy resources such as petroleum are the most pressing area of environmental resource management. Because the world's automobile and transport systems still depend on oil for their continuance, the decline of crude oil output in major oil-producing countries is a global concern. Estimates suggest that before the present century reaches its halfway mark the world will face massive shortages in the supply of oil and natural gas, and this continually shrinking oil reserve is pushing environmental resource managers worldwide to pursue the conservation of petroleum and the search for alternative sources of energy that can replace it in the coming future.
The third major area of global concern in environmental resource management is natural mineral resources and mineral reserves. Materials such as iron, aluminum and copper are essential to worldwide production, manufacturing and construction, and their increasing use in an industrially hyperactive world is likely to exhaust many minerals in the near future. Since modern living cannot fall back on wood or forest resources to replace mineral reserves, environmental resource management is becoming ever more important for saving mankind from a future crisis in these minerals.
Environmental resource management is thus the key management discipline for bridging the emerging gap between natural resources and energy on one side and, on the other, the scientific research and methods needed to find alternatives.
|
<urn:uuid:786d490f-2848-40eb-a8ba-05a47e06c04e>
|
CC-MAIN-2016-26
|
http://www.roseindia.net/management/what-is-environmental-resource-management.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399106.96/warc/CC-MAIN-20160624154959-00128-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.944547 | 667 | 3.0625 | 3 |
Elections were held in the United States throughout 2012. These included many federal elections on Election Day, November 6, 2012, most prominently the 57th presidential election, Senate elections (where 33 seats were decided), and House of Representatives elections (to elect all 435 members of the House for the 113th United States Congress). It also featured 13 state and territorial governors' races; state and territorial legislature races, special elections, and various other state, territorial, and local races and referenda on votes held in November as well as throughout the year.
Little overall change occurred on the Federal level. Incumbent President Barack Obama was elected to a second term, with the national popular vote percentage being 51.1% to 47.2%, and the Electoral College vote being 332 to 206, for Obama and challenger Mitt Romney, respectively. The Democratic Party held control of the Senate and the Republican Party maintained a majority in the House of Representatives. Republicans also held on to a majority of governorships.
The election made New Hampshire the first state with an entirely female congressional delegation and saw Wisconsin elect the first openly LGBT member of the Senate. Three state referenda passed legalizing same-sex marriage, while Minnesota became the first state in history to reject a proposed state-level constitutional ban on same-sex marriage. Two states approved and one rejected the legalization of recreational marijuana, and one more state voted to allow marijuana for medical use. A referendum was also held in Puerto Rico regarding the future political status of the U.S. unincorporated territory, with a majority of voters favoring statehood.
The 2012 election cycle was the first presidential cycle to be affected by the Supreme Court's Citizens United decision, which prohibited the government from restricting independent political expenditures by corporations and unions. The cost of the 2012 federal election races is estimated at over 5.8 billion dollars, with approximately $1 billion of that coming from "outside" groups (groups not directly controlled by the candidate's campaign or officially controlled by the party). There was heavy spending by lobbies during the elections, particularly the fossil fuel lobby. This election season became the most expensive in American history.
|
<urn:uuid:10c3ea9e-a27b-4ab6-bc7d-88a4b2637c2d>
|
CC-MAIN-2016-26
|
http://www.breakingnews.com/topic/2012-elections/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395560.14/warc/CC-MAIN-20160624154955-00195-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.977017 | 432 | 3.21875 | 3 |
Coal burning has existed for centuries, and its use as a fuel has been recorded since the 1100s. It powered the Industrial Revolution, changing the course of first Britain, and then the world, in the process. In the US, the first coal-fired power plant – Pearl Street Station – opened on the shores of the lower East River in New York City in September 1882. Shortly thereafter, coal became the staple diet for power plants across the world. Today, coal is used to produce nearly 40% of the world's electricity. However, burning coal is one of the most harmful practices on the planet.
|
<urn:uuid:c7e8df4e-01a5-405d-a0ef-bf7920051984>
|
CC-MAIN-2016-26
|
http://www.greenpeace.org/international/en/publications/reports/true-cost-coal/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397744.64/warc/CC-MAIN-20160624154957-00158-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.974026 | 128 | 3.484375 | 3 |
1. Explain and give an example for each of the following types of variables: (a) equal interval, (b) rank order, (c) nominal, (d) ratio scale, (e) continuous.
2. Following are the speeds of 40 cars clocked by radar on a particular road in a 35 mph zone on a particular afternoon: 30, 36, 42, 36, 30, 52, 36, 34, 36, 33, 30, 32, 35, 32, 37, 34, 36, 31, 35, 20, 24, 46, 23, 31, 32, 45, 34, 37, 28, 40, 34, 38, 40, 52, 31, 33, 15, 27, 36, 40. Make (a) a frequency table, (b) a histogram, and (c) a frequency polygon. Then, (d) describe the general shape of the distribution.
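As a sketch of how parts (a) to (c) of question 2 could be checked, the following uses the 40 speeds with pandas and matplotlib. The 5 mph class width is one reasonable choice, not the only one, and the comment for part (d) is only a suggested reading of the result.

```python
import pandas as pd
import matplotlib.pyplot as plt

speeds = [30, 36, 42, 36, 30, 52, 36, 34, 36, 33, 30, 32, 35, 32, 37, 34, 36,
          31, 35, 20, 24, 46, 23, 31, 32, 45, 34, 37, 28, 40, 34, 38, 40, 52,
          31, 33, 15, 27, 36, 40]

# (a) Frequency table using 5 mph class intervals from 15 up to 55.
edges = list(range(15, 60, 5))
table = pd.Series(pd.cut(speeds, bins=edges, right=False)).value_counts().sort_index()
print(table)

# (b) Histogram and (c) frequency polygon drawn on the same axes.
counts, bin_edges, _ = plt.hist(speeds, bins=edges, edgecolor="black",
                                alpha=0.5, label="histogram")
midpoints = [(a + b) / 2 for a, b in zip(bin_edges[:-1], bin_edges[1:])]
plt.plot(midpoints, counts, marker="o", label="frequency polygon")
plt.xlabel("Speed (mph)")
plt.ylabel("Frequency")
plt.legend()
plt.show()

# (d) The counts suggest a roughly unimodal shape centred in the low-to-mid 30s,
#     with a mild positive (right) skew from a handful of fast drivers.
```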
3. Give an example of something having these distribution shapes: (a) bimodal, (b) approximately rectangular, and (c) positively skewed. Do not use an example given in this book or in class.
Write a 350 to 700 word response summarizing the three dimensions of love and how they interrelate to identify a specific type of love relationship. Follow APA formatting.
|
<urn:uuid:c768dc02-f49c-4dd5-a7e0-05c693a23f8c>
|
CC-MAIN-2016-26
|
http://oxenmine.com/7038/psychology/explain-and-give-an-example-for-each-of-the-following-types-of-variables-a-equal-interval-b-rank-order-c-nominal-d-ratio-scale-e-continuous/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395346.72/warc/CC-MAIN-20160624154955-00163-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.897835 | 487 | 3.28125 | 3 |
There is no mistaking the twang of the banjo. Now synonymous with bluegrass music, the banjo was first designed by slaves brought over from Africa. The first banjo songs dealt with the daily experiences of these slaves and the lives of people in rural areas. Soon this type of music spread throughout the South, especially after the advent of Bill Monroe and the Bluegrass Boys, who first appeared in 1939. Earl Scruggs joined the band and revolutionized banjo playing with his three-finger picking, rolling style, now known as "Scruggs style." Today there are many styles of playing the banjo and many technical terms to learn. Here is a glossary of the most popular terms relating to the beloved banjo:

Banjeaurine: A five string banjo with a scale ranging from 20 to 22 inches that is tuned two whole notes higher than a standard five string banjo. It was used in the late 19th and early 20th centuries for playing the lead in banjo orchestras.

Bluegrass banjo: Banjo most commonly used in bluegrass music. Bluegrass banjos have a resonator on the back and a tone ring where the head is stretched. This tone ring is usually made of bronze or brass. Bluegrass banjos are heavy and have a deep sound.

Bear claw: Tail pieces with finger-like extensions and a curl that hold down the strings.

Bridge: The bridge transfers vibrations of the strings to the head of the banjo, which ultimately amplifies the sound. The tension of the strings holds the bridge in place.

Clawhammer: A style of playing the banjo that is more melodic than frailing. Clawhammer style makes use of a downstroke where the fingernails of the index and middle fingers strike the strings.

Gourd banjo: One of the first banjos, a gourd banjo was a simple instrument consisting of a gourd with a hide stretched over its hole.

Hammer-on: This occurs when a finger on the left hand strikes against a string after picking the same string with the right hand.

Head: The area on the banjo that vibrates, often made of calfskin or mylar.

End pin: A screw with a round or hex-shaped end that goes from the banjo rim to the dowel stick and secures the tailpiece bolt.

Fingerboard: The part of the banjo neck that holds the strings and is pressed when making notes.

Fifth string peg: A fifth string peg tunes the fifth string and is most often inserted in the side of the neck.

Frailing: Another style of playing the banjo where the strings are struck with the index and middle fingers, while the thumb strikes the fifth string. Also called "knocking" or "framming."

Melodic: A style of banjo playing that plays each note individually rather than in blended together, continuous rolls.

Pull-off: A technique where the finger releases a fretted note by plucking.

Resonator: A resonator attaches to the back of the banjo and creates a louder sound.

Scruggs style: A style of playing created by Earl Scruggs. Scruggs style involves rolls, where the notes are played in a seemingly continuous stream.
Besides being a mainstay of bluegrass music, banjos are often heard in pop songs and are even used in some punk bands. Banjos are more versatile than many people imagine, and they involve a lot of technique, especially for playing the intricate Scruggs style. If you're interested in playing the banjo, consider taking a few lessons and, as banjo players say, "Get rolling!"
|
<urn:uuid:9c4fa674-9232-4b89-bb9c-acf8ee96a4c0>
|
CC-MAIN-2016-26
|
http://www.zzounds.com/edu--banjoglossary
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395160.19/warc/CC-MAIN-20160624154955-00135-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.953712 | 758 | 3.484375 | 3 |
This post comes to us from biologist Steve MacLean, director of The Nature Conservancy’s Bering Sea Program in Alaska.
Last summer, when biologists walked along the rocky cliffs on Rat Island, one of more than 2,000 islands in Alaska’s Aleutian chain, they encountered an eerie silence. This place should have been a cacophonous and lively melee of bird calls.
The reason for the silence? Invasive rats. They colonized the island after a Japanese fishing vessel wrecked against its rocky shore in 1798. Their numbers multiplied, and for more than two centuries the voracious rats have preyed on bird eggs and young chicks. The birds gone, silence spread from shore to shore.
This summer, biologists discovered something new at Rat Island: encouraging signs of bird life. The reason? A seabird restoration project is at work.
Over the course of a week-and-a-half in the fall of 2008, our project team, led by the U.S. Fish and Wildlife Service, Island Conservation and The Nature Conservancy, broadcast bait across the island. Our objective was this: removing the rats and reclaiming the island as productive seabird breeding habitat.
We’ve received initial evidence that the invasive rats from Alaska’s remote Rat Island are no more. Biologists report three peregrine falcon nests. Several nesting bird species — black oystercatchers, ptarmigan, Aleutian cackling geese and others — appear to be more abundant. This means that the lively din of puffins, auklets and other birds may soon return.
But with the good news of returning nests comes an unexpected report.
In June, project biologists reported the discovery of some dead birds: Most notably, 43 bald eagles and 213 glaucous-winged gulls.
It’s an absolutely shocking discovery. In Alaska, where the population of bald eagles numbers 50,000, we have an undeniable attachment with eagles — that eagles have died is deeply disappointing to us.
To learn more, we’ve expedited tissue samples to the National Wildlife Health Center in Madison, WI. We expect pathologists’ results from these tests in coming weeks.
In the meantime, it leaves us with a lot of time for serious soul-searching. Was our own restoration project responsible for this devastating loss? What’s the future of Rat Island?
For now, we know the project was responsible for at least some of the bird deaths. This deeply saddens us, and reinforces our commitment to ensuring that our projects provide the maximum benefit to Alaska’s biodiversity and minimize the risk associated with each project. While no restoration project like this one can be entirely risk free, we know we can learn from this project to ensure that all restoration projects around the world are conducted as safely and effectively as possible.
As for the future of Rat Island, we’re optimistic. For more than 200 years, invasive rats have ruled this remote island, essentially killing off the puffins, auklets, sandpipers, ducks and songbirds that would normally nest in what is otherwise optimal habitat.
The arc of experience tells us our optimism is warranted. Worldwide, more than 300 similar rat eradication efforts have proved successful. Nonetheless, much work remains for restoration ecologists. Invasive rats have been introduced to about 90 percent of the world’s islands. These unwelcome predators account for 40 to 60 percent of all recorded island bird and reptile extinctions.
As we continue, we’re prepared to commission an independent review to fully evaluate the project. We have every reason to expect that the project will be completely successful and the birds of the Aleutians will once again fill Rat Island with a cacophonous celebration of avian diversity.
If, after two years of careful monitoring, Rat Island proves to be once again free of the rats that have decimated the bird populations, we hope to remove the outdated moniker, and restore the island’s original Aleut name: Howadax, pronounced “How-a-tha” meaning “entry” or “welcome.”
Please join us in welcoming back the birds of Howadax.
|
<urn:uuid:9f75ad2c-a1a1-4bb1-8273-dc4ab15f2067>
|
CC-MAIN-2016-26
|
http://www.mnn.com/earth-matters/wilderness-resources/stories/welcoming-birds-back-to-a-remote-alaskan-island?magic_tabs_comment_callback_tab=0&referer=node%2F32853&args=a%3A1%3A%7Bi%3A0%3Bs%3A5%3A%2232853%22%3B%7D
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396945.81/warc/CC-MAIN-20160624154956-00166-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.921605 | 882 | 3.328125 | 3 |
All through the series on understanding where electrons are, and how they flow, we’ve been talking about how the basis of chemistry is that opposite charges attract and like charges repel, and that in reactions, electrons flow from “electron rich” areas to “electron poor” areas.
Today, we’ll officially give a name to the types of species that are considered “electron rich“ and “electron poor”.
They’re called nucleophiles and electrophiles.
Let’s start with “nucleophiles” (from “nucleus loving”, or “positive-charge loving”). A nucleophile is a reactant that provides a pair of electrons to form a new covalent bond.
Sound familiar? It should! This is the exact definition of a Lewis base. In other words, nucleophiles are Lewis bases.
When the nucleophile donates a pair of electrons to a proton, it’s called a Brønsted base, or simply, “base”.
Here are some examples of Lewis bases you’re probably familiar with. As you can see, nucleophiles all have pairs of electrons to donate, and tend to be rich in electrons. [Moving ahead, there are actually three classes of nucleophiles you’ll meet in organic chemistry, but let’s focus on the simple examples for now.]
Electrophiles (from "electron loving") are the mirror image: an electrophile is a reactant that accepts a pair of electrons to form a new covalent bond. Again, this should sound familiar: this is the definition of a Lewis acid!
An electrophile that accepts an electron pair at hydrogen is called a Brønsted acid, or just “acid”.
Here are some examples of Lewis acids you’re familiar with.
Two more things:
We can vaguely define “nucleophilicity” as “the extent to which a species can donate a pair of electrons”. [There’s actually a more precise definition we’ll discuss in the next post, but this will do for now.]
Similarly, the extent to which a species can accept a lone pair of electrons is called “electrophilicity”.
Let’s look at an example we’re familiar with: hydroxide ion.
When hydroxide ion donates a pair of electrons to an electrophilic atom (such as carbon here) to form a new covalent bond, it is acting as a nucleophile.
And as we’ve seen before, when hydroxide ion donates a pair of electrons to an (acidic) proton to form a new covalent bond, we say it’s acting as a “base”.
So species can be both nucleophiles and bases? Yes!!! In fact, the “basicity” we’ve been talking about is just a subset of “nucleophilicity” – the special case where the electrophile is a proton!
As well, species can be both electrophiles and acids. And “acidity” is just a subset of “electrophilicity”.
Let’s go even further here: the vast majority of the reactions you’ll see (>95%) – will be reactions where a nucleophile donates a pair of electrons to an electrophile. Nucleophile attacks electrophile. There are very few exceptions!
This is why understanding where electrons are, and how electrons flow is so important – because the electron richness (or poorness) of an atom (or molecule) determines its nucleophilicity or electrophilicity, which in turn determines its reactivity.
It’s not an exaggeration to say that nucleophilicity and electrophilicity are the fundamental basis of chemical reactivity. They are truly the yin and the yang of chemistry.
|
<urn:uuid:24ff9987-d67d-48a6-a12d-c49b1a99ce6f>
|
CC-MAIN-2016-26
|
http://www.masterorganicchemistry.com/2012/06/05/nucleophiles-and-electrophiles/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397797.77/warc/CC-MAIN-20160624154957-00036-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.896484 | 825 | 3.59375 | 4 |
There has been some recent press surrounding Toxoplasma gondii, a parasite that has been identified as one of the leading causes of foodborne-related deaths and hospitalizations. This parasite is acquired when individuals consume undercooked meat infected with cysts embedded in the tissue, or when people come in contact with contaminated cat feces (cats are a natural host organism and they excrete the more resistant oocyst in their feces). It can be a major health issue in immunosuppressed individuals and in pregnant women, with the infection being passed on congenitally, and it can cause mild illness in healthy individuals. (CDC link below) It can cause acute ocular disease. (Other studies have linked T. gondii infection with schizophrenia - citation below).
The concern proposed in this journal article is that organically raised meat is more likely to be a source of T. gondii. Free range pigs (organically raised) are more likely to be contaminated with the organism in that their diet is less controlled and so they are more likely to eat in places contaminated by cat feces. In one study, 17 of 33 organically raised pigs from Michigan were contaminated with T. gondii (in another study, the level in commercial pork was much lower ~0.3%). In free range chickens, the prevalence was higher (est greater than 17%) compared to commercially raised poultry (0%). The organism or the antibodies to the organism have also been found in sheep, goats (and unpasteurized goat milk), and venison.
Adequate cooking and freezing are important to prevent infection, especially for free range/organically raised pork (as well as goat and sheep). Of course, preventing contamination by infected cats is important. (Outdoor cats are more likely to become contaminated than indoor cats.)
(Free range animals may also be at higher risk for other pathogenic parasites such as Trichinella).
Jeffrey L. Jonesa and J.P. Dubeyb
Clinical Infectious Disease. (2012)
Toxoplasmosis can be due to congenital infection or acquired infection after birth and is one of the leading illnesses associated with foodborne hospitalizations and deaths. Undercooked meat, especially pork, lamb, and wild game meat, and soil contaminated with cat feces on raw fruits and vegetables are the major sources of foodborne transmission for humans. The new trend in the production of free-range organically raised meat could increase the risk of Toxoplasma gondii contamination of meat. Foodborne transmission can be prevented by production practices that reduce T. gondii in meat, adequate cooking of meat, washing of raw fruits and vegetables, prevention of cross contamination in the kitchen, and measures that decrease spread of viable oocysts into the environment.
Maternal Exposure to Toxoplasmosis and Risk of Schizophrenia in Adult OffspringAlan S. Brown, M.D.; Catherine A. Schaefer, Ph.D.; Charles P. Quesenberry, Ph.D.; Liyan Liu, M.D., M.Sc.; Vicki P. Babulas, M.P.H.; Ezra S. Susser, M.D., Dr.P.H.
Am J Psychiatry 2005;162:767-773. 10.1176/appi.ajp.162.4.767
OBJECTIVE: The authors examined the relationship between maternal antibody to toxoplasmosis and the risk of schizophrenia and other schizophrenia spectrum disorders in offspring. Toxoplasmosis is known to adversely affect fetal brain development. METHOD: In a nested case-control design of a large birth cohort born between 1959 and 1967, the authors conducted serological assays for Toxoplasma antibody on maternal serum specimens from pregnancies giving rise to 63 cases of schizophrenia and other schizophrenia spectrum disorders and 123 matched comparison subjects. Toxoplasma immunoglobulin (Ig)G antibody was quantified by using the Sabin-Feldman dye test. The Ig titers were classified into three groups: negative (<1:16) (reference), moderate (1:16–1:64), and high (≥1:128). RESULTS: The adjusted odds ratio of schizophrenia/schizophrenia spectrum disorders for subjects with high maternal Toxoplasma IgG antibody titers was 2.61 (95% confidence interval=1.00–6.82). There was no association between moderate Toxoplasma Ig antibody titers and the risk of schizophrenia/spectrum disorders. CONCLUSIONS: These findings suggest that maternal exposure to toxoplasmosis may be a risk factor for schizophrenia. The findings may be explained by reactivated infection or an effect of the antibody on the developing fetus. Given that toxoplasmosis is a preventable infection, the findings, if replicated, may have implications for reducing the incidence of schizophrenia.
Toxoplasmosis, from the CDC website http://www.cdc.gov/parasites/toxoplasmosis/gen_info/index.html
Epidemiology & Risk Factors
Toxoplasmosis is caused by the protozoan parasite Toxoplasma gondii. In the United States it is estimated that 22.5% of the population 12 years and older have been infected with Toxoplasma. In various places throughout the world, it has been shown that up to 95% of some populations have been infected with Toxoplasma. Infection is often highest in areas of the world that have hot, humid climates and lower altitudes.
Toxoplasmosis is not passed from person-to-person, except in instances of mother-to-child (congenital) transmission and blood transfusion or organ transplantation. People typically become infected by three principal routes of transmission.
The tissue form of the parasite (a microscopic cyst consisting of bradyzoites) can be transmitted to humans by food. People become infected by:
· Eating undercooked, contaminated meat (especially pork, lamb, and venison)
· Accidental ingestion of undercooked, contaminated meat after handling it and not washing hands thoroughly (Toxoplasma cannot be absorbed through intact skin)
· Eating food that was contaminated by knives, utensils, cutting boards, or other foods that had contact with raw, contaminated meat
Animal-to-human (zoonotic) transmission
Cats play an important role in the spread of toxoplasmosis. They become infected by eating infected rodents, birds, or other small animals. The parasite is then passed in the cat's feces in an oocyst form, which is microscopic.
Kittens and cats can shed millions of oocysts in their feces for as long as 3 weeks after infection. Mature cats are less likely to shed Toxoplasma if they have been previously infected. A Toxoplasma-infected cat that is shedding the parasite in its feces contaminates the litter box. If the cat is allowed outside, it can contaminate the soil or water in the environment as well.
People can accidentally swallow the oocyst form of the parasite. People can be infected by:
· Accidental ingestion of oocysts after cleaning a cat's litter box when the cat has shed Toxoplasma in its feces
· Accidental ingestion of oocysts after touching or ingesting anything that has come into contact with a cat's feces that contain Toxoplasma
· Accidental ingestion of oocysts in contaminated soil (e.g., not washing hands after gardening or eating unwashed fruits or vegetables from a garden)
· Drinking water contaminated with the Toxoplasma parasite
Mother-to-child (congenital) transmission
A woman who is newly infected with Toxoplasma during pregnancy can pass the infection to her unborn child (congenital infection). The woman may not have symptoms, but there can be severe consequences for the unborn child, such as diseases of the nervous system and eyes.
|
<urn:uuid:9b7889af-a54b-4511-be82-6e6e92a6a090>
|
CC-MAIN-2016-26
|
http://pennstatefoodsafety.blogspot.com/2012/06/increased-risk-of-t-gondii-in-free.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396875.58/warc/CC-MAIN-20160624154956-00024-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.934192 | 1,646 | 3.234375 | 3 |
Nov. 1, 2011 – ST. JOSEPH, MO | By: Swetnam
The summer flooding of the Missouri River may have had a significant impact on soil and the availability of crop nutrients.
Fallow syndrome is not uncommon for flooded soil. It creates a phosphorus deficiency when corn crops are planted. Wayne Flanary, with the University of Missouri Extension office in Holt County, says soil tests should only be conducted once soils dry and return to normal moisture levels. Flanary also advises adding phosphorus fertilizer even to high-testing soils to help crop growth. For more information regarding soil testing of flooded fields, contact your University of Missouri Extension office.
|
<urn:uuid:a7030c4d-5daa-42ea-b3a1-afd15dcc902e>
|
CC-MAIN-2016-26
|
http://www.kxcv.org/news/2011/11/flooded-soil.htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.2/warc/CC-MAIN-20160624154951-00045-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.911201 | 156 | 3.046875 | 3 |
A proposal to store nuclear waste below the Canadian shore of Lake Huron has drawn high criticism from environmental advocates.
Ontario Power Generation is seeking federal approval to build underground facilities to store low and intermediate level nuclear waste produced by the Pickering, Darlington, and Bruce nuclear plants.
The storage facilities would be built 2,230 feet below the ground and about a mile away from Lake Huron, near Kincardine and on the site of the Bruce nuclear power plant. It would hold about 200,000 cubic meters of waste.
Low and intermediate level waste includes discarded supplies, resin and filters that are highly radioactive despite being classified as having an intermediate level of radioactivity, said Debra Myles, co-manager of the Deep Geological Repository Joint Review Panel.
“Low level radioactive waste consists of industrial items that have become contaminated during routine clean up and maintenance activities at nuclear generating stations.”
The Canadian Nuclear Safety Commission and the Canadian Ministry of the Environment created the Joint Review Panel to conduct hearings and discussions on the environmental impacts of the project.
It will consider the effectiveness of engineered and geological barriers and the reliability of computer modeling of safety predictions.
“We do believe it is a safe project,” said Scott Berry, spokesperson at the Ontario Power Generation. “The proposal is based on several years of technical and environmental assessment reviews by our company, international experts and local municipal agencies.”
The idea for the permanent storage of the waste came from Kincardine officials, said Berry.
Such waste is now temporarily stored above the ground at the Bruce nuclear plant site in Kincardine.
Supporters say the concept relies on multiple natural barriers, combining engineered containers and the rock formation to safely store the waste while the radioactivity decreases.
“We chose the site because it has a stable rock formation of over 450 million years. It is extremely dense and we expect it to remain like that for another million years,” Berry said.
But opponents argue that eventually the waste might escape the repository.
Kay Cumbow, spokesperson for the Blue Water Sierra Club, a Michigan-based environmental protection group, said there could be safety hazards.
“This dump could eventually leak into groundwater and Lake Huron,” said Cumbow. “If that happens these wastes could then contaminate drinking water, fisheries and aquatic life and ruin tourism for many communities who depend on the fresh waters from Lake Huron and downstream.”
The threat is long-term, she said.
“Earthquakes, or perhaps other future unexpected natural or technical disasters, centuries from now could also result in release from the site.”
Cumbow noted that some radioactive substances are very long lived, and that the threat will last for many thousands of years. The waste must be isolated from the food chain for as long as it is hazardous, she said. That is difficult to guarantee, especially since the containers for high-level waste are only guaranteed for 100 years, she said.
According Ontario Power Generation the radioactivity in the low –level waste will decay within about 300-400 years but the intermediate–level radioactivity will remain thousands of years.
“The dump may have implications on similar projects throughout the Great Lakes,” Cumbow said.“If this deep underground dump is allowed to be built, the way will be cleared legally and administratively to build more deep underground radioactive dumps in close proximity to the Great Lakes.”
Brennain Lloyd, project coordinator at Northwatch, a coalition of environmental and citizen organizations in Ontario, agrees with Cumbow.
“It could serve as a dangerous precedent, becoming the poster child for other nuclear waste burial schemes,” Lloyd said.
OPG said the site has the lowest earthquake frequency on the North American continent and that there are several examples of such facilities operating safely in the United States, Finland and Sweden.
The Joint Review Panel is collecting public comments on the project until August, but the deadline will most likely be extended, said Berry.
“We suppose the hearings will start sometime at the beginning of 2013.”
Saodat Asanova contributed to this report
|
<urn:uuid:69115dff-23d9-4a6c-a9c2-342626cfd28f>
|
CC-MAIN-2016-26
|
http://greatlakesecho.org/2012/05/30/canadians-consider-nuclear-waste-storage-near-lake-huron/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402516.86/warc/CC-MAIN-20160624155002-00009-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.94309 | 858 | 3.015625 | 3 |
How to build a business case part 3 – interpreting the business case
Return on investment, or ROI, is a fairly simplistic assessment of the value of a business case. It is simply the ratio of benefits to costs over a given period of time. It is simplistic because it does not include any assessment of risk. The actual ROI of a project will be impacted by a wide variety of factors, many of which will be beyond the control of the project. A longer period of time brings with it more uncertainty, and for that reason, some projects will be expected to deliver an ROI in a short time in order to be considered robust enough to proceed.
The return on investment in the example is 640,000/473,000 = 135%
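As a quick illustration, the same arithmetic can be scripted. The minimal Python sketch below simply divides total benefits by total costs using the figures quoted above; it is illustrative only and not part of the original worked example.

    # Minimal ROI sketch using the figures quoted above.
    total_benefits = 640_000
    total_costs = 473_000

    roi = total_benefits / total_costs
    print(f"ROI = {roi:.0%}")  # prints "ROI = 135%"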
The concept of internal rate of return, or IRR, can be difficult to understand, especially if it is explained in technical terms. In simple layman's terms, imagine that instead of investing in this project, the same costs were put in the bank to earn interest. The internal rate of return, or IRR, is the equivalent interest rate you would need to earn in order to achieve the same return as the project.
Using Microsoft Excel, use the formula =IRR(A99:Z99), where "A99:Z99" represents the cash flow row of your spreadsheet.
In the example project, the IRR is 7%. Compared to cash invested, that is an attractive and realistic IRR (at least at the time of writing); however, a good IRR does not guarantee a project will be approved. Every organisation has a limited amount of funds to invest in projects, and if your project is in competition with others with a greater IRR, they will be favoured.
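For readers who want to see the mechanics rather than rely on the spreadsheet function, the sketch below finds the IRR by bisection: it searches for the discount rate at which the net present value (NPV) of the cash flows is zero. The yearly cash-flow figures are illustrative assumptions only, not the example project's actual numbers, so the printed rate will differ from the 7% quoted above.

    # Illustrative yearly net cash flows: an up-front cost followed by returns.
    cash_flows = [-473_000, 80_000, 110_000, 140_000, 160_000, 180_000]

    def npv(rate, flows):
        """Net present value of yearly cash flows discounted at the given rate."""
        return sum(cf / (1 + rate) ** year for year, cf in enumerate(flows))

    def irr(flows, low=-0.99, high=1.0, tol=1e-7):
        """Bisection search for the rate at which NPV crosses zero."""
        while high - low > tol:
            mid = (low + high) / 2
            if npv(mid, flows) > 0:
                low = mid   # NPV still positive: try a higher discount rate
            else:
                high = mid
        return (low + high) / 2

    print(f"IRR = {irr(cash_flows):.1%}")

A spreadsheet's =IRR() function does the same job with a different root-finding method; the point of the sketch is only to show that IRR is "the rate that makes the discounted cash flows sum to zero".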
The payback period is the point at which benefits outweigh the costs. It is best to use the business case chart. It is the point at which the cumulative cash flow line crosses the horizontal axis.
Short payback periods are attractive. It is important that a project pays for itself as early as possible, obviously to save money, but it also provides reassurance to stakeholders that the investment was sound and the project is successful. Extended payback periods are unattractive. The longer it takes to get a return, the more uncertainty there is about the ROI.
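The payback point can be read off a cumulative cash-flow table just as easily as off the chart. The sketch below walks the same illustrative cash flows used earlier (assumed figures, not the example project's) and reports the first year in which the cumulative total turns non-negative.

    # Same illustrative cash flows as above; year 0 is the up-front investment.
    cash_flows = [-473_000, 80_000, 110_000, 140_000, 160_000, 180_000]

    cumulative = 0
    for year, cf in enumerate(cash_flows):
        cumulative += cf
        if cumulative >= 0:
            print(f"Payback reached in year {year} (cumulative: {cumulative:+,})")
            break
    else:
        print("No payback within the period modelled")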
This guide is in four parts that can be found here
|
<urn:uuid:ee5fa17a-0c5d-499f-8e4f-d6194dda8f91>
|
CC-MAIN-2016-26
|
http://purchasinginsight.com/resources/how-to-build-a-business-case/how-to-build-a-business-case-part-3-interpreting-the-business-case/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394414.43/warc/CC-MAIN-20160624154954-00003-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.954131 | 479 | 2.609375 | 3 |
Details about Research Methods and Statistics in Criminal Justice:
Formerly a Nelson Hall title, this comprehensive introduction to the basics of criminal justice research introduces simple research methodologies and gradually advances to more complicated ones. The text describes elementary descriptive and inferential statistics, demonstrates research techniques, and examines the various scientific perspectives used in research today. This approach encourages students to think critically about the research they will encounter in their studies as well as think about ways they can conduct their own research.
Rent Research Methods and Statistics in Criminal Justice 3rd edition today, or search our site for other textbooks by Jack D. Fitzgerald. Every textbook comes with a 21-day "Any Reason" guarantee. Published by Wadsworth Publishing.
Need help ASAP? We have you covered with 24/7 instant online tutoring. Connect with one of our Criminology tutors now.
|
<urn:uuid:ffc95fb9-4b81-4c24-a251-5dedf0220712>
|
CC-MAIN-2016-26
|
http://www.chegg.com/textbooks/research-methods-and-statistics-in-criminal-justice-3rd-edition-9780534534370-0534534376
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391766.5/warc/CC-MAIN-20160624154951-00041-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.925496 | 177 | 2.703125 | 3 |
Schools across New York are recycling, reducing waste, saving energy, conserving resources, preventing runoff pollution and working to eliminate toxic materials. In addition to the long-range benefits of good environmental stewardship, green policies help schools provide healthier surroundings for their students and staff.
See our extensive list of teacher resources related to the environment, including lesson plans, workshops, poster contests, and green chemistry information.
Conservationist for Kids
Conservationist for Kids is DEC's very own nature and outdoors magazine for students. The magazine is sent to all public school 4th graders three times a year. Visit the website for additional activities and resources for educators and fun "green" activities for your school.
Lesson Plans and Workshops
DEC offers environmental education lesson plans and training for educators, and can provide printed materials and support to help teachers focus on environmental issues in the classroom. DEC also runs several poster contests that children can enter.
Project Learning Tree's GreenSchools! program (leaving DEC's website) has a number of investigations for youth to run at their school building, examining waste & recycling, water use, energy use, and the entire school site as a whole. Project Learning Tree is sponsored in New York by the DEC's Bureau of Environmental Education.
[*Note: you have to sign up to view the investigations. It is free to sign up.]
New York Recycles!
New York Recycles! is part of a national event - America Recycles. Visit the webpage for teacher information and an educational booklet with activity pages and resources for use in the classrooms. Also find other ideas for incorporating recycling into the classroom on our Green Schools - Recycling page.
Green Schools Challenge
The Green Schools Challenge recognizes schools that are developing programs for waste reduction, reuse, recycling, composting, and or purchases of recycled products and packaging.
NYS Environmental Excellence Awards Program
The Environmental Excellence Awards honor schools, organizations, individuals, businesses and others for achieving exceptional environmental benefits and improving and protecting New York State's environment.
NYS Green Ribbon Schools
The NY State Education Department's Green Ribbon Schools program (leaving DEC's website) recognizes schools taking a comprehensive approach to greening their school. A comprehensive approach incorporates environmental learning with improving environmental and health impacts. It is the New York nomination process for the US Education Department's Green Ribbon Schools program.
Environmental Justice Community Impact Grants
Environmental Justice grants are available for projects such as cleanup of lead or mercury contamination in schools and education projects connecting inner-city students to nature.
Matching grants for school districts are available through the Municipal Waste Reduction and Recycling State Assistance Program.
Urban Forestry Grants
Grants are available for eligible urban forestry projects including plantings on school properties.
Toxics Elimination, Reduction
Many schools have mercury and chemicals that need to be removed. This program can provide schools with chemical waste management practices and information on how to start a mercury clean out in your school. There is also an introduction to green chemistry practices to minimize hazardous waste.
Solid Waste Reduction, Recycling
Recycling is mandatory in schools in New York State. This site provides assistance in establishing comprehensive waste reduction programs in schools.
Hazardous Waste Management
DEC's Waste Management program provides guidance managing proper disposal of fluorescent lamps, mercury-containing equipment and other hazardous waste, such as lab chemicals.
Outdoor Air Quality
Heavy-duty vehicles, including diesel trucks and school buses, are prohibited from idling for more than five minutes at a time, with few exceptions; learn about the anti-idling law and required vehicle emissions inspections.
You can also visit the EPA's Clean School Bus USA program (leaving DEC's website) for more information on reducing children's exposure to diesel exhaust and the amount of air pollution created by diesel school buses.
Schools and day care centers that apply pesticides must meet DEC and State Education Department requirements. Information about alternative pest management methods is also available.
Learn how state standards for stormwater management may apply to your school. There is also information and guidance for adopting techniques for green infrastructure, including green roofs and rain gardens.
Green Cleaning Program Online Training
New York State Office of General Services (OGS) offers online training courses to facility managers, school administrators, educators, parents, and citizens. These courses provide a wealth of free information and tools to promote adoption of effective green cleaning practices, leading to healthier indoor environments. You can access these courses by visiting the OGS Green Cleaning Program online training (leaving DEC's website).
|
<urn:uuid:d8720448-4de3-4a05-b2ac-d1a3e82a0aa2>
|
CC-MAIN-2016-26
|
http://www.dec.ny.gov/education/41746.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395548.53/warc/CC-MAIN-20160624154955-00010-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.932615 | 936 | 3.296875 | 3 |
Diamond firetail finch
- An Australian Finch
- Scientific Name: Emblema guttata (previously Stagonopleura guttata)
- Common Name/s: DIAMOND FIRETAIL FINCH, DIAMOND FINCH, DIAMOND SPARROW.
- Sub Species: None
- Origin / Distribution: Southern Queensland down the east coast through New South Wales, Victoria and parts of South Australia.
- Habitat In Wild: Eucalypt forest and woodland and mallee country. Will inhabit farmlands and grasslands. Spends a significant amount of time on the ground finding seeds and insects.
- Status In Wild: Declining. Loss of suitable habitat may be one of the causes of population decline. Listed as near threatened.
- Status In (Australian) Captivity:
- Age To Sexual Maturity: 9 - 12 months. They form pair bonds early, usually before reaching sexual maturity. Hens have been known to breed as early as 5 months of age, but this should be avoided.
- Adult plumage: attained at about 4 months of age.
- Best breeding years (estimate): 12 months - 5th year
- Lifespan (estimate): 7 or more years. Sometimes up to 10 years.
- Sexing: Monomorphic (difficult to sex)
- Mutations: Yes. Pure "normal" colour birds are still readily available.
- Availability: Bird dealers
- Temperament: Popular, attractive bird. Best as a single pair in a mixed collection but can be kept as a colony in a larger aviary. Generally a good breeder and may breed throughout the year. Generally poor breeding results occur when housed in a cage or cabinet.
- Cost (Victoria) Per Pair: Normal colour (approx.) $100
- Description Of Adults:
- Length: Approx. 120 mm (or about 5 inches)
- Colour ("normal" colour): Refer photo/s above if available.
- Weight: Approx. 20 gms (or 2/3 oz)
on "Finches - Australian" web page
and use in conjunction with details
outlined on this page.
Level Of Knowledge Required: Beginner / Intermediate / Advanced / Specialist Breeders Only.
Government Regulations & By-Laws: Refer to "Government Laws" page.
Housing Requirements: Click on "Housing birds" web page for general details on the housing of Australian Finches or read on for specific details for this finch. A fully roofed aviary is preferred. Best results are achieved in a planted aviary. Generally poor breeding results occur when housed in a cabinet / cage.
Can be housed as a colony in a large planted aviary. They can be included safely in a mixed species finch collection.
The Diamond Firetail prefers a large planted aviary but they can be bred in a Canary style breeder cage of about 900mm long x 400mm high x 400mm deep (36 x 16 x 16 inches). Only one breeding pair per cage.
In an aviary, some birds can become very territorial, especially around their nest area. Aggression between pairs can occur in a colony situation. One bird or pair can become dominant and cause stress to the other less dominant birds.
Diet / Feeding: Click on "Feeding birds" web page for general details on the nutrition of Australian Finches or read on for specific details for this finch.
The Diamond firetail finch requires a good quality finch mix, seeding grasses, some fruits (e.g. apple) and green leafy vegetables. Live food is essential, especially at breeding season. Mealworms, small cockroaches and small crickets are commonly used. Sprouted or soaked seed if available. Niger seed and hulled oats can be offered. Seeding grasses are an important part of the diet during the breeding season and usually give better, healthier young. Green leafy vegetables can include silverbeet, cos lettuce and endive.
Basic seed mix should include Canary seed, White French Millet, Japanese Millet, and Yellow and Red Panicum.
In the wild the Diamond firetail finch will forage for foods, insects and seeds on the ground. This habit continues in captivity.
Breeding: A basic overview only.
- Roosting nest: Yes
- Nesting months: Spring to Autumn. May breed throughout the year if conditions are suitable.
- Nesting receptacles: The Diamond firetail finch will build a nest in a shrub or dry brush such as tea tree. Will use half open nest boxes and other commercially available nest receptacles.
- Nest: Compared with most finches they build a large nest. The nest is made from grasses and other pliable materials and has an entrance tunnel. The nest is lined with feathers and soft fine grasses such as Swamp or November grasses.
- Who incubates the eggs: Hen / cock / both share.
Generally intolerant of nest inspections. Nests are usually reused, so adequate new nest material must be available for the parents to rebuild or reline the nest for the next clutch. Nests are usually in the upper half of the aviary. They may build a communal roosting nest in the non-breeding season. These nests are usually less sturdy than a breeding nest and do not have an entrance tunnel.
More details on finch nests and a selection of finch nest photos can be located on the "nests", "finch nests" and "finch nest photos" web pages. Click on "Up" then "nests" then "finch nests" and "finch nest photos" in the navigation bars.
Egg Colour: White. Clutch/s per year: 3. Eggs per nest: 4 - 7. Incubation: approx. 14 days. Fledge: approx. 21 - 23 days. Independent: approx. another 3 - 4 weeks. The young may return to the nest for about one week after fledging. The young are usually well feathered when they leave the nest.
Both parents will feed the young after they leave the nest. The hen may start to lay another clutch of eggs while the cock bird is still feeding the young. Usually safe to leave the young in the aviary with the parent birds.
The Diamond firetail finches tend to pair bond at an early age. Pair bonding is strong. Best breeding results are achieved by pairing up juvenile birds. Adults can be paired up with a new mate.
If breeding in a colony, the dominant pairs may breed but the less dominant pairs may produce fewer young or fail to breed. A single pair of Diamond firetail finches per aviary in a mixed species collection will eliminate this problem.
The Diamond firetail finch hen should be allowed time to fully mature before commencing breeding. Best results are achieved if the hen is 9 - 12 months old prior to starting breeding.
Generally a good breeder and may breed throughout the year. Restrict breeding pairs to no more than 3 clutches per breeding season. This is also applicable when breeding these birds in an indoor room. These birds may breed year round if conditions are suitable.
As with most insect eating finches, increased consumption of livefoods is a good indication that there are young in the nest. The young can usually be heard when they are about 3 days of age.
Artificial incubation, hand rearing or fostering will not be covered on this web site. It is too complex and diverse in nature to be attempted here. Refer to "Specific References" as listed below and the "General References" listings.
Refer "Avian Health Issues"
web page for information and references.
- Worming and parasite control and Quarantine
requirements of new birds or sick birds are considered to
require veterinary advice and therefore not covered on this web
site. Refer above "Avian Health Issues"
web page option.
- Avian medicine is advancing at a rapid pace. Keep
updating your knowledge and skills.
Refer to references listed on "Book
References" web page.
- A/A Vol 59 No. 12 Dec 2005 Page 277-281.
- A/A Vol 57 No 6 June 2003 Page 132-133.
- A/A Vol 54 No 2 Feb 2000 Page 30-35 (Inc photo)
- A/A Vol 48 No 11 Nov 1994 Page 267-269
- A/A Vol 42 No. 6 Jun 1988 Page
- A/A Vol 41 No. 7 Jul 1987 Page 173-175
- A/A Vol 36 No. 2 Feb 1982 Page 30-35
- A/A Vol 35 No. 1 Jan 1981 Page 14-27
- A/A Vol 34 No. 6 Jun 1980 Page 106-108
- A/A Vol 34 No. 1 Jan 1980 Page 12-19
- A/A Vol 33 No. 8 Aug 1979 Page 133-134
- A/A Vol 32 No. 7 Jul 1978 Page 104-107
- A/A Vol 27 No. 6 Jun 1973 Page 90-91
- A/A Vol 26 No. 9 Sept 1972 Page 149-150
- A/A Vol 25 No. 3 Mar 1971 Page 29-30.
- A/A Vol 22 No 4 Apr 1968 Page 67-70.
- A/A Vol 13 No 8 Aug 1959 Page 109-111, 122-124 (Inc colour photo)
- A/A Vol 13 No 5 May 1959 Page 74.
- A/A Vol 13 No 3 Mar 1959 Page 44-47.
- A/A Vol 11 No 7 Jul 1957 Page 102.
- A/A Vol 10 No 1 Jan 1956 Page 1-2.
- A/A Vol 8 No 2 Feb 1954 Page 18-19.
- A/A Vol 7 No 10 Oct 1953 Page 122-123.
- A/A Vol 5 No 1 Jan 1951 Page 11.
- A/A Vol 4 No 4 Apr 1950 Page 52.
- A/A Vol 3 No 12 Dec 1949 Page 131.
- A/A Vol 3 No 9 Sept 1949 Page 98 (Sexing Aust. finches).
- The Bulletin No 2, July 1942 Page
- Australian Birdkeeper
- ABK Vol 15 Issue 6. Dec-Jan 2003 Page 328-330.
- ABK Vol 13 Issue 4. Aug-Sept 2000 Page 204-207.
- ABK Vol 4 Issue 8. Apr-May 1991 Page 381-385
- ABK Vol 2 Issue 11. Oct-Nov 1989 Page 435-438
|
<urn:uuid:83232e3e-2d1c-4728-8c15-e3e2fcffcf1c>
|
CC-MAIN-2016-26
|
http://www.birdcare.com.au/diamond_firetail_finch.htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391766.5/warc/CC-MAIN-20160624154951-00111-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.835648 | 2,365 | 2.625 | 3 |
Expanding Robotics Education at Historically Black Colleges and Universities
NSF and ARTSI form a partnership to increase minority participation in science and engineering
The National Science Foundation (NSF) commemorated the upcoming birthday of Martin Luther King, Jr. with a kick-off for the new Advancing Robotics Technology for Societal Impact (ARTSI) Alliance. Funded by a three-year, $2 million grant from NSF, ARTSI will develop outreach and education programs to encourage African-American students at both the K-12 and college level to pursue degrees and careers in computer science and robotics.
The event earlier this week featured demonstrations by the SpelBots, Spelman College's celebrated robotics team, a presentation of a robot designed by a University of the District of Columbia student and a keynote speech by ARTSI director Andrew Williams, a computer science professor at Spelman and founder of the SpelBots.
ARTSI began as a collaboration between Williams and Carnegie Mellon University computer science professor David Touretzky, using the Tekkotsu robot programming framework developed in Touretzky's lab. It has grown into a unique community of predominantly African-American computer science and robotics faculty members from several major research universities and eight historically black colleges and universities (HBCUs) focused on promoting robotics and computer science education for African-American students.
"Traditionally African-Americans have been left behind in emerging high tech fields, such as robotics," Williams said. "So we decided to focus our ARTSI activities around socially conscious robotics that will inspire a new generation of black robotics engineers and computer scientists."
The need for this focus is clear. Although African-Americans make up almost 13% of the population, they hold less than five percent of jobs in the computer and information sciences fields, a sector that is projected to be one of the fastest growing in the next decade.
To increase the number of African-Americans who pursue degrees and careers in these fields, ARTSI will develop outreach programs that target young people of color both before and during their college years, and institute robotics courses that allow HBCU students to do hands-on robotics research in order to spark their interest in this exciting field. ARTSI will also provide professional development activities for HBCU computer science and robotics faculty.
"Some of these schools are getting their first research-quality robots," Touretzky said, and need to develop a basic robotics curriculum. Other HBCUs like Spelman already have a robust robotics program that continues to grow through ARTSI.
Working with HBCUs is important because of the vital role they play in creating African-American researchers and scholars. Though they comprise only three percent of U.S. colleges and universities, they award approximately 23% of bachelor's degrees earned by African-Americans. In the field of computer science, HBCUs award 35% of all degrees earned by African-Americans.
Yet building successful partnerships between major research universities and HBCUs requires new approaches, according to Jan Cuny, the program officer at NSF who oversees efforts to broaden minority participation in computer science. "Building meaningful collaborations between minority serving institutions and major research universities is difficult," Cuny said, "but this is a creative effort. It is just starting, but the faculty involved at all of the schools are terrific and the focus on robotics in service to society will, I believe, be very compelling to students."
In addition to Spelman, the participating institutions are Brown University, Carnegie Mellon University, Duke University, Florida A&M University, Georgia Institute of Technology, Hampton University, Morgan State University, Norfolk State University, University of Alabama, University of Arkansas-Pine Bluff, University of Pittsburgh, University of the District of Columbia, University of Washington and Winston-Salem State University.
In addition to NSF, ARTSI is also receiving support from Seagate Technologies, iRobot, Juxtopia, Boeing and Apple.
In his keynote speech, Williams linked the work of ARTSI and other efforts to increase the number of African-American in high-tech fields with the ongoing effort to achieve Martin Luther King, Jr.'s dream of a society that provides opportunities for all of its citizens based on their abilities and character and not their ethnic or social background.
"Dr. King was a role model for bringing diverse people together to improve our society in essential civil rights," Williams said. "Through the support of NSF and other industry partners, ARTSI is bringing together a diverse community of educators and researchers to increase African Americans' educational opportunities using robotics projects centered around improving healthcare, creative arts, and entrepreneurship."
Cuny and others believe ARTSI will have a measurable impact on the education of African-American students and will produce broader benefits for society as a whole. "I have very high expectations for this project," she said.
The National Science Foundation (NSF) is an independent federal agency that supports fundamental research and education across all fields of science and engineering. In fiscal year (FY) 2016, its budget is $7.5 billion. NSF funds reach all 50 states through grants to nearly 2,000 colleges, universities and other institutions. Each year, NSF receives more than 48,000 competitive proposals for funding and makes about 12,000 new funding awards. NSF also awards about $626 million in professional and service contracts yearly.
|
<urn:uuid:592b79e5-5ff7-4b87-869f-39f44402ab0d>
|
CC-MAIN-2016-26
|
http://www.nsf.gov/news/news_summ.jsp?org=NSF&cntn_id=111020&preview=false
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398628.62/warc/CC-MAIN-20160624154958-00153-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.95048 | 1,115 | 2.65625 | 3 |
Imagine, over 100 years ago you are an artist in France and you are asked to come up with an idea of what the world will be like in 100 years – what would you draw?
Well, images of air battles and school classrooms, of hairdressing salons and flying postmen have been doing the internet rounds for a few months now, but if you haven't seen them they are definitely worth a look. Public Domain Review probably have the best little write up and presentation of images like the one above; they tell us:
France in the Year 2000 (XXI century) – a series of futuristic pictures by Jean-Marc Côté and other artists issued in France in 1899, 1900, 1901 and 1910. Originally in the form of paper cards enclosed in cigarette/cigar boxes and, later, as postcards, the images depicted the world as it was imagined to be like in the year 2000. There are at least 87 cards known that were authored by various French artists, the first series being produced for the 1900 World Exhibition in Paris.
I love content like this as a GeekDad. It provides so many opportunities to engage with my children. I have used these images to get my children thinking about the future, about how they would draw images of the year 2100 – and considering that they could be alive then! We've talked about what in the images was right, and what was wrong, and how difficult it is to predict so far into the future. So, thank you internet, please keep providing awesome creative commons content like these pieces by French artists for the World Exhibition in Paris over 100 years ago. They make being a GeekDad even more awesome.
If you know any other great images that could help children understand the future, and the history of the future – please share them in the comments.
(via Public Domain Review)
Please note: the author of this blog acknowledges some readers may find the use of the word “awesome” in this post excessive. Your concerns have been noted.
|
<urn:uuid:540dc9ff-1ba3-49bd-aab4-3c364c6fe7a8>
|
CC-MAIN-2016-26
|
http://archive.wired.com/geekdad/tag/21st-century/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395620.56/warc/CC-MAIN-20160624154955-00069-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.970645 | 418 | 2.734375 | 3 |
Gilleland Family Cemetery:
A Preservation Challenge
Recently, a New Jersey resident came to Catawba County to visit the last tangible memory of his ancestors. In a neglected cemetery in Sherrills Ford, he discovered the stones memorializing his long-dead family members. Passing behind an electric fence, he was saddened to learn that cows had trampled the grave stones, all of which were lying flat upon the ground. Many had been broken--all had suffered indignities that only cows can inflict. The cemetery had been the resting place for many pioneer members of the Gilleland family who arrived in Catawba County before the Revolutionary War.
The cemetery is currently inaccessible, with no public access, contrary to NC General Statutes (see below). The landowner has expressed no inclination to protect the gravesite, in spite of offers by family members to enclose the cemetery at their expense. Such disregard for our precious past and for the memories of the long-dead is a challenge constantly faced by historians who lack the resources to protect these historic treasures.
The stones were read in 1986 and their inscriptions recorded. Since then, many have disappeared and others have been destroyed. The above cemetery survey was performed on 23 March 1986 by Bray Sherrill, Samuel F. Sherrill and Jan Brown.
The ultimate indignity
This is the cemetery as it currently exists, represented by photos taken by Jeffrey Thomas, descendant of the Gilleland family.
Additional pictures of the gravestones in this threatened cemetery may be found here.
As a footnote, Mr. Thomas has attempted to gain the attention of the public by contacting local officials and the newspaper. Thus far, those responsible for enforcing state and local ordinances have been strangely unreceptive. The Hickory Daily Record has been exceptionally helpful in publicizing the plight of this threatened cemetery. A front page report and the follow-up article by the NC State News follows:
Cows overrun cemetery
By KIM GILLILAND
CATAWBA - Jeffery Thomas wants to preserve the graves of his ancestors. They rest in a burial site off the beaten path and inside what is now a cow pasture.
The landowner wants to use his land as he sees fit. Hes raising 200 head of beef cattle and does not want them disturbed.
Can the two sides agree about the property?
It may take the courts to decide this dispute between the graves and the grazers.
For 15 years, Thomas, a registered nurse from New Jersey, has been tracing his genealogy. His search ended in a remote area of southeastern Catawba County.
Thomas visited there recently on his way back from a trip to Georgia. On Aug. 4, with directions from the county librarys genealogist in hand, Thomas drove the twisty rural roads to the plot where his fifth great-grandfather, Thomas Gilleland and his wife, Mary, lie.
What he saw dismayed him. Just 200 feet off Hopewell Church Road is a stand of trees. Cows stood grazing, hunkered in the shade trying to beat the midday heat. The headstones of Thomas beloved ancestry, dating back to 1824, lay on the ground. As Thomas surveyed the scene, his heart sank.
"I actually stood on that cemetery and cried," he said. "I find it frightening that the landowner allowed his cows to take over the cemetery. This is my family history."
Landowner Gary Dellinger bought the property about 15 years ago. His cows freely roam the land.
Dellinger, who owns a pre-cast and rental company in Denver, discovered the cemetery as the land was being cleared. He says he wanted to leave the cemetery intact.
"If I didnt want the graveyard there, I would have pushed it down," Dellinger said.
Thomas thinks the cemetery belongs to his family, not Dellinger. Hes willing to spend the money to clean it up and put a fence around it.
"Its our main objective," he said. "That cemetery is a part of Catawba County history."
Ward Sutton owns a cemetery relocation business in Rocky Mount. He deals with cases like this every day.
Sutton says Dellinger should allow Thomas access to the cemetery.
"He (Dellinger) didnt buy the cemetery, he bought the land up to that," Sutton said. "Those people have rights to that cemetery."
N.C. law seems to agree. General Statute 65-74 requires that "A descendant, with the consent of the public or private landowner, may enter the property of another to discover, restore, maintain or visit a private grave or abandoned cemetery."
Statute 65-75 states that a descendant can petition the clerk of court to allow access to the cemetery.
Dellinger has no problem with allowing a fence around the cemetery. He does have a problem with access to the cemetery. He says he can't leave the gate unlocked all the time.
Dellinger wants to work with Thomas. He just hasn't heard from him.
"If he really cares about his ancestors, and was truly sincere about keeping up the cemetery, he would call me," Dellinger said. "If I have a problem, I go to the source, I don't go to a newspaper."
Thomas says he has tried numerous times to reach Dellinger, but was unsuccessful.
"I honestly didnt know how to reach him," Thomas said. "Im more than willing to work with him, Im not trying to cause him any problem at all.
"I dont want this to go to court. At the same time, hes got an electric fence. What am I supposed to do, jump the cattle gate?"
[email protected] | 322-4510 x5406 or 304-6913
From the Catawba County area, voices speak in support of preservation of the historic cemetery.
Regarding the story in the Aug. 27 paper, "Cows overrun cemetery," I can certainly understand Mr. Thomas's anger and frustration.
Like him, my roots are also in the Catawba-Lincoln-Gaston County area, and I have done my share of searching through cemeteries that have been and still are subjected to abuse and neglect from man and the elements. However, my husband and I have been most fortunate in that we have made it our habit to ask the landowner's permission to visit a particular cemetery and we have always been treated with the utmost courtesy on those occasions.
Mr. Dellinger is quoted as having said, "If I didn't want the graveyard there, I would have pushed it down." I hope he was misquoted and that he most certainly is not of the opinion that he has a right to destroy an irreplaceable piece of history simply because it happens to lie within the confines of his property.
After all, one has only to access the genealogical resources in local libraries to confirm that the Dellingers were instrumental in settling this portion of the Piedmont and that the remains of these dear people, as well as those of my own Bollinger and Hager ancestors, are scattered throughout the region.
I hardly think Mr. Dellinger would want the graves of his kin desecrated by having someone "push it down."
Quite often, the fragile old tombstones contained in these abandoned cemeteries are the only written testimony to a life once lived. The history contained in their inscriptions might be our only link to the past. These places need to be protected, and I do hope that Mr. Thomas and Mr. Dellinger can come to a mutual understanding on this matter.
Today, we spend countless thousands of dollars in our efforts to preserve our history by restoring landmarks and older homes in commemoration of our forefathers, yet we deny many of these simple folks from bygone years the dignity of a final resting place free from the encroachment of man. What is wrong with this picture?
Judith McSwain Rock Hill, S.C.
[Hickory Daily Record, "Your Voice" (editorial page), September 5, 2006]
UPDATE: This cemetery has been saved from destruction. A fence has been erected around it and the broken gravestones will be repaired. A dedication will take place during the summer of 2007. Details to follow.
For clarification of laws pertaining to cemeteries in North Carolina, please visit the North Carolina Department of Cultural Resources web page.
PRESERVING CEMETERY DATA
THE NORTH CAROLINA CEMETERY SURVEY
--Derick S. Hartshorn
Member Assn. for Gravestone Studies
These pages are copyrighted in the name of the NCGenWeb Project and/or the submitters and webmaster of this project.
They may not be used, housed or copied by any for-profit enterprise. Fair Use Doctrine allows for exerpting limited portions.
Derick S. Hartshorn - ©2008
Using an Ariane 1, Giotto was launched from Kourou, French Guiana, on 2 July 1985 at 11:23 UT. The Giotto mission was ESA's first deep-space mission. The project was supposed to be a joint U.S.-European mission, but due to financial problems, NASA pulled out. ESA decided to go on alone, since it would be 75 years until the next opportunity. Giotto was named after the famous Italian artist of the same name, who lived 1266-1337.
After a successful launch, Giotto waited, and on March 13, 1986, at a distance of 0.89 AU from the Sun, Halley's Comet went by, and Giotto got some great pictures. Giotto was never meant to survive the meeting with the comet, due to the massive amount of dust and particles. Although some of the instruments on board Giotto were damaged, Giotto itself was still operational. After the mission was successfully completed, Giotto was remotely switched off.
In 1992, ESA came up with a new mission for Giotto: Comet Grigg-Skjellerup. The problem was that ESA did not know if Giotto still worked after being exposed to the harsh environment of space. Neither did they know which direction the antenna was facing. The only thing they could do was to send the wake-up signal and hope that the low-range omni-directional antenna received it. It took two hours after the wake-up signal had been sent before signals from Giotto were received at the NASA Deep Space Network ground station near Madrid. After a week Giotto was fully operational again (well, apart from the damaged instruments). To be able to get close to Grigg-Skjellerup, the orbit of Giotto had to be altered. This was done by using the Earth's gravitational field: Giotto flew 22,730 km above Earth and successfully got into the right orbit.*
*Giotto was the first probe to use Earth's gravity to alter its orbit
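As a rough illustration of how an unpowered flyby like this bends a trajectory, the sketch below applies the standard patched-conic formulas for a hyperbolic encounter. The 22,730 km flyby altitude comes from the text above; the hyperbolic excess speed of 5 km/s is an assumed, illustrative value, not a published Giotto figure.

```python
import math

MU_EARTH = 3.986e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6371.0e3    # mean Earth radius, m

def flyby_turn(v_inf_kms, altitude_km):
    """Return the turn angle (deg) and delta-v (km/s) of an unpowered Earth flyby."""
    v_inf = v_inf_kms * 1e3                  # hyperbolic excess speed, m/s
    r_p = R_EARTH + altitude_km * 1e3        # periapsis radius, m
    ecc = 1.0 + r_p * v_inf**2 / MU_EARTH    # eccentricity of the flyby hyperbola
    turn = 2.0 * math.asin(1.0 / ecc)        # angle the velocity vector is bent through
    dv = 2.0 * v_inf * math.sin(turn / 2.0)  # magnitude of the "free" velocity change
    return math.degrees(turn), dv / 1e3

# Flyby altitude from the text; the 5 km/s excess speed is assumed for illustration.
turn_deg, dv_kms = flyby_turn(v_inf_kms=5.0, altitude_km=22730)
print(f"turn angle ~{turn_deg:.1f} deg, delta-v ~{dv_kms:.2f} km/s")
```

With these assumed numbers the flyby bends the trajectory by roughly 40 degrees and changes the spacecraft's velocity by a few kilometers per second without burning any propellant, which is the whole point of a gravity assist.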
On the 10th of July, 1992, it was time for Giotto to prove that it was still alive and kicking: Grigg-Skjellerup was close. Grigg-Skjellerup flew past Giotto at 17,000 km, but unfortunately Giotto's camera missed by a mere 100 to 200 km, due to differences in the fly-by conditions between the Halley and Grigg-Skjellerup encounters.
After some minor orbit adjustments, Giotto was once again switched off. There are still no plans to revive it, but there is reason to believe that Giotto is still operational and can be re-activated.
Giotto's mission objectives were to:
Obtain the first close-up images of a comet nucleus (from a distance of less than 500 km).
Determine the elemental and isotopic composition of ices in the cometary coma.
Study the physical and chemical processes that occur in the comet's atmosphere.
Determine the elemental and isotopic composition of cometary dust particles.
Measure the comet's total gas-production rate.
Measure the amount of dust around the comet and its size/mass distribution.
Determine the relative amounts of dust and gas in the near-comet environment.
Investigate the interaction between the comet and the electrically charged particles of the solar wind.
Giotto's achievements include:
The first probe to pass by two comets.
First probe to photograph a comet nucleus.
Europe's first deep space mission.
First deep space mission to change orbit by returning to Earth for a gravity assist.
Discovered the size and shape of Comet Halley's nucleus.
Made the closest comet flyby to date by any spacecraft (200 km from Comet Grigg-Skjellerup).
Discovered a black crust and bright jets of gas on the nucleus of Comet Halley.
Measured the size, composition and velocity of dust particles near two comets.
Measured the composition of gas produced by two comets.
Discovered unusual magnetic waves near Comet Grigg-Skjellerup.
Weight: 960 kg (including fuel)
Launch Vehicle: Ariane 1
Equipment on board:
A narrow-angle, multicolour camera to obtain pictures of the nucleus.
Three mass spectrometers to measure gas and dust composition.
A dust impact detector to measure the mass of dust particles striking the shield.
Two plasma experiments to study the solar wind and charged particles.
An energetic particles analyser to study electrons, protons and alpha-particles.
A magnetometer to study changes in the magnetic field.
An optical probe to study brightness of the coma.
A radio science experiment to investigate the electron environment was also carried out by comparing signals sent at different frequencies from the spacecraft.
- NASA web-site
- ESA web-site
- My local library
“Every textbook should have a soundtrack” - Teaching History with Music
3 December 2011 - 10:02am
After reading Alex Zukas' "Different Drummers: Using Music to Teach History," a 1996 article about incorporating music in the history classroom, I was inspired to see if anyone had created resources to teach history with music in the internet era. I was lucky enough to discover the historyteacher's YouTube Channel. Created by Amy Burvall and Herb Mahelona, two history teachers from Hawaii, "History for Music Lovers" features parodies of popular songs detailing historical events and figures. The focus is on Ancient Civilizations and Early Modern Europe. My favourite so far is "Black Death," to the tune of "Hollaback Girl" by Gwen Stefani:
It begins with the lyrics:
Uh huh, it’s the plague/
Gonna kill you in a few days/
A pandemic so so severe/
The Black Death caused such horror and fear/
And there ain’t no cure for that, girl/
You’ll be dead in no time flat, girl
Recently, Burvall and Mahelona gave a TEDx Talk in Honolulu entitled “What I learned from Napoleon and MTV”. During the talk, they describe the origins of their project and what keeps them going. In order to make their lessons more relevant and enticing to their students, they decided that they needed to begin to use the internet. So, in 2008, they shot their first video, about Henry VIII, to the tune of Abba’s “Money, Money, Money”.
Today, they are guided by three Cs: create, collaborate, and celebrate. They hope that their videos will inspire students to do all three: to create content to enrich the learning experience; to collaborate with those who have different skills and knowledge; and to celebrate the tradition of how humans are drawn to both music and storytelling. They are also guided by their belief that "every textbook should have a soundtrack," which they discussed in their TEDx Talk.
In an interview earlier this year on arttrav.com, Burvall describes how she incorporates these videos into her classroom. She has used them as both ‘hooks’ to begin a unit, and as a tool to help review before a test. Also, she has had students create their own videos as an option for a final assignment.
There are many ways to incorporate music in the history classroom, from using historical music to better understand a time period, to creating learning aids with current music. Different ways serve different purposes, but they can all work to engage students.
Do you incorporate music in your classroom? If so, how?
Forty percent of Alzheimer's patients don't eat enough...patients eating from red plates consumed 25 percent more food than those eating from white plates.
By Max Wallack
Alzheimer's Reading Room
Most of the time, caregivers for Alzheimer’s patients at home are surprised by how often Alzheimer’s patients are hungry. These patients often forget when they last ate and assume it’s been a very long time -- they just can't remember.
Many times, they are ready to eat another meal in just minutes after their last meal.
A BU Today article offers some interesting explanations for this insufficient food intake.
Apparently, vision plays a role in Alzheimer's patients' reluctance to eat. This phenomenon is explained by Boston University bio-psychologist Alice Cronin-Golomb and her research partners:
“Nursing home staff often complain that Alzheimer’s patients do not finish the food on their plates even when staff encourages them to do so. Forty percent of individuals with severe Alzheimer’s lose an unhealthy amount of weight. Previous explanations for this phenomenon included depression, inability to concentrate on more than one food at a time, and inability to eat unassisted."
According to the BU Today article, Cronin-Golomb and her colleagues took a different approach.
"They believed this behavior might be explained by the visual-cognitive deficiencies caused by Alzheimer’s.As a result of these findings, some nursing homes have switched to using only red plates, and one company has marketed special red plates for this purpose.
Patients with the disease cannot process visual data—like contrast and depth perception—as well as most other seniors. So Cronin-Golomb’s team, led by then-BU postdoctoral fellow and current Senior Lecturer in Psychology Tracy Dunne (GRS’92, ’99), tested advanced Alzheimer’s patients’ level of food intake with standard white plates and with bright-red ones.
What they found was astonishing -- patients eating from red plates consumed 25 percent more food than those eating from white plates."
Sometimes, a very simple solution can make a big difference!
Original content +Bob DeMarco , the Alzheimer's Reading Room
How and Why Market Prices Change, Changes in Supply
The supply of most products is also affected by a number of factors. Most important is the cost of producing products. If the price of natural resources, labor, capital, or entrepreneurship rises, sellers will make less profit and will not be as motivated to produce as many units as they were before the cost of production increased. On the other hand, when production costs fall, the amount producers are willing and able to sell increases.
Technological change also affects supply. A new invention or discovery can allow producers to make something that could not be made before. It could also mean that producers can make more of a product using the same or fewer inputs. The most dramatic example of technological change in the U.S. economy over the past few decades has been in the computer industry. In the 1990s, small computers that people carry to and from work each day were more powerful and many times less expensive than computers that filled entire rooms just 20 to 30 years earlier.
Opportunities to make profits by producing different goods and services also affect the supply of any individual product. Because many producers are willing to move their resources to completely different markets, profits in one part of the economy can affect the supply of almost any other product. For example, if someone running a barbershop decided to sign a contract to provide and operate the machines that clean runways at a large airport, this would decrease the supply of haircutting services and increase the supply of runway sweeping services.
When suppliers believe the price of the good or service they provide is going to rise in the future, they often wait to sell their product, reducing the current supply of the product. On the other hand, if they believe that the price is going to fall in the future, they try to sell more today, increasing the current supply. We see this behavior by large and small sellers. Examples include individuals who are thinking about selling a house or car, corn and wheat farmers deciding whether to sell or store their crops, and corporations selling manufactured products or reserves of natural resources.
Finally, the number of sellers in a market can also affect the level of supply. Generally, markets with a larger number of sellers are more competitive and have a greater supply of the product to be sold than markets with fewer sellers. But in some cases, the technology of producing a product makes it more efficient to produce large quantities at just a few production sites, or perhaps even at just one. For example, it would not make sense to have two or more water and sewage companies running pipes to every house and business in a city. And automobiles can be produced at a much lower cost in large plants than in small ones, because large plants can take greater advantage of assembly-line production methods.
All these different factors can lead to changes in what consumers demand and what producers supply. As a result, on any given day prices for some things will be rising and those for others will be falling. This creates opportunities for some individuals and firms, and problems for others. For example, firms producing goods for which the demand and the price are falling may have to lay off workers or even go out of business. But for the economy as a whole, allowing prices to rise and fall quickly in response to changes in any of the market forces that affect supply and demand offers important advantages. It provides an extremely flexible and decentralized system for getting goods and services produced and delivered to households while responding to a vast number of unpredictable changes.
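To make this mechanism concrete, here is a minimal sketch that solves a toy market with straight-line demand and supply curves, then shifts the supply curve to mimic a rise in production costs. The curve coefficients are invented for illustration and are not drawn from the text.

```python
def equilibrium(a, b, c, d):
    """Equilibrium price and quantity for demand Qd = a - b*p and supply Qs = -c + d*p."""
    price = (a + c) / (b + d)   # where quantity demanded equals quantity supplied
    quantity = a - b * price
    return price, quantity

# Hypothetical market: demand Qd = 120 - 2p, supply Qs = -30 + 3p.
p0, q0 = equilibrium(a=120, b=2, c=30, d=3)

# A rise in production costs shifts supply left: sellers now offer any given
# quantity only at a higher price (Qs = -45 + 3p).
p1, q1 = equilibrium(a=120, b=2, c=45, d=3)

print(f"before cost increase: price {p0:.2f}, quantity {q0:.1f}")
print(f"after  cost increase: price {p1:.2f}, quantity {q1:.1f}")
```

With these made-up numbers, the cost increase raises the equilibrium price from 30 to 33 and cuts the quantity traded from 60 to 54, which is exactly the direction of change described above when supply falls.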
For the last 50 years, meteorologists have drawn weather maps of upper air conditions using constant pressure surfaces. These charts are prepared for several mandatory pressure levels twice daily (0000 Z and 1200 Z) from the temperature, humidity and wind data provided by the operational radiosonde network, supplemented with data from aircraft reports and satellite-derived wind data in data sparse regions.
Meteorologists use these constant pressure charts rather than constant altitude charts for several reasons. Since most aircraft of the time used pressure altimeters, most "constant altitude" flights were actually flown on constant pressure surfaces. Furthermore, the radiosonde data (from which the charts are prepared) are reported in terms of pressure. Finally, use of pressure as the vertical coordinate simplifies many of the thermodynamic equations and computations.
In this section, upper air charts will be studied at three separate levels of the atmosphere - one in the lower troposphere at an altitude of approximately 5000 ft (1.5 km), a second in the mid troposphere at approximately 18,000 ft (5.5 km) and the third in the upper troposphere, near the tropopause, at approximately 30,000 ft (10 km).
Each level furnishes a slightly different perspective of the atmosphere; hence, the meteorologist looks for certain features in each. The atmospheric variables typically plotted on these isobaric maps include (1) the height of the pressure surface; (2) the air temperature; (3) the wind speed and direction; and (4) when applicable, the dewpoint, an indicator of atmospheric humidity. Essentially all these charts can be produced with analyses that include height contours (lines connecting all points on the surface having the same altitude) and isotherms (lines of equal temperature). Some charts, primarily the 300 mb chart, may have "isotachs", which are lines of equal wind speed. An optional background discussion of the salient features of an isobaric chart appears below.
The 850 mb chart, representing weather conditions in the lower troposphere, is at a level that is above approximately 15 percent of the atmosphere in terms of mass. At an altitude of approximately 1500 meters (5000 feet), this level is above most of the influences of surface friction in many sections of the country. Unfortunately, the 850 mb surface intersects and goes below the terrain in the Rocky Mountains. For example, the "Mile High City" of Denver, CO usually has a surface pressure - a measured value not corrected to sea level - of approximately 830 mb, which places it at a higher altitude than the 850 mb surface. Meteorologists often look at the analyzed temperature field of this level, because over the non-mountainous regions, the diurnal temperature cycle is much less than at the surface. They can frequently tell correctly that precipitation falling in regions with an 850 mb temperature of 0 degrees Celsius will probably fall as snow, while rain would more than likely fall at warmer temperatures.
The 500 mb chart represents weather conditions in the mid-troposphere, at a level below which approximately half the mass of the atmosphere lies. This level is at an altitude of approximately 5,500 meters (18,000 ft). This level is often used to represent upper level flow conditions because the level is well above the effects of topography and friction and the level is below the region in the upper troposphere where the air flow may experience strong accelerations and decelerations when in the vicinity of the upper jet streams. Since many weather systems tend to follow the wind flow at this level, this level is often considered to symbolize the steering level of these systems.
The 300 mb chart is in the vicinity of the tropopause, at the top of the troposphere. Only 30 percent of the mass of the atmosphere lies above this level. The altitude of the 300 mb surface is near 9000 meters (30,000 ft) - at a level where many long-distance commercial jet aircraft fly. This level also corresponds to the level of the upper tropospheric jet stream, a region of very fast winds that move across the country. Inspection of the isotach patterns at these levels not only reveals the location of the jet streams, but also aids the meteorologist in locating the regions of largest acceleration, deceleration and wind shear (rapid changes in wind speed and/or direction); these regions contribute to the upper level horizontal divergence and convergence patterns that influence surface weather systems.
The constant pressure charts differ slightly from the constant altitude charts, such as the surface analysis, which display weather information at the same geometric altitude. A constant pressure surface (or "isobaric surface") can be visualized as a reasonably horizontal, but undulating, three- dimensional surface in the atmosphere, where all points on the surface have the same reported atmospheric pressure. The altitude of the isobaric surface above sea level depends upon the density, and hence the temperature, of the intervening air column. In regions where the air in that column is cold and dense, the altitude of that isobaric surface will stand lower than over a region where the air is warmer and less dense.
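The relationship described above, in which a cold, dense air column lowers the altitude of a pressure surface aloft, can be made concrete with the hypsometric equation. The sketch below compares the 1000-500 mb thickness of a cold and a warm column; the mean virtual temperatures are assumed, illustrative values rather than observed soundings.

```python
import math

R_DRY = 287.0    # specific gas constant for dry air, J/(kg K)
G0 = 9.80665     # standard gravity, m/s^2

def thickness(p_bottom_hpa, p_top_hpa, mean_virtual_temp_k):
    """Hypsometric equation: depth (m) of the layer between two pressure levels."""
    return (R_DRY * mean_virtual_temp_k / G0) * math.log(p_bottom_hpa / p_top_hpa)

# Illustrative mean virtual temperatures for a cold and a warm air column.
for label, tv in [("cold polar air", 250.0), ("warm tropical air", 280.0)]:
    z = thickness(1000.0, 500.0, tv)
    print(f"{label}: 1000-500 mb thickness ~{z:,.0f} m")
```

Under these assumptions the 500 mb surface sits roughly 600 meters lower over the cold column than over the warm one, which is why height troughs line up with cold air intrusions and ridges with warm air.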
Since isobaric surfaces are three dimensional surfaces, "height contours" (or simply, "contours") drawn upon an isobaric chart represent the topography of that pressure surface in identical fashion as isopleths of the same name drawn by cartographers upon topographic maps to depict the terrain. Contours separate regions of high valued height for a given region from lower altitude regions. Because of the contour patterns, the higher altitude regions representing poleward intrusions of warm air, are identified as "height ridges" or simply, "ridges". On Northern Hemisphere upper air charts, these ridges can be identified as regions where the height contours deviate far to the north. Strong ridges are usually associated with warm and dry surface weather. On the other hand, the lower altitude portions of the pressure surface are "height troughs", or "troughs", with equatorward intrusions of cold polar air. Troughs can be identified on a upper air chart as regions where height contours are deflected far to the south. Stormy weather and cold temperatures at the surface are often found under upper level troughs.
The isotherms and the resultant analyzed temperature field on many of the upper air charts often supports the above relationships. Typically, the best agreement occurs in the lower to mid troposphere. Some displacement of the isotherms away from the ridges and troughs may occur especially in the upper troposphere.
On the upper tropospheric charts isotachs are often drawn to identify the jet stream. Typically, a region of winds is considered to be part of the jet if the winds are at least 70 knots (where a knot is the unit used for upper air charts, which is equivalent to one nautical mile per hour). These regions, as highlighted by the isotachs, may be elongated and are frequently found near the southern excursion of a trough.
Last revision 10 June 1996. © Copyright 1996, Edward J. Hopkins, Ph.D. [email protected]
Here's the puzzle:
The bird we eat on Thanksgiving is an exclusively North American animal. It is found in the wild on no other continent but ours. It evolved here. So why is this American bird named for a Eurasian country?
To find out, I went back to an interview I did almost 30 years ago on NPR's Morning Edition with Mario Pei, a Columbia University professor of Romance languages, who died shortly after our conversation.
I return to his answer because it is still the best one available.
Professor Pei had two theories.
First, in the 1500s when the American bird first arrived in Great Britain, it was shipped in by merchants in the East, mostly from Constantinople (who'd brought the bird over from America).
Since it wholesaled out of Turkey, the British referred to it as a "Turkey coq." In fact, the British weren't particularly precise about products arriving from the East. Persian carpets were called "Turkey rugs." Indian flour was called "Turkey flour." Hungarian carpet bags were called "Turkey bags."
If a product came to London from the far side of the Danube, Londoners labeled it "Turkey" and that's what happened to the American bird. Thus, an American bird got the name Turkey-coq, which was then shortened to "Turkey."
Or…Theory No. 2 (and maybe both theories are correct): Long before Christopher Columbus went to America, Europeans already had a wild fowl they liked to eat. It came from Guinea, in Western Africa. It was a guinea fowl, imported to Europe by, yes, Turkish merchants. It was eaten in London. So it got the nickname Turkey coq, because it came from Constantinople.
When British settlers got off the Mayflower in Massachusetts Bay Colony and saw their first American woodland fowl, even though it is larger than the African Guinea fowl, they decided to call it by the name they already used for the African bird. Wild forest birds like that were called "turkeys" at home.
Why not use the same name in Plymouth? And Boston? And Rhode Island? So a name attached to an African bird got reattached to an American one.
The point is for 500 years now, this proud (if not exactly brilliant) American animal has never had a truly American name.
And just to keep this ball rolling…all over the world, people now can eat American Turkeys, but they don't call them Turkeys.
Across Arabia, they call our bird "diiq Hindi," or the "Indian rooster."
In Russia, it's "Indjushka," bird of India.
In Poland, "Inyczka"— again "bird from India."
And what, we wondered, do the Turks call our turkey?
Well, they call it "Hindi," again, short for India.
So in 1492, because Columbus wanted to be in the "Indies," our North American bird got robbed of its American-ness, which is why tonight, when you look down at your turkey, don't call it "sahib."
Call it "dude."
Learn to play an instrument
Editor's Note: This article originally appeared in the December 2008 issue of Macworld, prior to the introduction of GarageBand '09.
If you want to record music, you must know how to play it. Learning to play an instrument takes practice and a good instructor, whether you’ve never sat down at a piano bench or whether you want to add another instrument to your musical repertoire.
Practicing is up to you; but with your Mac, some software, and access to the Internet, you can learn to play (or improve upon) an instrument as well as learn something about what makes music work.
You may have the moves and clothes, but true guitar heroes must know the basics of getting around their instrument. Several resources can help you on your way.
Beginner Guitar Lessons For wannabe guitar players, iPlayMusic's $40 Beginner Guitar Lessons is a good start. The boxed version of the software includes a DVD with more than four hours of video lessons demonstrating chord construction, strumming techniques, and drills. It offers movies in a split-screen presentation so you can view the instructor and each of his hands. And you can slow down or speed up the movie without changing the audio's pitch. Beginner Guitar Lessons also includes an 80-page PDF guide that walks you through the topography of the guitar, shows some basic tablature, offers tips for practicing, and reinforces some material presented in the videos.
Additionally, the lessons include 26 songs that you can strum along with—either accompanied by just the rhythm guitar part or by the song fleshed out with the rest of the instruments and vocals. These songs display the instructor's left and right hands, and highlighted chords and lyrics scroll from right to left beneath the video. You can create iPod-compatible versions of the video and send them to iTunes by clicking on an Export button.
If you click on the Create button when one of these songs is selected in the interface, GarageBand opens with each part laid out as a real instrument (digital audio) track. At this point, you can play along or record your own parts using GarageBand’s built-in recording and editing tools.
The program is extensible. Launch it and you see a Download Store button. Click on it and you have access to additional lessons and songs that you can download and play with the iPlayMusic player. For example, you can download the Electric Pack, a collection of intermediate electric guitar exercises, or the Folk Song Pack, each for $10. Individual songs cost 99 cents. You can try iPlayMusic for free—download it from iPlayMusic’s Web site and then download the Free Basics Videos collection via the Download Store interface.
Guitar Method Another software package, eMedia Music’s $60 Guitar Method 4 is a little old-fashioned in its presentation, but it has the elements necessary to help you get started with the guitar. Those elements include split-screen QuickTime movies of the instructor, text explanations, tab notation, audio files associated with a particular part of the lesson, and a virtual fretboard so you can see which strings to press.
Freeguitarvideos.com The Freeguitarvideos Web site provides downloadable instructional videos in QuickTime format as well. While not as slick as those produced by iPlayMusic, the lessons are nicely produced, feature engaging instructors, and include a split-screen view that lets you see what the instructor’s hands are doing. While some lessons are indeed free, many of the better ones cost between $5 for shorter lessons and $10 to $15 for hour-plus lessons (the site also offers similarly priced lessons for other instruments—bass, mandolin, and banjo).
For Children If you’re looking to introduce your kids to playing music, you have a few options. iPlayMusic’s $30 Play Music Together software is a DVD collection of 36 videos that teaches children just enough guitar (tuning, strumming, and five chords) so they can play through the included kids songs. Like the company’s Beginner Guitar package, this one lets you export songs to GarageBand and convert the videos for iPod playback. Some lessons also include a Muppet-like character named Capo who encourages kids to sing along with the words that scroll across the bottom of the video. Included in the package is a separate video DVD for watching the lessons and songs on your TV.
And then there’s Little Kids Rock, which offers the free Guitar Lessons, a series of 20 free guitar lessons targeted at kids on iTunes. In addition to the low-res videos, you can download a PDF file for each lesson (or download all the lessons as a single PDF). Little Kids Rock also has Drum Lessons. The group is trying to keep kids interested in music as schools cut their music programs, and seeks donations on its Web site.
Although it may not seem like it, guitar players don’t rule the musical world. If you’re an aspiring keyboard player, check out the Piano Lessons Online video podcast on iTunes. Presented in both high- and low-res versions, these are snippets from David Sprunger’s Playpianotoday.com, a Web site that offers a series of fee-based piano courses. The host Web site is pretty heavy-handed, making you sit through a long ad and then demanding an e-mail address so that you can gain access to the free material, but the guy can clearly play.
On the software side, eMedia Music also has a package for keyboard players—the $60 Piano and Keyboard Method 2. Like the company’s Guitar Method, the lessons are solid but the presentation is on the quaint side. You’ll learn the names of the notes, scales, chords, and fingerings as well as the basics of notation and rhythm. In addition to text, screens often feature audio and video snippets as well as the occasional MIDI track that, by default, uses QuickTime’s synthesizer sounds.
Beyond the Basics
For those who already have a handle on playing their instruments, iVideosongs offers downloadable instructional videos presented in HD, and largely built around learning a particular tune or technique. In some cases, a video's instructor is the musician who played on the original track. You'll find guitar videos from such players as Jeff Carlisi (.38 Special), John Oates (Hall & Oates), and Alex Lifeson (Rush). Chuck Leavell, of Allman Brothers and Rolling Stones fame, shows you the piano part to the Allmans' "Jessica" and is featured in boogie-woogie and blues piano videos. Famed session drummer Russ Kunkel can also be found on iVideosongs.
Some of these videos are more helpful than others. The introductory titles (which you can find for free on iTunes)—Beginning Guitar 101, Blues Concepts, Acoustic Guitar Techniques, Warm-Ups, Lead Guitar Concepts, and Left Hand Techniques—are strictly instructional. On some of the pay titles there’s a fair bit of storytelling from the artist in addition to some not-very-detailed instruction. Fortunately, you can preview sections of each title before purchasing them for $10 on average.
Music is about more than plucking, strumming, hammering, blowing, and bowing. It’s also about understanding the elements that make up music—theory, harmony, and counterpoint.
Ars Nova’s Practica Musica 5 ($100 for the download version with digital textbook, $125 for the CD-ROM standard edition with printed textbook) has been around for years and it remains the Mac’s most comprehensive music training software. The program features interactive activities that help you learn to read music, understand intervals and chord construction, and train your ear to recognize notes, chords, and rhythm. You can interact with the program with your Mac’s keyboard or a MIDI keyboard. The textbook, written by the program’s author, Jeffrey Evans, provides a solid introduction to music theory.
Although Sibelius hasn’t updated the Mac version of its $119 ear-training software, Auralia 2 in years, the program is still compatible with the current version of OS X and is a worthwhile tool to help you recognize pitch and melody. Sibelius also offers the Groovy Music series—Groovy Music Shapes for five- to seven-year-olds, Groovy Music Jungle for seven- to nine-year-olds, and Groovy Music City for nine- to 11-year-olds—that focuses on musical concepts including rhythm, pitch, notation, and musical terminology. Each package costs $69 or you can buy all three for $175.
[Senior editor Christopher Breen has had the honor of entering the word 'Musician' in the Occupation blank of his tax forms for 15 years.]
23 Feb 11 - When I first opened this email I laughed. How ridiculous.
But then I learned that Banks Peninsula, just south of Christchurch, consists of two overlapping extinct volcanoes, the Lyttelton Volcano and the Akaroa Volcano.
Banks Peninsula, New Zealand, showing the volcano locations. Credit: Neil Love
According to NASA, Banks Peninsula, named for explorer Captain Cook's botanist, consists of two overlapping extinct volcanoes, the Lyttelton Volcano and the Akaroa Volcano. Since the last eruptive activity some six million years ago, the volcanoes have been heavily eroded, dropping them from a peak of 1,500 meters down to around 500 meters.
Breaches in the crater walls have produced two long harbors: Lyttelton Harbour to the north and Akaroa Harbour to the south.
Reader Neil Love added red dots to an aerial view of the Banks Peninsula (above) to show the location of these "extinct" volcanoes.
Here's what Neil had to say:
I am worried that the situation in Christchurch is going to be even more destructive than it is already. I see it as being a large volcanic eruption coming on. Perhaps these two pictures can give viewers a better comprehension of what is developing. In my opinion the NZ government should completely evacuate Christchurch and just leave it empty. I say this because when the volcano "Mount Cook" explodes it is going to kill everyone and flatten everything within a 100 kilometer range of it.
Neil Love, British Isles
Response from New Zealand resident
Mount Cook isn't a volcano. It never has been a volcano, isn't in an active volcanic region and doesn't contain volcanic rock/basalt in its make-up, I believe mainly greywacke, sandstone and mudstone.
Banks Peninsula comprises 15 volcanic vents/cones - not 3. These have not been active for 6 million years, and the subduction zone and 'hot spot' that made their creation possible has long since moved away - much like the Hawaiian chain islands of extinct volcanoes.
GNS scientists here in NZ have tested the hot springs around Banks Peninsula post-quakes. The chemical composition of that water is not that which would indicate volcanic activity/source. The massive amount of liquefaction and earthquakes that Christchurch has experienced has caused changes in water table and natural spring levels, but again nothing indicating volcanic activity or presence of magma.
Now, I don't think one or two earthquakes signify an upcoming eruption. Normally you'd see earthquake swarms leading up to such an event.
However, I heard on the radio last night that more than 1,000 aftershocks have struck Christchurch just since the large earthquake in September, which puts a new slant on it.
Update. Almost 5,000 earthquakes (so-called aftershocks) have struck the Christchurch area since September - 193 in the past seven days alone.
See this most amazing animated map. It shows every single one of those quakes, one by one by one.
Thanks to Cam McNaughton for this info and link
"No one seems to mention the volcanic nature
of the area near Christchurch," says
But let me be clear. I am not suggesting that the area be evacuated. I don't
begin to have the expertise to say that.
needs a volcano? I'd be petrified to live there just because of the earthquakes
Engineer concurs - Christchurch could be headed for volcanic eruption
Another photo from Neil Love showing what he perceives to be an even larger extinct volcano.
New Zealand supervolcano?
Credit: Neil Love
Amazing animated map of Christchurch area
Shows every quake since Sept 4, one by one by one. (Wait a few seconds after it loads and you'll see what I mean.) http://www.christchurchquakemap.co.nz/
Thanks to Cam McNaughton for this link
Gardeners are trying to grow the great pumpkin, a gourd that will top the current world record of 1,810 pounds. At their disposal are new technologies including growth hormone and dual grafting. But the race to grow a one-ton pumpkin looks a little disturbing, at a time when people are questioning how good genetically modified Frankenfood really is for us.
Don Young of Iowa spends $8,000 a year in his quest to grow the biggest pumpkin. The technological methods employed are impressive:
Mr. Young has set state pumpkin records in both Iowa and California — in 2009 Conan O’Brien smashed one of his giant pumpkins on television with a monster truck — and he is a leading figure among those who are fashioning new growing practices. He has invented a grafting technique, for instance, that pushes the food and energy of two pumpkin plants into a single fruit. Other top pumpkin competitors are experimenting with ZeoPro, a synthetic cocktail of supernutrients developed by NASA to grow lettuce and other edible plants in space.
Growers also use PPFM (or pink-pigmented facultative methylotrophs), a pink powdered bacteria that converts the pumpkin plant’s methane into a natural growth hormone found in seaweed, and feed their gourds a “brew” of worm castings, molasses and liquid kelp. Young’s dual grafting technique involves fusing two young pumpkin sprouts:
To explain, he crouched in the dirt, pointing to a double stump that he grafted together in his kitchen last winter. Each stump is the size of a beefy forearm, and the root systems bring in twice the nutrients.
“They told me it couldn’t be done, they told me that for years,” said Mr. Young, who had to sacrifice 300 pumpkin seeds before he discovered the best way to fuse two young pumpkin sprouts. He borrowed a surgical knife from a hog farmer to shave the stems and then clipped them together with hair barrettes. Soon he and his wife, Julie, had to avoid knocking over pots and heat lamps spread around the kitchen counters.
Certainly it puts those of us who pick our pumpkins from a vine at a pumpkin patch (or out of a sagging cardboard bin at the local supermarket) to shame. Of course, those supermarket pumpkins probably have had their share of less-than-natural products sprayed and applied to them. The modern pumpkin-picker prefers a gourd that is perfectly round, smooth and the same orange all over, without bumps and blemishes. Reading about Young’s efforts and those of other members of the Great Pumpkin Commonwealth can give one the inkling for planting some pumpkin seeds in the backyard come next spring.
Last summer, reports of exploding watermelons — Chinese farmers had given them too much growth hormone — appalled American consumers. Young describes one of his mega-size pumpkins doing exactly the same, breaking open after being "juiced up" on too much "brew." Is growing a one-ton "Frankenpumpkin" not so much a stupendous gardening feat, but yet another sign of how technology is changing nature in potentially monstrous ways?
Photo by miggell1
It's not oil, it's algae -- and that's not necessarily good news.
That was the message Wednesday from LSU scientists looking at samples from a vast area of red-colored water that has been spreading rapidly across Breton and Chandeleur sounds since last week.
And while common, if this bloom persists long enough, it could be harmful to fish and, in rare situations, possibly to humans that consume the fish, Bargu Ates said.
"Some dinoflagellates contain toxins that can be harmful to fish that consume them," she said. "And if a human consumes enough fish that have consumed enough toxins, then they could possibly be affected.
"But we have not yet identified what type of dino this is, and if it has any toxins. That could take a day or two."
Although small algae blooms have been reported by fishers for several weeks, environmentalists searching for remnants of the BP oil spill last week were alarmed by the size and color of the mass stretching across the Breton-Chandeleur area. The red color matched the hues of BP-generated oil slicks that have floated across the Gulf since the Deepwater Horizon exploded in April.
But Bargu Ates said the water contained algae, and she saw no reason why the BP disaster could be linked to this large outbreak.
Algae blooms are common along the Louisiana coast from spring through fall when the nutrient-rich waters of Louisiana's estuaries provide the perfect combination for algae growth: warm, nutrient rich water form the Mississippi River baking in high heat under long hours of sunlight.
Those conditions allow algae to reproduce rapidly, and a small colony can spread across acres in hours. Big blooms racing across open water eventually collapse as their population outgrows the oxygen supply in the water, said Harry Blanchet, coordinator of coastal fisheries programs for the Louisiana Department of Wildlife and Fisheries.
Fish typically swim away from an outbreak, Blanchet said. But if caught in enclosed areas such as marinas and small lakes, or trapped against banks or beaches, fish kills can result. Fish can die from a lack of oxygen in the water, their gills can become clogged with algae and, in some cases, toxins can paralyze or kill the fish.
Gulf Coast communities have long experienced "seafood jubilees," the term for algae blooms called red tides that result in masses of edible seafood floating to the surface where they can easily be scooped up by residents.
Bargu Ates said it was impossible to forecast the effects of this bloom. And any stormy weather approaching the area could result in significant changes.
"It could stir up nutrients that are on the bottom, putting them back in the upper layer of the water, where they would feed more algae," she said. "Or it could just move the bloom to a new location.
"We just have to wait and see."
|
<urn:uuid:f4fb6c4f-5bf1-44c4-bca3-ff2a4c773b79>
|
CC-MAIN-2016-26
|
http://www.nola.com/news/gulf-oil-spill/index.ssf/2010/08/algae_choking_breton_chandeleu.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397695.90/warc/CC-MAIN-20160624154957-00081-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.957121 | 603 | 3.296875 | 3 |
Primary central nervous system lymphoma
Other Names for this Disease
- Primary lymphoma, CNS
- Primary brain lymphoma
- Primary CNS lymphoma
Primary central nervous system lymphoma (primary CNS lymphoma) is a rare form of non-Hodgkin lymphoma in which cancerous cells develop in the lymph tissue of the brain and/or spinal cord. Because the eye is so close to the brain, primary CNS lymphoma can also start in the eye (called ocular lymphoma). The signs and symptoms vary based on which parts of the central nervous system are affected, but may include nausea and vomiting; seizures; headaches; arm or leg weakness; confusion; double vision and/or hearing loss. The exact underlying cause of primary CNS lymphoma is poorly understood; however, people with a weakened immune system (such as those with acquired immunodeficiency syndrome) or who have had an organ transplant appear to have an increased risk of developing the condition. Treatment varies based on the severity of the condition and location of the cancerous cells.
Last updated: 1/11/2016
- Primary CNS Lymphoma Treatment (PDQ®). National Cancer Institute. September 2015; http://www.cancer.gov/types/lymphoma/patient/primary-cns-lymphoma-treatment-pdq#section/_25.
- Tarakad S Ramachandran, MBBS, FRCP, FRCPC. Primary CNS Lymphoma. Medscape Reference. December 2014; http://emedicine.medscape.com/article/1157638-overview#a1.
- Central Nervous System (CNS) Lymphoma. Leukemia and Lymphoma Society. February 2012; https://www.lls.org/lymphoma/non-hodgkin-lymphoma/treatment/treatment-for-aggressive-nhl-subtypes/central-nervous-system-cns-lymphoma.
- Medscape Reference provides information on this topic. You may need to register to view the medical textbook, but registration is free.
- The Monarch Initiative brings together data about this condition from humans and other species to help physicians and biomedical researchers. Monarch’s tools are designed to make it easier to compare the signs and symptoms (phenotypes) of different diseases and discover common features. This initiative is a collaboration between several academic institutions across the world and is funded by the National Institutes of Health. Visit the website to explore the biology of this condition.
- Orphanet is a European reference portal for information on rare diseases and orphan drugs. Access to this database is free of charge.
- PubMed is a searchable database of medical literature and lists journal articles that discuss Primary central nervous system lymphoma. Click on the link to view a sample search on this topic.
|
<urn:uuid:01318ebc-8b6e-40df-8433-bdbb113baf0f>
|
CC-MAIN-2016-26
|
https://rarediseases.info.nih.gov/gard/9318/central-nervous-system-lymphoma-primary/resources/1
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403508.34/warc/CC-MAIN-20160624155003-00085-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.88996 | 618 | 2.8125 | 3 |
Computer Program To Help Kids Lose Weight Is A Fail
A group of doctors, scientists and researchers conducted a two-year study of overweight and obese adolescent children (12 and 13 years old). The group tried to modify dietary behavior by incorporating a web-based computer-tailored intervention program. The program aimed to increase physical activity, decrease sedentary behavior, and promote healthy eating among adolescents. The "FATaintPHAT" study did not produce the desired positive long-term outcome researchers were hoping for. However, there were positive short-term effects on eating behaviors, according to the report published online by Archives of Pediatrics & Adolescent Medicine, one of the JAMA/Archives journals. The intervention study included 20 schools in the Netherlands, and over 800 children ages 12 and 13 years old. The objective of this study was to improve the dietary behavior of the children. The intervention included reducing consumption of sugar-sweetened beverages, increasing intake of fresh fruits, grains and vegetables, and increasing physical activity.
“The high prevalence of overweight and obesity among adolescents is a major public health concern because of its association with various chronic diseases,” the authors write as background information. “Computer tailoring has been recognized as a promising health communication technique to promote energy balance-related behaviors.”
To evaluate short- and long-term effectiveness of a web-based computer-tailored intervention on preventing excessive weight gain in adolescents, Nicole P. M. Ezendam, Ph.D., then of Erasmus University Medical Center, Rotterdam, the Netherlands, now of Tilburg University, Tilburg, the Netherlands, and colleagues developed the online school-based, FATaintPHAT intervention.
The FATaintPHAT intervention program addressed the issues of weight control and exercise behavior. The completed analysis showed no intervention effects on BMI (body mass index), waist-line measurements, or the percentage of students who were overweight or obese. At the four-month follow-up, students were less likely to drink more than 16 ounces of sugar-sweetened beverages per day compared with students in the control group. Self-reported snack consumption was lower in the intervention group than the control group at the four-month follow-up; however, the difference was not statistically significant at the two-year follow-up.
On a more promising note, after the two year study and follow-up, students reported eating more fruit than those in the control group at the four-month follow-up stage.
“The FATaintPHAT intervention was associated with positive short-term effects on diet but with no effects or unfavorable effects on physical activity and sedentary behavior,” the authors write. “In conclusion, our study shows that the computer-tailored intervention FATaintPHAT was not effective in modifying anthropometric outcome measures but that it can have a positive effect on dietary behaviors among adolescents at short-term follow-up.”
With most school systems and now the States taking charge at eliminating sugar-sweetened beverages on campus, do you think depriving kids of sodas is helping our kids or is it hurting them?
|
<urn:uuid:4a8187a0-1650-4c0a-b258-4336fd80eb3c>
|
CC-MAIN-2016-26
|
http://1280ksli.com/computer-program-to-help-kids-lose-weight-is-a-fail/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398209.20/warc/CC-MAIN-20160624154958-00045-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.948795 | 644 | 3.65625 | 4 |
When you come across a belligerent person in a fur coat, waving bejeweled hands in the air, cussing out the person behind the counter and holding up the line — your line — you know there’s something wrong with the picture. It just means that money doesn’t necessarily equate automatically with breeding. What am I driving at? As I’ve heard it said often, “those rich #$@%^ ain’t got any class!” Unfortunately, many of those who acquire wealth tend to grow their ego in proportion to their riches. There’s nothing more unsightly than those who engage in power tripping as a sport. They dare declare themselves part of the “upper crust” and we normally don’t argue with that.
But when social scientists talk about economic or social class or standing in society, it’s not just all about the money, which clearly explains why some rich people can still behave like brutes. To explain this further, I dug up this cool piece from the New York Times expounding on the subject through an interactive tool. I decided I would deconstruct the tool and here’s what I found:
What Is Class? Some Facts And Figures
Class issues can be controversial. Class means different things in different countries and is definitely emphasized much more in certain nations than in others. I picked up some of these general facts from Wikipedia and the New York Times polls.
Simpler and more primitive societies use physical power to determine pecking order while larger, more complex societies use economic power to determine “who rules”.
Our advanced (e.g. developed) societies therefore use the following components as the basis for class:
- Education and qualifications
- Income, personal, household and per capita
- Wealth or net worth, including the ownership of land, property, means of production, et cetera
Additionally, other factors that influence class distinctions include:
- Level of prestige
- Costume and grooming
- Manners and cultural refinement
- Political standing vis-à-vis the church, government, and/or social clubs, as well as the use of honorary titles
- Reputation of honor or disgrace
- Language, style of speaking
What is the class breakdown in America? Here’s one of the more detailed configurations I’ve seen.
o Upper-upper class; (ca. 1%) Old money stemming from inherited wealth. Persons in this class typically have an “Ivy league college degree.”
o Lower-upper class; (ca. 1%) This is the “Success elite” consisting of “Top professionals [and] senior corporate executives.” People in this class have degrees from “Good colleges.”
o Upper-middle class; (ca. 19%) Also called the “Professional and Managerial” class, it consists of “Middle professionals and managers” with a college and often graduate degrees.
o Middle-class; (ca. 31%) This class consists of “Lower-level managers; small-business owners; lower-status professionals (pharmacists, teachers); sales and clerical” workers. Middle class persons had a high school and some college education.
o Working class; (ca. 35%) This class consists of “Higher blue collar (craftsman, truck drivers); lowest-paid sales and clerical” workers. Younger individuals in 1978 who were members of this class had a high school education.
Lower Americans (ca. 13%)
o Semipoor; This class had a partial high school education and consisted of “Unskilled labor and service” workers.
o The bottom; Those who are “Often unemployed” or rely on welfare payments. These individuals typically lack a high school education.
Class Mobility: Getting Richer or Poorer By The Generation
Class mobility describes the movement or shifting across socio-economic classes. A comparison between the U.S. population in 1988 and 1998 showed that in the span of 10 years, some amount of class shifting went on. Class mobility, which is the core of the American Dream, reflects how households find themselves on the economic ladder. It looks like the topmost and bottommost classes tend to get stuck at their levels more so than those in the middle classes. For the top fifth and bottom fifth of society, a little over half remained at the same levels over the length of a decade. Goes to show that class mobility happens most easily for the folks in the middle 60% of the population.
Class mobility is “stickiest” when children are most like their parents and ancestors. The more similar you are to your kin, the less likely it is that things change for you in your generation. A typical poor family making around 20% of the average income can take up to 4 generations to reach the average income level.
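That "up to 4 generations" figure is consistent with a simple regression-to-the-mean model of the kind economists use for intergenerational mobility, sketched below. The intergenerational earnings elasticity of 0.5 and the choice to count a family as having "reached" the average once it is within 10 percent of it are assumptions made for illustration, not numbers from the article.

```python
import math

def generations_to_average(start_share, elasticity, close_enough=0.90):
    """Count generations until a family at `start_share` of the average income
    comes within `close_enough` of it, assuming the log income gap shrinks by
    the intergenerational earnings elasticity each generation."""
    gap = math.log(start_share)   # log of income relative to the average
    generations = 0
    while math.exp(gap) < close_enough:
        gap *= elasticity         # children inherit only a fraction of the gap
        generations += 1
        print(f"generation {generations}: {math.exp(gap):.0%} of average income")
    return generations

# Family at 20% of the average income; an elasticity of 0.5 is an assumed,
# commonly cited ballpark, and "reaching the average" is taken as within 10%.
print(generations_to_average(0.20, 0.5), "generations")
```

With these assumptions the family climbs from 20% to roughly 45%, 67%, 82% and then 90% of the average income, taking about four generations, in line with the claim above.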
Something I didn’t want to hear: we live at a time when class mobility may not be occurring as much as it used to. It looks to be slowing throughout the decades.
I was terribly surprised to find that in a study involving households in five developed nations (the United Kingdom, United States, France, Canada and Denmark) followed through four generations, the U.S. scored almost the lowest in terms of class mobility. Though the U.K. had the lowest standing, its scores weren't that far off from the U.S. This just means that it was easier to get out of poverty in France, Canada and Denmark than it was in the U.S. and the U.K.
Other Findings About Class
- Most respondents to a recent Times poll believe that it takes an income of $100,000 – $299,999 to be recognized as wealthy in America. Sounds like it doesn't take considerable wealth to be considered rich!
- Those with lower incomes have a greater tendency to recognize or admit that tension exists between the rich and the poor.
- More than half of respondents believe that the rich have too much power, but those who primarily think so have less money.
- More wealth means better or improved health. This makes total sense to me.
- Those with higher income spend more time with their family. This also makes a lot of sense since money can buy time.
- Respondents with less money portray a greater faith in God.
- Most people, regardless of their social class, think that they can achieve the American Dream in their lifetime, if they haven’t done so already. This just shows that hope springs eternal no matter what our financial situation happens to be.
All of these findings I find tremendously interesting. Some things I already knew about, while other facts were news to me! They dispel the presumption that wealth and becoming rich automatically give you a free pass into the exclusive world of high-class society, since other factors such as reputation, manners, carriage, education and occupation are also components. More intriguing still is what happens when people become suddenly wealthy, like when they win the lottery or receive a massive inheritance out of nowhere. What happens when their new situation inflates their ego and causes them to behave badly? They find themselves caught between two worlds: the world of old friends and family, now lost and estranged, and the world of the elite, who won't give them the time of day because they just don't fit in. So a message to all the "classless" rich… get a grip: with money comes tremendous responsibility (and not just power or influence) that we can only hope you wield in positive ways.
Image Credit: The Ugly Duchess by Massys
Copyright © 2007 The Digerati Life. All Rights Reserved.
|
<urn:uuid:a33e030e-74f1-47d7-9a70-52ae56d7032e>
|
CC-MAIN-2016-26
|
http://www.thedigeratilife.com/blog/index.php/2007/07/12/does-achieving-wealth-make-you-upper-class-facts-about-class/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397873.63/warc/CC-MAIN-20160624154957-00049-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.950794 | 1,606 | 2.59375 | 3 |
(417) 886-EYES (3937) • (800) 995-3180
The eye’s lens is responsible for helping to focus light on the retina in the back of the eye. Cataracts occur when proteins within the lens begin to cluster together, causing the lens to cloud. If the lens is cloudy, it cannot properly focus the image on the retina. This makes vision blurry and colors indistinct. When your lifestyle is threatened by cataracts, it is time to consult a surgeon at Mattax Neu Prater Eye Center about your options.
What causes the lens to cloud? In most cases, the culprit is the normal aging process. If you are age 65 or older, you probably have cataracts, but they may not have progressed to the point that they affect your vision. Certain lifestyle choices and relatively common health conditions, like diabetes, may hasten cataract development. Nutrition may play at least a limited role. Heavy salt consumption, for example, appears to increase the risk of significant cataract development. Some research suggests that antioxidant vitamins, like vitamin A (beta-carotene), vitamins C and E, and selenium, may slow cataract development. All of these are available in common multivitamin formulas. Beyond that, the use of nutritional supplements carries its own risks; you should consult your physician before adding them to your diet.
Yes and no.
If you live long enough, you will almost certainly develop cataracts, because they are part of the normal aging process. However, studies suggest accumulated exposure to ultraviolet light causes the natural lens to cloud, and that certain lifestyle choices and relatively common health conditions, like diabetes, hasten cataract development.
Cataracts do NOT generally cause pain, discomfort, redness, discharge, or sudden, alarming vision changes that would lead you to seek immediate help. The changes caused by cataracts generally develop so slowly that you won’t notice them until they are serious enough to affect your normal lifestyle. Ask yourself these questions: Am I having difficulty driving at night?
Note: Even if you think you do not have cataracts, you should seek medical attention if you are having troublesome eye symptoms.
During the outpatient cataract procedure, your surgeon removes the clouded lens and implants an artificial replacement lens. The type of lens that is implanted depends upon which of the three available options you have selected before surgery.
The incision heals naturally and no stitches are necessary. The procedure is performed in as little as fifteen minutes. After the procedure, you will be allowed to return home. Vision improves immediately following surgery, with complete recovery in a few days.
“Before my cataract surgery, I wore glasses for 60 years. I wanted the best vision that I could possibly have because of the things I like to do. The people at Mattax Neu Prater are very knowledgeable. I wouldn’t hesitate to tell a friend that Mattax Neu Prater is the place to go.”
- Denny, Actual Patient
|
<urn:uuid:384a251f-6667-4fad-8182-b10ebfcb3a8c>
|
CC-MAIN-2016-26
|
http://mattaxneuprater.com/springfield-missouri-cataracts/springfield-missouri-cataracts-questions.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397865.91/warc/CC-MAIN-20160624154957-00094-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.942342 | 634 | 2.5625 | 3 |
Kids these days: Are they ready for the next generation of jobs? Whether there’s truly a shortage of engineers and scientists in the global workforce, either now or in the near future, is actually still a matter of debate.
In the US, that speculation is certainly being treated seriously.
Pressure has been mounting for some years now to bolster the country’s educational standards in STEM (science, technology, engineering, and math) fields, and the White House is now attempting to answer the call with a $4 billion proposal to bring computer science to K-12 students all over the country. Unveiled Feb. 9 as part of the Obama administration’s 2017 education budget, the program is hugely ambitious—if perhaps also a little questionable in its efficacy: As critics have pointed out, $4 billion is chump change next to the country’s overall half-trillion-dollar education budget, and the plan hinges on “continued investments” from states and districts.
More pointedly, there are few hopes that this budget, the last to be submitted by US president Barack Obama, will be embraced by the Republican-controlled Congress. But to the extent the document represents the sitting president’s vision for the nation’s educational system, it highlights an intriguing shift in American attitudes toward teaching and learning.
“We have to make sure all our kids are equipped for the jobs of the future—which means not just being able to work with computers, but developing the analytical and coding skills to power our innovation economy,” Obama said in a Jan. 30 address explaining his Computer Science for All initiative.
What the administration suggests raises an important question about the intrinsic purpose of education. Might it be reoriented, even at the elementary-school level, as more than an important intellectual pursuit—but also a tool for economic gain?
A soaring goal of—what, exactly?
There’s no shortage of research on the need for STEM workers in the US. Standouts include a Brookings report in 2014 that found STEM job vacancies take twice as long as other positions to fill, and a 2012 National Science Foundation study noting that the country’s science and engineering workforce, between 1950 and 2009, has grown 15 times faster than the country’s population. Obama’s proposal is quite obviously an attempt to address these gaps.
Implementation, unlike the idea, definitely wouldn’t be straightforward. Schools and districts would have to rework their basic curriculums, schedules, and hiring processes; though several Silicon Valley giants like Google and Facebook have pledged some form of support to these changes, the grunt work is due to occur at the local level. Just training teachers in computer science is a giant endeavor on its own.
The country will essentially “have to start from scratch—start with teachers the way we would start with children,” Barbara Stengel, a professor at Vanderbilt University’s Peabody College of Education and Human Development, tells Quartz.
But the practical barriers to putting computer science education in America’s schools may be dwarfed by the philosophical obstacles.
In need of payoff: education as a value-add to society
Some say the Obama initiative poses computer science education as far too career-centric a pursuit. And then there’s the criticism that the overall push in the US to get people learning how to code is a bit reductive. People in either camp might argue that learning to code is best undertaken as a creative venture—learning for learning’s sake, not for employment’s sake.
On the other hand, perhaps a jobs-oriented approach to coding education is what’s necessary in the US. The tech world is particularly keen on this view: Plenty of prominent tech leaders have tacked their names and company brands onto projects like Code.org, a nonprofit that encourages coding education, or have spoken out about the importance of coding skills in the future workforce. Meanwhile, coding bootcamps—intensive, non-degree-granting sessions that offer a crash course in programming—are getting so popular that some applicants are paying thousands of dollars for prep programs in an attempt to boost their chances of acceptance.
Creativity and enjoyment may be end goals here. But so is career advancement. Learning to code is a time investment, and the payoff is clear: better jobs, bigger paychecks, nicer lives.
“Knowledge is not the commodity—the commodity is being able to take these various parts of a problem and put them together in a new creative way.” Perhaps, some tech leaders suggest, that attitude should be brought directly into America’s schools.
“The framework that has existed in schools is not a very sustainable framework,” Aza Steel, CEO of education software company GoGuardian, tells Quartz. He believes, as do many of his peers, that students should simply be taught that which helps them better contribute to society later on.
“The types of learning that schools were set up for isn’t going to continue to prepare students for the world they’re entering. Knowledge is not the commodity—the commodity is being able to take these various parts of a problem and put them together in a new creative way,” Steel says.
To an extent, the Obama administration’s proposal reflects this sentiment. The White House isn’t outright suggesting that it’s more important for elementary schoolers to pick up keyboards than novels—but there is a degree of practicality being emphasized here that marks an inflection point for federal education policy.
The message is clear: The US government wants to secure its own economic future, and it’s working toward that goal from the ground up. What remains to be seen is not just how well the plan might work, but whether the rest of the country takes to it.
|
<urn:uuid:83442881-9ca8-4c69-952c-fa91a7359c19>
|
CC-MAIN-2016-26
|
http://qz.com/608355/education-in-america-is-on-the-cusp-of-a-dramatic-change-will-the-country-let-it-happen/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783404405.88/warc/CC-MAIN-20160624155004-00191-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.956211 | 1,212 | 3.25 | 3 |
"The Handicrafts best fitted for children under nine seem to me to be chair-caning, carton-work, basket-work, Smyrna rugs, Japanese curtains, carving in cork, samplers on coarse canvas showing a variety of stitches, easy needlework, knitting (big needles and wool), etc."--Charlotte Mason, Home Education
One of the original PNEU programmes for Form 1A (second- and third-graders) recommends these (vintage) books on handicrafts: "Carton Work, by G.C. Hewitt (King, Halifax, 2/-); make a pin tray, a salt-cellar, a book-mark, and a table. Japanese Curtains (see Aunt Mai's Annual, 1894, Glaisher, 2/8). Self-Teaching Needlework Manual (Longmans, 9d.) : children to be exercised in stitches, pages 1-6. Use coarse canvas and wool, then coloured cotton and coarse linen." For the youngest class that same year, these handicrafts were recommended: "Attend to garden (see Aunt Mai's Annual, 1894, Glaisher, 2/6). Smyrna rugs (Aunt Mai's Annual, 1894, Glaisher, 2/6). Carton Work, by G. C. Hewitt (King, Halifax, 2/-): make a pillar-box, a match box, a pen tray, and a vase. Self-Teaching Needlework Manual (Longmans, 9d.): children to be exercised in stitches, pages 1-15. Use coarse canvas and wool, then coloured cotton and coarse linen. Make a pair of cuffs."
What's carton-work? I think it's like Paper Sloyd--making models and useful things with cardboard. Besides the PNEU-recommended book above, here's another vintage title you might look up: The 'A.L.' carton-work, by Joseph Henry Judd, 1909. "Being a combined scheme of planning, drawing, folding, cutting, supermounting, and constructing in paper and cardboard, for lower forms, junior elementary, and secondary schools."
What's a Japanese curtain? If you don't have Aunt Mai's Annual, The Boy Mechanic Volume 1 has full instructions for you.
What's a Smyrna rug? More complicated, because there are real Smyrna rugs, and there is a kind of Smyrna weaving, and there are Smyrna embroidery stitches, but I think I have the answer. According to Charles Dickens' Household Words, an old-fashioned method of knitting rugs was re-popularized by Paul Schulze, author of Designs for the home-knit oriental (Smyrna) rugs", 1884. From the same article on this hot new craft:
"A revival of a very old-fashioned kind of knitting is finding favour amongst ladies who want something easy to work, and that entails but little trouble. It has been registered by Mr. Paul Schulze, and is known as Smyrna rug-knitting, but it is an adaption of the old rug-knitting that used to be done with odds and ends of worsteds, and without any design. Mr. Schulze has patented a quantity of patterns of decidedly Eastern design, and has elevated a rather ugly accomplishment into something decorative and really useful. All people are now familiar with the peculiarly soft colouring and intricate design of the carpets and mats that come from the East, and also with the soft, imperceptible way the patterns melt into each other, and are not cut and marked out like our European workmanship. This blending of colour with colour is seized upon in the Smyrna rug-knitting, and mats of all shapes and sizes, for drawing-room, carriage, or bedroom use, and even strips for the sides of beds, or to place upon polished floors, are made in this way.The Book of the Home: An Encyclopaedia of All Matters Relating to the House and Household Management, by H. C. Davidson (1905) promises that "As the rugs are formed of strips, to be sewn together afterwards, they are not difficult to hold, and even an indifferent worker, with practice, can make sure of good results." And Varied Occupations in String Work, by Louisa Walker, calls the same kind of knitting “String Rugs,” and says that "This occupation is particularly suitable for the boys, because the work is rather firm and needs strong little fingers to hold a large piece." Walker's book (you can Google-Books it) gives clearer directions than does the first article, and uses bits of cloth and string rather than fancy wools. I'm not sure which version Charlotte Mason had in mind for children's handicrafts, but I'm guessing that it was something closer to Varied Occupations in String Work.
The materials required are the Smyrna wools, which are of very soft shades and of six-strand make; steel knitting-pins, No. 13; a wooden stick with a groove down it upon which to wind the wool and cut it to its proper length; the pattern, and the fine twine or cotton upon which to knit the tufts. The strips are made as wide as possible, but it is better to try a short length first. The work is done as follows: Cut up the wool on the stick and arrange it in little heaps as to colour, cast on the number of stitches required—say twenty or forty, two stitches for one stitch on the pattern—knit the first stitch plain, take up the piece of wool required, put it across the work, one end on each side of the knitting, and knit the second stitch, pass the end of the wool on the wrong side of the work round the knitted stitch to the front of the work; knit the next plain, and put wool between the third and fourth stitch, knit the fourth stitch, pass the end of the wool on the wrong side across it and to the front, and knit the fifth stitch; and so on to the end of the row, always consulting the pattern as to the colour of the wool. Work a perfectly plain row between each wool row. Always work the row in which the wool is inserted with the back of the knitting towards the worker, and the plain knitting row with the right side of the work towards the worker. By this arrangement the dots made by inserting the wool can easily be counted, and the pattern followed, as each dot represents one stitch of the design. The design will not be seen until a good length of the work is done: it will then be found that the various colours amalgamate very prettily, and that the Eastern appearance is rather heightened than not by the irregularities in the length of the wood inserted, it being impossible to knit every piece in quite evenly. Messrs. Fandel and Phillips keep the material for this new work. The rugs and carpets so made will cost less than real Smyrna articles, although cheap imitations can be had at a less price; but ladies will find their own rug-work lasts much longer than the latter, and will also have the pleasure of doing it. Very little eyesight is required, and the strips can be joined together by overcasting when finished so as to make carpets."
What does all this come down to? Lifetime skills...hobbies...making useful and decorative things...and looking for crafts that adults like to do as well as children. Forty years ago that might have been making macrame plant hangers. For my Squirrelings the big fad a few years ago was embroidery-floss friendship bracelets and bead jewellery...which they still enjoy doing, along with knitting and crocheting. Pick out something you all like, and enjoy working together!
|
<urn:uuid:5d14a15c-336c-4d53-ae9d-5bce66d2ee26>
|
CC-MAIN-2016-26
|
http://deweystreehouse.blogspot.ca/2010/04/whats-japanese-curtain-and-other-fun-cm.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402479.21/warc/CC-MAIN-20160624155002-00127-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.955172 | 1,641 | 2.640625 | 3 |
While the unconventional gas industry is working to manage the water it uses in fracking, Gasfrac Energy Services Inc. thinks it has a better solution.
A propane solution.
Gasfrac, based in Calgary and Houston, uses a propane-based gel in its operations, instead of injecting water into the underground cracks that make up a fracking operation.
“We use liquid petroleum gas [LPG], which is largely propane. It’s really the solution to two problems,” says Jim Hill, Gasfrac’s president and chief executive officer in Calgary.
“Propane has low surface tension – water has about 10 times more surface tension than propane. So customers tend to get significantly more production out of our operations,” Mr. Hill explains. Liquid with lower surface tension slips more easily into the tinier fissures of a fracking operation, helping to open the cracks more, to get at more gas. Propane also allows more gas to flow out to be collected than water does.
“It means that we’re not using water, which is becoming a significant environmental issue, not just in Canada but also in the rest of the world.”
In major energy plays across North America, there’s increasing competition for water between the energy and agricultural sectors, Mr. Hill points out.
Water use by the unconventional gas sector varies according to each location and operation, but according to the Canadian Society for Unconventional Resources (CSUR), a typical fracking operation might use 20,000 cubic metres of water as its primary fracturing fluid for a relatively small section of a fracking operation.
That’s enough water to grow nine acres (3.6 hectares) of corn in a year, or to keep a typical golf course well watered for 28 days, says CSUR, an industry umbrella group. That amounts to millions of litres of water for fracking, given that unconventional gas production reached 15 per cent of worldwide production in 2010, and is expected to rise to 80 per cent by 2040.
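As a rough cross-check of those comparisons, the arithmetic implied by the CSUR figures can be laid out directly; the sketch below uses only the numbers quoted above, and the per-acre and per-day rates it prints are implied values, not measurements.

```python
# Back-of-envelope check of the CSUR water-use comparison quoted above.
frack_water_m3 = 20_000            # cubic metres for one section of a fracking operation
litres = frack_water_m3 * 1_000    # 1 cubic metre = 1,000 litres

corn_acres = 9                     # enough to grow nine acres of corn for a year
golf_days = 28                     # enough to water a typical golf course for 28 days

print(f"total water: {litres:,} litres")
print(f"implied corn irrigation: {frack_water_m3 / corn_acres:,.0f} m3 per acre per year")
print(f"implied golf-course use: {frack_water_m3 / golf_days:,.0f} m3 per day")
```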
It’s important to search for alternatives to water for fracking operations, says Adam Goehner, a technical analyst for the Pembina Institute, an environmental watchdog that engages with the energy sector. But it’s not easy.
“There are a number of alternatives, and I know that companies are exploring them. Propane is one, but it’s not going to be applicable to every single type of [hydraulic] fracture,” he says.
The issues surrounding water use in fracking operations are complicated by the complexity of unconventional gas development and cost.
Unlike conventional gas projects, where the resource is drawn from a reservoir relatively easily, unconventional gas must be extracted from crisp, brittle sedimentary shale. Those millions of litres of fluid, usually water, are mixed with chemicals and sand and injected underground, putting pressure into subsurface fractures so the gas can flow toward the surface.
Mr. Hill says Gasfrac has found that its propane solution works more effectively than water because, in addition to having lower surface tension, propane also has lower viscosity and density. It dissolves more easily in the cracks and distributes more all along the fracking lines, leaving less chance for the proppant – the sand and grit that expands the cracks – to get jammed into micro-cracks, so the gas will flow.
“There are lots of new technologies being explored from the perspective of water use and unconventional gas,” says Dan Allan, CSUR’s executive vice-president.
Water management technologies fall into different, in many ways unrelated, categories.
Some companies, such as Trican Well Service Ltd., have developed fracking fluids that can be classified as non-toxic: While there are still chemicals in the water that need to be recovered after a fracking operation, they are not classified as a threat to water quality, so the used water is easier to treat and restore.
Other companies are experimenting with different alternatives, such as using saline water found in deep aquifers in British Columbia’s Horn River Basin. There are a lot of identified saline aquifers in Western Canada, says CSUR’s Mr. Allan, and using them would mean it’s not necessary to tap into the fresh water also coveted by farmers, cities and towns.
Using saline aquifers has possibilities, says Pembina’s Mr. Goehner, but there are still questions that need to be resolved.
For example, what happens after a huge amount of saline water is drawn from deep in the earth, is used to extract gas and then is on the surface? Where should it go?
Another issue is the cost, because saline water has to be prepared for fracking. “It’s still much cheaper to use fresh water and add additives than to start with saltwater and re-engineer,” Mr. Goehner says. “From a cost perspective water is still the cheapest, so it’s hard to move away.”
Gasfrac’s Mr. Hill is more optimistic about alternatives to water for fracking, especially propane.
“Our company was founded in 2006, we started manufacturing in 2008, demonstrated our proof of concept in 2010 and now we’re actually doing work in the field. We’re active in a number of reservoirs,” Mr. Hill says.
“Since the fourth quarter of 2010 we’ve moved forward as a full-fledged operating company, with more than 2,000 fracturing treatments, largely in the Canadian sedimentary basin.”
Gasfrac is focusing on North America, where approximately 75 per cent of fracking operations take place, but the company is seeing interest from around the world, Mr. Hill adds.
The ability to turn propane into a gel and inject it under high pressure is leading-edge technology that’s already proving itself, and it wasn’t even available until recently, he notes.
“A number of things have come together and frankly, one of them is computer power. There are calculations you need to make that you couldn’t do 15 years ago, and now you can do them on your cellphone.”
|
<urn:uuid:26fdb9fc-7b9c-43f8-9da6-f1918ab5b1c4>
|
CC-MAIN-2016-26
|
http://www.theglobeandmail.com/report-on-business/breakthrough/taking-the-water-out-of-fracking/article13876363/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00103-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.947908 | 1,305 | 2.65625 | 3 |
November 7, 1998
OBITUARY
Bob Kane, 83, the Cartoonist Who Created 'Batman,' Is Dead
By SARAH BOXER
Bob Kane, the cartoonist who created Batman the Caped Crusader and his sidekick, Robin the Boy Wonder, died on Tuesday at Cedars-Sinai Medical Center in Los Angeles. He was 83 and lived in Los Angeles.
Batman and Robin, the characters that Mr. Kane created with his partner, Bill Finger, nearly 60 years ago, are some of the longest-lived comic-book heroes in the world. They are "up there with Superman, Mickey Mouse, Bugs Bunny and Oz," said Paul Levitz, the executive vice president and publisher of DC Comics.
Born in New York City, Mr. Kane attended Cooper Union and the Art Students League. His first comic strips, "Peter Pupp" and "Hiram Hick," were published in 1936.
In 1938 he started drawing adventure strips, "Rusty and His Pals" and "Clip Carson," for National Comics. That same year, a comic-book hero called Superman appeared. Vincent Sullivan, the editor of National Comics, who also owned Superman, asked Mr. Kane and Mr. Finger to come up with a Supercompetitor. They developed Batman on a single weekend. Mr. Kane was 18.
The first Batman strip came out in May 1939 in Detective Comics, one year after the debut of Superman. Batman's first adventure was called "The Case of the Chemical Syndicate." And he was another kind of superhero entirely. Batman wasn't as strong as Superman, but he was much more agile, a better dresser and had better contraptions and a cooler place to live.
He lived in the Batcave, drove the Batmobile, which had a crime lab and a closed-circuit television in the back, and owned a Batplane. He also kept a lot of tools in his utility belt, including knockout gas, a smoke screen and a radio.
"Since he had no superpowers, he had to rely only on his physical and his mental skills," said Allan Asherman, the librarian of DC Comics.
Batman's fictional history, which was created years after the character himself, was dark. According to Batlegend, under his cape Batman was really a man named Bruce Wayne who, as a child, watched as his parents were murdered in the dark streets of New York City while they were walking home from a movie. Traumatized, young Bruce vowed to avenge their deaths by punishing criminals everywhere. He studied criminology, trained his body and assembled an assortment of tools to fight crime in Gotham. One night, startled by a bat outside his window, he made up his mind to dress up as a bat to put fear into the "cowardly and superstitious" hearts of criminals.
While "Superman is an optimist's myth," Mr. Levitz said, Batman is a hero for the guy who thinks "the world is a tough place."
In creating Batman Mr. Kane said he drew on a number of sources: a 1920's movie called "The Mark of Zorro," a radio show called "The Shadow" and a 1930 movie called "The Bat Whispers." That movie featured a criminal with a cape who shines his bat insignia on the wall just before he is about to kill his victims, and who, in at least one scene, stands on the roof of a building and spreads his cape out. "I guess anyone who wears a cape is tempted to do that," Mr. Asherman said.
Mr. Kane also credited Leonardo da Vinci: "I remember when I was 12 or 13 I was an ardent reader of books on how things began . . . and I came across a book about Leonardo da Vinci. This had a picture of a flying machine with huge bat wings . . . . It looked like a bat man to me."
But it was Mr. Finger, Mr. Kane said, who chose some of Batman's most memorable features. He suggested the cape with scalloped edges, the cowl and the blank white eyes. "When I first drew him I had eyes in there and it didn't look right," Mr. Kane once admitted. "Bill Finger said, 'Take them out.'" Mr. Finger also came up with the name Bruce Wayne (which, it has been observed, sounds a lot like Bob Kane).
Robin, the Boy Wonder, came out a year after Batman. "He was basically a younger edition of Batman," Mr. Asherman said. "There was a need for a kid sidekick, so kids could identify with him, and for a character who would tone down the violence."
Robin's story echoed Batman's. Under his costume, Robin was Dick Grayson, an aerialist who saw his parents fall to their deaths when the circus failed to pay some racketeers protection money. Bruce Wayne happened to be watching and offered to become Grayson's guardian. Robin got a less elaborate costume: a red vest, green boots, a yellow cape and no tights. But he was a far better flyer.
Batman's villains also had pedigrees. The Joker is descended not only from the face on playing cards but also, Mr. Asherman said, from "The Man Who Laughs," a 1920's movie based on a Victor Hugo story about a disfigured man in medieval France who moves in royal circles. The Penguin, Mr. Asherman said, was probably inspired by the penguin who used to be on packets of Kool cigarettes. The Riddler? Mr. Asherman had no idea: "He is just a psychopath who likes to send riddles to the police."
For an action comic, the style of Batman was strangely quiet, and much was made of the shadows.
As the superhero became popular, he began starring in other venues. In the 1940's Columbia Pictures released two serial films, "Batman," in 15 chapters, and "The New Adventures of Batman and Robin," also in 15 segments. In 1966 came the television show, which starred Adam West. Beginning in the 1980's there were more Batman movies. And Batman toys and costumes continue to sell year after year.
"He adapts to each era," Mr. Kane said. "He fights against all injustices in the world. He fights the battle for the little man." But it was not Batman's sense of justice, Mr. Kane believed, that made him so popular. It was his campiness.
"Batman and Robin were always punning and wisecracking and so were the villains," he said in an interview in 1965. "It was camp way ahead of its time." Did that mean he didn't take his superheroes seriously? "How could you?"
As Batman's popularity increased, Mr. Kane did less and less of the drawing. Although his name appeared on the strip until 1964, the work was done mostly by other artists, whom Mr. Kane called his "ghosts."
He is survived by his wife, Elizabeth Sanders Kane, an actress; a daughter, Deborah Majeski of New Jersey; a sister, Doris Atlas of New York; a grandson, Matthew Alderman, and, of course, Batman, Robin, the Joker, the Riddler, the Penguin and the Catwoman.
|
<urn:uuid:151d4302-3179-4c55-8877-0030cef2d46f>
|
CC-MAIN-2016-26
|
http://www.nytimes.com/learning/general/onthisday/bday/1024.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783400031.51/warc/CC-MAIN-20160624155000-00130-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.974365 | 1,546 | 2.609375 | 3 |
Published or Revised August 17, 2010
The internet is described very accurately using the definition of Michael Gorman, former Library Dean of California State University, Fresno:
“Take a book, remove the cover, remove the title page, remove the table of contents, remove the index, cut the binding from the spine, fling the loose pages that remain so they scatter about the room. Now, find the information you needed from the book. This is the Internet.”
A way must be found to sift through all that information. That is the purpose of the search engine.
The Internet, to most students, appears easier, quicker and more familiar. But there are drawbacks. How do you know the information is reliable? How do you know the information is accurate?
Most library resources are checked for accuracy. The Internet can also be time-consuming because of the thousands of hits that may or may not be relevant to the subject search. The library databases will have fewer hits and be more focused and relevant to the search topic. Also, the information, especially in-depth information, may not be there, because search engines may index only a fraction of the Internet and not everything is online.
Google Scholar is a search engine dedicated to scholarly literature. It can search across many disciplines and sources: peer-reviewed papers, theses, books, abstracts and articles from academic publishers, professional societies, preprint repositories, universities and other scholarly organizations.
Google Scholar helps you identify the most relevant research across the world of scholarly research. Many of the items are not full text and a subscription is required or purchase of the book. Government items are full-text as they are on the government sites.
The Student Library Handbook (PDF) pages 15-17 explains how to evaluate a Web site.
|
<urn:uuid:808540da-4d3a-497d-80cc-5b3f3bb1b833>
|
CC-MAIN-2016-26
|
http://www.parisjc.edu/index.php/pjc2/main/search-internet
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391634.7/warc/CC-MAIN-20160624154951-00148-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.913896 | 365 | 3.75 | 4 |
Arizona State University researchers are finding ways to improve infrared photodetector technology that is critical to national defense and security systems, as well as used increasingly in commercial applications and consumer products.
A significant advance is reported in a recent article in the journal Applied Physics Letters. It details discovery of how infrared photodetection can be done more effectively by using certain materials arranged in specific patterns in atomic-scale structures.
It's being accomplished by using multiple ultrathin layers of the materials that are only several nanometers thick. Crystals are formed in each layer. These layered structures are then combined to form what are termed "superlattices."
Photodetectors made of different crystals absorb different wavelengths of light and convert them into an electrical signal. The conversion efficiency achieved by these crystals determines a photodetector's sensitivity and the quality of detection it provides, explains electrical engineer Yong-Hang Zhang.
The unique property of the superlattices is that their detection wavelengths can be broadly tuned by changing the design and composition of the layered structures. The precise arrangements of the nanoscale materials in superlattice structures helps to enhance the sensitivity of infrared detectors, Zhang says.
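The article doesn't spell out the relationship, but the standard rule of thumb for photodetector materials, cutoff wavelength (in micrometres) approximately equal to 1.24 divided by the bandgap (in electron volts), illustrates why tuning a superlattice's effective bandgap tunes its detection band. The bandgap values below are placeholders chosen to show the trend, not measured values from the ASU devices.

```python
# Rule of thumb: lambda_cutoff [um] ~= 1.24 / Eg [eV].
# The bandgaps below are illustrative only, not data from the InAs/InAsSb superlattices.
def cutoff_wavelength_um(bandgap_ev: float) -> float:
    """Approximate cutoff wavelength in micrometres for a given bandgap in eV."""
    return 1.24 / bandgap_ev

for label, eg in [("wider-gap design", 0.40),
                  ("mid-gap design", 0.25),
                  ("narrow-gap design", 0.12)]:
    print(f"{label}: Eg = {eg:.2f} eV -> cutoff ~ {cutoff_wavelength_um(eg):.1f} um")
```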
Zhang is a professor in the School of Electrical, Computer and Energy Engineering, one of ASU's Ira A. Fulton Schools of Engineering. He is leading the work on infrared technology research in ASU's Center for Photonics Innovation. More information can be found at the center's Optoelectronics Group website at http://asumbe.eas.asu.edu/
Additional research in this area is being supported by a grant from the Air Force Office of Scientific Research and a new Multidisciplinary University Research Initiative (MURI) program established by the U.S. Army Research Office. ASU is a partner in the program led by the University of Illinois at Urbana-Champaign.
The MURI program is enabling Zhang's group to accelerate its work by teaming with David Smith, a professor in the Department of Physics in ASU's College of Liberal Arts and Sciences, and Shane Johnson, a senior research scientist in the ASU's engineering schools.
The team is using a combination of indium arsenide and indium arsenide antimonide to build the superlattice structures. The combination allows devices to generate the photoelectrons necessary to provide infrared signal detection and imaging, says Elizabeth Steenbergen, an electrical engineering doctoral student who performed experiments on the superlattice materials with collaborators at the Army Research Lab.
"In a photodetector, light creates electrons. Electrons emerge from the photodetector as electrical current. We read the magnitude of this current to measure infrared light intensity," she says.
"In this chain, we want all of the electrons to be collected from the detector as efficiently as possible. But sometimes these electrons get lost inside the device and are never collected," says team member Orkun Cellek, an electrical engineering postdoctoral research associate.
Zhang says the team's use of the new materials is reducing this loss of optically excited electrons, which increases the electrons' carrier lifetime by more than 10 times what has been achieved by other combinations of materials traditionally used in the technology. Carrier lifetime is a key parameter that has limited detector efficiency in the past.
Another advantage is that infrared photodetectors made from these superlattice materials don't need as much cooling. Such devices are cooled as a way of reducing the amount of unwanted current inside the devices that can "bury" electrical signals, Zhang says.
The need for less cooling reduces the amount of power needed to operate the photodetectors, which will make the devices more reliable and the systems more cost effective.
Researchers say improvements can still be made in the layering designs of the intricate superlattice structures and in developing device designs that will allow the new combinations of materials to work most effectively.
The advances promise to improve everything from guided weaponry and sophisticated surveillance systems to industrial and home security systems, the use of infrared detection for medical imaging and as a road-safety tool for driving at night or during sand storms or heavy fog.
"You would be able to see things ahead of you on the road much better than with any headlights," Cellek says.
Contact: Joe Kullman
Arizona State University
|
<urn:uuid:e7e689c3-7010-4a15-a9c1-7ec975129536>
|
CC-MAIN-2016-26
|
http://www.bio-medicine.org/biology-technology-1/New-nano-material-combinations-produce-leap-in-infrared-technology-21702-1/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397695.90/warc/CC-MAIN-20160624154957-00040-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.929816 | 894 | 2.828125 | 3 |
Ways of Life, Dress
In many regions of the world, people wear traditional costumes at festivals or holidays, and sometimes more regularly. Americans, however, do not have distinctive folk attire with a long tradition. Except for the varied and characteristic clothing of Native American peoples, dress in the United States has rarely been specific to a certain region or based on the careful preservation of decorative patterns and crafts. American dress is derived from the fabrics and fashions of the Europeans who began colonizing the country in the 17th century. Early settlers incorporated some of the forms worn by indigenous peoples, such as moccasins and garments made from animal skins (Benjamin Franklin is famous for flaunting a raccoon cap when he traveled to Europe), but in general, fashion in the United States adapted and modified European styles. Despite the number and variety of immigrants in the United States, American clothing has tended to be homogeneous, and attire from an immigrant's homeland was often rapidly exchanged for American apparel.
American dress is distinctive because of its casualness. American style in the 20th century is recognizably more informal than in Europe, and for its fashion sources it is more dependent on what people on the streets are wearing. European fashions take their cues from the top of the fashion hierarchy, dictated by the world-famous haute couture (high fashion) houses of Paris, France, and recently those of Milan, Italy, and London, England. Paris designers, both today and in the past, have also dressed wealthy and fashionable Americans, who copied French styles. Although European designs remain a significant influence on American tastes, American fashions more often come from popular sources, such as the school and the street, as well as television and movies. In the last quarter of the 20th century, American designers often found inspiration in the imaginative attire worn by young people in cities and ballparks, and that worn by workers in factories and fields.
Blue jeans are probably the single most representative article of American clothing. They were originally invented by tailor Jacob Davis, who together with dry-goods salesman Levi Strauss patented the idea in 1873 as durable clothing for miners. Blue jeans (also known as dungarees) spread among workers of all kinds in the late 19th and early 20th centuries, especially among cowboys, farmers, loggers, and railroad workers. During the 1950s, actors Marlon Brando and James Dean made blue jeans fashionable by wearing them in movies, and jeans became part of the image of teenage rebelliousness. This fashion statement exploded in the 1960s and 1970s as Levi's became a fundamental part of the youth culture focused on civil rights and antiwar protests. By the late 1970s, almost everyone in the United States wore blue jeans, and youths around the world sought them. As designers began to create more sophisticated styles of blue jeans and to adjust their fit, jeans began to express the American emphasis on informality and the importance of subtlety of detail. By highlighting the right label and achieving the right look, blue jeans, despite their worker origins, ironically embodied the status consciousness of American fashion and the eagerness to approximate the latest fad.
American informality in dress is such a strong part of American culture that many workplaces have adopted the idea of “casual Friday,” a day when workers are encouraged to dress down from their usual professional attire. For many high-tech industries located along the West Coast, as well as among faculty at colleges and universities, this emphasis on casual attire is a daily occurrence, not just reserved for Fridays.
The fashion industry in the United States, along with its companion cosmetics industry, grew enormously in the second half of the 20th century and became a major source of competition for French fashion. Especially notable during the late 20th century was the incorporation of sports logos and styles, from athletic shoes to tennis shirts and baseball caps, into standard American wardrobes. American informality is enshrined in the wardrobes created by world-famous U.S. designers such as Calvin Klein, Liz Claiborne, and Ralph Lauren. Lauren especially adopted the American look, based in part on the tradition of the old West (cowboy hats, boots, and jeans) and in part on the clean-cut sportiness of suburban style (blazers, loafers, and khakis).
|
<urn:uuid:151b3ac6-e387-45df-8ddd-4280bdda0893>
|
CC-MAIN-2016-26
|
http://www.countriesquest.com/north_america/usa/culture/ways_of_life/dress.htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399522.99/warc/CC-MAIN-20160624154959-00146-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.972959 | 945 | 3.484375 | 3 |
Details about Giggly and Wiggly A Book About Feelings (Sesame Street):
Teaching toddlers how to express and communicate their feelings is a challenge for many parents and caregivers. Elmo and friends make learning about feelings simple and fun as they show that there can be more than one word to describe the same emotion. The rhyming text encourages interaction as it presents feelings including giggly and high-spirited, wiggly and antsy, shy and bashful. The die-cut handle makes this a book that toddlers can take anywhere.
Rent Giggly and Wiggly A Book About Feelings (Sesame Street) 1st edition today, or search our site for other textbooks by Naomi Kleinberg. Every textbook comes with a 21-day "Any Reason" guarantee. Published by Random House Books for Young Readers.
|
<urn:uuid:4b7b7a56-d5d6-4006-a628-1aab779b698a>
|
CC-MAIN-2016-26
|
http://www.chegg.com/textbooks/giggly-and-wiggly-a-book-about-feelings-sesame-street-1st-edition-9780375845352-0375845356
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396538.42/warc/CC-MAIN-20160624154956-00042-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.933385 | 203 | 3.640625 | 4 |
- Done with the quality of being tonotopic
- With a spatial organization which is based upon frequency
- 2005 Henckle et al Neuroscience
- The central nucleus of the inferior colliculus (CNIC) is comprised of an orderly series of fibrodendritic layers. These layers include integrative circuitry for as many as 13 different ascending auditory pathways, each tonotopically ordered.
|
<urn:uuid:6516e05c-d0ea-4d1e-8354-688d49ec68c3>
|
CC-MAIN-2016-26
|
https://en.m.wiktionary.org/wiki/tonotopically
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398209.20/warc/CC-MAIN-20160624154958-00154-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.877045 | 81 | 2.671875 | 3 |
In Nicole Karsin's debut documentary, weaving provides a metaphor for the unity between indigenous people in Colombia whose communities and lands are threatened by violence and armed conflict. The film follows three women activists who assume leadership positions in their tribal governments to protect their community, customs, and lands - all of which are caught in the crossfire of civil wars and narcotics trafficking.
In 1991, the Colombian constitution recognized the rights of 102 different tribal groups in the country. However, these rights are being threatened by the armed conflict in Colombia that started in the 1960s over the narco-trade. The guerillas, whose activities are funded by the cocaine sales, moved to the mountains where the indigenous communities lived peacefully. The Colombian army pursued them into the mountains and paramilitary groups emerged to defend the wealthy against insurgents. As part of the war against drugs, the United States has given money to support the Colombian military in its effort to eradicate the insurgents and guerillas. Native peoples are caught in the middle of the conflict. More than five million people have been displaced because of the violence. Thirty-four indigenous groups are in danger of extinction. Frequently native people are accused of being insurgents and men, women, children and elders are executed, raped, and arrested. Their human rights are violated. Rarely are the aggressors brought to justice. Doris, Ludis and Flor Ilva belong to three different indigenous groups whose communities have been torn apart by guerilla warfare and military violence.
The documentary highlights the way women are affected by this violence. Many are left to raise children alone after their husbands are wrongly convicted of being insurgents. The men are often either imprisoned or executed by paramilitary groups without investigations or consequences. Women leaders have emerged in roles that were once only occupied by men to govern and organize their communities in the face of this violence. Flor Ilva is the first woman governor in 300 years of the Nasa tribe. Doris organized over 540 people who were forced to abandon all of their belongings and move to safety. These women leaders advocate for the peaceful evacuation of their lands from all military and paramilitary activity. They contend that tribal lands must be peaceful. They work collectively to create economic opportunities for women and for the community through weaving. Because of their dedication to the survival of their people and customs, Flor Ilva refers to the women weavers as "guerreras de pensamiento" [warriors of thought].
Karsin powerfully captures the ruin and destruction caused by civil war in this moving 80-minute documentary.
We Women Warriors will be screening as part of the 16th annual DocuWeeks. It premieres in New York at the IFC Center on August 10-16 and in Los Angeles at the Laemmle NoHo 7 on August 24-30.
|
<urn:uuid:2159cbc6-6d3a-4955-8562-6bc1d975efa0>
|
CC-MAIN-2016-26
|
http://www.huffingtonpost.com/vanessa-perez/we-women-warriors_b_1757446.html?ncid=edlinkusaolp00000008
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395160.19/warc/CC-MAIN-20160624154955-00054-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.97154 | 565 | 3.6875 | 4 |
A little over a dozen years ago, the US Green Building Council (USGBC) established LEED in response to a perceived need that specific and determined standards and third-party verification be required in order for structures to be considered “green,” or environmentally friendly. LEED, which stands for Leadership in Energy and Environmental Design, is a certification program focused primarily on new, commercial-building projects and based upon a points system.
The more points you earn, the higher your rating. Acquisition of LEED status can require significantly higher upfront expenditure on the part of a corporation or builder, but also may yield massive cost savings over time in the form of state and local tax breaks and higher rents as well as other perks. Originally designed to help green the planet, LEED is a widely recognized tool which can be maximized efficiently for its stated purpose, capitalized upon for financial gain, or both. Do the costs outweigh the benefits of getting LEED-certified? And does the planet benefit either way?
Green Or Greed?
LEED-certified buildings when well maintained, produce less waste products and are more energy efficient than they would be otherwise. The ratings system by which buildings can achieve certification, however, has come under scrutiny as well as criticism for granting points that require little, if any, effort on behalf of the builder. No-brainer points given for check-list items such as proximity to public transportation or location within a densely populated area, can mean the difference between silver, gold or platinum certification. These levels represent more than just a nice wall plaque; buildings with higher ratings benefit from higher tax breaks, which can result in literally millions of dollars in savings over time.
“There are no federal-level tax breaks given for LEED certification,” says Lane Burt, USGBC’s Director of Technical Policy. According to Burt, familiarity with the program has yielded a greater desire for corporations to take advantage of the state-based tax credits given for LEED certification, as well as the added perk of expedited permitting. But Burt is clear: the LEED program was not designed with those types of financial benefits in mind as the end goal but rather as incentives, capable of supporting continued growth and broad-based use of the highly popular program.
How Green Is Green Enough?
Despite its name and Washington, D.C. location, the USGBC is an independent non-profit organization and not a government-run agency, which receives most of its funding through certification fees and educational conferences. As early as 1998, the USGBC determined that the lack of definition and consensus about what constituted a green building had the potential to create a wild west atmosphere, in which anyone could claim practically any building as being environmentally friendly and sustainable. The first LEED-certified building went up in 2000 and currently, there are more than 10,000 structures worldwide that tout this status. Clearly, the incentive to erect green buildings exists, but how sustained and effective LEED’s impact will ultimately be on global warming and reduced pollution is up for debate. What it clearly can do, however, is provide a definitive framework within which builders can operate when green is the goal, and also incite greater use of environmentally friendly measures like installation of low-VOC emitting carpets and low-flow water systems, better ventilation and even more daylighting in schools and office buildings. These simple line items can result in better working conditions and arguably, happier, more productive students and employees. Such was the case at PNC Financial Services Group, who LEED certified several of its branches as early as 2002 and now claims higher employee engagement and raised awareness of meaningful sustainability goals, as well as cost savings, as a result. But for those truly interested in highly impactful, greener initiatives globally, is this enough?
Is LEED Worth Its Weight In Green Or In Gold?
USGBC has become fairly fluid, continually updating this consensus-based program and improving its point-based criteria over time. While the program is far from perfect, arguably, financial incentives may be necessary in order for it to work. Would corporations be just as likely to implement environmentally beneficial improvements without the promise of financial gain? It’s hard to say, but in the long run, does it really matter? The LEED program, by providing third-party verification as well as a defined, manageable framework, can have an impact on the future of the urban landscape as well as our global carbon footprint. How effectively it is utilized and for what purpose is up to the user. Financial gain aside, Burt states USGBC’s blue sky goal as being its own eventual disbandment of the LEED program due to a lack of need for these types of environmentally friendly measures. Let’s hope it does not dissolve due to lack of effectiveness or public interest in a cleaner, more sustainable planet.
Corey Whelan is a freelance writer in New York. Her work can be found at Examiner.com.
|
<urn:uuid:b6abeeb7-5b51-47df-9ca2-c2eb65521e64>
|
CC-MAIN-2016-26
|
http://tampa.cbslocal.com/2013/04/10/building-certification-what-does-leed-really-mean/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00105-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.967573 | 1,027 | 2.875 | 3 |
Smoking accounts for more than a third of cases of the most severe and common form of rheumatoid arthritis, indicates research published online in the Annals of the Rheumatic Diseases.
And it accounts for more than half of cases in people who are genetically susceptible to development of the disease, finds the study.
The researchers base their findings on more than 1,200 people with rheumatoid arthritis and 871 people matched for age and sex, but free of the disease. The patients came from 19 health clinics in south and central Sweden, while their healthy peers were randomly selected from the population register. All the participants were aged between 18 and 70.
They were quizzed about their smoking habits and grouped into three categories, depending on how long they had smoked.
Blood samples were taken to assess all the participants' genetic profile for susceptibility to rheumatoid arthritis and to gauge the severity of their disease, as indicated by their antibody levels.
More than half of those with rheumatoid arthritis (61%) had the most severe form of the disease, which is also the most common form, as judged by testing positive for anticitrullinated protein/peptide antibody (ACPA).
Those who were the heaviest smokers - 20 cigarettes a day for at least 20 years - were more than 2.5 times as likely to test positive for ACPA. The risk fell for ex-smokers, the longer they had given up smoking. But among the heaviest smokers, the risk was still relatively high, even after 20 years of not having smoked.
Based on these figures, the researchers calculated that smoking accounted for 35% of ACPA positive cases, and one in five cases of rheumatoid arthritis, overall.
Although this risk is not as high as for lung cancer, where smoking accounts for 90% of cases, it is similar to that for coronary artery heart disease, say the authors.
Among those with genetic susceptibility to the disease, and who tested positive for ACPA, smoking accounted for more than half the cases (55%). Those who smoked the most had the highest risk.
The authors point out that several other environmental factors may contribute to an increased risk of rheumatoid arthritis, including air pollutants and hormonal factors. But they suggest that their findings are sufficient to prompt those with a family history of rheumatoid arthritis to be advised to give up smoking.
|
<urn:uuid:e7a55209-06cb-4490-b9e1-d3774763c76b>
|
CC-MAIN-2016-26
|
http://www.eurekalert.org/pub_releases/2010-12/bmj-sbm121310.php
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395560.69/warc/CC-MAIN-20160624154955-00048-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.976145 | 491 | 2.734375 | 3 |
Smoking a hookah, which has become increasingly popular with college students, could be as harmful as cigarettes, warns a new study.
Since the smoke is filtered through water in a hookah, many youngsters feel that harmful agents are removed, the UPI news wire reported.
"Links have been found between water-pipe (hookah) usage and oral, lung and bladder cancer, in addition to heart disease and clogged arteries," it said quoting a report of the American Lung Association (ALA).
"There are a lot of misperceptions about hookah tobacco use. There's very little information in the public realm," said Thomas Carr, national policy manager of ALA.
Earlier research, though limited, has shown that nicotine levels in the body increase by 250 percent after just one 40-45 minute session of hookah smoking.
Since people spend a longer period of time smoking hookah, they may inhale more carcinogens - possibly up to the equivalent of 100 cigarettes, the report noted.
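As a rough, back-of-envelope illustration of how one session can be compared with dozens of cigarettes, the sketch below multiplies assumed puff counts by assumed puff volumes. None of these numbers come from the ALA report; they are approximate figures in the spirit of commonly cited estimates, and smoke volume is only a crude proxy for actual toxicant dose.

```python
# Back-of-envelope smoke-volume comparison: one hookah session vs one cigarette.
# Puff counts and volumes are rough illustrative assumptions, not report data,
# and smoke volume is only a crude proxy for actual toxicant exposure.

CIGARETTE_PUFFS, CIGARETTE_PUFF_ML = 10, 50   # ~10 puffs of ~50 mL each
HOOKAH_PUFFS, HOOKAH_PUFF_ML = 150, 500       # ~150 puffs of ~500 mL each

cigarette_ml = CIGARETTE_PUFFS * CIGARETTE_PUFF_ML   # ~500 mL per cigarette
hookah_ml = HOOKAH_PUFFS * HOOKAH_PUFF_ML            # ~75,000 mL per session

print(f"One cigarette:      ~{cigarette_ml:,} mL of smoke")
print(f"One hookah session: ~{hookah_ml:,} mL of smoke")
print(f"Volume ratio:       ~{hookah_ml / cigarette_ml:.0f}x")
```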
Another risk in smoking water pipes is inhaling harmful chemicals from the charcoal or wood fragments used for heating the tobacco, such as carbon monoxide or metals, it added.
Hookah bars and cafes have gained popularity in recent years, often springing up in large cities and near colleges. The sweetened, flavored tobacco makes smoking hookah less irritating than smoking cigarettes.
The novelty, mystique and social camaraderie surrounding this ancient practice, which originated in Persia and India, is another reason for its spread, said Thomas Eissenberg, a professor at the Institute for Drug and Alcohol Studies at Virginia Commonwealth University.
Source: IANS
|
<urn:uuid:f38ac296-071c-4680-a665-08c4e09a6a71>
|
CC-MAIN-2016-26
|
http://www.bio-medicine.org/medicine-news/Hookah-Could-Be-as-Harmful-as-Cigarettes-19312-1/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393997.50/warc/CC-MAIN-20160624154953-00071-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.917902 | 503 | 2.9375 | 3 |
Intellectual detective work sifts fact from mystery in the stories spread across the ancient world by Greek adventurers.
Though not an archaeologist, Fox (Ancient History/Oxford Univ.; The Classical World, 2006, etc.) seems to possess a precise mental catalogue of every significant pottery shard that has recently surfaced in the Mediterranean and Near East. Equally important, he knows what has not yet been found and acknowledges it, often with anticipation. These objects, along with the excavated sites of ancient habitation, burial mounds, cemeteries and shipwrecks, comprise an extraordinary, if sometimes tentative, roadmap of the roving Greeks’ trajectory in the eighth century BCE. They traveled east and west, trading, raiding and sometimes settling in a time of cultural awakening. Virtually illiterate since Mycenaean Era syllabic script had been abandoned 400 years earlier, they adapted a Semitic alphabet around 750 BCE. They took with them, in oral tradition, the epic poems of Homer and the myths in which heroes from a glorious past challenged the gods, performed miraculous feats, won great victories, slew monsters, avenged rape and murder, rescued kidnapped virgins, etc. Tracing the impact of these “travelling stories” throughout the world the Greeks influenced, the author’s acumen shines like a beacon. For example, cults to Heracles (Hercules to the Romans) spread from Asia Minor to Spain; place names attributable to Io, a maiden seduced by Zeus and transformed into a cow, track the migration of those stories eastward from Argos. Fox focuses on the island of Euboea as an origin of the travelers, citing proven links along with tantalizing leads. Throughout, his intellectual discipline is impressive. “Culture-heroes do approximately similar things in different societies,” he stresses, warning against “mistaking parallel stories for causes and origins.” Fox notes that although Homer’s tales were of the distant past, the poet was “often precise” about landscapes and places from his own time.
Heady stuff for those with interest in the subject, but so dense that casual history buffs may fall by the wayside.
|
<urn:uuid:36cabed0-6bcd-4ec8-bdcd-7b7486a604da>
|
CC-MAIN-2016-26
|
https://www.kirkusreviews.com/book-reviews/robin-lane-fox/travelling-heroes/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403502.46/warc/CC-MAIN-20160624155003-00119-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.953741 | 455 | 3.0625 | 3 |
Celebrating 20 years of learning and success.
The Aboriginal Head Start in Urban and Northern Communities (AHSUNC) Program is a community-based children’s program funded by the Public Health Agency of Canada. AHSUNC focuses on early childhood development (ECD) for First Nations, Inuit and Métis children and their families living off-reserve.
Since 1995, AHSUNC has provided funding to Aboriginal community-based organizations to develop programs that promote the healthy development of Aboriginal preschool children. It supports the spiritual, emotional, intellectual and physical development of Aboriginal children, while supporting their parents and guardians as their primary teachers.
There are 133 AHSUNC sites serving over 4,800 children and their families living in urban and northern communities. AHSUNC sites typically provide structured half-day preschool experiences for Aboriginal children (3-5 years of age) focused on six program components: Aboriginal culture and language; education and school readiness; health promotion; nutrition; social support; and parental involvement.
|
<urn:uuid:9c95a434-8250-49c0-bef4-1b214ff4681f>
|
CC-MAIN-2016-26
|
http://origin.phac-aspc.gc.ca/hp-ps/dca-dea/prog-ini/ahsunc-papacun/index-eng.php
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398869.97/warc/CC-MAIN-20160624154958-00110-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.952799 | 211 | 2.78125 | 3 |
During the years immediately following Zefram Cochrane's first warp flight in 2063, several groups of human colonists struck out into the unknown reaches of interstellar space. A series of human settlements, all named "Terra," was established, including one on the lone planet of the Cepheus System, a world with a crystalline surface and considerable volcanic activity; because it was the tenth in the series of colonies, this world was dubbed "Terra Ten," although the descendants of the original colonists would come to know it years later as "Terratin." Because of the peculiar spiroid energy waves bombarding the surface of their world, each of the colonists (known by the 23rd century as "Terratins") shrank to only a few millimeters in height. The population's survival was in considerable doubt before their relocation to the planet Verdanis by the crew of the U.S.S. Enterprise.
|
<urn:uuid:5dd613d4-f32b-4faf-acad-7ee4b5d5783b>
|
CC-MAIN-2016-26
|
http://www.startrek.com/database_article/terra-ten
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392099.27/warc/CC-MAIN-20160624154952-00011-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.973928 | 190 | 3.328125 | 3 |
Landscape art and related genres (seascapes, riverscapes, cityscapes, and so on) encompass illustrations and paintings of outdoor scenery. Natural in setting, landscapes often focus on features such as mountains, trees, plants, or rivers. This environment-oriented content makes for an excellent way to introduce your child to the artistic process.
In this activity, invite your child to capture the world around her with this outdoor scene clayscape, which will let her explore clay in a very different way than she may be used to. This activity encourages and supports creative development and critical thinking skills.
What You Do:
- Ask your child to find an interesting outdoor scene. This can be your yard, the park, a photograph from a book, an illustration or painting, or a vacation photo.
- Using a pencil, encourage your child to sketch the landscape onto the cardboard background. This can be an excellent opportunity to discuss art vocabulary such as perspective, horizon line, shape, foreground, middle ground, and background.
- Give your child an assortment of clay colors. Have her apply the clay over her landscape sketch filling in as much of the cardboard as desired. This should be done by blending and smoothing a thin layer of clay over the surface of the cardboard. She will essentially be "painting" the surface of the cardboard with the clay. Clay colors may be layered together to form new shades. Each landscape object should be a distinct color. For example, a tree should be a different color than a mountain against it in the background.
- Finally, she'll use the homemade clay tools to create interesting textures or patterns in the landscape and finish off the scene by smoothing any rough edges and adding in details.
The finished product will look more like paint than clay. It may have an impressionist-style quality to it. Try adding this activity to a post-museum-visit discussion or when reading a book about artists. This is a great activity for your child to do outside over the summer to keep her mind active and those creative faculties fresh!
Erica Loop has an MS in Applied Developmental Psychology from the University of Pittsburgh's School of Education. She has many years of teaching experience working in early childhood education, and as an arts educator at the Carnegie Museum of Art in Pittsburgh.
|
<urn:uuid:14d1aff4-f579-48bd-bd72-29c34c14552f>
|
CC-MAIN-2016-26
|
http://www.education.com/activity/article/outdoor-scene-clayscape/?coliid=655408
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398628.62/warc/CC-MAIN-20160624154958-00033-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.937164 | 476 | 3.78125 | 4 |
Miltenberg, Germany – Rhine River Cruise:
|RIVER CRUISE GUIDE|
Miltenberg is bounded by (from the north and clockwise) the city of Aschaffenburg, the districts of Aschaffenburg and Main-Spessart, and the states of Baden-Württemberg (districts of Main-Tauber and Neckar-Odenwald) and Hesse (district of Odenwaldkreis).
During the Middle Ages there was continuous fighting between the bishops of Mainz and the counts of Rieneck. Both attempted to rule the region and erected castles in the Spessart mountains. Later, other tiny counties became involved in these fights as well.
During the 13th C the cities along the Main River emerged. Thanks to trade on the river their wealth grew, and this became a very prosperous region. Prosperity ended abruptly in the Thirty Years’ War (1618-1648), when the area was devastated and depopulated.
In 1803 the clerical states of Germany were dissolved, among them the bishopric principality of Mainz. In 1816 the state of Bavaria managed to annex the entire region.
The district of Miltenberg was established in 1972 by merging the former districts of Miltenberg and Obernburg. The original settlers in the area of present day Miltenberg were the Romans who built two castles here for the protection of the Outer Limes, the northernmost walled frontier of their Empire at the time.
The Road of the Nibelungen passes through Miltenberg as well.
At present, Miltenberg is a lively, romantic and medieval town. The timber-frame houses create an inimitable feeling of German life in the 16th and 17th C. In the same way, the gabled houses in their pristine condition reinforce the impression of town virtually transported from the medieval age into modern times. The extent of the timber-frame construction in this prosperous community is easily explained by the local legislation which granted the citizens of Miltenberg free construction timber for their city homes from the forests which were communally owned.
Furthermore, neither war damage nor fire inflicted any harm upon this unique medieval architectural set up. Historians agree that the relative economic slowdown during much of the 18th and 19th C in the area was nothing short of a blessing in disguise for the preservation of the authentic architectural monuments here. Even affluent citizens were not able to follow the vagaries of fashion and destroy their old homes and replace them with more desirable, contemporary constructions.
The key event in the history of this little community was the construction of the Fortress Mildenburg by the Archbishops and Prince Electors of Mainz in 13th C. Present day Miltenberg really developed under the protection of the powerful Fortress from 1230 on. The Prince Electors and Archbishops of Mainz used Mildenburg as a Tax and Customs collection point as much as frontier garrison on their outer borders towards the Principality of Würzburg.
Significant wine production contributed to the general wealth of the city and affluence of its citizens during the 14th C. Wine was being exported both to Nuremberg and Frankfurt. The special privilege to hold St Michael’s Fair was granted to the citizens of Miltenberg by the Archbishops and Prince Electors of Mainz in 1367. The census of 1620 showed not one farmer living in town. Their prosperity allowed the rich burghers to procure their food for the money from the neighbouring villages.
Unfortunately, both the excellent location and the affluence became heavy liabilities during the Thirty Years’ War which devastated Germany and large areas of Central Europe from 1618-1648. Miltenberg was besieged and pilfered and looted on several occasions during this long lasting continental conflict.
The town changed hands four times during the early 19th C as Napoleon chose to redraw the political map of Germany to serve his political interests and ambitions. Eventually, Bavaria took over the territory in 1816.
The present day Castle Mildenburg was originally built by the Archbishops of Mainz as a military outpost-fortress to secure their defense against the growing threat coming from Würzburg. The residential part of the Castle, the so-called ‘Palas’ was added much later by Archbishop Konrad von Weinsberg between 1390 and 1396.
Mildenburg was seriously damaged during the ‘War of the Landgraves‘ in 1552. The main building was reconstructed in Renaissance style in 1556. The Archbishops and Prince Electors of Mainz lost this property in 1803 to the Princes of Leinigen who in turn sold it as a private residence. The castle was thus in private hands from 1807-1979. Since that time, this impressive monument of German culture has been the property of the city of Miltenberg.
|
<urn:uuid:3135a414-1bc1-4a07-8807-517ab6f543ff>
|
CC-MAIN-2016-26
|
http://www.travelsignposts.com/Germany/destinations/miltenberg
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396959.83/warc/CC-MAIN-20160624154956-00001-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.966273 | 1,085 | 3.046875 | 3 |
Closed-loop control over the complete process is here. Then again, it’s been here all along.
Open-loop systems become closed-loop systems—loosely defined—whenever they’re put to use. A human closes the loop. The water pouring into your bathtub doesn’t zero in on the temperature you want the way your thermostat-controlled air conditioning does, so you feel the water yourself and adjust the knobs accordingly.
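A minimal sketch of that feedback idea, with an invented setpoint, gain and plant response standing in for the thermostat:

```python
# Minimal proportional feedback loop: measure, compare to a setpoint, correct.
# Setpoint, gain and the toy "plant" response are invented for illustration.

setpoint = 38.0      # target water temperature, deg C
temperature = 20.0   # current measured temperature, deg C
gain = 0.5           # proportional gain

for step in range(8):
    error = setpoint - temperature   # feedback: how far off are we?
    temperature += gain * error      # toy plant: the correction shifts the output
    print(f"step {step}: {temperature:.1f} C")

# An open-loop system would pick a knob position once and never re-measure,
# which is why you end up testing the bathwater with your own hand.
```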
Machining processes employ a similar mechanism. The part is designed, machined, inspected . . . but then comes the hand in the bath water. The inspector compares the actual measurements to what they should be, and engineers analyze any discrepancies to determine how the process or part should change.
This loop can be slow to close. And just as with a CNC servo loop, the slow update time compromises either speed or accuracy. Lead time is lost while humans interpret the data, or else parts are scrapped if the process keeps running while the interpretation is going on.
One perspective on this problem comes from Xygent, a company that began its life as a software division of Brown and Sharpe. Xygent is working toward an open metrology operating system that would (among other benefits) make it practical to capture metrology data as a virtual model that could be used throughout the process and compared to the CAD ideal. (See article)
A representative of the company recently pointed out an area that stands to benefit: prototype manufacturing. Metrology-based models of mating parts could be given virtual loads to simulate fastening, then compared to see how well the parts fit.
Prototype factories do that same job today. Mating test parts may be shipped to a prototype assembly area from shops in various locations. All this assembly area needs is information—namely, whether the parts will fit together—but the most practical medium for conveying that information is the part itself! As a result, transport time is added to the duration of the loop. Metrology-based models could overcome this delay.
The same models could also speed the process inside the shop. With the machined part model and the CAD model brought together, CAM software might adjust the process automatically in response to any differences. Such a process would be “closed-loop” even by a strict definition of the term. It would also close the loop more quickly than the same loop can be closed today.
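As a toy illustration of what such an automatic adjustment might look like, the sketch below compares measured features against their CAD nominals and feeds a damped correction back into the tool offsets. The feature names, values, tolerance band and damping factor are invented assumptions, not anything from Xygent's software; a real implementation would live inside the CAM/CNC toolchain and metrology system.

```python
# Toy closed-loop correction: compare measured features to CAD nominals and
# feed a damped adjustment back into the tool offsets.
# Feature names, values, tolerance and damping are illustrative assumptions.

cad_nominal_mm = {"bore_diameter": 25.000, "flange_thickness": 6.500}
measured_mm = {"bore_diameter": 24.982, "flange_thickness": 6.512}

tolerance_mm = 0.005   # leave the process alone inside this band
damping = 0.8          # apply only part of the error to avoid over-correction

tool_offset_mm = {}
for feature, nominal in cad_nominal_mm.items():
    deviation = measured_mm[feature] - nominal
    # Compensate against the measured error only when it exceeds the band.
    tool_offset_mm[feature] = -damping * deviation if abs(deviation) > tolerance_mm else 0.0
    print(f"{feature}: deviation {deviation:+.3f} mm -> offset {tool_offset_mm[feature]:+.3f} mm")
```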
|
<urn:uuid:04037827-7d61-4ffc-831c-a54ed162ac8a>
|
CC-MAIN-2016-26
|
http://www.mmsonline.com/columns/the-loop
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395346.6/warc/CC-MAIN-20160624154955-00020-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.941724 | 507 | 2.84375 | 3 |