Hackberry trees (Celtis occidentalis) are quick-growing deciduous trees that bring visual interest, fruit and shade to your home garden. Their fiery-hued berries darken to purple, their green leaves turn yellow in fall, and the trees reach heights of up to 80 feet with a spread of nearly 50 feet. While hackberries perform well in a wide variety of settings, from urban areas to the center of your landscape, their many special features call for special care.
Planting Location and Essential Care
Hackberries thrive in full sun to partial shade and are found in nature growing in moist soil. Though they prefer high fertility and a pH of 6.0 to 8.0, these drought-tolerant trees thrive in most soil types. However, to avoid the leaf drop that can occur during drought, water when the top layer of soil feels dry to the touch, but avoid wetting the trunk area, as moisture there may encourage disease development. You may also add a 2- to 3-inch layer of organic mulch, such as wood chips, to the soil surrounding your tree, without pressing it against the trunk. Mulch increases fertility, conserves moisture and suppresses weeds that may attract pests to the garden. In addition, mulch helps lower soil pH, which is helpful if your soil tends to become highly alkaline, a condition to which hackberries are only somewhat tolerant. Hackberry trees are most successful in U.S. Department of Agriculture plant hardiness zones 3a through 9b.
Pruning and Clean Up
Regular pruning is required for the hackberry tree, as weak growth may lead to branch breakage. Take pruning seriously whether your tree is young or mature, as the first 15 years of this tree's life are quite formative in creating a sturdy structure. Prune during the dormant season to avoid creating accidental wounds. Removal of weak or dying branches or those that grow in a direction counter to the majority of the other branches is essential. For a strong structure, prune to create wide crotches, rather than narrow crotches with branches that grow vertically. In addition to keeping trees tidy with pruning, you need to clean up berry litter, which can be messy and stain porous surfaces.
Hackberry trees are susceptible to infestations of hackberry woolly aphids. These pests display fuzz-covered bodies measuring approximately 1/10 inch in diameter. The wool-like covering results in a fluffy white appearance. Primarily feeding on leaves, these aphids suck sap from plants. As they feed, they release a sticky substance called honeydew, which drips down onto any plant parts in its path. With honeydew often comes the development of a fungus called sooty mold. This black growth may cause the hackberry problems with sun absorption if leaves become saturated with fungus. However, the pest is messier than it is harmful. For some control of this pest, you may release natural enemies, such as convergent lady beetles. Spraying leaves with horticultural oil may also assist in control, killing active aphids on contact and killing overwintering eggs in dormancy. Oils provide no residual control and must be reapplied.
Wood decay may cause problems for hackberry trees. Garden carefully around these trees, as the smallest mechanical injury creates an entry point for the fungi that lead to wood decay diseases. These rot diseases kill the tree's wood, often destroying the inner, unseen portions of the tree. Remove and destroy affected branches. Trees with extensive decay should be removed, as they create a hazard of disease spread and falling. Witches' broom fungal disease also attacks hackberry trees. Caused by powdery mildew disease fungi in combination with the feeding of eriophyid mites, this issue leads to distorted buds, which yield clusters of slim, short shoots. Because the disease does not affect vigor, you may remove clusters on younger trees if you desire. However, there is no cure and control is typically unnecessary.
An antihistamine is any of a group of substances that block the effects of histamine, a chemical released during allergic reactions (see allergy). Antihistamines are used to relieve the symptoms of hay fever (allergic rhinitis) and hives (urticaria) and other rashes. They are sometimes used in cough and cold remedies because they dry up a runny nose and suppress the nerve centers in the brain that trigger the cough reflex. They are also used in antiemetic drugs, because they suppress the vomiting reflex.
Antihistamines are usually taken orally but may be given by injection in an emergency to aid in treating anaphylactic shock (a life-threatening allergic reaction).
How antihistamines work
Antihistamines block the effect of histamine on tissues such as the skin, eyes, and nose. Without treatment, histamine dilates (widens) capillaries, resulting in redness and swelling of the surrounding tissue due to the leakage of fluid from the circulation. Antihistamines also prevent histamine from irritating nerve fibers, which would otherwise cause itching.
Sources: U.S. National Library of Medicine; The British Medical Association.
Whole organs cannot be cultured, but pieces of organs can be grown on artificial medium. In organ culture, care should be taken to handle the tissue so that it is not damaged; organ culture therefore demands more tactful manipulation than tissue culture. The culture media on which organs are cultured are the same as those described for cell and tissue culture. However, it is easier to culture embryonic organs than those of adult animals, and the methods for culturing embryonic and adult organs differ. In addition, culture of a whole or partial adult organ is difficult because it requires a high concentration of O2 (about 95%). Special serum-free media (e.g. T8) and special apparatus (Trowell's Type II culture chamber) are used for adult organ culture. Embryonic organs can be cultured by any of the following three methods:
Organ Culture on Plasma Clots
A plasma clot is prepared by mixing five drops of embryo extract with 15 drops of plasma in a watch glass placed on a cotton wool pad. The cotton wool pad is put in a Petri dish, and the cotton is moistened from time to time so that excessive evaporation does not occur. Thereafter, a small piece of organ tissue is placed on top of the plasma clot in the watch glass. In a modified technique the organ tissue is placed on a raft of lens paper or rayon; the raft makes the tissue easy to transfer, and excess fluid can also be removed.
Organ Culture on Agar
Culture medium solidified with agar is also used for organ culture. The nutrient agar medium may or may not contain serum. When agar is used in the medium, no extra mechanical support is required, since the agar does not allow the support to liquefy. Tumours obtained from adults fail to survive on agar media, whereas embryonic organs grow well. The medium consists of agar (1% in basal salt solution), chick embryo extract and horse serum in the ratio 7:3:3.
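The fixed component ratios above (7:3:3 for the agar medium, and likewise the 5:15 embryo extract-to-plasma mix used for plasma clots) lend themselves to a simple scaling calculation when a different total volume is needed. The sketch below is illustrative only and not from the source; the function name and the 26 ml target volume are arbitrary choices.

```python
# Illustrative helper (an assumption, not part of the protocol text):
# split a desired total volume according to fixed integer ratio parts.
def component_volumes(ratio, total_volume):
    """Return the volume of each component for the given ratio parts."""
    parts = sum(ratio.values())
    return {name: total_volume * r / parts for name, r in ratio.items()}

# The 7:3:3 agar medium described above, scaled to a 26 ml batch.
medium = component_volumes(
    {"agar (1% in BSS)": 7, "chick embryo extract": 3, "horse serum": 3},
    total_volume=26.0,
)
for name, ml in medium.items():
    print(f"{name}: {ml:.1f} ml")
```

The same helper applies to the plasma-clot mix by passing a 5:15 ratio, since only the proportions, not the absolute drop counts, matter when scaling.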
Organ Culture in Liquid Media
The liquid media consist of all the same ingredients except agar. When liquid media are used for organ culture, a perforated metal gauze, cellulose acetate or a raft of lens paper is generally used to provide support.
Whole Embryo Culture
During the 1950s, Spratt studied how metabolic inhibitors affect the development of embryos in vitro; a 40-hour-old embryo was cultured for another 24-48 h in vitro until it died. For embryo culture, a suitable medium is prepared and poured into watch glasses, which are then placed on moist absorbent cotton wool pads in Petri dishes. For the culture of chick embryos, eggs are incubated at 38°C for 40-42 h so that a dozen embryos can be produced. The egg shell, sterilized with 70 per cent ethanol, is broken into pieces and transferred into 50 ml of BSS. The vitelline membrane covering the blastoderm is removed and kept in a Petri dish containing BSS, and any adherent vitelline membrane is removed with forceps. The embryo is observed under a microscope so that the developmental stage of the blastoderm can be determined. The blastoderm is then placed on the medium in a watch glass set on a sterile absorbent cotton wool pad in a Petri dish. Excess BSS is removed from the medium, and the chick embryo culture is incubated at 37.5°C for further development.
Education Nonprofit to Help Expand Math, Science Program in Oakland Middle Schools
Oakland Unified School District this week became a beneficiary of a major federal grant that will bring science, technology, engineering and math - STEM - educational experiences to as many as two thousand OUSD students.
It is one of many efforts underway to close a "digital divide" in Oakland in which low-income students have less access to the Internet and connected computers.
The U.S. Department of Education this week awarded a $3 million "Investing in Innovation" grant to Citizen Schools, a non-profit that plans to use it in 23 school districts across the country, including Oakland Unified. Citizen Schools' winning proposal, Closing Inspiration and Achievement Gaps in STEM with Volunteer-Led Apprenticeships, will set up and expand after-school apprenticeship programs in Oakland, pairing students with tech professionals who involve them in hands-on engineering and computer science projects. Citizen Schools will be recruiting tech volunteers in Oakland.
"These hands-on STEM apprenticeships not only help students build skills but also spark their interest in STEM subjects," said Stacey Gilbert Lee of Citizen Schools when asked about the program that has not yet been formally announced. In Oakland, Citizen Schools will expand a program it already started in three middle schools.
Much is being done in Oakland to try to close the digital divide, with a host of non-profit organizations collaborating with the school district to bring computers into classrooms and train students in digital tools. Yet other organizations work over the summer through summer camps and programs at recreation centers.
This happens as the stakes for being left behind in digital literacy and Internet access become increasingly high in a world that revolves around the Internet.
"As more information becomes electronic, the inability to get online can leave entire communities at an extremely dangerous disadvantage," notes Kimberly Bryant, founder of Black Girls Who Code, which ran a summer camp in Oakland last June.
Yet, according to estimates of Oakland Mayor Jean Quan's administration and the Pew Research Center, about 50 percent of OUSD children whose families earn less than $30,000 a year do not have Internet access at home. That income is the benchmark for qualifying for the federal free and reduced lunch program and 69 percent of OUSD students qualify.
In a loose survey of West Oakland residents done this year by Oakland Technology Exchange West (OTX), another non-profit working hard to close the digital divide, only 22 percent had both Internet access and a currently working computer. Some had Internet access but no working computer; others had no computer at home. OTX gives away free computers to OUSD high school and middle school students who take its one-afternoon course.
OTX is yet another of the plethora of organizations trying to bridge the divide.
At OTX's vast West Oakland warehouse, retired IBM executive and OTX founder Bruce Buckelew, along with his small staff of local hires, arranges for thousands of refurbished computers to be delivered to public schools across Oakland. Collecting computers from corporations when they replace their stock and refurbishing them to like-new condition, OTX has over the years provided 35,000 computers to Oakland school children and low-income adults. It has delivered 18,000 computers to OUSD schools alone, charging the school district about $240 per computer, and has handed out another 17,000 to Oakland kids who come with a parent to take a one-afternoon course in computer basics at OTX's plant. OTX has also supplied free computers to adults who volunteer time refurbishing donated computers.
Then there's the work of Techbridge. On a recent afternoon at Oakland Technical High School, 25 girls hovered over computers and small robots, working on html coding and figuring out how to arrange gears on the robots so they will move to software commands. The teenage girls are part of the Techbridge after-school program at Oakland Tech, one of nine sites Techbridge operates in Oakland with the aim of getting girls interested in math, science and computer programming. For many of the students, the sessions are the first time they've used computers for anything other than email and viewing YouTube.
In yet another part of Oakland on weekday afternoons, Media Enterprise Alliance hosts dozens of high school students in an OUSD linked learning program about video production. One recent afternoon, four students were editing footage on a wide Mac computer monitor using professional quality Final Cut Pro software, while other students were synching sound with visuals at another computer and a third team filmed an interview. Their project is about gang injunctions in Oakland. If it turns out well, the film might be aired by KQED, said Jeff Key, Media Enterprise Alliance executive director.
Oakland City Government itself is on a bandwagon to bridge the digital divide. Last year it launched "Get Connected Oakland" with the help of OTX and a federal program. Get Connected put 1,500 Internet-connected computers in two public places within Oakland Housing Authority campuses.
Additionally, it began promoting the "Internet Essentials" program rolled out by Comcast Cable Co. Through Internet Essentials, low-income families can purchase Internet access for $9.95 a month and a refurbished computer for $150 if children in the family qualify for the federal free and reduced lunch program.
Quan said she launched Get Connected because too many Oakland residents were being left behind in the digital world. “For 40 percent of Oaklanders, the public library is their most consistent way of getting online,” she said when launching Get Connected Oakland in April of 2011.
Gilbert Lee, of Citizen Schools, which learned this week that it will receive the $3 million in federal money, said the divide threatens the national economy as well as individuals.
"We are finding across the country in low income communities there is a digital divide. As a country, we need more STEM professionals but we are not preparing our students to become STEM professionals."
It will be seeking volunteers in computer and science fields to help its program in Oakland. Visit Citizen Schools for details.
Source: Oakland Local [http://m.oaklandlocal.com/article/education-nonprofit-gets-3-million-expand-math-science-program-oakland-middle-schools]
In a recent survey and study published by the Thomas B. Fordham Institute, which focuses on education issues and public policy, parents were split as to their priorities in K-12 education.
The authors, Dara Zeehandelaar and Amber Winkler (both of whom hold Ph.D.s), found that parents all want a core curriculum based on reading and math with emphases in science, technology, engineering and math education (also known as STEM), but that they also wanted a variety of specializations.
Some of the highly-prioritized specializations and emphases for parents were:
- Life skills;
- High academic standards;
- Programs for gifted students;
- Character development;
- Using technology;
- High standards for behavior; and
- Hands-on learning
The specializations that parents put in the middle of their list of concerns, at least in the survey, were:
- Ability grouping;
- Extracurricular activities, other than sports;
- Vocational classes;
- Parental involvement;
- Diversity in the student body;
- High test scores; and
- Test preparation.
What were the lowest priorities among parents? They were “small enrollment,” “after-school programs” and “strong athletics,” with “school uniforms” rounding out the bottom. Thus do parents dismiss the silver bullets of putative education reformers.
Students also had a different take on education as far as their priorities and rankings go:
- Self-discipline and study habits;
- Communication skills;
- Critical thinking;
- College preparedness;
- Social skills;
- Love of learning;
- Identify personal interests;
- Self-esteem; and
- Strong morals
What were the lower-ranking aspects of the student survey? That would be “values diversity” and “knows importance of college” along with “job skills,” “foreign language,” and “appreciation of nature.”
Spencer Irvine is a staff writer at Accuracy in Academia.
What is Co-operative Education?
- An integration of a student's academic study with practical work
- An opportunity to obtain secondary school credits through supervised placements with host employers
- An opportunity to apply and expand knowledge and skills learned in the classroom
- Courses in various disciplines may be offered through the cooperative education program, which can benefit all students, whatever their postsecondary destination
Where does Co-op happen?
- Students are placed in settings that provide challenging roles, in the communities and surrounding-area businesses that support each of the local schools
Why take Co-op?
Co-op gives students the opportunity to:
- Experience future careers
- Develop employability skills
- Strengthen interpersonal skills
- Increase an awareness of the importance of life long learning
How to apply?
- Contact the Co-op Coordinator in your high school.
- Request Co-op when registering for classes.
- Fill out an application form in the Co-op office of your school.
- Wait to be contacted for an interview.
How Are Students Assessed?
- A qualified teacher will assess and evaluate a student’s progress in achieving the expectations identified in the student’s Personalized Placement Learning Plan through regular workplace monitoring visits (a minimum of three per credit).
- Student achievement is also assessed through:
- Written assignments, seminar presentations and reflective journals
- Career portfolios
- A culminating independent-study assignment that links the student’s cooperative education placement experience with the curriculum expectations of the related course
- Performance appraisals written by the placement supervisor (a minimum of two).
Working this Summer?
Summer is almost here and many students in Grade 7 – 12 will soon begin summer jobs. It is everyone's responsibility to help them be safe.
The Ministry of Education and Ministry of Labour have prepared information sheets to increase awareness of young people's legal rights and duties in the workplace.
Parent information sheet (pdf)
Student information sheet (pdf)
Your Rights @ Work (pdf)
Israel's History in Pictures: Rav Kook in the USA - 1924
Rabbi Abraham Isaac Kook (1865-1935) was a renowned Talmud scholar, Kabbalist and philosopher. He is considered today as the spiritual father of religious Zionism, embracing the Zionist movement as a manifestation of Divine Will and the beginning of the redemption of the Jewish people.
Born in what is today Latvia, Rabbi Kook moved to Ottoman-ruled Palestine in 1904 to take the post of Chief Rabbi of Jaffa. He appears in many of the historic pictures taken by the American Colony photographers, usually as an unidentified bystander. One photograph, from the Library of Congress' larger collection, identifies the rabbi, but the surroundings do not appear to be in the Land of Israel; in fact, they look remarkably like a street scene in the United States.
In fact, the picture was taken in Washington DC before or after Rabbi Kook met with President Calvin Coolidge in the White House.
It's a historic fact that Coolidge was in Washington on April 15, 1924, the same day Rabbi Kook's photo was taken. On that day Coolidge threw out the first ball at a Washington Senators baseball game where Walter Johnson shutout the Athletics. Coolidge also spoke at the dedication of the "Arizona Stone" in the Washington Monument.
The picture of the rabbi appears in a larger set of unaccredited pictures taken that week of well-known Washington politicians including Coolidge, the White House press corps, Senate leaders William Borah and Burton Wheeler, the Federal Oil Reserve Board, and more.
But why did Coolidge meet Rabbi Kook, and what was the rabbi doing in Washington?
Rabbi M. M. Epstein, apparently on a ship
According to an article by Joshua Hoffman in Orot in 1991, Rabbi Kook, then Chief Ashkenazi Rabbi in the Land of Israel, headed a delegation of rabbis to the United States in March 1924 to raise funds for yeshivot in Europe and Eretz Yisrael. He was joined by Rabbi Moshe Mordechai Epstein (pictured left), the head of the Slabodka yeshiva in Lithuania, and Rabbi Avraham Dov Baer Kahana Shapiro, the Rabbi of Kovno and president of the Rabbinical Association of Lithuania. The three rabbis were brought to America by the Central Committee for the Relief of Jews Suffering through the War, better known as the Central Relief Committee (CRC).
According to Hoffman, "The rabbis had originally planned to stay in America for about three months. However, because their fund-raising efforts were not as successful as had been hoped, they remained for eight months. In the end, they raised a little over $300,000, far short of the one million dollar goal which the CRC had set."
Hoffman described the April 15 conversation between the president and the rabbi: "Rav Kook thanked the President for his government's support of the Balfour Declaration, and told him that the return of the Jews to the Holy Land will benefit not only the Jews themselves, but all mankind throughout the world.... The President responded that the American government will be glad to assist Jews whenever possible."
Readers Send a Picture from Rabbi Kook's Meeting at the White House
The caption reads "Central Relief Committee at the White House"
"Yitz" and "Menachem" sent the following comment and photograph:
"I actually have an original of photo of Rabbi Kook and his committee including my Great-Grandfather who served as a translator outside the White House after meeting the President. I had never seen this image until recently when I found it among his son's possessions when I cleaned out his apartment."
Their grandfather, Rabbi Aaron Teitelbaum, played an important role in the meeting, according to this account.
At the meeting, Rav Kook thanked the President for his government’s support of the Balfour Declaration, and told him that the return of the Jews to the Holy Land will benefit not only the Jews themselves, but all mankind throughout the world. He quoted the Talmudic sages as saying that no solemn peace can be expected unless the Jews return to the Holy Land, and therefore their return is a blessing for all the nations of the earth. Rav Kook also expressed the gratitude of Jews throughout the world towards the American government for aiding in relief work during the war. He said that America has always shown an example of liberty and freedom to all, as written on the Liberty Bell, and that he hoped that the country will continue to uphold these principles and render its assistance whenever possible.
The speech, written in Hebrew, was delivered in English by Rabbi Aaron Teitelbaum, executive secretary of the CRC. Rav Kook answered “Amen,” and explained that since he wasn’t fluent in English, he had Rabbi Teitelbaum read his message. By answering “Amen,” he indicated that he consented to every word that had been read. The President responded that the American government will be glad to assist Jews whenever possible. Before leaving Washington, Rabbis Kook and Teitelbaum held a meeting of local rabbis and community leaders to raise money for the Torah Fund
The New York Times' Report on Rabbi Kook's 1924 Visit to the United States and Canada
The Mayor welcomed "the distinguished Jews from the old world.... We are privileged," he continued, "to greet teachers and spiritual leaders whose intellectual achievements are in themselves worthy of special recognition."
"Rabbi Kook and his companions have undertaken the long and fatiguing journey to the United States and Canada to deliver in person a message to their co-religionists [that] unless the Jewish schools and seminaries in Eastern Europe and Palestine continue to receive ... the support of the American Jews, hundreds of ...educational institutions will have to be closed in 1,300 Jewish communities in the war-stricken lands of Europe. A half a million children... will grow up without religious and secular education..."
British High Commissioner Herbert Samuel and Rabbi Kook visiting a Jewish neighborhood in Jerusalem (1925). In the white suit is the mysterious Mendel Kremer, known as a German and British spy. (Central Zionist Archives, Harvard)
More historical pictures and essays at www.israeldailypicture.com. Descriptions based on photo-essays by Lenny Ben-David.
Use chia seeds in place of flax seeds in recipes.
Chia, or Salvia hispanica, is a mucilaginous seed that is grown for its healthy edible sprouts. Mucilaginous seeds form a slippery gel coating when thoroughly wet that supplies all the nutrients they need to grow. Plant them successfully on any moisture-holding surface, such as burlap sacks or special sprouting mats called baby blankets. Chia kits, which include special clay forms to start the seeds on, are also available. Simple to grow, chia is an easy addition to your edible indoor garden.
Choose your growing medium, whether it is a clay form, grow mat or burlap sack. Lay the growing medium in a cookie sheet or empty tray and moisten thoroughly with water.
Spread the chia seeds onto the growing medium. Spread them evenly and attempt to have no seeds touching each other.
Cover the seeds with a second tray or lay plastic wrap loosely on top to help hold in moisture. Chia prefer a warm, dimly lit and humid environment to sprout in.
Mist the growing medium with water every 1 to 2 days. Keep it moist, but not soaking wet.
Remove the tray cover or plastic wrap once seeds sprout in 3 to 5 days. Move chia sprouts to a better-lit area, such as a sunny window or a brightly lit room. Keep the growing medium moist.
Harvest chia sprouts once the leaves are green and fully open. Cut the sprouts off along the stems just above the growing medium surface.
What is a Lumpectomy?
Lumpectomy is a surgical procedure that involves removing a suspected malignant (cancerous) tumor, or lump, and a small portion of the surrounding tissue from a woman's breast. This tissue is then tested to determine if it contains cancerous cells. A number of lymph nodes may also be removed to test them for cancerous cells (sentinel lymph node biopsy or axillary dissection). If cancerous cells are discovered in the tissue sample or nodes, additional surgery or treatment may be necessary. Women who undergo a lumpectomy normally receive radiation therapy (RT) for about six weeks following the procedure to kill any cancer cells that may have been missed with the removal of the tumor. Lumpectomy is also referred to as partial mastectomy, wedge resection, breast conserving therapy, wide excision biopsy, tylectomy, segmental excision, and quadrantectomy.
A few decades ago, the standard surgical procedure to treat breast cancer was radical mastectomy, which involves the complete removal of the breast, muscles from the chest wall and all the lymph nodes in the armpit. Lumpectomy replaced radical mastectomy as the preferred surgical treatment because lumpectomy is designed to leave the natural appearance and cosmetic quality of the breast mostly intact while removing the malignancy. In addition, studies have shown that lumpectomy with radiation treatment is as effective as mastectomy in treating breast cancer.
The size and location of the lump determine how much of the breast is removed during a lumpectomy. A quadrantectomy, for example, involves removing a quarter of the breast. Before surgery, a woman should discuss with her doctor how much of the breast will be involved so that she can know what to expect.
The size of the cancer in relation to the size of the breast is the main factor a woman's doctor considers in deciding whether lumpectomy is an appropriate treatment. Certain features of the cancer (whether it is confined to one area of the breast and does not involve the skin or chest wall) also help the doctor decide. Most women who are diagnosed with breast cancer, especially those who are diagnosed early, are considered good candidates for lumpectomy. Under some circumstances, however, lumpectomy is not recommended. These factors include the following:
- Multiple cancers in separate locations of the same breast: The malignant tissue cannot all be removed from a single location, so the breast may become drastically disfigured as a result of lumpectomy.
- Prior lumpectomy with radiation: Women who have had a lumpectomy with radiation therapy to remove cancer cannot have more radiation; therefore, they usually need a mastectomy if they experience cancer again in the same breast.
- Extensive cancer: Since a lumpectomy removes a specific area with malignancy, this surgery option would be inappropriate if the cancer has spread to other locations.
- Problematic tumors: A tumor that is rapidly growing or has attached itself to a nearby structure, such as the chest wall or skin, may require surgery that is more extensive to remove the tumor.
- Pregnancy: Radiation therapy, which usually follows the lumpectomy, can damage the woman's fetus.
- Large tumors: Lumpectomy to remove a tumor that is larger than 5 cm in diameter may drastically disfigure the breast. In some cases, however, chemotherapy or endocrine therapy may shrink the tumor to a size that is more manageable with lumpectomy. Small breasts, especially those that contain large lumps, may also be drastically disfigured after lumpectomy.
- Preexisting conditions that make radiation treatment more risky than usual: Radiation treatment may scar or damage connective tissue in women with collagen vascular diseases, such as scleroderma or lupus erythematosus.
- Prior radiation to the chest area, for instance, to treat Hodgkin's disease.
Some women may prefer the idea of a mastectomy to lumpectomy in order to feel more confident that they will not develop breast cancer again. Other women may not feel comfortable with radiation therapy or be able to commit to a series of radiation treatments, which may involve an unacceptable time commitment or extensive travel. In most situations, though, women can safely choose between lumpectomy and mastectomy.
Medically Reviewed by a Doctor on 12/17/2015
Leigh A Neumayer, MD, MS, FACS
Dementia
Is this topic for you?
Alzheimer's disease is the most common cause of dementia. This topic focuses on other conditions that cause dementia. For more information on Alzheimer's, see the topic Alzheimer's Disease.
What is dementia?
We all forget things as we get older. Many older people have a slight loss of memory that does not affect their daily lives. But memory loss that gets worse may mean that you have dementia.
Dementia is a loss of mental skills that affects your daily life. It can cause problems with your memory and how well you can think and plan. Usually dementia gets worse over time. How long this takes is different for each person. Some people stay the same for years. Others lose skills quickly.
Your chances of having dementia rise as you get older. But this doesn't mean that everyone will get it.
If you or a loved one has memory loss that is getting worse, see your doctor. It may be nothing to worry about. If it is dementia, treatment may help.
What causes dementia?
Dementia is caused by damage to or changes in the brain. Things that can cause dementia include:
- Alzheimer's disease.
- Strokes, tumors, or head injuries. This type of dementia is called vascular dementia.
- Diseases, such as Parkinson's disease, dementia with Lewy bodies, and frontotemporal dementia.
In a few cases, dementia is caused by a problem that can be treated. Examples include having an underactive thyroid gland (hypothyroidism), not getting enough vitamin B12, and fluid buildup in the brain (normal-pressure hydrocephalus). In these cases, treating the problem may help the dementia.
In some people, depression can cause memory loss that seems like dementia. Depression can be treated.
As you age, medicines may affect you more. Taking some medicines together may cause symptoms that look like dementia. Be sure your doctor knows about all of the medicines you take. This means all prescription medicines and all over-the-counter medicines, herbs, vitamins, and supplements.
What are the symptoms?
Usually the first symptom is memory loss. Often the person who has a memory problem doesn't notice it, but family and friends do. As dementia gets worse:
- You may have more trouble doing things that take planning, like making a list and going shopping.
- You may have trouble using or understanding words.
- You may get lost in places you know well.
Over time, people with dementia may begin to act very differently. They may become scared and strike out at others, or they may become clingy and childlike. They may stop brushing their teeth or bathing.
Later, they cannot take care of themselves. They may not know where they are. They may not know their loved ones when they see them.
How is dementia diagnosed?
There is no single test for dementia. To diagnose it, your doctor will:
- Do a physical exam.
- Ask questions about recent and past illnesses and life events. The doctor will want to talk to a close family member to check details.
- Ask you to do some simple things that test your memory and other mental skills. Your doctor may ask you to tell what day and year it is, repeat a series of words, or draw a clock face.
The doctor may do tests to look for a cause that can be treated. For example, you might have blood tests to check your thyroid or to look for an infection. You might also have a test that shows a picture of your brain, like an MRI or a CT scan. These tests can help your doctor find a tumor or brain injury.
How is it treated?
There are medicines you can take for dementia. They cannot cure it, but they can slow it down for a while and make it easier to live with.
As dementia gets worse, a person may get depressed or angry and upset. An active social life, counseling, and sometimes medicine may help with changing emotions.
If a stroke caused the dementia, there are things you can do to reduce the chance of another stroke. Make healthy lifestyle changes including eating healthy, being active, staying at a healthy weight, and not smoking. Manage other health problems, such as diabetes, high blood pressure, and high cholesterol.
How can you help a loved one who has dementia?
There are many things you can do to help your loved one be safe at home. For example, get rid of throw rugs, and put handrails in bathrooms to help prevent falls. Post reminder notes around the house. Put a list of important phone numbers by the telephone. You also can help your loved one stay active. Play cards or board games, and take walks.
Work with your loved one to make decisions about the future before dementia gets worse. It is important to write a living will and a durable power of attorney. A living will states the types of medical care your loved one wants. A durable power of attorney lets your loved one pick someone to be the health care agent, the person who makes care decisions when your loved one no longer can.
Watching a loved one slip away can be sad and scary. Caring for someone with dementia can leave you feeling drained. Be sure to take care of yourself and to give yourself breaks. Ask family members to share the load, or get other help.
Your loved one will need more and more care as dementia gets worse. In time, he or she may need help to eat, get dressed, or use the bathroom. You may be able to give this care at home, or you may want to think about using a nursing home. A nursing home can give this kind of care 24 hours a day. The time may come when a nursing home is the best choice.
You are not alone. Many people have loved ones with dementia. Ask your doctor about local support groups, or search the Internet for online support groups, such as the Alzheimer's Association. Help is available.
Dementia is caused by damage to or changes in the brain. Alzheimer's disease is the most common cause. Stroke is the second most common cause of dementia. Dementia caused by stroke is called vascular dementia.
Some causes of dementia can be reversed with treatment, but most cannot.
Causes that cannot be reversed
Common causes of dementia that cannot be reversed are:
- Parkinson's disease. Dementia is common in people with this condition.
- Dementia with Lewy bodies. It can cause short-term memory loss.
- Frontotemporal dementia, a group of diseases that includes Pick's disease.
- Severe head injury that caused a loss of consciousness.
- Vascular dementia that may occur in people who have a stroke, long-term high blood pressure, or severe hardening of the arteries (atherosclerosis).
Less common causes of dementia that cannot be reversed include:
- Huntington's disease.
- Leukoencephalopathies, which are diseases that affect the deeper, white-matter brain tissue.
- Creutzfeldt-Jakob disease, a rare and fatal condition that destroys brain tissue.
- Brain injuries from accidents or boxing.
- Some cases of multiple sclerosis (MS) or amyotrophic lateral sclerosis (ALS).
- Multiple-system atrophy (a group of degenerative brain diseases affecting speech, movement, and autonomic functions).
- Infections such as late-stage syphilis. Antibiotics can effectively treat syphilis at any stage, but they cannot reverse the brain damage already done.
Causes that may be reversible
When dementia is caused by certain treatable problems, the treatment may also help the dementia. These treatable problems include:
- Underactive thyroid gland (hypothyroidism).
- Vitamin B12 deficiency.
- Heavy-metal poisoning, such as from lead.
- Side effects of medicines or drug interactions.
- Some brain tumors.
- Normal-pressure hydrocephalus.
- Some cases of chronic alcoholism.
- Some cases of encephalitis.
Some disorders that cause dementia can run in families. Doctors often suspect an inherited cause if someone younger than 50 has symptoms of dementia. For more information, see the topic Alzheimer's Disease.
Symptoms of dementia vary depending on the cause and the area of the brain that is affected. Symptoms include:
- Memory loss. This is usually the earliest and most noticeable symptom.
- Trouble recalling recent events or recognizing people and places.
- Trouble finding the right words.
- Problems planning and carrying out tasks, such as balancing a checkbook, following a recipe, or writing a letter.
- Trouble exercising judgment, such as knowing what to do in an emergency.
- Trouble controlling moods or behaviors. Depression is common, and agitation or aggression may occur.
- Not keeping up personal care such as grooming or bathing.
Some types of dementia cause particular symptoms:
- People who have dementia with Lewy bodies often have highly detailed visual hallucinations. And they may fall frequently.
- The first symptoms of frontotemporal dementia may be personality changes or unusual behavior. People with this condition may not express any caring for others, or they may say rude things, expose themselves, or make sexually explicit comments.
It is important to know that memory loss can be caused by conditions other than dementia, such as depression, and that those conditions can be treated. Also, occasional trouble with memory (such as briefly forgetting someone's name) can be a normal part of aging. But if you are worried about memory loss or if a loved one has memory loss that is getting worse, see your doctor.
How quickly dementia progresses depends on what is causing it and the area of the brain that is affected. Some types of dementia progress slowly over several years. Other types may progress more rapidly. If vascular dementia is caused by a series of small strokes, the loss of mental skills may be gradual. If it is caused by a single stroke in a large blood vessel, loss of function may occur suddenly.
The course of dementia varies greatly from one person to another. Early diagnosis and treatment with medicines may help for a while. Even without these medicines, some people remain stable for months or years, while others decline rapidly.
Many people with dementia are not aware of their mental decline. They may deny their condition and blame others for their problems. Those who are aware may mourn their loss of abilities and become hopeless and depressed.
Depending on the type of dementia, the person's behavior may eventually become out of control. The person may become angry, agitated, and combative or clingy and childlike. He or she may wander and become lost. These problems can make it difficult for family members or others to continue providing care at home. Palliative care can offer families a lot of support and assistance, which is why getting palliative care early is so important.
For more information on how palliative care can help people and family coping with dementia, see the topic Palliative Care.
Even with the best care, people with dementia tend to have a shorter life span than the average person their age. The progression varies depending on the disease causing the dementia and whether the person has other illnesses such as diabetes or heart disease. Death usually results from lung or kidney infections caused by being bedridden.
For more information on decisions you may face as your loved one's condition progresses, see the topic Care at the End of Life.
What to think about
Many older people have a slight loss of mental skills (usually recent memory) that doesn't affect their daily functioning. This is called mild cognitive impairment by some. People who have mild impairment may be in the early stage of dementia, or they may stay at their present level of ability for a long time.
What Increases Your Risk
You have a greater chance of developing vascular dementia if you:
When To Call a Doctor
Call 911 or other emergency services immediately if a person has symptoms of a stroke, such as:
- Numbness, weakness, or inability to move the face, arm, or leg, especially on one side of the body.
- Vision problems in one or both eyes, such as dimness, blurring, double vision, loss of vision, or a sensation that a shade is being pulled down over your eyes.
- Confusion, or trouble speaking or understanding.
- Trouble walking, dizziness, or loss of balance or coordination.
- Severe headache with no known cause.
Call a doctor immediately if a person suddenly becomes confused or emotionally upset or doesn't seem to know who or where he or she is. These are signs of delirium, which can be caused by a reaction to medicines or a new or worsening medical condition.
Call a doctor if you or a person you are close to has new and troubling memory loss that is more than an occasional bout of forgetfulness. This may be an early sign of dementia.
Occasional forgetfulness or memory loss can be a normal part of aging. But any new or increasing memory loss or problems with daily living should be reported to a doctor. Learn the warning signs of dementia, and talk to a doctor if you or a family member shows any of these signs. They include increased trouble finding the right words when speaking, getting lost going to familiar places, and acting more irritable or suspicious than usual.
Who to see
The following health professionals can evaluate symptoms of memory loss or confusion:
- Family medicine physician
- Physician assistant
- Nurse practitioner
To prepare for your appointment, see the topic Making the Most of Your Appointment.
Exams and Tests
Doctors diagnose the cause of dementia by asking questions about the person's medical history and doing a physical exam, a mental status exam, and lab and imaging tests.
Tests can help the doctor learn whether dementia is caused by a treatable condition. Even for those dementias that cannot be reversed, knowing the type of dementia a person has can help the doctor prescribe medicines or other treatments that may improve mood and behavior and help the family.
During a medical history and physical exam, the doctor will ask the affected person and a close relative or partner about recent illnesses or other life events that could cause memory loss or other symptoms such as behavioral problems. The doctor may ask the person to bring in all medicines he or she takes. This can help the doctor find out if the problem might be caused by the person being overmedicated or having a drug interaction.
Although a person may have more than one illness causing dementia, symptoms sometimes can distinguish one form from another. For example, early in the course of frontotemporal dementia, people may display a lack of social awareness and develop obsessions with eating, neither of which occurs early in other dementias.
Mental status exam
A doctor or other health professional will conduct a mental status exam. This test usually involves such activities as having the person tell what day and year it is, repeat a series of words, draw a clock face, and count back from 100 by 7s.
Other tests have been developed to diagnose dementia. Doctors can use one such test, Addenbrooke's Cognitive Examination, to distinguish Alzheimer's disease from frontotemporal dementia. Orientation, attention, and memory are worse in Alzheimer's, while language skills and ability to name objects are worse in frontotemporal dementia.
Many medical conditions can cause mental impairment. During a physical exam, the doctor will look for signs of other medical conditions and have lab tests done to find any treatable condition. Routine tests include:
- Thyroid hormone tests to check for an underactive thyroid.
- Vitamin B12 blood test to look for a vitamin deficiency.
Other lab tests that may be done include:
- Complete blood count, or CBC, to look for infections.
- ALT or AST, blood tests that check liver function.
- Chemistry screen to check the level of electrolytes in the blood and to check kidney function.
- Glucose test to check the level of sugar in the blood.
- HIV testing to look for AIDS.
- Erythrocyte sedimentation rate, a blood test that looks for signs of inflammation in the body.
- Toxicology screen, which examines blood, urine, or hair to look for drugs that could be causing problems.
- Antinuclear antibodies, a blood test used to diagnose autoimmune diseases.
- Testing for heavy metals in the blood, such as a lead test.
- A lumbar puncture to test for certain proteins in the spinal fluid. This test may also be done to rule out other causes of symptoms.
Brain imaging tests such as CT scans and MRI may also be done to make sure another problem isn't causing the symptoms. These tests may rule out brain tumors, strokes, normal-pressure hydrocephalus, or other conditions that could cause dementia symptoms.
MRI and CT scan also can show evidence of strokes from vascular dementia.
Single photon emission CT (SPECT) and PET scan can help identify several forms of dementia, including vascular dementia and frontotemporal dementia.
In some cases, electrical activity in the brain may be measured using an electroencephalogram (EEG). Doctors seldom use this test to diagnose dementia, but they may use it to distinguish dementia from delirium and to look for unusual brain activity found in Creutzfeldt-Jakob disease, a rare cause of dementia.
In rare cases, a brain biopsy may be done if a treatable cause of dementia is suspected.
After death, an autopsy may be done to find out for sure what caused dementia. This information may be helpful to family members concerned about genetic causes.
Some cases of dementia are caused by medical conditions that can be treated, and treatment can restore some or all mental function. But most of the time, dementia cannot be reversed.
Treatment when dementia can be reversed
Sometimes treating the cause of dementia helps the dementia. For example, the person might:
- Take vitamins for a deficiency of vitamin B12.
- Take thyroid hormones for hypothyroidism.
- Have surgery to remove a brain tumor or to reduce pressure on the brain.
- Stop or change medicines that are causing memory loss or confusion.
- Take medicines to treat an infection, such as encephalitis.
- Take medicine to treat depression.
- Get treatment for reversible conditions caused by AIDS.
Palliative care is a kind of care for people who have a serious illness. It's different from care to cure the illness. Its goal is to improve a person's quality of life—not just in body but also in mind and spirit.
Care may include:
- Tips to help the person be independent and manage daily life as long as possible. For more information, see Home Treatment.
- Medicine. While medicines cannot cure dementia, they may help improve mental function, mood, or behavior.
- Support and counseling. A diagnosis of dementia can create feelings of anger, fear, and anxiety. A person in the early stage of the illness should seek emotional support from family, friends, and perhaps a counselor experienced in working with people who have dementia.
For more information, see the topic Palliative Care.
Planning for the future
If possible, make decisions while your loved one is able to take part in the decision making. These are difficult but important conversations. Questions include:
- What kind of care does he or she need right now?
- Who will take care of him or her in the future?
- What can the family expect as the disease progresses?
- What kind of financial and legal planning needs to be done?
Education of the family and other caregivers is critical to successfully caring for someone who has dementia. If you are or will be a caregiver, start learning what you can expect and what you can do to manage problems as they arise. For more information, see Home Treatment.
Treatment as dementia gets worse
The goal of ongoing treatment for dementia is to keep the person safely at home for as long as possible and to provide support and guidance to the caregivers.
Routine follow-up visits to a health professional (every 3 to 6 months) are necessary to monitor medicines and the person's level of functioning.
Eventually, the family may have to consider whether to place the person in a care facility that has a dementia unit.
Taking care of a person with dementia is stressful. If you are a caregiver, seek support from family members or friends. Take care of your own health by getting breaks from caregiving. Counseling, a support group, and adult day care or respite care can help you through stressful times and bouts of burnout.
Dementia is hard to prevent, because what causes it often is not known. But people who have dementia caused by stroke may be able to prevent future declines by lowering their risk of heart disease and stroke. Even if you don't have these known risks, your overall health can benefit from these strategies:
- Don't smoke.
- Stay at a healthy weight.
- Get plenty of exercise.
- Eat healthy food.
- Manage health problems including diabetes, high blood pressure, and high cholesterol.
- Stay mentally alert by learning new hobbies, reading, or solving crossword puzzles.
- Stay involved socially. Attend community activities, church, or support groups.
- If your doctor recommends it, take aspirin.
Home treatment for dementia involves teamwork among health professionals and caregivers to create a safe and comfortable environment and to make tasks of daily living as easy as possible. People who have mild dementia can be involved in planning for the future and organizing the home and daily tasks.
Tips for caregivers
Work with the team of health professionals to:
- Make sure your home is safe.
- Keep the person eating well.
- Manage bladder and bowel control problems.
- Manage driving privileges.
The team can also help you learn how to manage behavior problems. For example, you can learn ways to:
- Make the most of remaining abilities. Reinforce and support the person's efforts to remain independent, even if tasks take more time or aren't done perfectly.
- Help the person avoid confusion.
- Understand behavior changes.
- Manage agitation.
- Manage wandering.
- Communicate clearly.
Nursing home placement
Even with the best care, a person with progressive dementia will decline, perhaps to the point where the caregiver is no longer physically, emotionally, or financially able to provide care.
Making the decision about nursing home placement is often very difficult. Every family needs to consider its own financial situation, emotional capacity, and other issues.
Doctors use medicines to treat dementia in the following ways:
- To correct a condition that's causing dementia, such as thyroid replacement for hypothyroidism, vitamins for lack of vitamin B12, or antibiotics for infections
- To maintain mental functioning for as long as possible when dementia cannot be reversed
- To prevent further strokes in people who have dementia caused by stroke (vascular dementia)
- To manage mood or behavior problems, such as depression, insomnia, hallucinations, and agitation
Medicines to help maintain mental function
- Cholinesterase inhibitors such as donepezil, galantamine, and rivastigmine.
- These drugs were developed to treat Alzheimer's disease, but they may be tried in other dementias, especially vascular dementia.
- It is not clear how long these medicines will work.
- Side effects include nausea, vomiting, diarrhea, and weight loss.
- Memantine. This medicine is used to treat symptoms of Alzheimer's disease, but may also help with mild to moderate vascular dementia.
Medicines to help control mood or behavior problems
Many behavior problems can be managed without medicines. For more information, see Home Treatment.
In some cases, the doctor may prescribe:
- Antipsychotic drugs, such as olanzapine (Zyprexa) and risperidone (Risperdal).
- Antidepressants, especially selective serotonin reuptake inhibitors.
Medicines to prevent future strokes
- The doctor may prescribe medicines for high blood pressure and high cholesterol, since these conditions are risk factors for vascular dementia. These drugs can't reverse existing dementia, but they may prevent future strokes and heart disease that can lead to further brain damage.
- Ginkgo biloba. Many people take ginkgo biloba to improve or preserve memory. But studies have not shown that ginkgo biloba helps improve memory or prevent dementia.[1]
- Other medicines. Research is ongoing to look at the usefulness of nonsteroidal anti-inflammatory drugs (NSAIDS), antioxidants, and supplements such as citicoline. Be safe with medicines. Read and follow all instructions on the label.
- Reality orientation. People who have dementia may benefit from a structured group program that encourages them to focus on a variety of topics and to think creatively within their limits. This type of program, sometimes called reality orientation or cognitive stimulation therapy, is offered in some day care and residential settings.
- Validation therapy. A person who has dementia may say things that don't make sense. A caregiver's response may be to correct or disagree with him or her. This can be frustrating for everyone. Validation therapy is a way to talk to someone with empathy. It can help to give the person with dementia a feeling of control or calmness. It recognizes his or her feelings and emotions.
- Occupational therapy. Occupational therapists focus on a person's ability to perform daily tasks and take part in social activities.
References
- Birks J, Grimley Evans J (2009). Ginkgo biloba for cognitive impairment and dementia. Cochrane Database of Systematic Reviews (1).
Other Works Consulted
- Bourgeois JA, et al. (2008). Delirium, dementia, and amnestic and other cognitive disorders. In RE Hales et al., eds., American Psychiatric Publishing Textbook of Psychiatry, 5th ed., pp. 303–363. Washington DC: American Psychiatric Publishing.
- Butler R, Radhakrishnan R (2012). Dementia, search date July 2011. BMJ Clinical Evidence. Available online: http://www.clinicalevidence.com.
- Drugs for cognitive loss and dementia (2010). Medical Letter on Drugs and Therapeutics: Drugs of Choice, 8(91): 19–24.
- Knopman DS (2009). Alzheimer disease and other dementing illnesses. In EG Nabel, ed., ACP Medicine, section 11, chap. 11. Hamilton, ON: BC Decker.
- U.S. Preventive Services Task Force (2003). Screening for dementia: Recommendation and rationale. Available online: http://www.uspreventiveservicestaskforce.org/3rduspstf/dementia/dementrr.htm.
Primary Medical Reviewer Anne C. Poinier, MD - Internal Medicine
Specialist Medical Reviewer Peter J. Whitehouse, MD - Neurology
Current as of: November 20, 2015
To learn more about Healthwise, visit Healthwise.org.
© 1995-2016 Healthwise, Incorporated. Healthwise, Healthwise for every health decision, and the Healthwise logo are trademarks of Healthwise, Incorporated.
The Legend of Tom Dula
A little over 130 years ago in a small rural North Carolina town in Wilkes County, a young girl of meager means left home to meet her fiancé in the woods. She had hidden her special dress under her house clothes and had packed her belongings in a trundle bag, ready for her new life. She sat in the woods and waited for her beloved, and someone met her there--someone who hated her enough to kill her and drag her to a small grave that the person had dug the evening before. A few months later, her fiancé was captured and tried for the crime. After one appeal, he was condemned for her murder and hanged.
The story of Tom Dula and his unfortunate fiancée Laura Foster made the headlines in 1866, from as far away as New York. The Civil War had ended and the Reconstruction of the South had begun, but not without bitter feelings on both sides. So a murder of a poor, uneducated girl by an equally poor boy sparked a legend in the South and a headline story in the North. Some time after Tom Dula was executed, someone wrote a ballad, put it to music, and the legend of Tom Dula was born.
The song and the story of Tom Dula were passed down through so many people over the years that the truth was lost. From romantics to folk singers, hundreds of people have surmised what happened on May 25, 1866. But no one is alive to tell the real story. The tale became a folk legend in the South, and the song passed from one generation to the next, each person proud to be part of the tradition of the Tom Dula story. Then one evening, after the Kingston Trio sang their new song, "Tom Dooley," Tom Dula's story was nationally immortalized.
The Legend of Tom Dula shares the history of the song and some ideas about the story from some people who can trace their roots back to the Happy Valley clan and others who have spent their lives fascinated with this obscure murder. Besides sharing some of the hearsay from the testimony and some opinions about who really committed the deed, the program sheds light on Frank Proffitt's involvement in the song, how the Kingston Trio discovered it, and how Frank finally received credit for the Kingston Trio's version of the song.
What really happened on the night of Friday, May 25, 1866, in the quiet Happy Valley? Who really killed Laura Foster? Because so many folk tales have been spun about the story, no one will ever know. But this section will give you a chance to preview some of the characters and the "facts" in the case and vote for the murderer yourself. After you've watched "The Legend of Tom Dula," read through the cast of suspects, and looked at some of the lesser-known facts or stories (because no one knows which is which anymore), you can vote for who you believe murdered Laura Foster. Tom alone? Tom and an accomplice? Or is Tom completely innocent? You decide.
Other Details in the Case
In 1958, a new song called "Tom Dooley" meant a national hit for the Kingston Trio. For Frank Noah Proffitt, it meant that part of his heritage had suddenly been launched into national fame. Born to Wiley Proffitt and Rebecca Creed Proffitt on June 1, 1913, in Laurel Bloomery, Tennessee, Frank moved to and grew up in Pick Britches, now known as Mountain Dale, at the foot of Stone Mountain in Watauga County. He learned how to make banjos and dulcimers from his father.
Wiley Proffitt was not the only family member who taught young Frank folk songs and instrument-making. Frank learned traditional folk songs from his aunt, Nancy Prather, and from his father-in-law, Nathan Hicks, who also made dulcimers. His grandmother, Adeline Perdue, who lived in Wilkes County during the Tom Dula trial, taught Frank "Tom Dula." According to family legend, she saw Tom riding on his coffin down the street to his hanging, singing a song--the same song she taught her grandchildren.
As a family man, Frank made his living growing tobacco and strawberries and making instruments as his father and father-in-law had done. One day in 1937 a couple from New York named Warner visited Nathan Hicks to buy one of his dulcimers. The man, Frank Warner, was particularly interested in learning Appalachian folk songs, and Nathan sang some of the ones he knew. The next year, when Frank Proffitt was visiting his father-in-law, Frank and Anne Warner returned, and Proffitt sang "Tom Dula" for them.
"His eyes sparkled as I sing Tom Dooley to him and told him of my Grandmaw Proffitt knowing Tom and Laura. I walked on air for days after they left," Frank said about Frank Warner's visit.
The Warners used one of the first battery-operated recorders to capture the songs Frank sang for them.
What happened after that visit sparked the eventual recording that made the Kingston Trio famous.
Surprised that others were interested in the folk songs he had grown up with, Frank Proffitt decided to try to collect as many songs as he could. He sent a book of songs to Warner, who modified several of them and performed them himself.
Shortly after that, in 1947, Warner shared "Tom Dula" with Alan Lomax, a professor at New York University, who published it in his collection titled "Folk Songs USA."
In 1958, the Kingston Trio heard the song almost by accident, adapted it, and added it to their stage act. They renamed the song "Tom Dooley" and recorded it for their album that year. Frank Proffitt heard the Kingston Trio perform the song on the Ed Sullivan Show and was completely surprised.
Eventually Proffitt and Warner filed a joint lawsuit for legal claim to "Tom Dooley." Three years later, they began receiving royalties.
Frank Proffitt agreed to accompany Warner to performances in the early 1960s. With Warner's encouragement, Proffitt received numerous invitations to perform around the country. He also participated in workshops in Chicago and at a camp in Massachusetts.
In 1962 Folkways Records and Service Corp. recorded him, and Folk-Legacy Records, Inc. released Frank Proffitt, of Reese, North Carolina as their first album.
Even with the hundreds of invitations and the travel, Frank Proffitt's first priority was always his farmwork. He sang the songs for people not out of a motive for personal gain, but to pay tribute to the people who had taught him the songs. He said the songs helped him remember his older family members and even picture them.
Frank never let his fame prompt him to move out of Watauga County. On November 1, 1965, he drove his wife, Bessie, to a hospital in Charlotte for surgery and returned home. Later that evening, he died at age 52.
The Kingston Trio's rendition of the song made the legend of Tom Dula a national fascination. Because Frank Proffitt sang the song for the Warners, and the Warners gave it to Alan Lomax, the Kingston Trio launched an old country folk ballad about a century-old murder in a small, rural county into immortality.
Lynip, Amaris O. "Proffitt Sang the Legend of Tom Dooley." The Democrat.
Tom Dula Museum
The "Tom Dooley Art Museum" is housed in the loft of the Whippoorwill Academy. The museum has an exhibit of 45 paintings and drawings by Edith F. Carter on the life of Tom Dooley (Dula), who was convicted and hanged for the murder of his girlfriend, Laura Foster.
Edith Ferguson's Tom Dula Exhibit
David Holt's Tom Dula page
David shares some of what he's learned about the Tom Dula mystery.
The Kingston Trio Web site
Their latest and older hits, with links to buy albums.
Creation and the Conservation of Energy
Creation stories are universal among religions, through the ages. The ineffable mystery of life compelled us to explain our existence. In our primitive ignorance of the world, religion was the best we could do to provide the explanations we craved.
Thanks to science, we’re learning more about the universe and illuminating the dark corners of what was once our ignorance.
The word "create" means to bring into existence. Thus, if God created the universe, it had a beginning and cannot be infinite in both directions of time: forward, yes; backward, no. But why can't the universe simply be? Why can't the universe be infinite in both directions of time: forward and backward? Why must it have a beginning? Why must it have been created by a supernatural God?
The first law of thermodynamics – the conservation of energy – makes it clear that nothing is ever created. Matter might change form but it never simply appears or disappears. For instance, we are nourished and grow by eating plants and other animals. Food is transformed into the energy that sustains us and the cells we are made of; including our DNA. Our parents didn’t create us, they transformed us.
Physics' mathematical models break down in a singularity. It is not known whether or not the first law holds in a singularity. If it does, the first law of thermodynamics strips bare the core question of creation and existence. Either the universe always existed . . . or . . . the universe was created by something outside the laws of physics (i.e. something supernatural). Either the universe is truly eternal or God created it. It boils down to physics or the supernatural.
We've had plenty of confirmation of Einstein's famous equation, E = mc²: energy and mass are equivalent. Before the Big Bang, the entire mass of the universe was contained (as energy) in a super singularity. Whether or not ours is the first and only Big Bang, Big Bangs come from singularities. I believe that, in one form or another (singularity or cosmos), the universe simply is and always was. Not only is there no need for creation or for God: the conservation of energy means there could never have been a time when the universe, in whatever form, did not exist. Something doesn't come from nothing without supernatural intervention.
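As a back-of-the-envelope illustration of that equivalence, here is a minimal Python sketch. The speed of light is the standard constant; the 1 kg mass is an arbitrary choice for the example:

```python
# Mass-energy equivalence, E = m * c^2 (illustrative arithmetic only).
c = 299_792_458  # speed of light in a vacuum, m/s
m = 1.0          # an arbitrary 1 kg of matter

E = m * c ** 2   # energy in joules
print(f"E = {E:.3e} J")  # roughly 9e16 J locked up in a single kilogram
```

Even a kilogram of ordinary matter corresponds to an enormous amount of energy, which is why the equivalence only becomes obvious in extreme settings like stars or singularities.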
Because nobody has ever seen anything physically created, the pervasive concept of creation must be a human response to the unfathomable immensity of the eternal. The universe has always existed? What do you mean? Everything comes from somewhere, doesn't it? Yes. But nothing comes from nowhere.
The first law reduces the source of our existence to either the natural or the supernatural. The notion of a personal God is ridiculous to me. But a cosmic God? I can imagine an eternal energy – infinitely hot, infinitely massive – that created the universe in a single, spectacular, explosion that still permeates the entire universe. If you want to call that energy God, I can't refute you.
Synopsis
The New School for Social Research is not, nor ever was, an art school in the traditional sense. Its importance during the post-World War I years up to and beyond the Abstract Expressionist period was as a haven for artists and intellectuals of all disciplines to gather and discuss controversial matters without fear of political censure. Specializing mostly in adult and continuing education programs, The New School for Social Research has frequently hosted lectures and forums, many of which were attended by important Abstract Expressionist artists and theorists. Much like schools such as the Art Students League and Black Mountain College where abstractionists and other artists learned to master their craft, The New School was where many of them acquired the ideas and philosophies (such as Freudian psychoanalysis, Existentialism, and Marxist aesthetics) that informed their art.
Origins of The New School for Social Research
During World War I, a small and outspoken group of professors working at Columbia University were censured by the school's president, Nicholas Murray Butler, for speaking out against U.S. involvement in the war effort. (Butler himself had been an opponent of U.S. intervention in the war, but changed this position in 1917 when he established the Student Army Training Corps.)
These professors resigned from Columbia and decided to establish their own school, which they opened in 1919 in the lower Manhattan neighborhood of Chelsea, and called it The New School for Social Research (or commonly known for short as "The New School").
The original faculty of The New School included Charles Beard, James Harvey Robinson, Wesley Clair Mitchell, John Dewey, and Alvin Johnson, the university's first president.
The University in Exile
The University in Exile was established as the graduate division of The New School in 1933, founded as a sanctuary for academics escaping persecution in Europe in the years leading up to the Second World War. In the 1920s Alvin Johnson was appointed to co-edit the Encyclopedia of the Social Sciences; in order to compile this work, Johnson traveled frequently to Germany, Poland and other European countries to consult with colleagues. While abroad, Johnson became acutely aware of the growing threat of National Socialism in Germany, posed by the relatively new Nazi political party and the rising dictator Adolf Hitler, who were systematically opposed to democracy and intellectualism.
Sensing a dire need to provide safe haven for many of Europe's scholars and intellectuals, Johnson established a new graduate department in 1933 (coinciding with Hitler's appointment to German Chancellor), called the University in Exile. With the financial assistance of the Rockefeller Foundation and other philanthropy groups, the University in Exile was founded as a new graduate division within The New School and, more importantly, as a rescue program. Nearly two hundred European scholars and professors received visas and teaching jobs in the U.S. from the University in Exile. While many of them taught at The New School, there was never any stipulation from the University in Exile that they were required to do so; Alvin Johnson's main goal was simply to get people out of harm's way.
The Pre-World War II Years
On January 1, 1931, the "Special Exhibition Arranged in Honor of the Opening of the New Building of the New School for Social Research" opened to the public. Included in the exhibition was Arshile Gorky's painting Improvisation (n.d.) (since its inclusion in the exhibition, the painting has either been lost or inexplicably re-named). The show was organized and curated by artist Katherine Dreier, who had previously exhibited her work at the famed 1913 Armory Show.
From February 14-16, 1936, the First American Artists' Congress Against War and Fascism was held at The New School for Social Research. The first meeting of the Congress had taken place elsewhere the previous spring, but this 1936 gathering constituted the first official congregation of Congress members to sit in on lectures and discussion panels.
Attendees at the 1936 Congress included Alexander Calder, Adolph Gottlieb, Isamu Noguchi, David Smith, James Johnson Sweeney, Ilya Bolotowsky and Yasuo Kuniyoshi. Among the 34 lecturers who spoke at the three-day event were artist Stuart Davis and critic/historian Meyer Schapiro. The prevalent themes centered around the opposition of Fascism in Europe, and the need to band together and promote the importance of free creative expression during times of war and hardship.
Drips, Pours and Surrealism
In 1940 the British Surrealist artist Stanley William Hayter, who had previously founded the Atelier 17 studio in Paris, came to The New School and held several painting and printmaking workshops. One of Hayter's most famous techniques was using what he called a "drip can." Not long after Hayter's arrival as a lecturer at The New School, Gordon Onslow Ford, another British artist who had trained and studied with André Breton and the French Surrealists in Paris, also joined the faculty. Ford was best known for his "poured canvases," a technique wherein he poured paint onto a canvas placed flat on the floor.
Between January and March of 1941, Ford delivered a series of lectures on Surrealism at The New School. In attendance were Motherwell, Baziotes, Tanguy, Jimmy Ernst, and many others. It was rumored, although never confirmed, that Pollock, Rothko and Gorky attended the lectures as well. The flier for Onslow Ford's lectures read, "Surrealist Painting: an adventure into Human Consciousness ... Far more than other modern artists, the Surrealists have adventured in tapping the unconscious psychic world. The aim of these lectures is to follow their work as a psychological barometer registering the desire and impulses of the community."
To accompany each lecture, a sequenced series of small exhibitions was held in an adjacent studio space. The first exhibition was devoted to Giorgio de Chirico; the second featured works by Max Ernst and Joan Miró; the third showed Magritte and Tanguy; the fourth and final exhibition showcased contemporary Surrealist works by Wolfgang Paalen, Jimmy Ernst, Esteban Frances, Roberto Matta and Gordon Onslow Ford himself.
Meyer Schapiro and the Abstract Expressionist Years
Beginning in 1936, Meyer Schapiro started delivering regular lectures at The New School for Social Research. Unlike the Surrealists Hayter and Ford, Schapiro catered his lectures more to Hegelian philosophy, emphasizing style and form over matters concerning the human conscious and unconscious.
Throughout Schapiro's tenure at The New School, emerging artists and critics such as Helen Frankenthaler, Fairfield Porter, Joan Mitchell and Thomas B. Hess attended his sessions. Schapiro also gained acclaim in the late 1930s for calling attention to European modernists like Picasso, Braque and Miró, who had up until that point remained relatively unknown to New York artists who would soon make up the Abstract Expressionist movement.
The New School after World War II
Following the collapse of fascism in Europe and the Allied victory in World War II, the University in Exile was renamed the Graduate Faculty of Political and Social Science. (In 2005 the name was changed once again to The New School for Social Research, in honor of the academic institution's original name in 1919.)
In 1940, New School President Alvin Johnson invited German theater director Erwin Piscator to come and open a theater workshop at the New School. Simply called the 'Dramatic Workshop,' from 1940 to 1949 it operated as a first-rate theater school, educating such students as Marlon Brando, Harry Belafonte and Tennessee Williams. The Workshop eventually closed due to political pressure during the McCarthy era, and Piscator was forced to return to his native Germany. However, the tradition of theater and the performing arts at the New School would soon be renewed.
In 1994 the New School partnered with the highly renowned Actors' Studio, and established its first Master of Fine Arts program in the theatrical arts. Also beginning that year was the popular television program Inside the Actors' Studio on the Bravo network. This New School-Actors' Studio partnership was dissolved in 2005 due to contractual issues, at which point the New School established its own theatrical college, The New School for Drama.
Legacy
The New School for Social Research was conceived as a safe haven for artists, professors and intellectuals to freely exchange radical ideas on politics and aesthetics, and it has continued to operate in that tradition to this day. While many of the great Surrealist and abstract artists of the era trained at formal art schools like the Art Students League, they ventured to The New School to learn about the very theories and philosophies that are most commonly associated with Abstract Expressionism, i.e. Freudian psychoanalytic theory and the human consciousness, Existentialism, and Marxist aesthetics. The New School was not the place where artists like Motherwell and Baziotes perfected their craft, but it was where they honed their minds in the philosophies and formal theories that informed their art.
Quotes
"I attended The New School for Social Research for only a year, but what a year it was. The school and New York itself had become a sanctuary for hundreds of extraordinary European Jews who had fled Germany and other countries before and during World War II, and they were enriching the city's intellectual life with an intensity that has probably never been equaled anywhere during a comparable period of time."
- Marlon Brando
"In a Fascist form of government some one person, usually with a silly face, a Hitler or a Mussolini, becomes the model which every subject must imitate and salute... Anyone who laughs at those stupid mugs, or incites other people to laugh at them, is a traitor. I think that is the reason why dictatorships fear artists. They fear them because they fear free criticism ... The time has come for the people who love life and culture to form a united front against them, to be ready to protect, and guard, and if necessary, fight for the human heritage which we, as artists, embody."
- Lewis Mumford, from his opening address to the First American Artists' Congress, February 12, 1936
"Tonight I have given you a brief glimpse of the works of the young painters who were members of the Surrealist group in Paris at the outbreak of the war. Perhaps it is not by chance that all of us except Brauner and Dominguez have managed to find our way to these shores ... I think I can speak for all my friends when I say that we are completely confident in our work and slowly but surely with the collaboration of the young Americans we hope to make a vital contribution to the transformation of the world."
- Gordon Onslow Ford, concluding remarks from his lecture, March 5, 1941
Books:
Intellectuals in Exile: Refugee Scholars and the New School for Social Research
New School. A History of The New School For Social Research
United American Sculptors First Annual Exhibition at the New School for Social Research New York City 1939
RESOURCES:
The New School for Social Research official website
Social Research: An International Quarterly of the Social Science
(SACRAMENTO, Calif.) -- A new class of nanoparticles, synthesized by a UC Davis research team to prevent premature drug release, holds promise for greater accuracy and effectiveness in delivering cancer drugs to tumors. The work is published online in Angewandte Chemie, a leading international chemistry journal.
In their paper, which will be featured on the inside back cover of the journal, Kit Lam, professor and chair of the Department of Biochemistry and Molecular Medicine, and his team report on the synthesis of a novel class of micelles called dual-responsive boronate cross-linked micelles (BCMs), which produce physicochemical changes in response to specific triggers.
A micelle is an aggregate of surfactant molecules dispersed in water-based liquid such as saline. Micelles are nano-sized, measuring about 25-50 nanometers (one nanometer is one billionth of a meter), and can function as nanocarriers for drug delivery.
BCMs are a unique type of micelle that releases its payload quickly when triggered by the acidic micro-environment of the tumor or when exposed to an intravenously administered chemical compound such as mannitol, an FDA-approved sugar compound often used as a diuretic agent, which interferes with the cross-linked micelles.
"This use of reversibly cross-linked targeting micellar nanocarriers to deliver anti-cancer drugs helps prevent premature drug release during circulation and ensures delivery of high concentrations of drugs to the tumor site," said first author Yuanpei Li, a postdoctoral fellow in Lam's laboratory who created the novel nanoparticle with Lam. "It holds great promise for a significant improvement in cancer therapy."
Stimuli-responsive nanoparticles are gaining considerable attention in the field of drug delivery due to their ability to transform in response to specific triggers. Among these nanoparticles, stimuli-responsive cross-linked micelles (SCMs) represent a versatile nanocarrier system for tumor-targeting drug delivery.
Too often, nanoparticles release drugs prematurely and miss their target. SCMs can better retain the encapsulated drug and minimize its premature release while circulating in the blood pool. The introduction of environmentally sensitive cross-linkers makes these micelles responsive to the local environment of the tumor. In these instances, the payload drug is released primarily in the cancerous tissue.
The dual-responsive boronate cross-linked micelles that Lam's team has developed represent an even smarter second generation of SCMs, able to respond to multiple stimuli as tools for accomplishing the multi-stage delivery of drugs to the complex in vivo tumor micro-environment. These BCMs deliver drugs based on the self-assembly of boronic acid-containing polymers and catechol-containing polymers, both of which make these micelles unusually sensitive to changes in the pH of the environment. The team has optimized the stability of the resulting boronate cross-linked micelles as well as their stimuli-response to acidic pH and mannitol.
This novel nano-carrier platform shows great promise for drug delivery that minimizes premature drug release and can release the drug on demand within the acidic tumor micro-environment or in the acidic cellular compartments when taken in by the target tumor cells. It also can be induced to release the drug through the intravenous administration of mannitol.
Contact: Dorsey Griffith, University of California - Davis Health System
I have to admit, every time a preschool teacher comes to me and says, "I think you need to take a look at this student, he is really up there on his toes," I cringe a little bit. And here's why: toe walking can range from a totally normal developmental phase to a BIG problem. Here are some thoughts on this very common developmental issue.
WHAT IS TOE WALKING?
Toe walking simply means that a child walks on his tip toes or doesn’t contact the ground with his heel first when taking a step. This is considered “normal” until sometime between the ages of 2 and 3. Beyond that age, without any definitive medical reason, toe walking is considered idiopathic, or, without a known cause – simply a habit the child has developed.
WHY DO KIDS WALK ON THEIR TOES?
The definition of toe walking is simple, but what’s not so simple about habitual toe walking is WHY? There are many possible reasons that children might develop a toe walking pattern and the research does not definitively point to one specific cause.
Are they exhibiting tactile defensiveness in their feet? Is there a proprioceptive or vestibular problem? Or could toe walking be a warning sign of a neurological disorder like cerebral palsy or muscular dystrophy? Could it be a sign of autism? Is the child’s calf musculature so tight that he can’t put his heel down? Is this simply a habit that could eventually lead to tight calf musculature and the inability to put the heel down?
But not every child who walks on his toes has a serious diagnosis coming down the pike. I have seen several kiddos that simply outgrow the toe walking pattern. Some research indicates that children will usually outgrow it by the age of 5. Most often, if a child is showing no signs of developmental delay other than toe walking, he will outgrow this pattern and continue with typical development. As therapists, we are more concerned when toe walking is accompanied by additional sensory processing concerns or other developmental issues.
TREATMENT FOR TOE WALKING
Therapeutic treatment for toe walking depends on what the cause is, how strong of a habit it is (do they ever contact the ground with their heels, or are they on their toes all the time?), how tight the gastrocnemius muscles (calf muscles) have become, and what other changes have occurred in the child’s foot and ankle muscles as a result of walking this way.
Treatment methods can include stretching, serial casting (a series of casts applied over time that gradually stretch the heelcords), or botox injections (used to temporarily paralyze the calf muscle so that it is easier to stretch). Bracing can also be used to limit movement, preventing the child from getting up onto his toes (in milder cases, stiff boots or high top tennis shoes can sometimes have the same effect).
In rare and severe cases, surgical lengthening is also an option for treatment. If the toe walking is caused by a sensory issue related to the child's inability to tolerate the feeling of the ground on his feet, sensory interventions can be introduced; these may include the Wilbarger brushing protocol or proprioceptive input via vibration or deep pressure to the feet or shoulders.
Ultimately, the reasons behind toe walking are often difficult to determine. Even after 15 years of experience in pediatrics, I find this to be one of the most difficult issues of child development to get to the bottom of and to treat.
There are many conflicting views about which treatments work and which ones don’t. My opinion is that every child is different and will respond differently to any given treatment. What works for one may not work at all for another. Either way, toe walking is definitely something to be aware of and to monitor.
WHAT SHOULD YOU DO?
If your child is over the age of two and walking on his toes regularly, it wouldn’t hurt to consult your pediatrician. You may simply be told that he or she will outgrow it and that may be true…or maybe not. Are there other sensory processing or developmental concerns? Are you noticing other motor concerns? Are your child’s heel cords getting so tight that he can’t even stand on flat feet if he tries?
If you’re concerned, be sure to speak up at your next appointment with your pediatrician. Be an advocate for your child – you know him better than anyone! Get things taken care of before something small turns into something that will take more serious measures to correct.
Updated on 7 June 2012
The retina implant uses electrodes on the chip to absorb light entering the eye and converts it into electrical energy to stimulate nerves within the retina
Ms Tsang Wy Suet Yun, a patient at the University of Hong Kong Eye Institute who was legally blind for 15 years, can see again. Ms Yun, who suffered from retinitis pigmentosa, underwent surgery at the institute in February 2012 to receive a retinal implant that treated her condition. The implant, a microchip developed by German company Retina Implant, helps restore lost vision.
The implant, now at the human clinical trial stage, is a 3mm x 3mm microchip with 1,500 electrodes that is implanted below the retina, specifically in the macular region of the eye. Since the microchip needs electrical power to operate, transmitter coils are placed under the patient's skin, and post-implant, the microchip is turned on to begin sight restoration. The electrodes on the chip absorb the light entering the eye, converting it into electrical energy to stimulate nerves within the retina. This stimulation is then relayed to the brain through the optic nerve, leading to improved field of vision.
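For a rough sense of the array's resolution, a quick back-of-the-envelope calculation from the figures above (a 3 mm x 3 mm chip carrying 1,500 electrodes) gives the approximate centre-to-centre electrode spacing, assuming a uniform square layout (an assumption for illustration, not a detail from the article):

```python
import math

# Approximate electrode pitch on the subretinal chip described above,
# assuming the 1,500 electrodes are spread uniformly over the 3 mm square.
chip_side_mm = 3.0
n_electrodes = 1500

pitch_mm = math.sqrt(chip_side_mm ** 2 / n_electrodes)
print(f"~{pitch_mm * 1000:.0f} um between electrode centres")  # ~77 um
```

A spacing on the order of tens of micrometres is far coarser than the eye's own photoreceptor mosaic, which is consistent with patients recovering basic shapes and large letters rather than fine detail.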
As the patient needs to develop new internal processes for interpreting the images, it takes several weeks to fully realize new sight capabilities. According to the results announced by the company, Ms Yun can see light and is able to read letters projected on a screen in the laboratory.
Retina Implant, which specializes in subretinal implants, conducted its first human trial in 2005. For the trial, subretinal microchips were implanted in 11 patients with retinitis pigmentosa. Unlike patients who received epiretinal implants, those who received Retina Implant's subretinal implants were able to see objects and shapes so clearly that they could combine letters to form words and read at a basic level. The results showed such patients were also able to recognize foreign objects, and no complications have been reported to date.
The company is presently carrying out a second, multi-center human clinical trial in Germany, Hong Kong and the UK. The company hopes to provide implants to a total of 60 patients under the second trial. In this trial, the patients will have the option to keep the implant permanently, unlike the first trial when the implant was removed after four months.
For more information, contact our Communications Office at 225-763-2750 or email [email protected] with your questions or comments.
Genes reveal how much we will benefit from regular exercise
Released: Thursday, February 04, 2010
BATON ROUGE - Stretching from here to Ontario, London to Edinburgh, Copenhagen, Denmark to Stockholm, Sweden, and from Jupiter, Florida to Ann Arbor, Michigan, an international team of researchers from 14 institutions has peered into the human genome and has found a way to predict who will benefit the most from exercise.
The team is led by Claude Bouchard, Ph.D., of the Pennington Biomedical Research Center (PBRC) and James Timmons, Ph.D., of the Royal Veterinary College, University of London and the Center for Healthy Aging, University of Copenhagen. Their latest work builds on the current belief among researchers that one of the best predictors of health and longevity is our body’s ability to take in and use oxygen during maximum exercise. The more blood our heart can pump and the more oxygen our muscles can use, the less our risk of early disease and death.
They say that’s why aerobic exercise is so important. All the brisk walking, running, biking, swimming and endurance training we undertake as a society can increase our body’s ability to take in and use oxygen. Scientists call the maximum volume of oxygen our bodies use during exercise “VO2 max.” The higher our VO2 max, the more resistant we are to illness.
Bouchard and Timmons noticed a problem, however, and brought together a team to address it: although aerobic exercise can and does increase VO2 max in some people, exercise doesn’t work equally for everyone. Some people who exercise experience little or no increased VO2 max. Aerobic exercise for those people may not help ward off heart disease and other potential ailments.
According to Bouchard, executive director of PBRC, using lifestyle changes to prevent common diseases - such as starting an exercise routine - would be better targeted if healthcare specialists knew ahead of time who would benefit. Bouchard and his colleagues have now moved closer to that goal. They have just published a comprehensive look at a group of genes that modulate the increase in VO2 max due to aerobic exercise.
“We can now take a biological sample from a person and tell if he or she is likely to increase VO2 max through aerobic exercise training,” Timmons said. “This new approach will help physicians personalize exercise programs to reduce or fight cardiovascular diseases. However, if a patient is not likely to benefit much from aerobic exercise, the physician could turn to other types of exercise or alternative therapies. This would be one of the first examples of personalized, genomic-based medicine.”
In Bouchard and Timmons’ study, published online today by the Journal of Applied Physiology (http://jap.physiology.org/papbyrecent.shtml), they and their partner researchers combined the results of two exercise studies conducted in Europe with a very large study performed in the United States. Participants were asked to undergo rigorous aerobic training, yet nearly one in five participants showed less than a 5 percent increase in VO2 max, and nearly 30 percent showed no increase in insulin sensitivity (a risk factor for diabetes). The researchers first took muscle tissue samples before and after the exercise. Using new informatics procedures developed by one of their collaborators, the Medical Prognosis Institute in Denmark, the team then identified a set of about 30 genes that predicted the increase in VO2 max. The researchers then discovered a subset of 11 of these genes that also showed differences in DNA sequences among the participants. Participants with a favorable DNA sequence at these genes increased VO2 max most, while participants with an alternate DNA sequence did not benefit as much or at all.
"When dealing with genetic data, you're dealing with reams of numbers, and it is extremely difficult to see significant changes or differences," said Steen Knudsen of the Medical Prognosis Institute. "We had to develop entirely new procedures to discover the differences in the samples and make sure those procedures were reliable and accurate."
This means individuals that fall into each category can be identified beforehand by their genotype. Those who are less likely to gain by exercise could be guided toward more productive disease prevention programs to reduce the risks of cardiovascular disease or diabetes.
“We know that low maximal oxygen consumption is a strong risk factor for premature illness and death,” Bouchard said, “so the tendency is for physicians and public health experts to automatically prescribe aerobic exercise to increase oxygen capacity. Our hope is that before too long, they will be able to target that prescription just to those who may stand a greater chance of benefiting, and prescribe more effective preventive or therapeutic measures to the others.”
Tuomo Rankinen, Ph.D., is a leading scientist in the Human Genomics laboratory at PBRC and is also a member of the team. He said their findings are a great first step in using genotype to determine who is most likely to benefit from exercise in terms of improving aerobic capacity. This study focused on predicting the benefits of exercise on cardiorespiratory fitness, a strong predictor of cardiovascular disease and diabetes, but future studies should develop the use of genotypes to predict in whom exercise can decrease blood pressure, blood sugar levels, adiposity (amount of body fat) and inflammation.
Human Genomics Laboratory, Pennington Biomedical Research Center, Baton Rouge, Louisiana, USA
Lifestyle Research Group, The Royal Veterinary College, University of London, UK Centre for Healthy Aging, Department of Biomedical Sciences, University of Copenhagen, Denmark
Translational Biomedicine, Heriot-Watt University, Edinburgh, Scotland
Medical Prognosis Institute, Hørsholm, Denmark
Department of Physical Medicine and Rehabilitation, University of Michigan Medical School, Ann Arbor, Michigan, USA
Department of Molecular and Integrative Physiology, University of Michigan Medical School, Ann Arbor, Michigan, USA
Centre of Inflammation and Metabolism, Faculty of Health Sciences, University of Copenhagen, Denmark
Department of Laboratory Medicine, Division of Clinical Physiology, Karolinska University Hospital, Sweden
Department of Paediatrics and Medicine (Neurology and Rehabilitation), McMaster University Medical Centre, Hamilton, Ontario, Canada
Department of Human Movement Sciences, Nutrition and Toxicology Research Institute Maastricht (NUTRIM), Maastricht University Medical Centre, The Netherlands
Centre for Integrated Systems Biology Medicine, University Medical School, Nottingham, UK
Department of Physiology and Pharmacology, Karolinska Institutet, Stockholm, Sweden
Molecular and Integrative Neurosciences Department, The Scripps Research Institute, Jupiter, Florida
The Pennington Biomedical Research Center is at the forefront of medical discovery as it relates to understanding the triggers of obesity, diabetes, cardiovascular disease, cancer and dementia. It is a campus of Louisiana State University and conducts basic, clinical and population research. The research enterprise at Pennington Biomedical includes approximately 80 faculty and more than 25 post-doctoral fellows who comprise a network of 44 laboratories supported by lab technicians, nurses, dietitians, and support personnel, and 13 highly specialized core service facilities. Pennington Biomedical’s more than 500 employees perform research activities in state-of-the-art facilities on the 222-acre campus located in Baton Rouge, Louisiana.
By Ted Greenwald
When it comes to environmental regulation, California doesn’t wait for the Feds to ride in and lay down the law. The Golden State led the way on mandating emissions-control equipment in motor vehicles in 1961. It pioneered tailpipe emissions standards in 1967 and ratcheted them up into the 1990s, prompting the federal government to follow. When the Environmental Protection Agency proved reluctant to tighten fuel-economy standards, California outmaneuvered it in 2002 by limiting carbon dioxide from cars. That decision achieved the same end – and was the first move in the United States to control greenhouse gases.
And so it goes with climate change. By the mid-2000s, when the rest of the country was waking up to the challenge of global warming, California was already pursuing an aggressive program to assess the likely damage. According to the state energy commission’s climate research, the U.S. west coast faces sea level rise of 12 to 18 inches by 2050, and as much as nearly six feet by the turn of the century. Precipitation is projected to fall increasingly as rain rather than snow, draining into the sea rather than lying in cold storage until the long, dry summers. Higher-than-average temperatures and more frequent extreme weather promise heat waves, wildfires, droughts, and floods.
The sense of impending crisis sent California Governor Arnold Schwarzenegger into action-hero mode. In 2006, he signed the Global Warming Solutions Act, capping carbon emissions statewide throughout all activities and sectors. Then, last December, he stood on Treasure Island — an expanse of landfill in the San Francisco Bay that stands to be inundated by the upwelling of glacial melt — and unveiled the 2009 California Climate Adaptation Strategy, a plan to prepare for what many scientists regard as inevitable changes. “We have the responsibility to have a Plan B just in case we can’t stop the global warming,” he said, apparently missing the document’s emphatic assertion that mitigation (making efforts to minimize the onset of climate change) and adaptation (learning to live with it) are equally necessary and inherently complementary undertakings.
The strategy document is 200 pages of meticulously footnoted, thoroughly bureaucratic prose that directs state agencies to take climate change into account. Individual chapters are devoted to seven critical sectors: agriculture, biodiversity, coastal resources, energy and transportation, forestry, public health, and water supply and flood protection. The plan outlines the range and severity of potential impacts — eroding coastlines, flooded freeways, extended wildfire seasons, devastating disease outbreaks. The executive summary lists a dozen action items and an appendix of 163 further recommendations.
Mostly, these directives call for better coordination between federal, state, and local regulators; updating of existing resource-management plans in light of the latest scientific findings; ongoing research to sharpen estimates of impending change; and funding to accomplish these aims and, presumably, the more concrete actions that would follow. Perhaps most interesting is the recommendation to create a web site called CalAdapt that would mash up government data with Google maps, providing officials with up-to-date visualizations of rising waters, increasing temperatures, and other risks.
Not all of this is new. California’s coastal and water agencies have been planning for the impact of climate change since the mid-1980s. Until the turn of the century, though, adaptation was a dirty word in Sacramento. “You got slapped on the head if you mentioned it,” says Anthony Brunello, who worked for the Pew Center for Global Climate Change from 1999 to 2001. “It equated to giving up.” But evidence began to mount that the effects were already being felt, particularly a 7-inch rise in sea level at the Golden Gate over the past century, which convinced even hard-core advocates of mitigation that it wasn’t too early to consider, say, building sea walls. In late 2008, Schwarzenegger ordered the California Natural Resources Agency to look into what it would take to adapt to the changes wrought by global warming.
By then, Brunello had become California’s Deputy Secretary for Climate Change and Energy — and the state was deep into a fiscal crisis. He directed state agencies to form sector-specific working groups that invited business leaders, academics, and NGOs to help hash out the strategy. The governor released the plan just in time for the Copenhagen climate summit – only to see it swept off the front pages when leaked emails from eminent climate scientists sparked the Climategate scandal.
That was a pity because — lack of bold proposals notwithstanding — the Climate Adaptation Strategy is a significant step forward in the U.S. response to climate change. “Of the dozen states that have published or are working on plans that include adaptation measures, California stands out for the breadth and depth,” says Terri Cruce, a climate researcher with the Pew Center for Global Climate Change and the Georgetown Climate Center. (Cruce maintains a web site detailing climate-change adaptation initiatives on a state-by-state basis.) The report covers every state agency and reaches into every vital sector that’s touched by climate change. Most important, it establishes a permanent task force to guide implementation, so the effort won’t die when Schwarzenegger leaves office. And although it may seem trendy, the CalAdapt web site looks like an especially smart move, creating a convenient, cost-effective way for officials to see how the latest projections play out in their jurisdictions.
THE ART OF TRAINING.
It is very clear that the most simple and the most obvious of the modes by which a parent may establish among his children the habit of submission to his authority, are those which have been already described, namely, punishments and rewards—punishments, gentle in their character, but invariably enforced, as the sure results of acts of insubordination; and rewards for obedience, occasionally and cautiously bestowed, in such a manner that they may be regarded as recognitions simply, on the part of the parent, of the good conduct of his children, and expressions of his gratification, and not in the light of payment or hire. These are obviously the most simple modes, and the ones most ready at hand. They require no exalted or unusual qualities on the part of father or mother, unless, indeed, we consider gentleness, combined with firmness and good sense, as an assemblage of rare and exalted qualities. To assign, and firmly and uniformly to enforce, just but gentle penalties for disobedience, and to recognize, and sometimes reward, special acts of obedience and submission, are measures fully within the reach of every parent, however humble may be the condition of his intelligence or his attainments of knowledge.
Another Class of Influences.
There is, however, another class of influences to be adopted, not as a substitute for these simple measures, but in connection and co-operation with them, which will be far more deep, powerful, and permanent in their results, though they require much higher qualities in the parent for carrying them successfully into effect. This higher method consists in a systematic effort to develop in the mind of the child a love of the principle of obedience, by express and appropriate training.
Parents not aware of the Extent of their Responsibility.
Many parents, perhaps indeed nearly all, seem, as we have already shown, to act as if they considered the duty of obedience on the part of their children as a matter of course. They do not expect their children to read or to write without being taught; they do not expect a dog to fetch and carry, or a horse to draw and to understand commands and signals, without being trained. In all these cases they perceive the necessity of training and instruction, and understand that the initiative is with them. If a horse, endowed by nature with average good qualities, does not work well, the fault is attributed at once to the man who undertook to train him. But what mother, when her child, grown large and strong, becomes the trial and sorrow of her life by his ungovernable disobedience and insubordination, takes the blame to herself in reflecting that he was placed in her hands when all the powers and faculties of his soul were in embryo, tender, pliant, and unresisting, to be formed and fashioned at her will?
The Spirit of filial Obedience not Instinctive.
After yesterday's post about steam engines on South African railroads, I received a couple of questions about it. The first was about Garratt-type locomotives, which were never used on US railroads and thus sparked the curiosity of several knowledgeable readers. The second was about the Montagu Pass, which I described as 'magnificent'.
Garratt locomotives were a unique design, of particular value on narrow-gauge railroads that couldn't support the weight of very large or powerful engines. They were articulated, with a steam engine at either end fed by a single boiler in a central unit. This meant that a single Garratt engine could generate 60% to 80% more power than a conventional locomotive. The central boiler unit also meant that it was more economical in its use of coal and water than two locomotives would have been, and required only one crew to operate it instead of two. The weight on the rails was also more evenly spread, as a single Garratt engine would put less pressure on rails, bridges and ballast than two conventional units. Finally, its articulated design meant that it could handle tight curves and restricted conditions much better than fixed units. You can read a good summary of the Garratt's advantages and disadvantages here.
They were an ingenious solution to a set of conditions often encountered on colonial railroads. More than 1,600 were built, in a large range of different models and sizes, and a large number have survived in museums. At least two are still in use on South African railroads, in private hands as far as I know.
The Garratts were often used on the Montagu Pass, because it's very narrow in parts, very twisty, and extremely steep, so the higher power output of the double-engine design was very useful there. Here are two video clips illustrating the pass and the Garratts that worked there. I recommend watching both in full-screen mode.
The first shows the departure from George and the ascent of the Pass. I remember this trip so well that it made me almost painfully homesick to watch this clip this morning. I don't want to go back to South Africa - there are too many very bad memories for that - but this made me smell the fynbos vegetation again, and the coal smoke from the engine.
The road you can see across the valley from the railway line is the Outeniqua Pass, built using Italian prisoner-of-war labor during and after World War II to replace the much narrower and steeper Montagu Pass, which was originally designed for ox-wagons and horse travel, and was thus less suitable for motor vehicles. It was extensively modernized during the 1990s. Again, I've traveled that road many, many times.
The second video clip shows the Union Express - once a mainline train, but now an occasional tourist event - leaving Oudtshoorn, in the Little Karoo, and heading towards the Montagu Pass and George from the inland plateau. It shows the Garratt engines to good advantage, as well as the beauty of the route - one of the loveliest I've ever traveled by rail or road.
Yes . . . I may never want to go back, but I can still feel homesick . . .
National Standards in reading, writing and mathematics for students in Years 1-8 are now in their third year of implementation, and Fairfax Media recently published data facilitating the comparison of schools on the performance of their students against the standards. The Ministry of Education has also published National Standards data.
There are two arguments often voiced by politicians and media commentators in favour of the publication of these kinds of data. The first is that the publication of National Standards data is a legitimate mechanism to hold schools accountable to the taxpayers who fund them and the parents whose children attend them. The second is that comparative data can assist parents to make informed choices about the schools in which to enrol their children.
The New Zealand Assessment Academy, a group of leading researchers in educational assessment and measurement, does not agree with either of these arguments.
While we acknowledge that accountability is important, we do not believe that the data published by Fairfax will serve accountability, either by the way they presented the data or by the conclusions which are likely to be drawn from them.
Neither, at this stage, do we support the publication of National Standards data by the Ministry of Education. We believe that what might seem valuable information about the performance of schools in fact runs a serious risk of misinforming the public and, in some cases, of unfairly tarnishing schools' reputations.
If data are to be used for accountability purposes they must have a high degree of integrity and reliability. At present, there is no evidence that this is so.
Even if sufficient reliability can be achieved, a simple comparison of schools on the basis of proportions of students at, above, below, and well below the standards, without taking into account the characteristics of schools, is likely to misrepresent schools with substantial proportions of students from disadvantaged socio-economic backgrounds. Educational data from across the developed world show that socio-economic capital is strongly correlated with educational success. Although there are salutary examples of schools that overcome negative social and economic influences on students' achievement, there are factors beyond the control of teachers and schools that render any point-in-time achievement comparison almost meaningless.
Any comparison between schools ought to focus on measures of progress, and not on point-in-time performance. To some extent this can begin to address the difficulty of comparing schools catering to very different demographics; for example, a low-decile school that compares unfavourably with higher-decile schools on the proportions of their students at or above the standard, might nonetheless be able to show that its students make as much or more progress than students at higher-decile schools.
At present, however, there is no sound mechanism to measure progress in the National Standards domains.
Additionally, the regular publication of any table of assessment data is very likely to produce a bias in teaching towards the elements of learning that are captured in the table. Literacy and numeracy are important skills that ought to be (and are) emphasised in the primary curriculum; however, they are not the only things that are important.
Data like these could produce an incentive for schools to drill aspects of reading, writing and mathematics in order to support favourable judgments, potentially losing other important skills in the process. In other words, comparing schools on assessment outcomes is likely to lead to a narrowing of the curriculum to just those aspects that are most easily measured or assessed.
While the New Zealand Assessment Academy acknowledges the right of all New Zealanders to access educational data and supports the right of the press in a free society to publish those data, we do not support the production of tables of data that can be used to compare schools, like those produced by Fairfax. We urge media representatives to think carefully about, and take much greater responsibility for, the effect that their publication of these data is likely to have.
Neither do we support the publication of National Standards data by the Ministry of Education at present, because we remain unconvinced that the quality of the data are sufficient to justify publication. Instead, we urge the Ministry to devote its resources to providing support to schools to improve the quality and consistency of teachers' judgments, to developing a sound method of measuring and reporting individual students' progress over time in reading, writing, and mathematics, and to provide further guidance and practical assistance to enable teachers to meet the learning needs of all students.
Michael Johnston is a senior lecturer in education at Victoria University and writes on behalf of the New Zealand Assessment Academy.
Shocking figures published this week show that in some parts of the UK, nearly half of children live in poverty. Nowhere is free from child poverty, and even in the most affluent areas, families are struggling to get by from day to day.
The Campaign to End Child Poverty - of which The Children's Society is a member - has produced a new map of child poverty in the UK. It reveals a deeply divided nation. At one end of the scale, in Manchester Central, 47% of children are in poverty. At the other, in the Sheffield Hallam constituency, the figure is less than 5%.
But no community is free from the problem, and the map puts into stark relief the need for urgent action to end child poverty in the UK. With 3.6 million children currently in poverty, we've got a long way to go.
Behind these numbers lie the reality that children living in poverty have to face every day. A shocking 1,660 families - with either children or a pregnant woman - were forced to live in unsuitable bed and breakfast style accommodation in the first part of 2012. This is an increase of 60% on the previous year. A survey from The Children's Society recently revealed that nearly half of teachers often see children coming into school hungry, with no lunch and no means to pay for one. Six million households are struggling just to afford to heat their homes.
Unless things change, the situation is only going to get worse in coming years. The government's decision to cut financial support for many of the lowest income households are going to put still more pressure on struggling families.
We know, for example, that the decision to restrict increases in benefits to just 1% for the next three years - well below the rate at which prices are set to rise - is going to have a deeply disproportionate impact on families with children.
Whilst around three in every 10 households are affected by the change, nearly nine in 10 families with children will lose out. The government's own estimates predict that this change alone will push around 200,000 into poverty.
Similarly, cuts to support with housing costs (particularly for families living in private rental housing), is likely to mean that families increasingly finding themselves squeezed into the most deprived areas, unable to find housing which they can afford elsewhere.
The Institute for Fiscal Studies estimates that rather than end child poverty, the number of children in poverty is set to rise by several hundred thousand by 2020. To reverse this trend, the government urgently needs to change course, to ensure that no child faces a childhood blighted by poverty.
And local authorities are increasingly going to find themselves having to make tough decisions, as localisation of support puts them at the frontline of welfare reform. From deciding who gets help with their Council Tax bills, through to making decisions about the provision of emergency support for families. Given this, it is crucial that the impact on child poverty is kept front-and-centre of every decision that every Local Authority makes.
All the main political parties are signed up to eradicating child poverty by 2020. As these figures show, we've got a long way to go. Unless the government and councils urgently change course and step up the fight against child poverty, we'll continue to condemn millions of children to a life of poverty.
Find out the child poverty rate in your local area using our interactive version of the End Child Poverty map on The Children's Society website.
One area of great concern for Americans is the Pacific Northwest, and in particular, the Oregon coast.
This concern about an Oregon earthquake is not a new one for residents. More than 100 faults can be found running under Oregon and Washington. These faults are responsible for the 1,000 plus earthquakes that occur throughout the area each year.
Scientists have been concerned about the potential for a large earthquake beneath Portland for some time. They believe it is only a matter of time before an earthquake similar to the magnitude 6.8 Nisqually earthquake, which hit the Seattle area in 2001, strikes the Portland area.
As if a 6.8 earthquake were not enough, a 2008 study reveals that the region is somewhat overdue for a mega-earthquake as well.
Why an Oregon Mega-Earthquake?
Scientifically speaking, the mega-earthquake is actually a mega-thrust earthquake. This type of earthquake occurs at subduction zones along destructive plate boundaries. It is at these boundaries that one tectonic plate is forced under, or “subducts” beneath, another. During the subduction process, large sections can get stuck. When these sections finally break free, there is a large release of energy, causing some of the world's largest earthquakes.
Only this type of tectonic activity is known to produce earthquakes of a magnitude of 9.0 or more. According to the study, all six earthquakes from the last century with a magnitude of 9.0 or greater have been mega-thrust earthquakes.
When the Nisqually earthquake hit Seattle, it caused more than $2 billion in damage. It is safe to estimate that if an earthquake of comparable size hit Portland, residents would see a comparable amount of damage to their city. And if a 9.0 earthquake were to hit the Oregon coast, the results would be devastating.
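To put the gap between a 6.8 and a 9.0 in perspective (this calculation is not from the article itself): moment magnitude is a logarithmic scale, and under the standard Gutenberg-Richter energy relation, radiated energy grows by a factor of 10^1.5 (about 32) for each whole magnitude step. A quick sketch, assuming that standard relation:

```python
# Relative seismic energy released by two earthquakes, using the
# standard Gutenberg-Richter energy-magnitude relation:
#   log10(E) = 1.5 * M + const   =>   E2 / E1 = 10 ** (1.5 * (M2 - M1))

def energy_ratio(m_small: float, m_large: float) -> float:
    """Factor by which the larger quake's radiated energy exceeds the smaller's."""
    return 10 ** (1.5 * (m_large - m_small))

if __name__ == "__main__":
    # Nisqually (6.8) vs. a hypothetical 9.0 megathrust event
    ratio = energy_ratio(6.8, 9.0)
    print(f"A magnitude 9.0 releases roughly {ratio:,.0f} times the energy of a 6.8")
```

By this rough measure, a 9.0 event would release on the order of 2,000 times the energy of the Nisqually quake, which is why damage comparisons between the two scale so dramatically.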
Oregon is Not Prepared
Though it is nearly impossible to make a city “earthquake proof,” it is possible to take precautions that mitigate the damage of an earthquake, even one with a magnitude of 9.0.
Unfortunately for Oregon residents, it appears that these precautions have not been taken. According to Peter Courtney, President of the Oregon State Senate, “We are not prepared.”
It is not as though voters are not trying, though. In 2002, they authorized the state to use bonds to seismically retrofit key buildings in the state’s infrastructure, such as schools, police stations and hospitals.
Nevertheless, the retrofitting is not happening as quickly as Courtney, and many others, had hoped. Courtney expressed his disappointment in a statement to Oregon Public Broadcasting.
“I’ve never understood this issue in terms of decision makers. There’s always something more important to them. And you know why? They just think, well, they say it’s going to happen, but maybe it won’t happen. I know that’s what’s out there. All the evidence by all the scientists worldwide who’ve studied this thing for Oregon says you’re going to get hit. It could be as bad as 9.3. You’re going to get clobbered.”
Oregon’s Governor, John Kitzhaber, stated he liked that idea, “I think we can do seismic upgrades as well [meaning in tandem with already proposed efficiency upgrades], so I think there’s an opportunity to re-employ good trade jobs throughout the state. Make our schools safer and more energy efficient at the same time.”
However, experts say that it is not just the infrastructure that is lacking in earthquake preparedness, but the residents as well. Most of Oregon’s residents do not carry earthquake insurance, nor do they keep enough supplies on hand to be prepared for such a disaster. To be truly prepared, residents should have enough food, first aid, and water to last for at least two weeks.
If there is anything to be learned from the tragedy in Japan, it is that mega-earthquakes do happen and the only strategy available to survive them is to be prepared.
Originally published on TopSecretWriters.com
|
<urn:uuid:f24d4587-42d4-4644-9d8e-d1c13c68ec7f>
|
CC-MAIN-2016-26
|
http://www.topsecretwriters.com/2011/03/next-mega-earthquake-predicted-off-coast-of-oregon/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396100.16/warc/CC-MAIN-20160624154956-00090-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.965568 | 880 | 3.375 | 3 |
During the first few centuries of Christianity, martyrdom was a fact of life for those who confessed their faith in Jesus Christ. Persecutions raged over the whole Roman Empire; some were widespread while others were confined to local areas. Egypt gave more martyrs than any other country in the world, and thus our fathers became a living example of how to be a faithful member of the Church of Christ.
During the reign of Emperor Lucinius, the ruler of the Eastern Provinces of the Roman Empire (313 A.D.), Prince Theodore El Shatebi was martyred. His courage and endurance prompted thousands of people to adopt Christianity.
Prince Theodore was born in 281 in Achaea, a port city, on the Black Sea. Later, he lived in Herculea, in Asia Minor. His father John was a Christian Egyptian from a city called Shateb in Upper Egypt.
The father was a soldier in the Roman army. He went to Antioch to fight against the Persians, and because he was a man of courage he was given the prince's daughter Oussawaia to be his wife. Oussawaia gave birth to a child and named him Theodore. In the meantime, Oussawaia tried very hard to draw her husband John into idol worship, but John had a strong faith in our Lord Jesus. Nevertheless, his wife used to belittle him by saying that her father the prince had bought him as a slave and given him a princess (herself) for a wife, and that he was not grateful. John always felt humiliated and unwelcome in his own house. He used to pray and ask for God's help and guidance.
In his sleep, he saw a vision. An angel appeared to him and said, "Do not be afraid John. Leave your pagan wife, and return to your country. Do not worry about your son Theodore. He will become a great Christian, and because of his strong faith, thousands will believe in Lord Jesus."
John left his house and went back to Egypt, but he never stopped praying for his son. Theodore, for his part, was always sad when he learned that his father had been driven out of the house because of his faith in our Lord Jesus. One day he told his mother that his father's God was crucified to save the world, but her idol was so weak that it could not even defend itself. Then he pushed the idol to the floor and the statue broke into pieces. A bad spirit came out of the idol in the form of a black giant and burned into smoke in front of them.
Prince Theodore was baptized by a priest called Oliganos at the age of fifteen. He was filled with the Holy Spirit, and because of his boldness and courage in battle, he became very famous. When Diocletian heard about him, he appointed him a commander over five hundred knights, and called him Prince Theodore the Esphehlar (Brave Commander).
One day, after Prince Theodore and his soldiers had fought a fierce battle in the desert, they ran out of water and were about to die of thirst. Prince Theodore prayed earnestly and said, "My Lord Jesus, who gave the Israelites water from the solid rock, please quench our thirst." And suddenly, a strong wind blew, and heavy rain started falling. After the soldiers drank as much as they wanted, they knelt down before the Prince and said, "Blessed is your True God Jesus, who answered your prayer with great might." On that day they were all baptized in Jesus' name.
Later, an angel of the Lord appeared to Theodore and told him to travel to Egypt to see his father. The prince was very happy to learn that his father was still alive. He took some of his faithful soldiers and sailed to Alexandria. Then they walked to Elbehna and went straight to the church of Shateb to ask about the prince's father, John.
At that time the father was old and lying sick in bed. Nevertheless, it was a very emotional reunion. Five days later, the father died, and the prince buried him. Theodore told the people of that city that his wish was to be buried beside his father, because he knew that he would soon die as a martyr.
Prince Theodore left Egypt, and went back to Antioch. After his departure, the Egyptians built a pillar on the River Nile's bank. On top of the pillar they hung the picture of the saint, whom they loved.
Shortly after, Prince Theodore went to fight the Persians, and with him was another saint called Prince Theodore El Mishriky. On the battlefield, the Archangel Michael appeared to encourage and support them. After they defeated the enemy, the Emperor proclaimed Prince Theodore El Shatebi the Hero of the Roman Empire, and appointed him ruler of the city of Otichos.
In Otichos a demon-possessed dragon (maybe a large crocodile) lived in the nearby mountains. The people feared that dragon so much that they used to throw children to him to satisfy his appetite.
At that time, they took two children from a Christian widow to offer them as a sacrifice to the great dragon. The woman wept and prayed to God to save her children. Then she heard a voice saying, "Don't be afraid. Theodore is capable of saving your children." After the saint heard the woman's story, he set off immediately to kill the dragon. On his horse, he fought the dragon for an hour. Then Michael the Archangel appeared to him and helped him until the dragon was killed. The people of the city were very happy to be rid of the evil dragon, and many were baptized in the name of Lord Jesus Christ.
The idols' priests complained to Emperor Lucinius, who ordered St. Theodore to renounce Christ, or face death. When the saint refused to offer sacrifices to the idols, he was tortured in many cruel ways, but the Archangel Michael used to appear to comfort him, and remind him of Jesus' promise, and the eternal glory that is waiting for him.
In the end, the king ordered the soldiers to cut off his head with the sword, and to burn the body. The saint prayed, "My Lord, God, and Savior Jesus Christ, accept my spirit, and protect my body from the fire, so that everyone may know that You are the real God. To You is the power and the glory forever."
Suddenly the Lord of Glory Himself appeared to the Prince in a cloud. He told him, "My beloved Theodore, come to your eternal rest in the Kingdom of Heaven. You have been crowned with the great crown of martyrdom. The fire will not burn your body, for miracles and wonders will be performed through your blessed body, and also through the mentioning of your name."
After Prince Theodore died, his mother Oussawaia carried his body to Egypt, and buried it beside his father's, in the city of Shateb.
Today, more than ever, miracles happen through the intercession of this great saint. May the prayers and the blessings of Prince Theodore the Martyr be with us all. Amen.
|
<urn:uuid:55ef1b40-2ee3-46a0-8d4e-1af0df6ce23c>
|
CC-MAIN-2016-26
|
http://www.copticchurch.net/topics/synexarion/theodore.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403826.29/warc/CC-MAIN-20160624155003-00146-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.988403 | 1,489 | 2.671875 | 3 |
Astronomy Picture of the Day -- 21st Century M101
Posted on 07/13/2012 4:12:38 AM PDT by SunkenCiv
Explanation: One of the last entries in Charles Messier's famous catalog, big, beautiful spiral galaxy M101 is definitely not one of the least. About 170,000 light-years across, this galaxy is enormous, almost twice the size of our own Milky Way Galaxy. M101 was also one of the original spiral nebulae observed with Lord Rosse's large 19th century telescope, the Leviathan of Parsonstown. In contrast, this multiwavelength view of the large island universe is a composite of images recorded by space-based telescopes in the 21st century. Color coded from X-rays to infrared wavelengths (high to low energies), the image data was taken from the Chandra X-ray Observatory (purple), the Galaxy Evolution Explorer (blue), Hubble Space Telescope (yellow), and the Spitzer Space Telescope (red). While the X-ray data trace the location of multimillion degree gas around M101's exploded stars and neutron star and black hole binary star systems, the lower energy data follow the stars and dust that define M101's grand spiral arms. Also known as the Pinwheel Galaxy, M101 lies within the boundaries of the northern constellation Ursa Major, about 25 million light-years away.
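The explanation describes how such a composite is built: each telescope's band is normalized and assigned its own color before the layers are summed. A minimal NumPy sketch of that idea, using small synthetic arrays — the tint weights and function names are illustrative, not the actual APOD processing pipeline:

```python
import numpy as np

def normalize(band):
    """Scale a 2-D intensity array to the 0-1 range."""
    band = band.astype(float)
    lo, hi = band.min(), band.max()
    return (band - lo) / (hi - lo) if hi > lo else np.zeros_like(band)

def composite(xray, uv, optical, infrared):
    """Tint each band roughly per the APOD color coding (X-ray purple,
    UV blue, optical yellow, IR red) and sum the tinted layers."""
    tints = {  # band -> (R, G, B) weights; values are illustrative
        "xray":     (0.6, 0.0, 0.8),
        "uv":       (0.0, 0.2, 1.0),
        "optical":  (1.0, 0.9, 0.0),
        "infrared": (1.0, 0.1, 0.0),
    }
    bands = {"xray": xray, "uv": uv, "optical": optical, "infrared": infrared}
    rgb = np.zeros(xray.shape + (3,))
    for name, data in bands.items():
        rgb += normalize(data)[..., None] * np.array(tints[name])
    return np.clip(rgb, 0.0, 1.0)

# Tiny synthetic example: four random "exposures" of the same 64x64 field.
rng = np.random.default_rng(0)
img = composite(*(rng.random((64, 64)) for _ in range(4)))
print(img.shape)  # (64, 64, 3)
```

With real data, the four input arrays would be the co-registered exposures from Chandra, GALEX, Hubble, and Spitzer.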
(Excerpt) Read more at 184.108.40.206 ...
Another of Charles "Charlie" Messier's messier objects.
rather large isn’t it?
any one of those dots could have a guy typing on a keyboard right now.
ok, no more latte today.
Oh, my my! What a beauty, and twice as big as the Milky Way!
|
<urn:uuid:378d5c79-308f-4379-8d7b-a4c48ed2531d>
|
CC-MAIN-2016-26
|
http://www.freerepublic.com/focus/f-chat/2906038/posts
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396222.11/warc/CC-MAIN-20160624154956-00183-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.901305 | 422 | 3.109375 | 3 |
Wolftrap Seadevils
Theodore W. Pietsch
Thaumatichthyids are among the most bizarre of anglerfishes, characterized by having large tooth-like denticles associated with the esca, but more striking, an enormous upper jaw that extends far forward, each premaxilla bearing numerous long hooked teeth. The premaxillae, with their extraordinarily long teeth are capable of flipping up and down to enclose, when in the ventral position, the much shorter lower jaw, forming a cage-like compartment within which prey are held prior to swallowing, reminiscent of the Venus fly-trap among carnivorous plants.
The family Thaumatichthyidae contains eight currently recognized species in two genera, Thaumatichthys Smith and Radcliffe (1912) and Lasiognathus Regan (1925). Like many ceratioid taxa, the thaumatichthyids are rare and known from just over 60 specimens.
Dorsal (top) and ventral (bottom) views of Thaumatichthys binghami, UW 47537. The ventral view displays the sessile esca on the roof of the mouth. © 2005 University of Washington
Metamorphosed females of the family Thaumatichthyidae are distinguished from those of all other ceratioid families by having elongate premaxillae that extend anteriorly far beyond the lower jaw, the bones widely separated anteriorly at the symphysis but connected by a broad elastic membrane; teeth on premaxillae extremely long, curved or hooked; esca bearing 1-3 large tooth-like dermal denticles; and opercle bifurcate, dorsal fork divided into two or more branches.
Metamorphosed females are further differentiated by having the following combination of character states: supraethmoid well developed (Lasiognathus), very much reduced or absent (Thaumatichthys); frontals long, narrow, and widely separated, ventromedial extensions present (Lasiognathus) or absent (Thaumatichthys); parietals present; sphenotics large, spine extremely well-developed (Lasiognathus) or small, conical, without spine (Thaumatichthys); pterosphenoid, metapterygoid, and mesopterygoid present; hyomandibular with a double head; hypohyals 2; branchiostegal rays 6 (2 + 4); subopercle long and narrow (Thaumatichthys) or short and oval (Lasiognathus), posterior margin of dorsal part without indentation, ventral part with (Thaumatichthys) or without (Lasiognathus) spine or projection on anterodorsal margin; quadrate and articular spines very well-developed (Lasiognathus) or rudimentary (Thaumatichthys); angular and preopercular spines absent; lower jaw without symphysial spine; postmaxillary process of premaxilla absent; anterior-maxillomandibular ligament reduced (Thaumatichthys) or absent (Lasiognathus); pharyngobranchials I and IV absent; pharyngobranchials II and III well developed and toothed; hypobranchials 1-3 present; only a single ossified basibranchial; epibranchial and ceratobranchial teeth absent; epibranchial I bound to wall of pharynx; proximal one-half to two-thirds of ceratobranchial I bound to wall of pharynx; distal end of ceratobranchial I free, not bound by connective tissue to adjacent ceratobranchial II; proximal one-quarter to one-half of ceratobranchials II-IV not bound together by connective tissue; epurals absent; posterior margin of hypural plate entire (Lasiognathus) or deeply notched (Thaumatichthys); pterygiophore of illicium bearing a small ossified remnant of second cephalic spine; escal bulb and central lumen present; posteroventral process of coracoid absent; pectoral radials 3; pelvic bones cylindrical, only slightly expanded distally; dorsal-fin rays 5-7; anal-fin rays 4-5; pectoral-fin rays 14-20; caudal-fin rays 9 (2 simple + 4 bifurcated + 3 simple); skin everywhere naked, dermal spinules absent (Lasiognathus), or present on ventral surface of head and body of metamorphosed specimens (Thaumatichthys); ovaries paired; pyloric caecae absent.
Males and larvae are known only for Thaumatichthys.
Regan (1925, 1926), followed by Regan and Trewavas (1932), Bertelsen (1951), and Maul (1961, 1962), included Lasiognathus in the family Oneirodidae, together with Thaumatichthys, which was originally placed in a family of its own, the Thaumatichthyidae Smith and Radcliffe (1912). Pietsch (1972) resurrected the Thaumatichthyidae to include both Lasiognathus and Thaumatichthys. Bertelsen and Struhsaker (1977) compared the osteology of Thaumatichthys and Lasiognathus, pointing out that the latter appears more closely related to the Oneirodidae in several of the characters in which it differs from Thaumatichthys. The Oneirodidae, however, is undefined cladistically (a long list of characters in combination, some or none of which may be derived, is presently required to contain the 16 morphologically diverse oneirodid genera; see Pietsch, 1974). Bertelsen and Struhsaker (1977:34) noted, therefore, that “it becomes a subjective choice whether the genera Lasiognathus and Thaumatichthys both should be included in the Oneirodidae as Regan (1926) did, or placed together in Thaumatichthyidae as proposed by Pietsch (1972), or whether each of them should be referred to a family of its own.” At the same time, however, they cited the two unique features used by Pietsch (1972) to diagnose the Thaumatichthyidae (premaxillae extending anteriorly far beyond lower jaw, and enlarged dermal denticles associated with the esca) and added a third (dorsal portion of opercle divided into two or more branches). In the end, they chose to retain the Thaumatichthyidae in the enlarged sense as proposed by Pietsch (1972).
The 27 known specimens of Lasiognathus were collected from widely scattered localities in the Atlantic and Pacific oceans between approximately 47°N and 35°S. The 33 known specimens of Thaumatichthys are worldwide in distribution between approximately 32°N and 27°S.
1A. Head narrow; pterygiophore of illicium long, anterior end emerging on snout from between frontal bones; illicium long, greater than 35% SL; esca at tip of illicium, bearing two or three large tooth-like denticles; skin naked, dermal spinules absent; dorsal-fin rays 5; anal-fin rays 5 (Lasiognathus Regan, 1925)
1B. Head broad, depressed; pterygiophore of illicium short, completely hidden beneath skin of head; illicium short, nearly fully enveloped by tissue of esca; esca sessile on roof of mouth, with one deeply embedded dermal denticle; dermal spinules present in skin of ventral surface of head and body; dorsal-fin rays 6-7; anal-fin rays 4 (Thaumatichthys Smith and Radcliffe, 1912)
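The two-couplet key above is mechanical enough to express as code. A small sketch, assuming a specimen is described as a dict of the key's character states — the field names are invented for illustration; the thresholds follow the key's wording:

```python
def identify_thaumatichthyid(specimen):
    """Apply the two-couplet key (1A/1B) to a specimen dict.
    Returns the genus with author and year, or a fallback string."""
    # Couplet 1A: Lasiognathus Regan, 1925
    if (specimen["illicium_length_pct_SL"] > 35
            and specimen["escal_denticles"] in (2, 3)
            and specimen["dorsal_fin_rays"] == 5
            and specimen["anal_fin_rays"] == 5):
        return "Lasiognathus Regan, 1925"
    # Couplet 1B: Thaumatichthys Smith and Radcliffe, 1912
    if (specimen["esca_sessile_on_mouth_roof"]
            and specimen["escal_denticles"] == 1
            and 6 <= specimen["dorsal_fin_rays"] <= 7
            and specimen["anal_fin_rays"] == 4):
        return "Thaumatichthys Smith and Radcliffe, 1912"
    return "not determinable from these characters"

print(identify_thaumatichthyid({
    "illicium_length_pct_SL": 40, "escal_denticles": 3,
    "dorsal_fin_rays": 5, "anal_fin_rays": 5,
    "esca_sessile_on_mouth_roof": False,
}))  # Lasiognathus Regan, 1925
```

A real identification would of course also weigh the skeletal characters listed in the diagnosis, not just the four states encoded here.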
Bertelsen, E. 1951. The ceratioid fishes. Ontogeny, taxonomy, distribution and biology. Dana Rept., 39, 276 pp.
Bertelsen, E., and P. J. Struhsaker. 1977. The ceratioid fishes of the genus Thaumatichthys: Osteology, relationships, distribution, and biology. Galathea Rept., 14: 7-40.
Maul, G. E. 1961. The ceratioid fishes in the collection of the Museu Municipal do Funchal (Melanocetidae, Himantolophidae, Oneirodidae, Linophrynidae). Bol. Mus. Mun. Funchal, 14(50): 87-159.
Maul, G. E. 1962. On a small collection of ceratioid fishes from off Dakar and two recently acquired specimens from stomachs of Aphanopus carbo taken in Madeira (Melanocetidae, Himantolophidae, Diceratiidae, Oneirodidae, Ceratiidae). Bol. Mus. Mun. Funchal, 16(54): 5-27.
Pietsch, T. W. 1972. A review of the monotypic deep-sea anglerfish family Centrophrynidae: Taxonomy, distribution, and osteology. Copeia, 1972(1): 17-47.
Pietsch, T. W. 1974. Osteology and relationships of ceratioid anglerfishes of the family Oneirodidae, with a review of the genus Oneirodes Lütken. Nat. Hist. Mus. L. A. Co., Sci. Bull., 18, 113 pp.
Pietsch, T. W. 2005. A new species of the ceratioid anglerfish genus Lasiognathus Regan (Lophiiformes: Thaumatichthyidae) from the Eastern North Atlantic off Madeira. Copeia, 2005(1): 77-81.
Regan, C. T. 1925b. New ceratioid fishes from the N. Atlantic, the Caribbean Sea, and the Gulf of Panama, collected by the “Dana.” Ann. Mag. Nat. Hist., Ser. 8, 8(62): 561-567.
Regan, C. T. 1926. The pediculate fishes of the suborder Ceratioidea. Dana Oceanogr. Rept. 2, 45 pp.
Regan, C. T., and E. Trewavas. 1932. Deep-sea anglerfish (Ceratioidea). Dana Rept., 2, 113 pp.
Smith, H. M., and L. Radcliffe. 1912. Scientific results of the Philippine Cruise of the fisheries steamer "Albatross," 1907-1910, No. 20. Description of a new family of pediculate fishes from Celebes. Proc. U. S. Nat. Mus., 42: 579-581.
Theodore W. Pietsch
University of Washington, Seattle, Washington, USA
Correspondence regarding this page should be directed to Theodore W. Pietsch at
Page copyright © 2005 Theodore W. Pietsch
Page: Tree of Life Thaumatichthyidae. Wolftrap Seadevils. Authored by Theodore W. Pietsch. The TEXT of this page is licensed under the Creative Commons Attribution-NonCommercial License - Version 3.0. Note that images and other media featured on this page are each governed by their own license, and they may or may not be available for reuse. Click on an image or a media link to access the media data window, which provides the relevant licensing information. For the general terms and conditions of ToL material reuse and redistribution, please see the Tree of Life Copyright Policies.
- First online 06 November 2005
Citing this page:
Pietsch, Theodore W. 2005. Thaumatichthyidae. Wolftrap Seadevils. Version 06 November 2005 (under construction). http://tolweb.org/Thaumatichthyidae/22007/2005.11.06 in The Tree of Life Web Project, http://tolweb.org/
|
<urn:uuid:9570906d-d1de-43de-9345-23a3954a279b>
|
CC-MAIN-2016-26
|
http://tolweb.org/Thaumatichthyidae
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.2/warc/CC-MAIN-20160624154951-00113-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.820649 | 2,735 | 3.34375 | 3 |
Fear can take all shapes and sizes. Fear can become a phobia and/or an obsession if left unaddressed. Fears usually begin in childhood and manifest themselves in many ways – hysterical crying, freezing up, physical and emotional outbursts. But there are those rare instances when the fear begins and manifests itself in adulthood because of a personal trauma. For instance, there was a woman in the United States who, after a debilitating car accident, could not make right-hand turns. This fear became an obsessive phobia for her, as she would study maps for hours just to find a way to reach her destination without having to make a right-hand turn. During the late 1200’s humankind shared a common fear which for some became an obsessive phobia – the fear of being buried alive, most commonly known as premature burial. The fear of being prematurely buried alive is humankind’s oldest fear.
Those in the medical profession during these time periods were not as skilled or knowledgeable as they are today. Barbers and surgeons were frequently one and the same, which aided in the misdiagnosis of death. However, it should be noted that evidence of “misdiagnosis” is difficult to document: there were very few details printed on death certificates; a doctor did not need to be present to declare someone deceased – he just needed to be told that someone had died, and that is what he attested to on the death certificate; pallbearers and funeral processions were not smooth, frequently banging the coffins on a wall or hitting a pothole; and the bodily functions of a corpse can make it appear as if the person had awakened and struggled to get out of the coffin.
Although the fear of premature burial reached its height in the 19th century in Europe it stretches back to ancient times.
• The grammarian and metaphysician, Johannes Duns Scotus died in Cologne in 1308. When the vault his corpse resided in was opened later he was found lying outside the coffin.
• Thomas à Kempis died in 1471 and was denied canonization because splinters were found embedded under his nails. Anyone aspiring to be a saint would not fight death if he found himself buried alive (Wilkins, 21)!
• Ann Green was hanged by the neck until dead – or so they thought – in 1650 at Oxford. She was found to be alive after being placed in a coffin for burial. One kindly gentleman attempted to assist her back to the land of the dead by raising his foot and stamping her chest and stomach with such severe force that he only succeeded in completely reviving her. She lived a long life and bore several children (Wilkins, 21).
• Premature burial did not affect only the poor but the wealthy and politically important. Emperor Zenon was buried alive although some historians feel it was a deliberate “premature” burial spearheaded by his wife (Wilkins, 22)
• A young priest was in the pulpit one morning when he was seized with giddiness. He soon lost the power of speech and sunk to the floor. Although he could not see he could hear voices as they prepared him for burial. It wasn’t until a familiar voice spoke to him that he was able to rise. He was in the pulpit the next day – business as usual (Wilkins, 23)!
• Virginia Macdonald was buried in a Brooklyn cemetery in 1850. Her mother was so persistent that she had been buried alive that authorities finally relented and raised her coffin. The lid was opened to find that her delicate hands had been badly bitten and she was lying on her side.
• When the Les Innocents cemetery in Paris, France was moved from the center of the city to the suburbs the number of skeletons found face down convinced the lay people and several doctors that premature burial was very common (Wilkins, 25).
There are many other stories of men, women, and children suspected of being buried alive when their coffins were later opened and their fingers and hands had been chewed, faces distorted in fear, and their bodies not just lying spilled onto the floor but several feet away sitting in a corner huddled in fear. It is believed that these people were rare instances.
Another common factor in premature burial was plagues and epidemics of smallpox and cholera, to name a few. It has been reported that during the Black Plague, people who were pronounced deceased could be heard crying for help within the heaps of dead bodies. Usually, these people were left as they were and buried in mass graves – no one had the stomach to dig them out, and it would only delay work schedules which were already overwhelming.
Beginning in the 1900’s funerals and burials were delayed to ensure that the deceased was in fact deceased! Prior to the 1900’s there were a couple of books that people referred to determine whether or not someone was actually dead. French physician Jacques Benigne Winslow published “The Uncertainty of the Signs of Death.” Dr. Winslow was what most termed an expert in the field as he had claimed to be placed in a coffin and prepared for burial twice. His thesis was a body can be called a corpse only when signs of putrefaction were obvious. In other words keep the loved one out of the ground until the stench of decomposing flesh removed all doubt (Wilkins, 16).
Montague Summers wrote “The Vampire: His Kith and Kin in 1928 where he drew parallels between premature burial and the rising of the dead. I’m sure he was one that insisted upon the staking of the heart or some other form of corpse mutilation to ensure death.
With all this information it was a good thing that people in the early 1900’s did not hurry to bury their dead. Micah Hibble was diagnosed as dead three times before the Grim Reaper actually kept him!
There were diseases and conditions that gave the appearance of death to even the healthiest of people. Catalepsy is an immobility of the muscles; the body takes on a waxy flexibility and can be molded into bizarre positions. This condition usually occurs when hysteria, hypnotism or schizophrenia (without medicine) is present. Fainting spells and falling into trances or comas were often misdiagnosed as death. A sentient corpse is someone who is aware of their surroundings but can do nothing to alert people to what is going on. This is an interesting phenomenon and has been known to occur during autopsies.
There have been a number of inventions to confirm the diagnosis of death. There was a society of men and women called the Society for the Prevention of People Being Buried Alive. These people encouraged a slow burial process – the Duke of Wellington died in 1852 but was not buried for over two months! Crowbars and shovels were buried with loved ones so they could dig themselves out of the grave – I’m not sure how they were to maneuver the tools …
The wealthy had more options of course. Some chose special capsules that would be penetrated by nails as the lid of the coffin was lowered and sealed releasing a deadly poisonous gas. Others chose to purchase Bateson’s Belfry which was an iron bell mounted on the lid of the coffin just above the head. The bell could be easily rung with a pull cord inside the coffin. The smallest tremor would set the bell ringing. There is no record of anyone using this device successfully!
Mr. Bateson so feared premature burial that he doused himself with linseed oil and set himself on fire (www.members.tripod.com)!
There were other devices used over time – a telephone, loudspeaker, security system that could be operated from inside the vaults, and people even left behind food and water just in case!
Death is a touchy subject for many people without the added fear of premature burial or of being sentient during an autopsy. No one wants to awaken in a crypt or on a table with their body flayed open for the world to see. However, that does happen. Although pronounced dead by three doctors after almost two hours in a noose, a young man was found to have a heartbeat. The doctors present opened his chest and found the heart was indeed beating with force! After five hours, and with his chest still open to prodding fingers, his heart finally quit beating and he died.
Not only do people fear premature burial, but Hollywood has made a fortune from the topic! Edgar Allan Poe not only wrote “The Premature Burial,” but it has made it to Broadway and the movie screen. In 1963, The Comedy of Terrors took what was a horrifying subject just a short time ago and turned it into something funny, starring Vincent Price, Boris Karloff, Basil Rathbone, and Peter Lorre.
I’ve just touched on some of what is known about the fear that humankind keeps tucked back in the dark recesses of its mind, but it is less likely that one would be buried alive nowadays, given the legal requirements for preparing a dead body. The fear of premature burial has been replaced with the fear of waking on the autopsy table!
Not only did people have to fear being misdiagnosed, but they had to fear the grave robbers, resurrectionists, and opportunists who saw a profit. Once a corpse had been identified for burial, God help the person, because if they were not dead at the time of burial, in most cases they were murdered. More on that later!
Wilkins, Robert. Death: A History of Man's Obsessions and Fears. Barnes & Noble, 1990.
|
<urn:uuid:976dac97-d4cd-46af-8bc9-0171a35731c6>
|
CC-MAIN-2016-26
|
http://www.theshadowlands.net/premature.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395992.75/warc/CC-MAIN-20160624154955-00159-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.98591 | 1,964 | 2.765625 | 3 |
Are regional models ready for prime time?
[Update 15 April 2015]
The summaries of the Climate Dialogue on Regional Modelling are now online (see links below). We have made two versions: an extended and a shorter version.
Both versions can be downloaded as pdf documents:
Summary of the Climate Dialogue on Regional Modelling
Extended summary of the Climate Dialogue on Regional Modelling
The third Climate Dialogue is about the value of models on the regional scale. Do model simulations at this level have skill? Can regional models add value to the global models?
We have three excellent participants joining this discussion: Bart van den Hurk of KNMI in the Netherlands, who is actively involved in the KNMI scenarios; Jason Evans from the University of New South Wales, Australia, who is a coordinator of the Coordinated Regional Climate Downscaling Experiment (CORDEX); and Roger Pielke Sr., who through his research articles and his weblog Climate Science is well known for his outspoken views on climate modelling.
Climate Dialogue editorial staff
Rob van Dorland, KNMI
Marcel Crok, science writer
Introduction regional modelling
Are climate models ready to make regional projections?
Climate models are vital tools for helping us understand long-term changes in the global climate system. These models allow us to make physically plausible projections of how the climate might evolve in the future under given greenhouse gas emission scenarios.
Global climate projections for 2050 and 2100 have, amongst other purposes, been used to inform potential mitigation policies, i.e. to get a sense of the challenge we are facing in terms of CO2 emission reductions. The next logical step is to use models for adaptation as well. Stakeholders have an almost insatiable demand for future regional climate projections. These demands are driven by practical considerations related to freshwater resources, especially ecosystems and water-related infrastructure, which are vulnerable to climate change.
Global climate models (GCMs), though, have grid scales that are quite coarse (>100 km). This hampers the reconstruction of climate change at smaller (regional to local) scales. Regions the size of, say, the Netherlands are usually covered by only a few grid points. A crucial question therefore is whether information from global climate models at this spatial scale is realistic and meaningful, in hindcast and/or for the future.
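The scale mismatch described above can be made concrete with a back-of-envelope count. The region dimensions and grid sizes below are rough, illustrative values, not taken from any particular model:

```python
import math

def grid_cells_covering(region_km, grid_km):
    """Number of grid cells needed to span a rectangular region."""
    nx = math.ceil(region_km[0] / grid_km)  # cells along one axis
    ny = math.ceil(region_km[1] / grid_km)  # cells along the other axis
    return nx * ny

# Very rough bounding box for the Netherlands (north-south x east-west, km).
netherlands = (300, 200)

cells_gcm = grid_cells_covering(netherlands, 100)  # coarse GCM grid
cells_rcm = grid_cells_covering(netherlands, 25)   # typical RCM grid
```

With a 100 km grid the whole country is represented by only about half a dozen cells, while a 25 km regional grid gives roughly a hundred, which is the basic motivation for downscaling.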
Hundreds of studies have been published in the literature presenting regional projections of climate change for 2050 and 2100. The output of such model simulations is then used by the climate impacts community to investigate what potential future benefits or threats could be expected. However, several recent studies cast doubt on whether global model output is realistic on a regional scale, even in hindcast. [2-5]
So a legitimate question is whether global and/or regional climate models are ready to be used for regional projections? Is the information reliable enough to use for all kinds of medium to long term adaptation planning? Or should we adopt a different approach?
To improve the resolution of the models, other techniques, such as regional climate models (RCMs) and downscaling methods, have been developed. Nesting a regional climate model (with higher spatial resolution) into an existing GCM is one way to downscale data; this is called dynamical downscaling. A second way of downscaling climate model data is through the use of statistical regression. Statistical downscaling is based on relationships, derived from observations, linking large-scale atmospheric variables from either GCMs or RCMs (predictors) and local/regional climate variables (predictands).
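The statistical regression idea can be sketched in a few lines. This is a deliberately minimal illustration: the calibration numbers are invented, and real applications use long observational records and far more sophisticated methods than a single linear fit:

```python
def fit_linear(xs, ys):
    """Ordinary least-squares fit y = a + b*x (predictor x, predictand y)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Calibration period: coarse grid-cell temperature (predictor) vs.
# co-located station observations (predictand), in degrees C. Made-up data.
gcm_temp = [8.0, 10.0, 12.0, 14.0, 16.0]
station_temp = [7.0, 9.5, 12.0, 14.5, 17.0]   # station varies more strongly

a, b = fit_linear(gcm_temp, station_temp)

# Downscale a projected future grid-cell value to the station scale.
future_gcm = 18.0
future_station = a + b * future_gcm
```

The key caveat noted in the text applies here too: the fitted relationship is assumed to hold unchanged under a future climate, which is exactly the assumption statistical downscaling cannot verify.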
Both methods are widely used inside the regional modelling community. The higher spatial resolution allows a more detailed representation of relevant processes, which will hopefully, but not necessarily, result in a “better” prediction. However RCMs operate under a set of boundary conditions that are dependent on the parent GCM. Hence, if the GCM does not do an adequate job of reproducing the climate signal of a particular region, the RCM will simply mimic those inaccuracies and biases. A valid question therefore is if and how the coupling of a RCM to a GCM can provide more refined insights. [7,8]
Recently, Kerr caused quite a stir in the regional modelling community by raising doubts about the reliability of regional model output. A debate about the reliability of model simulations is quickly seen as one between proponents and sceptics of anthropogenic global warming. However, as Kundzewicz points out, "these are pragmatic concerns, raised by hydrologists and water management practitioners, about how useful the GCMs are for the much more detailed level of analysis (and predictability) required for site-specific water management decisions (infrastructure planning, design and operations)."
The focus of this Climate Dialogue will be on the reliability of climate simulations for the regional scale. An important question will be if there is added value from regional climate downscaling.
More specific questions:
1) How realistic are simulations by GCMs on the regional scale?
2) Do some parameters (e.g. temperature) perform better than others (e.g. precipitation)?
3) Do some regions perform better than others?
4) To what extent can regional climate models simulate the past?
5) What is the best way to determine the skill of the hindcast?
6) Is there added value of regional models in comparison with global models?
7) What are the relative merits of dynamical and statistical downscaling?
8) How should one judge projections of these regional models?
9) Should global/regional climate models be used for decisions concerning infrastructure development? If so how? If not, what should form a better scientific base for such decisions?
The CMIP3 and CMIP5 list of publications is a good starting point, see http://www-pcmdi.llnl.gov/ipcc/subproject_publications.php and http://cmip.llnl.gov/cmip5/publications/allpublications
G.J. van Oldenborgh, F.J. Doblas Reyes, S.S. Drijfhout, and E. Hawkins, "Reliability of regional climate model trends", Environmental Research Letters, vol. 8, pp. 014055, 2013. http://dx.doi.org/10.1088/1748-9326/8/1/014055
Anagnostopoulos, G. G., Koutsoyiannis, D., Christofides, A., Efstratiadis, A. & Mamassis, N. (2010) A comparison of local and aggregated climate model outputs with observed data. Hydrol. Sci. J. 55(7), 1094-1110
Stephens, G. L., T. L’Ecuyer, R. Forbes, A. Gettlemen, J.‐C. Golaz, A. Bodas‐Salcedo, K. Suzuki, P. Gabriel, and J. Haynes (2010), Dreary state of precipitation in global models, J. Geophys. Res., 115, D24211, doi:10.1029/2010JD014532
J. Bhend, and P. Whetton, "Consistency of simulated and observed regional changes in temperature, sea level pressure and precipitation", Climatic Change, 2013. http://dx.doi.org/10.1007/s10584-012-0691-2
Wilby, R. L. (2010) Evaluating climate model outputs for hydrological applications – Opinion. Hydrol. Sci. J. 55(7), 1090-1093
Kundzewicz, Zbigniew W. and Stakhiv, Eugene Z. (2010) 'Are climate models "ready for prime time" in water resources management applications, or is more research needed?', Hydrological Sciences Journal, 55:7, 1085-1089
Pielke, R. A., Sr., and R. L. Wilby, 2012: Regional climate downscaling: What's the point? Eos Trans. AGU, 93, 52, doi:10.1029/2012EO050008
R.A. Kerr, "Forecasting Regional Climate Change Flunks Its First Test", Science, vol. 339, pp. 638-638, 2013. http://dx.doi.org/10.1126/science.339.6120.638
Guest blog Bart van den Hurk
The added value of Regional Climate Models in climate change assessments
Regional downscaling of climate information is a popular activity in many applications addressing the assessment of possible effects of a systematic change of the climate characteristics at the local scale. Adding local information, not captured in the coarse scale climate model or observational archives, can provide an improved representation of the relevant processes at this scale, and thus yield additional information, for instance concerning topography, land use or small scale features such as sea breezes or organisation of convection. A necessary step in the application of tools used for this regional downscaling is a critical assessment of the quality of the tools: are regional climate models (RCMs), used for this climate information downscaling, good enough for this task?
It is important to distinguish the various types of analyses that are carried out with RCMs, and likewise to assess the ability of RCMs to perform the tasks assigned to them. These types of analyses clearly cover a wider range than plain prediction of the local climate!
Regional climate prediction
Pielke and Wilby (2012) discuss the lack of potential of RCMs to increase the skill of climate predictions at the regional scale. Obviously, these RCM predictions heavily rely on the quality of the boundary conditions provided by global climate models, and fail to represent dynamically the spatial interaction between the region of interest and the rest of the world. However, various "Big Brother"-type experiments, which test the ability of RCMs to reproduce a filtered signal provided by the boundary conditions (Denis et al, 2002) and have for instance been carried out by colleagues at KNMI, do show that a high-resolution regional model can add value to a coarse-resolution boundary condition by improving the spatial structure of the projected mean temperatures. Also the spatial structure of changes in precipitation, linked to altered surface temperature by convection, can be improved by using higher-resolution model experiments, although the relative gain here is generally small (Di Luca et al, 2012).
Van Oldenborgh et al (2013) point out that the spatial structure of the mean temperature trend in the recent CMIP5 model ensemble compares fairly well with observations, but that anomalies from the mean temperature trend are not well captured. This uncertainty clearly limits the predictability of temperatures at the regional scale beyond the mean trend. Van Haren et al (2012) also nicely illustrate the dependence of regional skill on lateral boundary conditions: simulations of (historic) precipitation trends for Europe failed to match the observed trends when lateral boundary conditions were provided by an ensemble of CMIP3 global climate model simulations, while a much better correspondence with observations was obtained when reanalyses were used as boundary conditions. Thus, a regional prediction of a trend can only be considered skilful when the boundary forcings adequately represent the signal to be forecast. This applies to mean temperature trends for most places in the world, but not to anomalies from these mean trends, nor to precipitation projections.
For regional climate predictability, the added value of RCMs should come from better resolving the relationship between mean (temperature) trends and key indicators that are supposedly better represented in high-resolution projections utilizing additional local information, such as temperature or precipitation extremes. Here too, evidence of added skill has not been unequivocally demonstrated. Min et al (2013) evaluate the ability of RCMs driven by reanalysis data to reproduce observed trends in European annual maximum temperatures, and conclude that there is a clear tendency to underestimate the observed trends. For Southern Europe, biases in maximum temperatures could be related to errors in the surface flux partitioning (Stegehuis et al, 2012), but no such relationship was found for NW Europe by Min et al (2013).
Thus indeed, the limitations to the predictability of regional climate information by RCMs as discussed by Pielke and Wilby (2012) and others are valid, and care must be taken when interpreting RCM projections as predictive assessments. But is this the only application of RCMs? Not really. We will discuss two other applications, together with the degree to which the limitations in RCM skill apply and are relevant.
Bottom up environmental assessments
A fair point of critique of exploring a cascade of model projections, ranging from the global scale down to the local scale of a region of interest to developers of adaptation or mitigation policies, is the virtually unmanageable increase in the range of degrees of freedom, also referred to as "uncertainty". Uncertainty arises from imperfect models, inherent variability, and the unknown evolution of external forcings. In fact, the process of (dynamical) downscaling adds another level of uncertainty, related to the choice of downscaling tools and methodologies. The reverse approach, starting from the vulnerability of a region or sector of interest to changes in environmental conditions (Pielke et al, 2012), does not eliminate all sources of uncertainty, but allows a focus on the relevant part of the spectrum, including those elements that are not related to greenhouse-gas-induced climate change.
But here too, RCMs can be of great help, not necessarily by providing reliable predictions, but by supporting evidence about the salience of planned measures or policies (Berkhout et al, 2013). A nice example is a near-flooding situation in the Northern Netherlands (January 2012), caused by the combined occurrence of a saturated soil due to excessive antecedent precipitation, a heavy precipitation event in the coastal area, and a storm surge lasting several days that hindered the discharge of excess water from the area. This is typically a "real weather" event that is not necessarily exceptional but does expose a local vulnerability to superfluous water. The question asked by the local water managers was whether the combination of the individual events (wet soil, heavy rain, storm surge) has a causal relationship, and whether the frequency of occurrence of such compound events can be expected to change in the future. Observational analyses do suggest a link between heavy precipitation and storm surge, but the available dataset was too short to explore the statistical relationships in the relevant part of the frequency distribution. A large set of RCM simulations is now being explored to increase the statistical sample, but – more importantly – to provide a physically comprehensive picture of the boundary conditions leading up to an event like this. Enabling the policy makers to communicate this physically comprehensive picture provides public support for measures undertaken to adapt to this kind of event. This exploration of model-based – synthetic – future weather is a powerful method to assess the consequences of possible changes in regional climate variability for local water management.
Apart from being a tool to predict a system given its initial state and the boundary forcings on it, a model is a collection of our understanding of the system itself. Its usefulness is not limited to its ability to predict; it can also describe the dynamics of a system governed by internal processes and interactions with its environment. Regional climate models should likewise be considered "collections of our understanding of the regional climate system", and can likewise be used to study this system and learn about it. There are numerous cases where regional climate model studies have increased our understanding of the mechanisms of the climate system acting on a regional scale. A couple of examples:
- Strong trends in coastal precipitation, and particularly a series of extreme precipitation events in the Netherlands, could successfully be attributed to anomalies in sea surface temperature (SST) in the nearby North Sea (Lenderink et al, 2009). Strong SST gradients close to the coast had to be imposed on the RCM simulations carried out to reveal this mechanism. The relationship between SSTs and spatial gradients in changes in (extreme) precipitation is an important finding for analysing the measures necessary to anticipate future changes in the spatial and temporal distribution of rainfall in the country.
- During the past century land use change has given rise to regional changes in the local surface climatology, particularly the mean and variability of near-surface temperature (Pitman et al, 2012). A set of GCM simulations dedicated to quantifying the effect of land use change relative to changes in the atmospheric greenhouse gas concentration over the past century revealed that the land use effect is largely limited to the area of land use change. Wramneby et al (2010) explored the regional interaction between climate and vegetation response using an RCM set-up, and highlighted the importance of this interaction for assessing the mean temperature response, particularly at high latitudes (due to the role of vegetation in snow-covered areas) and in water-limited evaporation regimes (due to the role of vegetation in controlling surface evaporative cooling).
- On many occasions the degree to which anomalies in the land surface affect the overlying atmosphere depends on the resolved spatial scale. As an example, Hohenegger et al (2009) investigated the triggering of precipitation in response to soil moisture anomalies with a set of regional models ranging in physical formulation and resolution – an issue that has received a lot of attention in the literature due to the possible existence of (positive) feedbacks that may affect the occurrence or intensity of hydrological extremes such as heatwaves. In her study, RCMs operating at the typical 25-50 km resolution tend to overestimate the positive soil moisture-precipitation feedback in the Alpine area, which is better represented by higher-resolution models. It is a study that points at a possible mechanism that needs to be adequately represented for generating reliable projections.
Each of these examples (and many more that could be cited) generates additional insight into the processes controlling local climate variability by allowing us to zoom in on these processes using RCMs. They thus contribute to setting the research agenda for improving our understanding of the drivers of regional change.
Climate predictions versus climate scenarios
The notion that a tool – an RCM – may possess shortcomings in its predictive skill, but simultaneously prove to be a valuable tool to support narratives that are relevant to policy making and spatial planning, can in fact be extended to highlighting the difference between "climate predictions" and "climate scenarios". Scenarios are typically used when deterministic or probabilistic predictions show too little skill to be useful, either because of the complexity of the considered system, or because of the fundamental limitations to its predictability (Berkhout et al, 2013). A scenario is a "what if" construction, a tool to create a mental map of possible future conditions assuming a set of driving boundary conditions. For a scenario to be valuable it does not necessarily need to have predictive skill, although a range of scenarios can be and is being interpreted as a probability range for future conditions. A (single) scenario is mainly intended to feed someone's imagination with a plausible, comprehensible and internally consistent picture. Used this way, even RCMs with limited predictive skill can be useful tools for scenario development and for providing supporting narratives that generate public awareness or support for preparatory actions. For this, the RCM should be trustworthy in producing realistic and consistent patterns of regional climate variability, and abundant application, verification and improvement is a necessary practice. Further development of RCMs as a Regional Earth System Exploration tool, by linking the traditional meteorological models to hydrological, biogeophysical and socio-economic components, can further increase their usefulness in practice.
Bart van den Hurk has a PhD in land surface modelling, obtained in Wageningen in 1996. Since then he has worked at the Royal Netherlands Meteorological Institute (KNMI) as a researcher, involved in studies addressing the modelling of land surface processes in regional and global climate models, data assimilation of soil moisture, and the construction of regional climate change scenarios. He is strongly involved with the KNMI global modelling project EC-Earth, and is co-author of the land surface modules of the European Centre for Medium-Range Weather Forecasts (ECMWF). Since 2005 he has been part-time professor of "Regional Climate Analysis" at the Institute for Marine and Atmospheric Research (IMAU) at Utrecht University, where he teaches master's students, supervises PhD students and is involved in several research networks. Between 2007 and 2010 he chaired the WCRP-endorsed Global Land-Atmosphere System Studies (GLASS) panel; since 2006 he has been a member of the council of the Netherlands Climate Changes Spatial Planning programme, and since 2008 a member of the board of the division "Earth and Life Sciences" of the Dutch Research Council (NWO-ALW). He is a convenor at a range of incidental and periodic conferences, and an editor for Hydrology and Earth System Sciences (HESS).
Berkhout, F., B. van den Hurk, J. Bessembinder, J. de Boer, B. Bregman and M. van Drunen (2013), Framing climate uncertainty: using socio-economic and climate scenarios in assessing climate vulnerability and adaptation; submitted to Regional and Environmental Change.
Denis, B., R. Laprise, D. Caya and J. Côté, 2002: Downscaling ability of one-way-nested regional climate models: The Big-Brother experiment. Clim. Dyn. 18, 627-646.
Di Luca, A., Elía, R. & Laprise, R., 2012. Potential for small scale added value of RCM’s downscaled climate change signal. Climate Dynamics. Available at: http://www.springerlink.com/index/10.1007/s00382-012-1415-z
Hohenegger, C., P. Brockhaus, C. S. Bretherton, C. Schär (2009): The soil-moisture precipitation feedback in simulations with explicit and parameterized convection, in: Journal of Climate 22, pp. 5003–5020.
Lenderink, G., E. van Meijgaard and F. Selten (2009), Intense coastal rainfall in the Netherlands in response to high sea surface temperatures: analysis of the event of August 2006 from the perspective of a changing climate; Clim. Dyn., 32, 19-33, doi:10.1007/s00382-008-0366-x.
Min, E., W. Hazeleger, G.J. van Oldenborgh and A. Sterl (2013), Evaluation of trends in high temperature extremes in North-Western Europe in regional climate models; Environmental Research Letters, 8, 1, 014011, doi:10.1088/1748-9326/8/1/014011.
Pielke, R. A., Sr., and R. L. Wilby, 2012: Regional climate downscaling: What's the point? Eos Trans. AGU, 93, 52, doi:10.1029/2012EO050008
Pielke, R. A., Sr., R. Wilby, D. Niyogi, F. Hossain, K. Dairuku,J. Adegoke, G. Kallos, T. Seastedt, and K. Suding (2012), Dealing with complexity and extreme events using a bottom-up, resource-based vulnerability perspective, in Extreme Events and Natural Hazards: The Complexity Perspective, Geophys. Monogr. Ser., vol. 196, edited by A. S. Sharma et al. 345–359, AGU, Washington, D. C., doi:10.1029/2011GM001086.
Pitman, A., N. de Noblet, F. Avila, L. Alexander, J.P. Boissier, V. Brovkin, C. Delire, F. Cruz, M.G. Donat, V. Gayler, B.J.J.M. van den Hurk, C. Reick and A. Voldoire (2012): Effects of land cover change on temperature and rainfall extremes in multi-model ensemble simulations; Earth System Dynamics, 3, 213-231, doi:10.5194/esd-3-213-2012.
Stegehuis, A., R. Vautard, R. Teuling, P. Ciais, M. Jung and P. Yiou (2012), Summer temperatures in Europe and land heat fluxes in observation-based data and regional climate model simulations, in press at Climate Dynamics; doi:10.1007/s00382-012-1559-x.
Van Haren, R., G.J. van Oldenborgh, G. Lenderink, M. Collins and W. Hazeleger (2012), SST and circulation trend biases cause an underestimation of European precipitation trends; Clim. Dyn., doi:10.1007/s00382-012-1401-5.
Van Oldenborgh, G.J., F.J. Doblas-Reyes, S.S. Drijfhout and E. Hawkins (2013), Reliability of regional climate model trends; Environmental Research Letters, 8, 1, 014055, doi:10.1088/1748-9326/8/1/014055.
Wramneby, A., B. Smith, and P. Samuelsson (2010), Hot spots of vegetation-climate feedbacks under future greenhouse forcing in Europe, J. Geophys. Res., 115, D21119, doi:10.1029/2010JD014307.
Guest blog Jason Evans
Are climate models ready to make regional projections?
Global Climate Models (GCMs) are designed to provide insight into the global climate system. They have been used to investigate the impacts of changes in various climate system forcings such as volcanoes, solar radiation, and greenhouse gases, and have proved themselves to be useful tools in this respect. The growing interest in GCM performance at regional scales, rather than global, has come from at least two different directions: the climate modelling community and the climate change adaptation community.
Due, in part, to the ever-increasing computational power available, GCMs are being continually developed and applied at higher spatial resolutions. Many GCM modelling groups have increased the resolution from ~250 km grid boxes 7 years ago to ~100 km grid boxes today. This increase in model resolution leads naturally to model development and evaluation exercises that pay closer attention to smaller scales – in this case, regional instead of global scales. The fifth phase of the Coupled Model Intercomparison Project (CMIP5) provides a large ensemble of GCM simulations, many of which are at resolutions high enough to warrant evaluation at regional scales. Over the next few years these GCM simulations will be extensively evaluated, problems will be found (as seen in some early evaluations1,2), followed hopefully by solutions that lead to further model development and improved simulations. This step of finding a solution to an identified problem is the hardest in the model development cycle, and I applaud those who do it successfully.
Probably the stronger demand for regional scale information from climate models is coming from the climate change adaptation community. Given only modest progress in climate change mitigation, adaptation to future climate change is required. Some sectors, such as those involved in large water resource projects (e.g. building a new dam), are particularly vulnerable to climate change. They are planning to invest large amounts of money (millions) in infrastructure, with planned lifetimes of 50-100 years, that directly depend on climate to be successful. Over such long lifetimes, greenhouse gas driven climate change is expected to increase temperature by a few degrees, and may cause significant changes in precipitation, depending on the location. Many of the systems required to adapt are more sensitive to precipitation than temperature, and projections of precipitation often have considerably more uncertainty associated with them. The question for the climate change adaptation community is whether the uncertainty (including model errors) in the projected climate change is small enough to be useful in a decision making framework.
From a GCM perspective, then, the answer to "Are climate models ready to make regional projections?" is two-fold. For the climate modelling community the answer is yes. GCMs are being run at high enough resolution to make regional-scale evaluations and projections (so long as your regions are many hundreds of kilometres across) useful to inform model development and, hopefully, improve future simulations. For the climate change adaptation community, whose spatial scale of interest is often much smaller than current high-resolution GCMs can capture, the answer in general is no. The errors in the simulated regional climate and the inter-model uncertainty in regional climate projections from GCMs are often too large to be useful in decision making. These climate change adaptation decisions need to be made, however, and in an effort to produce useful regional-scale climate information that embodies the global climate change a number of "downscaling" techniques have been developed.
It is worth noting that some climate variables, such as temperature, tend to be simulated better by climate models than others, such as precipitation. This is at least partly due to the scales and non-linearity of the physical processes which affect each variable. This is demonstrated in the fourth IPCC report, which mapped the level of agreement in the sign of the change in precipitation projected by GCMs. This map showed large parts of the world where GCMs disagreed about the sign of the change in precipitation. However, this vastly underestimated the agreement between the GCMs3: much of this area of disagreement is actually made up of areas where the GCMs agree that the change will be small (or zero). That is, if the actual projected change is zero, then by chance some GCMs will project small increases and some small decreases. This does not indicate disagreement between the models; rather, they all agree that the change is small.
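The masking argument above can be sketched as a toy classification of ensemble projections at individual grid cells. The ensemble values, the "small change" threshold, and the 80% agreement criterion are all invented for illustration, not taken from the cited study:

```python
def classify_agreement(changes, small=0.05):
    """Classify an ensemble of projected changes (mm/day) at one grid cell."""
    if all(abs(c) < small for c in changes):
        return "agree: small change"       # mixed signs, but all near zero
    pos = sum(c > 0 for c in changes)
    neg = sum(c < 0 for c in changes)
    frac = max(pos, neg) / len(changes)
    return "agree on sign" if frac >= 0.8 else "disagree"

# Hypothetical 5-member ensemble at three grid cells.
cells = {
    "wet-tropics": [0.6, 0.4, 0.5, 0.7, 0.3],           # robust increase
    "transition":  [0.02, -0.01, 0.03, -0.02, 0.01],    # tiny, mixed signs
    "uncertain":   [0.4, -0.3, 0.5, -0.6, 0.2],         # genuine disagreement
}
results = {name: classify_agreement(vals) for name, vals in cells.items()}
```

A plain sign count would label both "transition" and "uncertain" as disagreement; separating out the near-zero case is exactly the correction the text describes.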
Regional climate models
Before describing the downscaling techniques, it may be useful to consider this question: What climate processes, that are important at regional scales, may be missing in GCMs?
The first set of processes relates directly to how well resolved land surface features such as mountains and coastlines are. Mountains cause local deviations in low-level air flow. When air is forced to rise to get over a mountain range it can trigger precipitation, and because of this, mountains are often a primary region for the supply of fresh water resources. At GCM resolution, mountains are often under-represented in terms of height or spatial extent, so the models do not accurately capture this relationship with precipitation. In fact, some regionally important mountain ranges, such as the eastern Mediterranean coastal range or the Flinders Ranges in South Australia, are too small to be represented at all in some GCMs. Using higher spatial resolution to better resolve the mountains should improve the model's ability to capture this mountain-precipitation relationship.
Similarly, higher resolution allows model coastlines to be closer to the location of actual coastlines, and improves the ability to capture climate processes such as sea breezes.
The second set of processes are slightly more indirect and often involve an increase in the vertical resolution as well. These processes include the daily evolution of the planetary boundary layer, and the development of low level and mountain barrier jets.
A simple rule of thumb is that one can expect downscaling to higher resolution to improve the simulation of regional climate in locations that include coastlines and/or mountain ranges (particularly where the range is too small to be well resolved by the GCM but large enough to be well resolved at the higher resolution) while not making much difference over large homogeneous, relatively flat regions (deserts, oceans,...).
So, there are physical reasons one might expect downscaling to higher resolution will improve the simulation of regional climate. How do we go about this downscaling?
Downscaling techniques can generally be divided into two types: statistical and dynamical. Statistical techniques generally use a mathematical method to form a relationship between the modelled climate and observed climate at an observation station. A wide variety of mathematical methods can be used but they all have two major limitations. First, they rely on long historical observational records to calculate the statistical relationship, effectively limiting the variables that can be downscaled to temperature and precipitation, and the locations to those stations where these long records were collected. Second, they assume that the derived statistical relationship will not change due to climate change.
Dynamical downscaling, or the use of Regional Climate Models (RCMs), does not share the limitations of statistical downscaling. The major limitation in dynamical downscaling is the computational cost of running the RCMs. This generally places a limit on both the spatial resolution (often tens of kilometres) and the number of simulations that can be performed to characterise uncertainty. RCMs also contain biases, both inherited from the driving GCM and generated within the RCM itself. It is worth noting that statistical downscaling techniques can be applied to RCM simulations as easily as to GCM simulations to obtain projections at station locations.
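One widely used post-processing response to such biases, standard in the downscaling literature though not prescribed by the text itself, is empirical quantile mapping: model values are replaced by the observed values of equal rank in a calibration climatology. A minimal sketch, with made-up climatologies:

```python
def quantile_map(value, model_clim, obs_clim):
    """Map a model value onto the observed distribution by matching ranks."""
    model_sorted = sorted(model_clim)
    obs_sorted = sorted(obs_clim)
    # Empirical rank of `value` within the model climatology.
    rank = sum(m <= value for m in model_sorted)
    idx = min(rank, len(obs_sorted)) - 1
    return obs_sorted[max(idx, 0)]

# Hypothetical calibration period: the model is systematically too wet.
model_precip = [1.0, 2.0, 3.0, 4.0, 5.0]   # model climatology (mm/day)
obs_precip   = [0.5, 1.2, 2.0, 2.8, 3.5]   # observed climatology (mm/day)

# A simulated value of 4.0 mm/day sits at the 4th of 5 ranks in the model
# distribution, so it is mapped to the 4th-ranked observed value.
corrected = quantile_map(4.0, model_precip, obs_precip)
```

Like the regression example earlier, this assumes the model's bias structure is stationary under climate change, which is an assumption rather than a guarantee.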
The RCM limitations are actively being addressed through current initiatives and research. Like GCMs, RCMs benefit from the continued increase in computational power, allowing more simulations to be run at higher spatial resolution. The need for more simulations to characterise uncertainty is being further addressed through international initiatives to have many modelling groups contribute simulations to the same ensembles (e.g. CORDEX - COordinated Regional climate Downscaling EXperiment, http://wcrp-cordex.ipsl.jussieu.fr/). New research into model independence is also pointing toward ways to create more statistically robust ensembles [4]. Novel research to reduce (or eliminate) the bias inherited from the driving GCM is also showing promise [5,6].
Above I used simple physical considerations to suggest there would be some added value from regional models compared to global models. Others have investigated this from an observational viewpoint [7,8] as well as through direct evaluation of model results at different scales [9,10,11]. In each case the results agreed with the rule of thumb given earlier: in areas with strong enough regional climate influences, we do see improved simulations of regional climate at higher resolutions. Of course, we are yet to address the question of whether the regional climate projections from these models have low enough uncertainty to be useful in climate change adaptation decision making.
To date RCMs have been used in many studies [12,13] and a wide variety of evaluations of RCM simulations against observations have been performed. In attempting to examine the fidelity of the simulated regional climate, a variety of variables (temperature, precipitation, wind, surface pressure, ...) have been evaluated using a variety of metrics [14,15], with the derived metrics then being combined to produce an overall measure of performance. When comprehensive assessments such as these are performed, it is often found that different models have different strengths and weaknesses as measured by the different metrics. If one has a specific purpose in mind, e.g. building a new dam, one may wish to focus on metrics directly relevant to that purpose. Often the projected climate change is of interest, so the evaluation should include a measure of the models' ability to simulate change, often given as a trend over a recent historical period [16,17]. In most cases the RCMs are found to do a reasonably good job of simulating the climate of the recent past, though there are usually places and/or times where the simulation is not very good. That is not surprising for an active area of research and model development.
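The kind of multi-metric evaluation described above can be sketched as follows. The particular metrics (mean bias, RMSE, correlation) and the equal-weight combined score are assumptions for illustration, not a standard recipe, and the data are synthetic.

```python
# Illustrative multi-metric RCM evaluation: several skill metrics
# computed against observations, then combined into one overall score.
import math

def bias(model, obs):
    return sum(m - o for m, o in zip(model, obs)) / len(obs)

def rmse(model, obs):
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs))

def correlation(model, obs):
    n = len(obs)
    mm, mo = sum(model) / n, sum(obs) / n
    num = sum((m - mm) * (o - mo) for m, o in zip(model, obs))
    den = math.sqrt(sum((m - mm) ** 2 for m in model) *
                    sum((o - mo) ** 2 for o in obs))
    return num / den

def score(model, obs):
    # Lower is better: penalise mean bias and RMSE, reward correlation.
    # Equal weighting is an arbitrary illustrative choice.
    return abs(bias(model, obs)) + rmse(model, obs) + (1 - correlation(model, obs))

# Synthetic monthly temperatures (degC) for one grid cell.
obs   = [5.0, 7.0, 11.0, 15.0, 19.0, 22.0]
rcm_a = [5.5, 7.4, 11.8, 15.6, 19.9, 23.0]  # warm bias, right shape
rcm_b = [5.0, 9.0,  9.5, 16.5, 18.0, 23.5]  # smaller bias, noisier

score_a, score_b = score(rcm_a, obs), score(rcm_b, obs)
```

Here `rcm_a` has the larger bias but the higher correlation, the kind of mixed result the text describes; which model "wins" depends on how the metrics are weighted for the purpose at hand.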
Given the evaluation results found to date, it is advisable to carefully evaluate each RCM's simulations before using any climate projections it produces. Being aware of where and when a model performs well (or poorly) is important when assessing the climate change it projects. It is also preferable for the projections themselves to be examined with the aim of understanding the physical mechanisms causing the projected changes. Good process-level understanding of the causes behind the changes provides another means of judging their veracity.
Ready for adaptation decisions?
Finally we come to the question of whether regional climate projections should be used in climate change adaptation decisions concerning infrastructure development. In the past such decisions were made assuming a stationary climate, such that observations of the past were representative of the future climate. So the real question here is: will the use of regional climate projections improve the decisions made, compared to the use of historical climate observations?
If the projected regional change is large enough that it falls outside the historical record even when considering the associated model errors and uncertainties, then it may indeed impact the decision. Such decisions are made within a framework that must consider uncertainty in many factors other than climate including future economic, technological and demographic pathways. Within such a framework, risk assessments are performed to inform the decision making process, and the regional climate projections may introduce a risk to consider that is not present in the historical climate record. If this leads to decisions which are more robust to future climate changes (as well as demographic and economic changes) then it is worthwhile including the regional climate projections in the decision making process.
Of course, this relies on the uncertainty in the regional climate projection being small enough for the information to be useful in a risk assessment process. Based on current models, this is not the case everywhere, and continued model development and improvement is required to decrease the uncertainty and increase the utility of regional climate projections for adaptation decision making.
Jason Evans has a B. Science (Physics) and a B. Math (hons) from the University of Newcastle, Australia. He has a Ph.D. in hydrology and climatology from the Australian National University. He worked for several years as a research associate at Yale University, USA, before moving to the University of New South Wales, Sydney, Australia. He is currently an Associate Professor in the Climate Change Research Centre there. His research involves general issues of regional climate and water cycle processes over land. He focuses at the regional (or watershed) scale and studies processes including river flow, evaporation/transpiration, water vapour transport and precipitation. He is currently Co-Chair of the GEWEX Regional Hydroclimate Panel, and coordinator of the Coordinated Regional Climate Downscaling Experiment (CORDEX), both elements of the World Climate Research Programme.
1. van Oldenborgh GJ, Doblas-Reyes FJ, Drijfhout SS, Hawkins E (2013) Reliability of regional climate model trends. Environmental Research Letters 8
2. Bhend J, Whetton P (2013) Consistency of simulated and observed regional changes in temperature, sea level pressure and precipitation.
3. Power SB, Delage F, Colman R, Moise A (2012) Consensus on twenty-first-century rainfall projections in climate models more widespread than previously thought. Journal of Climate 25:3792–3809
4. Bishop CH, Abramowitz G (2012) Climate model dependence and the replicate Earth paradigm. Clim Dyn:1–16
5. Xu Z, Yang Z-L (2012) An improved dynamical downscaling method with GCM bias corrections and its validation with 30 years of climate simulations. Journal of Climate 25:6271–6286
6. Colette A, Vautard R, Vrac M (2012) Regional climate downscaling with prior statistical correction of the global climate forcing. Geophys Res Lett 39:L13707
7. di Luca A, de Elía R, Laprise R (2012) Potential for added value in precipitation simulated by high-resolution nested Regional Climate Models and observations. Clim Dyn 38:1229–1247
8. di Luca A, de Elía R, Laprise R (2013) Potential for small scale added value of RCMs' downscaled climate change signal. Climate Dynamics 40:601–618
9. Christensen OB, Christensen JH, Machenhauer B, Botzet M (1998) Very High-Resolution Regional Climate Simulations over Scandinavia—Present Climate. Journal of Climate 11:3204–3229
10. Önol B (2012) Effects of coastal topography on climate: High-resolution simulation with a regional climate model. Clim Res 52:159–174
11. Evans J, McCabe M (2013) Effect of model resolution on a regional climate model simulation over southeast Australia. Climate Research 56:131–145
12. Wang Y, Leung L, McGregor J, Lee D, Wang W, Ding Y, Kimura F (2004) Regional climate modeling: Progress, challenges, and prospects. Journal of the Meteorological Society of Japan 82:1599–1628
13. Evans JP, McGregor JL, McGuffie K (2012) Future Regional Climates. In: Henderson-Sellers A, McGuffie K (eds) The future of the World's climate. Elsevier, p 223–252
14. Christensen J, Kjellstrom E, Giorgi F, Lenderink G, Rummukainen M (2010) Weight assignment in regional climate models. Climate Research 44:179–194
15. Evans J, Ekström M, Ji F (2012) Evaluating the performance of a WRF physics ensemble over South-East Australia. Climate Dynamics 39:1241–1258
16. Lorenz P, Jacob D (2010) Validation of temperature trends in the ENSEMBLES regional climate model runs driven by ERA40. Climate Research 44:167–177
17. Bukovsky MS (2012) Temperature trends in the NARCCAP regional climate models. Journal of Climate 25:3985–3991
Guest blog: Roger Pielke Sr.
Are climate models ready to make regional projections?
The question addressed in my post is: with respect to multi-decadal model simulations, are global and/or regional climate models ready to be used for skillful regional projections by the impacts and policymaker communities?
This could also be asked as
Are skillful (value-added) regional and local multi-decadal predictions of changes in climate statistics for use by the water resource, food, energy, human health and ecosystem impact communities available at present?
As summarized in this post, the answer is NO.
In fact, the output of these models is routinely provided to the impact communities and policymakers as a robust scientific result, when it only provides an illusion of skill. Simply plotting high spatial resolution model results is not, by itself, a skillful product!
Skill is defined as accurately predicting changes in climate statistics over this multi-decadal time period. This skill must be assessed by predicting global, regional and local average climate, and any climate change that was observed over the last several decades (i.e. “hindcast model predictions”).
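A toy illustration of a hindcast check under this definition: fit least-squares linear trends to observed and model-hindcast annual temperatures over the same historical window and compare them. All values are synthetic placeholders; a real assessment would span decades and many variables.

```python
# Compare observed and hindcast linear trends over the same window,
# a common ingredient of hindcast skill assessment.

def linear_trend(years, values):
    """Least-squares slope in degC per year."""
    n = len(years)
    mean_y = sum(years) / n
    mean_v = sum(values) / n
    num = sum((y - mean_y) * (v - mean_v) for y, v in zip(years, values))
    den = sum((y - mean_y) ** 2 for y in years)
    return num / den

years = list(range(1980, 1990))
observed = [14.0, 14.1, 14.05, 14.2, 14.15, 14.3, 14.25, 14.4, 14.35, 14.5]
hindcast = [13.9, 14.0, 14.1, 14.1, 14.2, 14.3, 14.3, 14.4, 14.5, 14.5]

obs_trend = linear_trend(years, observed)
mod_trend = linear_trend(years, hindcast)
trend_error = mod_trend - obs_trend  # one input to a skill judgement
```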
One issue that needs to be clearly understood is the pair of terms “prediction” and “projection”. The term “projection”, of course, is just another word for a “prediction” when specified forcings are prescribed, e.g. different CO2 emission scenarios; see Pielke (2002). Thus “projection” and “prediction” are synonyms.
Dynamic and statistical downscaling is widely used to refine predictions from global climate models to smaller spatial scales. In order to classify the types of dynamic and statistical downscaling, Castro et al (2005) defined four categories. These are summarized in Pielke and Wilby (2012) and in Table 1 from that paper.
In the current post, I am referring specifically to Type 4 downscaling. For completeness, I list below all types of downscaling. The intent of downscaling is to achieve a more accurate, higher spatial resolution representation of weather and other components of the climate system than is achievable with the coarser spatial resolution global model. Indeed, one test of a downscaled result is whether it agrees better with observations than the global model results simply interpolated to a finer terrain and landscape map.
Types of Downscaling
Type 1 downscaling is used for short-term, numerical weather prediction. In dynamic Type 1 downscaling, the regional model includes initial conditions from observations. In Type 1 statistical downscaling the regression relationships are developed from observed data and the Type 1 dynamic model predictions.
The Type 1 application of downscaling is operationally used by weather services worldwide. They provide very significant added regional and local prediction skill beyond what is available from the parent global model (e.g. see http://weather.rap.ucar.edu/model/). Millions of Type 1 forecasts are made every year and verified with real world weather, thus providing an opportunity for extensive quantitative testing of their predictive skill (Pielke Jr. 2010) .
Type 2 dynamic downscaling refers to regional weather (or climate) simulations in which the regional model’s initial atmospheric conditions are forgotten (i.e., the predictions do not depend on the specific initial conditions), but results still depend on the lateral boundary conditions from a global numerical weather prediction where initial observed atmospheric conditions are not yet forgotten, or are from a global reanalysis. Type 2 statistical downscaling uses the regression relationships developed for Type 1 statistical downscaling except that the input variables are from the Type 2 weather (or climate) simulation. Downscaling from reanalysis products (Type 2 downscaling) defines the maximum forecast skill that is achievable with Type 3 and Type 4 downscaling.
Type 2 downscaling is an effective way to provide increased value-added information on regional and local spatial scales. It is important to recognize, however, that Type 2 downscaling is not a prediction (projection) for the cases where the global data come from a reanalysis (a reanalysis is a combination of real world observations folded into a global model). Examples of this type of application are reported in Feser et al. (2011), Pielke (2013) and Mearns et al. (2012, 2013b).
When Type 2 results are presented to the impacts communities as a valid analysis of the skill of Type 4 downscaling, those communities are being misled on the actual robustness of the results in terms of multi-decadal projections. Type 2 results, even from global models used in a prediction mode, still retain real world information in the atmosphere (such as from long wave jet stream patterns), as well as sea surface temperatures, deep soil moisture, and other climate variables that have long term persistence.
Type 3 dynamic downscaling takes lateral boundary conditions from a global model prediction forced by specified real world surface boundary conditions, such as for seasonal weather predictions based on observed sea surface temperatures, but the initial observed atmospheric conditions in the global model are forgotten. Type 3 statistical downscaling uses the regression relationships developed for Type 1 statistical downscaling, except using the variables from the global model prediction forced by specified real-world surface boundary conditions.
Type 3 downscaling is applied, for example, for seasonal forecasts where slowly changing anomalies in the surface forcing (such as sea surface temperature) provide real-world information to constrain the downscaling results. Examples of the level of limited, but non-zero, skill achievable are given in Castro et al (2007) and Veljovic et al (2012).
Type 4 dynamic downscaling takes lateral boundary conditions from an Earth system model in which coupled interactions among the atmosphere, ocean, biosphere, and cryosphere are predicted [e.g., Solomon et al., 2007]. Other than terrain, all other components of the climate system are calculated by the model except for human forcings, including greenhouse gas emissions scenarios, which are prescribed. Type 4 dynamic downscaling is widely used to provide policy makers with impacts from climate decades into the future. Type 4 statistical downscaling uses transfer functions developed for the present climate, fed with large scale atmospheric information taken from Earth system models representing future climate conditions. It is assumed that statistical relationships between real-world surface observations and large scale weather patterns will not change.
The level of skill achievable deteriorates from Type 1 to Type 2 to Type 3 to Type 4 downscaling, as fewer observations are used to constrain the model realism. For Type 4, except for the prescribed forcing (such as added CO2), there are no real world constraints.
It is also important to realize that the models, while including aspects of basic physics (such as the pressure gradient force, advection, gravity), are actually engineering code. All of the parameterizations of the physics, chemistry and biology include tunable parameters and functions. For the atmospheric part of climate models, this engineering aspect of weather models is discussed in depth in Pielke (2013a). Thus, such engineering (parameterized) components can result in the drift of the model results away from reality when observations are not there to constrain this divergence from reality. The climate models are not basic physics.
There are two critical tests for skillful Type 4 model runs:
1. The model must provide accurate replications of the current climatic conditions on the global, regional and local scale. This means the model must be able to accurately predict the statistics of the current climate. This test, of course, needs to be performed in a hindcast mode.
2. The model must also provide accurate predictions of changes in climatic conditions (i.e. the climatic statistics) on the regional and local scale. This means the model must be able to replicate the changes in climate statistics over this time period. This can also, of course, only be assessed by running the models in a hindcast mode.
For Type 4 runs [i.e. multidecadal projections], the models being used include the CMIP5 projections.
As reported in The CMIP5 - Coupled Model Intercomparison Project Phase 5
CMIP5 promotes a standard set of model simulations in order to:
· evaluate how realistic the models are in simulating the recent past,
· provide projections of future climate change on two time scales, near term (out to about 2035) and long term (out to 2100 and beyond), and
· understand some of the factors responsible for differences in model projections, including quantifying some key feedbacks such as those involving clouds and the carbon cycle
The CMIP5 runs, unfortunately, perform poorly with respect to the first bullet listed above, as documented below.
If the models are not sufficiently realistic in simulating the climate in the recent past, they are not ready to be used to provide projections for the coming decades!
A number of examples from the peer reviewed literature illustrate this inadequate performance.
Summary of a Subset of Peer-Reviewed Papers That Document the Limitations of the CMIP5 model projections with respect to Criteria #1.
· Taylor et al, 2012: Afternoon rain more likely over drier soils. Nature. doi:10.1038/nature11377. Received 19 March 2012 Accepted 29 June 2012 Published online 12 September 2012
“…the erroneous sensitivity of convection schemes demonstrated here is likely to contribute to a tendency for large-scale models to `lock-in’ dry conditions, extending droughts unrealistically, and potentially exaggerating the role of soil moisture feedbacks in the climate system.”
· Driscoll, S., A. Bozzo, L. J. Gray, A. Robock, and G. Stenchikov (2012), Coupled Model Intercomparison Project 5 (CMIP5) simulations of climate following volcanic eruptions, J. Geophys. Res., 117, D17105, doi:10.1029/2012JD017607. published 6 September 2012.
“The models generally fail to capture the NH dynamical response following eruptions. … The study confirms previous similar evaluations and raises concern for the ability of current climate models to simulate the response of a major mode of global circulation variability to external forcings.”
· Fyfe, J. C., W. J. Merryfield, V. Kharin, G. J. Boer, W.-S. Lee, and K. von Salzen (2011), Skillful predictions of decadal trends in global mean surface temperature, Geophys. Res. Lett.,38, L22801, doi:10.1029/2011GL049508
”….for longer term decadal hindcasts a linear trend correction may be required if the model does not reproduce long-term trends. For this reason, we correct for systematic long-term trend biases.”
· Xu, Zhongfeng and Zong-Liang Yang, 2012: An improved dynamical downscaling method with GCM bias corrections and its validation with 30 years of climate simulations. Journal of Climate 2012 doi: http://dx.doi.org/10.1175/JCLI-D-12-00005.1
“…the traditional dynamic downscaling (TDD) [i.e. without tuning] overestimates precipitation by 0.5-1.5 mm d-1. ... The 2-year return level of summer daily maximum temperature simulated by the TDD is underestimated by 2-6°C over the central United States-Canada region.”
· Anagnostopoulos, G. G., Koutsoyiannis, D., Christofides, A., Efstratiadis, A. & Mamassis, N. (2010) A comparison of local and aggregated climate model outputs with observed data. Hydrol. Sci. J. 55(7), 1094–1110
".... local projections do not correlate well with observed measurements. Furthermore, we found that the correlation at a large spatial scale, i.e. the contiguous USA, is worse than at the local scale."
· Stephens, G. L., T. L’Ecuyer, R. Forbes, A. Gettlemen, J.‐C. Golaz, A. Bodas‐Salcedo, K. Suzuki, P. Gabriel, and J. Haynes (2010), Dreary state of precipitation in global models, J. Geophys. Res., 115, D24211, doi:10.1029/2010JD014532.
"...models produce precipitation approximately twice as often as that observed and make rainfall far too lightly.....The differences in the character of model precipitation are systemic and have a number of important implications for modeling the coupled Earth system .......little skill in precipitation [is] calculated at individual grid points, and thus applications involving downscaling of grid point precipitation to yet even finer‐scale resolution has little foundation and relevance to the real Earth system.”
· Sun, Z., J. Liu, X. Zeng, and H. Liang (2012), Parameterization of instantaneous global horizontal irradiance at the surface. Part II: Cloudy-sky component, J. Geophys. Res., doi:10.1029/2012JD017557, in press.
“Radiation calculations in global numerical weather prediction (NWP) and climate models are usually performed in 3-hourly time intervals in order to reduce the computational cost. This treatment can lead to an incorrect Global Horizontal Irradiance (GHI) at the Earth’s surface, which could be one of the error sources in modelled convection and precipitation. …… An important application of the scheme is in global climate models….It is found that these errors are very large, exceeding 800 W m-2 at many non-radiation time steps due to ignoring the effects of clouds….”
· Ronald van Haren, Geert Jan van Oldenborgh, Geert Lenderink, Matthew Collins and Wilco Hazeleger, 2012: SST and circulation trend biases cause an underestimation of European precipitation trends Climate Dynamics 2012, DOI: 10.1007/s00382-012-1401-5
“To conclude, modeled atmospheric circulation and SST trends over the past century are significantly different from the observed ones. These mismatches are responsible for a large part of the misrepresentation of precipitation trends in climate models. The causes of the large trends in atmospheric circulation and summer SST are not known.”
· Mauritsen, T., et al. (2012), Tuning the climate of a global model, J. Adv. Model. Earth Syst., 4, M00A01, doi:10.1029/2012MS000154. published 7 August 2012
During a development stage global climate models have their properties adjusted or tuned in various ways to best match the known state of the Earth’s climate system…..The tuning is typically performed by adjusting uncertain, or even non-observable, parameters related to processes not explicitly represented at the model grid resolution.
· Jiang, J. H., et al. (2012), Evaluation of cloud and water vapor simulations in CMIP5 climate models using NASA “A-Train” satellite observations, J. Geophys. Res., 117, D14105, doi:10.1029/2011JD017237. published 18 July 2012.
The modeled mean CWCs [cloud water] over tropical oceans range from ∼3% to ∼15× of the observations in the UT and 40% to 2× of the observations in the L/MT. For modeled H2Os, the mean values over tropical oceans range from ∼1% to 2× of the observations in the UT and within 10% of the observations in the L/MT….Tropopause layer water vapor is poorly simulated with respect to observations. This likely results from temperature biases
· Van Oldenborgh, G.J., F.J. Doblas-Reyes, B. Wouters, W. Hazeleger (2012): Decadal prediction skill in a multi-model ensemble. Clim.Dyn. doi:10.1007/s00382-012-1313-4
who report quite limited predictive skill in two regions of the oceans on the decadal time period, but no regional skill elsewhere, when they conclude that "A 4-model 12-member ensemble of 10-yr hindcasts has been analysed for skill in SST, 2m temperature and precipitation. The main source of skill in temperature is the trend, which is primarily forced by greenhouse gases and aerosols. This trend contributes almost everywhere to the skill. Variation in the global mean temperature around the trend do not have any skill beyond the first year. However, regionally there appears to be skill beyond the trend in the two areas of well-known low-frequency variability: SST in parts of the North Atlantic and Pacific Oceans is predicted better than persistence. A comparison with the CMIP3 ensemble shows that the skill in the northern North Atlantic and eastern Pacific is most likely due to the initialisation, whereas the skill in the subtropical North Atlantic and western North Pacific are probably due to the forcing."
· Sakaguchi, K., X. Zeng, and M. A. Brunke (2012), The hindcast skill of the CMIP ensembles for the surface air temperature trend, J. Geophys. Res., 117, D16113, doi:10.1029/2012JD017765. published 28 August 2012
the skill for the regional (5° × 5° – 20° × 20° grid) and decadal (10 – ∼30-year trends) scales is rather limited…. The mean bias and ensemble spread relative to the observed variability, which are crucial to the reliability of the ensemble distribution, are not necessarily improved with increasing scales and may impact probabilistic predictions more at longer temporal scales.
· Kundzewicz, Z. W., and E.Z. Stakhiv (2010) Are climate models “ready for prime time” in water resources management applications, or is more research needed? Editorial. Hydrol. Sci. J. 55(7), 1085–1089.
who conclude that “Simply put, the current suite of climate models were not developed to provide the level of accuracy required for adaptation-type analysis.”
Even on the global average scale, the multi-decadal global climate models have performed poorly since 1998, as effectively shown in an analysis by John Christy in the post by Roy Spencer which is reproduced below. As reported in Roy's post, these plots by John are based upon data from the KNMI Climate Explorer, comparing 44 climate models against the UAH and RSS satellite observations of global lower tropospheric temperature variations, for the period 1979-2012 from the satellites and 1975-2025 for the models.
Thus the necessary criterion #1 is not satisfied. Obviously, the first criterion must be satisfactorily addressed before one can have any confidence in the second criterion of skillfully predicting changes in climate statistics.
To summarize the current state of modeling and the use of regional models to downscale for multi-decadal projections, as reported in Pielke and Wilby (2012):
1. The multi-decadal global climate model projection must include all first-order climate forcings and feedbacks, which, unfortunately, they do not.
2. Current global multi-decadal predictions are unable to skillfully simulate regional forcing by major atmospheric circulation features such as from El Niño and La Niña and the South Asian monsoon, much less changes in the statistics of these climate features. These features play a major role in climate impacts at the regional and local scales.
3. While regional climate downscaling yields higher spatial resolution, the downscaling is strongly dependent on the lateral boundary conditions and the methods used to constrain the regional climate model variables to the coarser spatial scale information from the parent global models. Large-scale climate errors in the global models are retained and could even be amplified by the higher-spatial-resolution regional models. If the global multi-decadal climate model predictions do not accurately predict large-scale circulation features, for instance, they cannot provide accurate lateral boundary conditions and interior nudging to regional climate models. The presence of higher spatial resolution information in the regional models, beyond what can be accomplished by interpolation of the global model output to a finer grid mesh, is only an illusion of added skill.
4. Apart from variable grid approaches, regional models do not have the domain scale (or two-way interaction between the regional and global models) to improve predictions of the larger-scale atmospheric features. This means that if the regional model significantly alters the atmospheric and/or ocean circulations, there is no way for this information to affect larger scale circulation features that are being fed into the regional model through the lateral boundary conditions and nudging.
5. The lateral boundary conditions for input to regional downscaling require regional-scale information from a global forecast model. However the global model does not have this regional-scale information due to its limited spatial resolution. This is, however, a logical paradox because the regional model needs something that can be acquired only by a regional model (or regional observations). Therefore, the acquisition of lateral boundary conditions with the needed spatial resolution becomes logically impossible. Thus, even with the higher resolution analyses of terrain and land use in the regional domain, the errors and uncertainty from the larger model still persist, rendering the added simulated spatial details inaccurate.
6. There is also an assumption that although global climate models cannot predict future climate change as an initial value problem, they can predict future climate statistics as a boundary value problem [Palmer et al., 2008]. However, for regional downscaling (and global) models to add value (beyond what is available to the impacts community via the historical, recent paleorecord and a worst case sequence of days), they must be able to skillfully predict changes in regional weather statistics in response to human climate forcings. This is a greater challenge than even skillfully simulating current weather statistics.
It is therefore inappropriate to present Type 4 results to the impacts community as reflecting more than a subset of possible (plausible) future climate risks. As I wrote in Pielke (2011) with respect to providing multi-decadal climate predictions to the impacts and policy communities, there is a
“serious risk of overselling what [can be] provide[d] to policy makers. A significant fraction of the funds they are seeking for prediction could more effectively be used if they were spent on assessing risk and ways to reduce the vulnerability of local/regional resources to climate variability and change and other environmental issues using the bottom-up, resources-based perspective discussed in Pielke and Bravo de Guenni (2004), Pielke (2004), and Pielke et al. (2009). This bottom-up focus is “of critical interest to society.”
We wrote this recommendation also in Pielke and Wilby (2012):
As a more robust approach, we favor a bottom-up, resource-based vulnerability approach to assess the climate and other environmental and societal threats to critical assets [Wilby and Dessai, 2010; Kabat et al., 2004]. This framework considers the coping conditions and critical thresholds of natural and human environments beyond which external pressures (including climate change) cause harm to water resources, food, energy, human health, and ecosystem function. Such an approach could assist policy makers in developing more holistic mitigation and adaptation strategies that deal with the complex spectrum of social and environmental drivers over coming decades, beyond carbon dioxide and a few other greenhouse gases.
This is a more robust way of assessing risks, including from climate, than the approach adopted by the Intergovernmental Panel on Climate Change (IPCC), which is based primarily on downscaling from multi-decadal global climate model projections. A vulnerability assessment using the bottom-up, resource-based framework is a more inclusive approach for policy makers seeking effective mitigation and adaptation strategies to deal with the full spectrum of social and environmental extreme events that will occur in the coming decades, beyond just the focus on CO2 and a few other greenhouse gases emphasized in the IPCC assessments.
This need to develop a broader approach was even endorsed in the climate research assessment and recommendations in the “Report Of The 2004-2009 Research Review Of The Royal Netherlands Meteorological Institute”.
In this 2011 report, we wrote
The generation of climate scenarios for plausible future risk, should be significantly broadened in approach as the current approach assesses only a limited subset of possible future climate conditions. To broaden the approach of estimating plausible changes in climate conditions in the framing of future risk, we recommend a bottom-up, resource-based vulnerability assessment for the key resources of water, food, energy, human health and ecosystem function for the Netherlands. This contextual vulnerability concept requires the determination of the major threats to these resources from climate, but also from other social and environmental issues. After these threats are identified for each resource, then the relative risk from natural- and human-caused climate change (estimated from the global climate model projections, but also the historical, paleo-record and worst case sequences of events) can be compared with other risks in order to adopt the optimal mitigation/adaptation strategy.
Since the 2011 report (I was a member of the committee that wrote it), I have come to feel that using global climate model projections, downscaled or not, to provide regional and local impact assessments on multi-decadal time scales is not an effective use of money and other resources. If the models cannot even accurately simulate current climate statistics when they are not constrained by real-world data, the expense of running them to produce detailed spatial maps is not worthwhile. Indeed, it is counterproductive, as it gives the impact community and policymakers an erroneous impression of their value.
A robust approach is to use historical, paleo-record and worst case sequences of climate events. Added to this list can be perturbation scenarios that start with regional reanalysis (e.g. such as by arbitrarily adding a 1C increase in minimum temperature in the winter, a 10 day increase in the growing season, a doubling of major hurricane landfalls on the Florida coast, etc). There is no need to run the multi-decadal global and regional climate projections to achieve these realistic (plausible) scenarios.
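The perturbation-scenario idea can be illustrated with a short script. The sketch below is a minimal, hypothetical example (the toy data, dates, and the helper name are illustrative, not taken from any reanalysis product): it applies a fixed +1 C offset to winter minimum temperatures in a daily series, leaving other seasons untouched.

```python
from datetime import date, timedelta

def perturb_winter_tmin(series, delta_c=1.0, winter_months=(12, 1, 2)):
    """Return a perturbation scenario: add delta_c degrees C to daily
    minimum temperatures that fall in the given winter months."""
    return [(d, t + delta_c if d.month in winter_months else t)
            for d, t in series]

# Toy daily series of (date, Tmin in C): a few January and July days.
base = [(date(2000, 1, 1) + timedelta(days=i), -5.0) for i in range(3)] + \
       [(date(2000, 7, 1) + timedelta(days=i), 15.0) for i in range(3)]

scenario = perturb_winter_tmin(base, delta_c=1.0)
for (d, t0), (_, t1) in zip(base, scenario):
    print(d.isoformat(), t0, "->", t1)
```

The same pattern extends to the other perturbations mentioned (longer growing season, more landfalls): start from an observed baseline and modify one statistic at a time.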
Hopefully, our debate on this weblog will foster a movement away from the overselling of multi-decadal climate model projections to the impact and policy communities. I very much appreciate the opportunity to present my viewpoint in this venue.
Roger A. Pielke Sr. is currently a Senior Research Scientist in CIRES and a Senior Research Associate in the Department of Atmospheric and Oceanic Sciences (ATOC) at the University of Colorado Boulder (November 2005 - present). He is also an Emeritus Professor of Atmospheric Science at Colorado State University and held a five-year appointment (April 2007 - March 2012) on the Graduate Faculty of Purdue University in West Lafayette, Indiana.
Pielke has studied terrain-induced mesoscale systems, including the development of a three-dimensional mesoscale model of the sea breeze, for which he received the NOAA Distinguished Authorship Award for 1974. Dr. Pielke has worked for NOAA's Experimental Meteorology Lab (1971-1974), The University of Virginia (1974-1981), and Colorado State University (1981-2006). He served as Colorado State Climatologist from 1999-2006. He was an adjunct faculty member in the Department of Civil and Environmental Engineering at Duke University in Durham, North Carolina (July 2003-2006). He was a visiting Professor in the Department of Atmospheric Sciences at the University of Arizona from October to December 2004.
Roger Pielke Sr. was elected a Fellow of the AMS in 1982 and a Fellow of the American Geophysical Union in 2004. From 1993-1996, he served as Editor-in-Chief of the US National Science Report to the IUGG (1991-1994) for the American Geophysical Union. From January 1996 to December 2000, he served as Co-Chief Editor of the Journal of Atmospheric Science. In 1998, he received NOAA's ERL Outstanding Scientific Paper (with Conrad Ziegler and Tsengdar Lee) for a modeling study of the convective dryline. He was designated a Pennsylvania State Centennial Fellow in 1996, and named the Pennsylvania State College of Earth and Mineral Sciences Alumni of the year for 1999 (with Bill Cotton). He is currently serving on the AGU Focus Group on Natural Hazards (August 2009-present) and the AMS Committee on Planned and Inadvertent Weather Modification (October 2009-present). He is one of three faculty members listed by ISI HighlyCited in Geosciences at Colorado State University and one of four at the University of Colorado at Boulder.
Dr. Pielke has published over 370 papers in peer-reviewed journals, 55 chapters in books, co-edited 9 books, and made over 700 presentations during his career to date. A listing of papers can be viewed at the project website: http://cires.colorado.edu/science/groups/pielke/pubs/. He also launched a science weblog in 2005 to discuss weather and climate issues. This weblog was named one of the 50 most popular Science blogs by Nature Magazine on July 5, 2006 and is located at http://pielkeclimatesci.wordpress.com/.
Castro, C.L., R.A. Pielke Sr., and G. Leoncini, 2005: Dynamical downscaling: Assessment of value retained and added using the Regional Atmospheric Modeling System (RAMS). J. Geophys. Res. - Atmospheres, 110, No. D5, D05108, doi:10.1029/2004JD004721. http://pielkeclimatesci.wordpress.com/files/2009/10/r-276.pdf
Castro, C.L., R.A. Pielke Sr., J. Adegoke, S.D. Schubert, and P.J. Pegion, 2007: Investigation of the summer climate of the contiguous U.S. and Mexico using the Regional Atmospheric Modeling System (RAMS). Part II: Model climate variability. J. Climate, 20, 3866-3887. http://pielkeclimatesci.wordpress.com/files/2009/10/r-307.pdf
Mearns, L.O., R. Arritt, S. Biner, M.S. Bukovsky, S. McGinnis, S. Sain, D. Caya, J. Correia Jr., D. Flory, W. Gutowski, E.S. Takle, R. Jones, R. Leung, W. Moufouma-Okia, L. McDaniel, A.M.B. Nunes, Y. Qian, J. Roads, L. Sloan, and M. Snyder, 2012: The North American Regional Climate Change Assessment Program: Overview of Phase I Results. Bull. Amer. Meteor. Soc., 93, 1337-1362.
Mearns, L.O. , R. Leung, R. Arritt, S. Biner, M. Bukovsky, D. Caya, J. Correia, W. Gutowski, R. Jones, Y. Qian, L. Sloan, M. Snyder, and G. Takle 2013: Reply to R. Pielke, Sr. Commentary on Mearns et al. 2012. Bull. Amer. Meteor. Soc., in press. doi: 10.1175/BAMS-D-13-00013.1. http://pielkeclimatesci.files.wordpress.com/2013/02/r-372a.pdf
Pielke, R.A. Jr., 2010: The Climate Fix: What Scientists and Politicians Won't Tell You About Global Warming. Basic Books. http://www.amazon.com/Climate-Fix-Scientists-Politicians-Warming/dp/B005CDTWBS#_
Pielke Sr., R.A., 2002: Overlooked issues in the U.S. National Climate and IPCC assessments. Climatic Change, 52, 1-11. http://pielkeclimatesci.wordpress.com/files/2009/10/r-225.pdf
Pielke Sr., R.A., 2004: A broader perspective on climate change is needed. Global Change Newsletter, No. 59, IGBP Secretariat, Stockholm, Sweden, 16–19. http://pielkeclimatesci.files.wordpress.com/2009/09/nr-139.pdf
Pielke Sr., R., K. Beven, G. Brasseur, J. Calvert, M. Chahine, R. Dickerson, D. Entekhabi, E. Foufoula-Georgiou, H. Gupta, V. Gupta, W. Krajewski, E. Philip Krider, W. K.M. Lau, J. McDonnell, W. Rossow, J. Schaake, J. Smith, S. Sorooshian, and E. Wood, 2009: Climate change: The need to consider human forcings besides greenhouse gases. Eos, Vol. 90, No. 45, 10 November 2009, 413. Copyright (2009) American Geophysical Union. http://pielkeclimatesci.wordpress.com/files/2009/12/r-354.pdf
Pielke Sr., R.A., 2010: Comments on “A Unified Modeling Approach to Climate System Prediction”. Bull. Amer. Meteor. Soc., 91, 1699–1701, DOI:10.1175/2010BAMS2975.1, http://pielkeclimatesci.files.wordpress.com/2011/03/r-360.pdf
Pielke Sr., R.A., 2013a: Mesoscale meteorological modeling. 3rd Edition, Academic Press, in press.
Pielke Sr., R.A. 2013b: Comment on “The North American Regional Climate Change Assessment Program: Overview of Phase I Results.” Bull. Amer. Meteor. Soc., in press. doi: 10.1175/BAMS-D-12-00205.1. http://pielkeclimatesci.files.wordpress.com/2013/02/r-372.pdf
Pielke, R.A. Sr., and L. Bravo de Guenni, 2004: Conclusions. Vegetation, Water, Humans and the Climate: A New Perspective on an Interactive System, P. Kabat et al., Eds., Springer, 537–538. http://pielkeclimatesci.files.wordpress.com/2010/01/cb-42.pdf
Pielke Sr., R.A., and R.L. Wilby, 2012: Regional climate downscaling – what’s the point? Eos Forum, 93, No. 5, 52-53, doi:10.1029/2012EO050008. http://pielkeclimatesci.files.wordpress.com/2012/02/r-361.pdf
Pielke Sr., R.A., R. Wilby, D. Niyogi, F. Hossain, K. Dairaku, J. Adegoke, G. Kallos, T. Seastedt, and K. Suding, 2012: Dealing with complexity and extreme events using a bottom-up, resource-based vulnerability perspective. Extreme Events and Natural Hazards: The Complexity Perspective Geophysical Monograph Series 196 © 2012. American Geophysical Union. All Rights Reserved. 10.1029/2011GM001086. http://pielkeclimatesci.files.wordpress.com/2012/10/r-3651.pdf
Pielke Sr., R.A., Editor in Chief, 2013: Climate Vulnerability: Understanding and Addressing Threats to Essential Resources, 1st Edition. J. Adegoke, F. Hossain, G. Kallos, D. Niyogi, T. Seastedt, K. Suding, C. Wright, Eds., Academic Press, 1570 pp. Available May 2013.
Pielke et al, 2013: Introduction. http://pielkeclimatesci.files.wordpress.com/2013/04/b-18intro.pdf
Veljovic, K., B. Rajkovic, M.J. Fennessy, E.L. Altshuler, and F. Mesinger, 2010: Regional climate modeling: Should one attempt improving on the large scales? Lateral boundary condition scheme: Any impact? Meteorologische Zeits., 19:3, 237-246. DOI 10.1127/0941-2948/2010/0460.
Source: http://www.climatedialogue.org/are-regional-models-ready-for-prime-time/
Final demonstration of EU-funded firefighting robots
Published: 21 January, 2010
Robotics experts from Sheffield Hallam University have been working with firefighters from South Yorkshire Fire & Rescue to showcase a unique group of firefighting robots.
Researchers say the robots, called Guardians and Viewfinders, could revolutionise the way fire-fighters work.
Funded by the European Union, the Guardians are a 'swarm' of autonomous robots that can navigate and search urban areas like warehouses and factories.
The robots carry laser-range, radio-signal and ultrasound sensors. They can be used to assist search and rescue during large scale incidents, for example warehouse fires and chemical spills.
Dr Jacques Penders, from Sheffield Hallam's Centre for Automation and Robotics Research, said: "The Guardian robots navigate autonomously and accompany a traditional human firefighter. They connect to a wireless ad-hoc network and forward data to the human operator and the control station.
"The Guardians warn for toxic chemicals and provide mobile communication links with human firefighters.
"Viewfinders autonomously navigate through and inspect an area, but human operators can monitor their operations as well as control their movements if needed.
"The interface ensures the human firefighters get a good, relevant overview of the ground and the robots and human rescue workers inside."
A demonstration of the firefighting robots was held at South Yorkshire Fire & Rescue’s Training and Development Centre in Handsworth, Sheffield.
Station Manager Neil Baugh, from South Yorkshire Fire & Rescue, said: "Searching through industrial fires is time consuming and dangerous. Toxins may be present and human senses can be severely impaired, leading to disorientation."
Dr Penders will open Sheffield Hallam's Centre for Automation and Robotics Research (CARR) on 22 January 2010. The Guardians and Viewfinders will form part of the opening ceremony.
Source: http://hemmingfire.com/news/fullstory.php/aid/751/Final_demonstration_of_EU-funded_firefighting_robots_.html
SARA TITLE III
Overview of SARA Title III
SARA Title III (Superfund Amendments and Reauthorization Act of 1986), also known as the Emergency Planning and Community Right-to-Know Act (EPCRA), was passed in response to concerns regarding the environmental and safety hazards posed by the storage and handling of toxic chemicals. These concerns were triggered by the disaster in Bhopal, India, in which more than 2,000 people suffered death or serious injury from the accidental release of methyl isocyanate. To reduce the likelihood of such a disaster in the United States, Congress imposed requirements on both states and regulated facilities.
SARA Title III establishes requirements for Federal, State and local governments, Indian Tribes, and industry regarding emergency planning and “Community Right-to-Know” reporting on hazardous and toxic chemicals. The Community Right-to-Know provisions help increase the public’s knowledge of, and access to, information on chemicals at individual facilities, their uses, and their releases into the environment. States and communities, working with facilities, can use the information to improve chemical safety and protect public health and the environment.
SARA Title III (EPCRA) has four major provisions:
- Emergency Planning
- Emergency release notification
- Hazardous chemical storage reporting requirements
- Toxic chemical release inventory
Overview of Act 165
Act 165 provides for establishing a Statewide hazardous material safety program; creating the Hazardous Material Response Fund; providing for the creation of Hazardous Material Emergency Response Accounts in each county; further providing for the powers and duties of the Pennsylvania Emergency Management Agency, the Pennsylvania Emergency Management Council, and the counties and local governments; imposing obligations on certain handlers of hazardous materials; and imposing penalties.
Owners or operators of facilities that have hazardous chemicals on hand in quantities equal to or greater than set threshold levels must submit a Tier Two form. The purpose of the Tier Two form is to provide State and local officials and the public with specific information on hazardous chemicals present at the facility during the past year. It is the responsibility of the facility to report to the county.
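The Tier Two screening logic described above amounts to a simple threshold comparison. The sketch below is only an illustration of that logic: the chemical names and threshold values are invented placeholders, not actual EPA thresholds, which are set by regulation and vary by chemical.

```python
# Hypothetical reporting thresholds in pounds (placeholders only;
# real Tier Two thresholds are defined by EPA regulation).
THRESHOLDS = {"ammonia": 500, "chlorine": 100, "sulfuric acid": 500}

def chemicals_to_report(inventory):
    """Return chemicals whose on-hand quantity (pounds) meets or
    exceeds its reporting threshold."""
    return sorted(
        name for name, pounds in inventory.items()
        if pounds >= THRESHOLDS.get(name, float("inf"))
    )

inventory = {"ammonia": 750, "chlorine": 40, "sulfuric acid": 500}
print(chemicals_to_report(inventory))
```

Note that the comparison uses "equal to or greater than," matching the wording of the reporting requirement.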
Planning facilities are facilities that have extremely hazardous chemicals on hand that meet or exceed reporting thresholds. These facilities are required to do an Off-site Emergency Response Plan. Help is offered to facilities to complete this plan by the Act 165 Coordinator. The plan is then presented to the Local Emergency Planning Committee (LEPC) by the Act 165 Coordinator for approval.
Source: http://www.luzernecounty.org/county/departments_agencies/emergency_management/sara_title_iii
STEROPE (or Asterope) was a Pleiad star-nymph of Pisa in Elis (southern Greece). She was loved by the god Ares and bore him Oinomaos.
Sterope was probably identified with the Naiad Harpina who is otherwise named as the mother of Oinomaos by Ares.
PARENTS
[1.1] ATLAS (Hesiod Astronomy Frag 1)
[1.2] ATLAS & PLEIONE (Apollodorus 3.110, Hyginus Fabulae 192, Hyginus Astronomica 2.21, Ovid Fasti 4.169 & 5.79)

OFFSPRING
[1.1] OINOMAOS (by Ares) (Hyginus Fabulae 84, Hyginus Astronomica 2.21)
[1.2] OINOMAOS, EUENOS (by Ares) (Plutarch Greek & Roman Parallel Stories 38)
STE′ROPE (Steropê). A Pleiad, the wife of Oenomaus (Apollod. iii. 10. § 1), and according to Pausanias (v. 10. § 5), a daughter of Atlas.
Source: Dictionary of Greek and Roman Biography and Mythology.
Hesiod, Astronomy Fragment 1 (from Scholiast on Pindar's Nemean Odes 2.16) (trans. Evelyn-White) (Greek epic C8th or 7th B.C.) :
"The Pleiades whose stars are these:--‘Lovely Teygata, and dark-faced Elektra, and Alkyone, and bright Asterope, and Kelaino, and Maia, and Merope, whom glorious Atlas begot.’"
Pseudo-Apollodorus, Bibliotheca 3. 110 - 111 (trans. Aldrich) (Greek mythographer C2nd A.D.) :
"To Atlas and Okeanos' daughter Pleione were born on Arkadian Kyllene (Cyllene) seven daughters called the Pleiades, whose names are Alkyone, Merope, Kelaino, Elektra, Sterope, Taygete, and Maia. Of these, Oinomaus married Sterope."
Pausanias, Description of Greece 5. 10. 6 (trans. Jones) (Greek travelogue C2nd A.D.) :
"[Amongst the scenes depicted on the pediment of the temple of Zeus at Olympia:] On the right of Zeus is Oinomaos with a helmet on his head, and by him Sterope his wife, who was one of the daughters of Atlas."
Pseudo-Plutarch, Greek and Roman Parallel Stories 38 (trans. Babbitt) (Greek historian C2nd A.D.) :
"Euenus, the son of Ares and Sterope, married Alkippe (Alcippe), the daughter of Oinomaüs (Oenomaus), and begat a daughter Marpessa."
Pseudo-Hyginus, Fabulae 84 (trans. Grant) (Roman mythographer C2nd A.D.) :
"Oenomaus, son of Mars and Asterope, daughter of Atlas."
Pseudo-Hyginus, Fabulae 250 :
"Oenomaus, son of Mars by Asterie, daughter of Atlas."
Pseudo-Hyginus, Astronomica 2. 21 :
"The Pleiades are called seven in number, but only six can be seen. This reason has been advanced, that of the seven, six mated with immortals (three with Jove [Zeus], two with Neptunus [Poseidon], and one with Mars [Ares]) . . . Mars by Sterope begat Oenomaus, but others call her the wife of Oenomaus."
Ovid, Fasti 4. 169 ff (trans. Boyle) (Roman poetry C1st B.C. to C1st A.D.) :
"The Pleiades will start relieving their sire's [Atlas'] shoulders. Called seven, they are usually six, either because six of them entered a god's embrace, for they say that Sterope lay with Mars [Ares]."
- Hesiod, Astronomy Fragments - Greek Epic C8th-7th B.C.
- Apollodorus, The Library - Greek Mythography C2nd A.D.
- Pausanias, Description of Greece - Greek Travelogue C2nd A.D.
- Plutarch, Greek and Roman Parallel Stories - Greek History C1st-2nd A.D.
- Hyginus, Fabulae - Latin Mythography C2nd A.D.
- Hyginus, Astronomica - Latin Mythography C2nd A.D.
- Ovid, Fasti - Latin Poetry C1st B.C. - C1st A.D.
Source: http://www.theoi.com/Nymphe/NympheSterope.html
"Nanotechnology: Molecular Engineering and its
Implications," the fifth MIT Nanotechnology Study Group
(NSG) symposium, was held January 30 and 31 at the Massachusetts
Institute of Technology in Cambridge, Massachusetts.
Well over 150 people, many of them standing, crowded into the
lecture hall as NSG member Christopher Fry opened the symposium.
The two-day event presented a dozen speakers covering both the
latest progress in nanotechnology development and some of the
possible implications of this powerful new technology.
Fry set the ground rules for the symposium, saying "I want
to impress upon you that you have a responsibility to find holes
in arguments that are presented by speakers and force them to
respond to those holes. What you are not allowed to do is walk
out of here with any major unasked questions."
The first lecture, presented by Foresight Institute president K. Eric Drexler, was an
introduction to nanotechnology and an exposition on the technical
foundations of molecular engineering. There were many chemists in
the audience, and Drexler contrasted assembler techniques with
conventional solution chemistry. Assemblers will move selected
molecules to a specific position to cause a particular reaction,
while solution chemistry relies on random diffusive transport to
bump the right molecules against each other in large numbers.
Apparently some of the chemists in the audience were
uncomfortable with the "foreign" notion of using gears,
bearings, and other analogs of macro-scale devices on the
molecular level. In Drexler's words "Chemists have never
been able to build large, rigid, precise structures; so they are
used to thinking in terms of small or floppy molecules moving by
diffusion."
Next, Howard C. Berg from Harvard University's Department of
Biology described a 2 billion year old "nanotechnology"
device, the flagellar motor. These motors are found in E. coli
bacteria, where tiny rotary engines turn corkscrew propellers to
push the bacteria through fluids which (at that scale) have a
viscosity equivalent to a human swimming through light tar.
Just 22.5 nanometers in diameter, the motor can be made to run at
speeds of 300 to 3000 RPM, and produces maximum torque at stall.
It has about 30 different parts, eight independent
force-generating elements, and can run in forward or reverse. The
motor uses about 1000 protons to drive each revolution.
Dr. Berg's model of the motor's operation involves simple
arrangements of channels, binding sites, and springs. This
natural biological device shows that physical law allows
nanometer scale machines with complex moving parts.
Gary Tibbetts from General Motors Research Laboratories discussed
his work growing hollow carbon tubes as small as ten nanometers
in diameter. The walls of these tubes can be as few as 10 atoms
thick. His purpose is to develop an inexpensive way to make
carbon fibers for very strong, light automobile structures, but
these filaments might be a useful addition to a
"toolkit" for early nanotechnology.
Gary Marx from the MIT Department of Urban Studies and Planning
discussed privacy and security issues arising from
nanotechnology. He fears that competitive pressures and
complacency could easily cause the technology to be misused,
resulting in a "Big Brother" society in which everyone
is spied upon, personal information becomes public, irrelevant
information is used to screen and stigmatize people, and
technology is controlled by a privileged elite. He advised
caution in dealing with new technologies, and vigilance against
slow, creeping losses of privacy and control.
The MIT audience seemed to take many of Marx's points seriously,
but NSG member Jeff MacGillivray pointed out that when advanced
technology makes it possible to produce convincing fake records
(video, computer, etc.), human witnesses will become more
trustworthy than the output of automated surveillance.
Eric Garfunkel from Rutgers University's Laboratory for Surface
Modification discussed some of the latest advances in scanning
tunneling microscopy (STM). The STM is a device that can
piezoelectrically position an atomically sharp tip with atomic
precision and image a surface by moving the tip close enough
(about one nanometer) to cause electrons to tunnel between the
tip and surface. As the surface varies in height, the tip moves
up and down to maintain a steady tunneling current. Recently STMs
have been used to modify surfaces on a nanometer scale.
Garfunkel's group has succeeded in gouging trenches in silicon
that are 10 nanometers wide and one atomic layer deep by bumping
the tip into the surface.
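The constant-current operation described above is essentially a feedback loop: current depends exponentially on the tip-sample gap, and the controller moves the tip to cancel the error. The sketch below is a toy software model of that idea; all constants (decay rate, gain, setpoint) are illustrative, not instrument values.

```python
import math

# Toy constant-current feedback loop in the spirit of an STM.
KAPPA = 10.0      # current decay constant, 1/nm (illustrative)
I0 = 1.0          # current at zero gap, nA (illustrative)
SETPOINT = 0.1    # target current, nA
GAIN = 0.2        # controller gain, nm per nA of error

def current(gap_nm):
    return I0 * math.exp(-KAPPA * gap_nm)

def track(surface_heights, steps_per_point=200):
    """Record the tip height over each surface point once the loop settles."""
    tip = 0.5  # initial tip height, nm
    trace = []
    for h in surface_heights:
        for _ in range(steps_per_point):
            err = current(tip - h) - SETPOINT
            tip += GAIN * err   # too much current -> retract the tip
        trace.append(tip)
    return trace

# The recorded tip heights mirror a one-step bump in the surface,
# offset by a constant equilibrium gap.
trace = track([0.0, 0.0, 0.1, 0.1, 0.0])
print([round(t, 3) for t in trace])
```

The recorded tip trajectory, not the current itself, is what becomes the surface image.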
Dongmin Chen from the Rowland Institute for Science in Cambridge
has been using similar techniques to produce atomic scale tunnel
diodes. He has also used the STM to make 0.4 nanometer high bumps
on silicon surfaces in regular patterns. Several audience members
were concerned these tiny features would quickly disappear as
atoms move around to fill holes and smooth out bumps. Chen
responded that in materials like silicon the features are quite
stable and have lasted as long as he can measure. In some other
materials (such as gold, which has an unusually mobile layer of
atoms at its surface) the features can disappear in 10 or 15
minutes.
Bruce Gelin from Polygen Corporation provided an overview of the
state of the art in molecular modeling. He explained that while
Schrödinger's "perfect" mathematical model of atomic
behavior has been known for over 60 years, this quantum
mechanical model is so computationally expensive that it's
impractical to use it for anything bigger than a single hydrogen
molecule, even with modern computers. So the challenge for
molecular modelers is to find computationally tractable
approximations for molecular behavior that are close enough to
give the same practical results as nature. With current
algorithms and workstation-type computers, one femtosecond (one
millionth of a nanosecond) in the life of a small protein can be
simulated in about one second. Gelin then presented a quick
"how to do it" session for the would-be molecular
modelers in the audience.
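Gelin's rule of thumb (one femtosecond of simulated time per second of workstation time) implies daunting wall-clock costs for longer simulations, and the arithmetic is worth spelling out:

```python
# Wall-clock cost implied by "1 fs simulated per 1 s of compute",
# the ~1990 workstation figure quoted in the talk.
SIM_PER_WALL_SEC = 1e-15  # simulated seconds per wall-clock second

def wall_seconds(simulated_seconds):
    return simulated_seconds / SIM_PER_WALL_SEC

ps_cost = wall_seconds(1e-12)           # seconds of compute for 1 ps
ns_cost = wall_seconds(1e-9) / 86400.0  # days of compute for 1 ns
print(f"1 ps of dynamics: {ps_cost:.0f} s of compute")
print(f"1 ns of dynamics: {ns_cost:.1f} days of compute")
```

At that rate a single nanosecond of protein dynamics would tie up the machine for well over a week, which is why better approximations and faster hardware were central concerns.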
Kevin Ulmer, director of the Laboratory for Bioelectronic
Materials with the Japanese RIKEN research agency, discussed
RIKEN's 15 year project to produce self-assembling electronic
materials using protein engineering techniques. Their ultimate
goal is to produce a massively parallel cellular automata machine
by making "wallpaper" of proteins with different
electrical properties tiling a two-dimensional plane. For the
shorter term, Ulmer said he would be satisfied to be able to tile
a plane with arbitrary patterns of specified proteins.
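The "cellular automata machine" goal is easier to picture with a toy example. The sketch below is an ordinary software cellular automaton, not anything RIKEN proposed in detail: it runs Wolfram's elementary Rule 110 on a small ring of cells, the kind of local update rule a tiled sheet of interacting elements could in principle implement in parallel.

```python
def step(cells, rule=110):
    """One synchronous update of an elementary cellular automaton
    on a ring of 0/1 cells."""
    n = len(cells)
    out = []
    for i in range(n):
        left, me, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (me << 1) | right   # 3-bit neighborhood
        out.append((rule >> idx) & 1)           # look up rule bit
    return out

cells = [0] * 15 + [1]          # single live cell on a 16-cell ring
for _ in range(8):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

Each cell needs only its two neighbors' states, which is what makes a physically tiled, massively parallel realization conceivable.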
Michael Rubner from the MIT Department of Materials Science
discussed his molecular electronics work with 1 to 2 nanometer
thick Langmuir/Blodgett films, in which he is trying to build up
multiple layers of conducting polymers to make electronic
devices.
Abraham Ulman from Eastman Kodak Research Laboratories has been
working on the construction of 3 nanometer monolayers for fiber
optic applications. He spoke about his progress and the
complexities of computational modeling of these monolayers.
Greg Fahy, a cryobiology researcher with the American Red Cross,
discussed medical and life extension applications of
nanotechnology. While powerful cell repair machines may represent
a distant goal for nanotechnological medicine, Fahy pointed out
that many biochemical events associated with aging are already
somewhat understood, and might be partially counteracted with
drugs even before nanotechnology arrives. Fahy suggested some
early goals for medical nanotechnology might be devices to
transport specific molecules, programmable DNA inserters,
removers, and "methyl-decorators," and
"trans-membrane gates" to transport molecules into and
out of cells.
Symposium chairman K. E. Nelson wrapped up the event with some
cautionary advice about the potential dangers of nanotechnology.
He reminded the audience that new technologies have dangers as
well as benefits, and that while on the whole the benefits are
usually greater, anything as powerful as nanotechnology must be
handled very carefully, lest the dangers sweep us away before we
can enjoy the benefits. The possibility of replicating devices
and nanotechnology's powerful generality mean that foolishness
(despite good intentions) or actual malign intent, could too
easily result in disaster. Nanotechnology could allow people to
change themselves, and our definitions of humanity. Nelson
advocated careful and controlled development of the technology
and better awareness on the part of the scientific community of
the potential impact and likely results of their work. He
reminded the MIT audience that it was their responsibility to
make nanotechnology work, not just happen.
This symposium was supported by the MIT Department of Chemical
Engineering, MIT Artificial Intelligence Laboratory, MIT IAP
Funding Committee, and the MIT Graduate Student Council.
This year's symposium focused more than previously on near-term
techniques leading to the actual development of nanotechnology.
Symposium organizer Zeke Gluzband noted afterward that "an
order of magnitude more serious people seemed interested than a
year ago." Nelson commented that he "discovered a much
greater degree of acceptance of nanotechnology than in previous
years. People seemed comfortable with talking in public about the
subject."
As nanotechnology comes closer to reality, symposia like this one
expose increasing numbers of scientists to the potential and
eventual consequences of their work. Hopefully this awareness
will help to channel the applications of the technology into
beneficial directions.
David Lindbergh is a consulting software engineer in the
Boston area and a member of the MIT Nanotechnology Study Group.
The World Economic Forum, held annually in Davos, Switzerland,
is a major meeting of several hundred world leaders in
government, industry, and business. At one point during the
meeting this February, 70 ministers and heads of state were
present, lending support to the meeting's unofficial description
as the "world economic summit." At this year's event,
three sessions included nanotechnology as a major topic.
At a Plenary Session on February 6, entitled "Technological
Turbulences," Eric Drexler spoke on nanotechnology (with
simultaneous translation into seven languages). The other session
speakers were James Watson (co-discoverer of the structure of
DNA) and Mark Wrighton, head of MIT's chemistry department, with
physicist Sergei Kapitsa as session chair. According to Kapitsa,
a ten-minute segment on nanotechnology was subsequently aired on
the Soviet Union's Radio Liberty channel.
Following the plenary, Drexler met with a smaller group to brief
them in more detail on expected developments. The next day, FI's
editor Chris Peterson held a briefing focusing on the expected
environmental benefits of using molecular manufacturing to
replace today's relatively inefficient and dirty manufacturing
On February 8 Drexler gave a more technical presentation to an
audience at the University of Basel, sponsored by Prof. H.-J.
Güntherodt, a pioneering researcher in the field of scanning
tunneling microscopy. The next day a similar presentation was
given at IBM's Zurich Research Laboratory, sponsored by physicist
Heinrich Rohrer, one of the Nobel-prizewinning inventors of the
STM. Part of the laboratory tour included a look at the recent
remarkable electron microscopy work of Hans-Werner Fink, as yet
unpublished. This work may be of great use in developing
nanotechnology, and will be reported here as soon as possible.
Videotapes of the World Economic Forum plenary session are
available from Gretag Displays Ltd, 8105 Regensdorf, Switzerland,
at a cost of 100 Swiss francs. Specify session 12; indicate NTSC
format for U.S. standard VHS format.
Nanotechnology and the Frontiers of the Possible,
April 3 lecture by Drexler at Iowa State University, Ames. 8 PM,
followed by reception. Contact Prof. Robert Leacock at the Dept.
of Physics, 515-294-3986.
Evolutionary Economics: Learning from Computation,
April 23-24, George Mason University, Fairfax, VA. See symposium
writeup in this issue. Contact the Center for the Study of Market
Processes.
Starburst Dendrimers and Their Polymers, 19th
International Polymer Symposium, June 6, Michigan Molecular
Institute. Covers chemistry of relevance to molecular
engineering, including precision design of macromolecules; held
in conjunction with regional meeting of ACS. Contact Co-Chairman
Donald Tomalia, 517-832-5573.
STM '90, Fifth International Conference on
Scanning Tunneling Microscopy/Spectroscopy, July 23-27, Hyatt
Regency Hotel, Baltimore, MD. Sponsored by the American Vacuum
Society and the U.S. Office of Naval Research. Contact Chairman
James Murday, 202-767-3026, fax 202-404-7139.
NANO I, First International Conference on
Nanometer Scale Science and Technology, held in conjunction with
STM '90 described above. Includes investigation of fabrication
and characterization of nanometer scale phenomena in surface
chemistry and physics, solid-state physics, metrology, materials
science and engineering, biology and biomaterials, mechanics,
sensors, and electronics technology. Same contact as STM '90.
Frontiers of Supercomputing II: A National
Reassessment, August, Los Alamos National Laboratory, sponsored
by NSF, DOE, NASA, DARPA, NSA, the Supercomputing Research
Center, and Los Alamos. Small strictly invitational meeting;
Ralph Merkle will speak on nanotechnology at a session on the
future computing environment.
A canister-style vacuum cleaner.
- The definition of a vacuum is a space devoid of air or matter, or a tool that uses suction to clean.
- An example of a vacuum is a space with nothing in it.
- An example of a vacuum is something used to clean up dirt on a floor.
- To vacuum is to clean using a tool that sucks dirt or other elements into a storage container.
An example of vacuum is to clean the dirt off the carpet using a vacuum cleaner.
- a space with nothing at all in it; completely empty space
- an enclosed space, as that inside a vacuum tube, out of which most of the air or gas has been taken, as by pumping
- the degree to which pressure has been brought below atmospheric pressure
- a space left empty by the removal or absence of something usually found in it; void: often used fig.
- vacuum cleaner
Origin of vacuum: Latin, neuter singular of vacuus, empty
- of a vacuum
- used to make a vacuum
- having a vacuum; partially or completely exhausted of air or gas
- working by suction or the creation of a partial vacuum
noun pl. vac·uums or vac·u·a
- a. Absence of matter. b. A space empty of matter. c. A space relatively empty of matter. d. A space in which the pressure is significantly lower than atmospheric pressure.
- A state of emptiness; a void.
- A state of being sealed off from external or environmental influences; isolation.
- pl. vac·uums A vacuum cleaner.
- Of, relating to, or used to create a vacuum.
- Containing air or other gas at a reduced pressure.
- Operating by means of suction or by maintaining a partial vacuum.
tr. & intr. v. vac·uumed, vac·uum·ing, vac·uums
Origin of vacuum: Latin, empty space, from neuter of vacuus, empty, from vacare, to be empty; see euə- in Indo-European roots.
(plural vacuums or vacua) (see usage notes)
- In the sense of "a region of space that contains no matter", the plural of vacuum is either vacua or vacuums. In the sense of a "vacuum cleaner" vacuums is the only plural.
- The Latin in vacuo is sometimes used instead of in a vacuum (in free space).
(third-person singular simple present vacuums, present participle vacuuming, simple past and past participle vacuumed)
- To clean (something) with a vacuum cleaner.
- (intransitive) To use a vacuum cleaner.
From Latin vacuum (“an empty space, void”), noun use of neuter of vacuus (“empty”), related to vacare (“be empty”)
vacuum - Computer Definition
A space completely void of matter. Although a complete vacuum is unachievable on earth, outer space is theoretically a vacuum to within a few molecules per cubic inch.
Helsinki, 1 July, 2005
The Revolutionary Morfessor Method – Computer Learns Word Structure on Its Own
The multitude of languages in the world, even in Western Europe alone, creates many problems for software developers and, unfortunately, for software users as well. Internet search engines are generally not able to deal with compound words or inflected forms. Despite being rare in English, compound words are rather the rule than the exception in many other languages, such as Finnish, German, or Turkish.
If you use Google to search for recipes in Finnish for making a rhubarb pie, you must almost be a linguist to do well in the search: you must take the word "raparperipiirakka" (rhubarb pie), split the compound into "raparperi + piirakka", and try to generate all relevant inflected forms of the words or of the compound ("raparperi+a", "piirakka+an", "raparperipiiraka+ssa", "raparperipiiraka+n", etc.). Then you try these one at a time and in different combinations in the search. This is both slow and very tiresome. Shouldn't this be just the kind of work that computers do for us?
Facing the Challenge: Software That Learns
The practical problem is that coding all this information manually for all the languages is a huge amount of linguistic work: unbearable, in fact, for many smaller languages that are low on research resources. To make things harder, it is impossible to foresee all the new words and their inflected forms that the system should be able to handle in the next few years. One solution is to develop methods that can learn by themselves. In this case they should learn just by looking at large amounts of text.
At Helsinki University of Technology, Finland, we have developed a method and a piece of software called Morfessor that learns automatically to segment words into meaningful units. No grammar or language-specific rules need to be given, just a collection of text in the relevant language. Morfessor then learns statistically which short segments a word most probably consists of.
So far the program has been applied to Finnish, English, and Turkish, and it seems to work quite well in these very different languages.
For example, from English text the software has learned that the word "masterpieces" probably consists of segments "master + piece + s". Other words that contain the segment "master" include "schoolmaster" and "concertmaster".
From Finnish text Morfessor has learned that the segment "ssa" ("in") is likely to be a suffix (word ending), since it appears in many different word forms and often near the end of the word. It had seen examples such as "Sisilia + ssa" (in Sicily) and "auto + ssa + mme + kin" (also in our car). Therefore, when it faces a new word, say "Kaledoniassa", Morfessor infers that it probably consists of segments "Kaledonia + ssa" (in Caledonia). On the other hand, it does not divide the word "kissa" (a cat) incorrectly as "ki + ssa".
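The statistical cue described above (a short segment that recurs at the ends of many different words is probably a suffix) can be illustrated with a deliberately tiny sketch in Python. This is not the real Morfessor algorithm, which fits a probabilistic model over an entire corpus; the toy corpus, thresholds, and function names below are invented purely for illustration.

```python
from collections import Counter

def learn_suffixes(words, min_count=2, max_len=4):
    # Count word-final substrings across the corpus; endings that recur
    # often enough become candidate suffixes.
    counts = Counter()
    for w in words:
        for i in range(1, min(max_len, len(w) - 1) + 1):
            counts[w[-i:]] += 1
    return {s: c for s, c in counts.items() if c >= min_count}

def segment(word, suffixes, min_stem=3):
    # Split off the most frequent known suffix (longer wins a tie),
    # provided a non-trivial stem remains; otherwise leave the word whole.
    candidates = [s for s in suffixes
                  if word.endswith(s) and len(word) - len(s) >= min_stem]
    if not candidates:
        return (word,)
    best = max(candidates, key=lambda s: (suffixes[s], len(s)))
    return (word[:-len(best)], best)

corpus = ["sisiliassa", "autossa", "talossa", "kaledoniassa"]
sfx = learn_suffixes(corpus)
print(segment("kaledoniassa", sfx))  # ('kaledonia', 'ssa')
print(segment("autossa", sfx))       # ('auto', 'ssa')
```

Even this crude frequency counting discovers "ssa" as a suffix from four example words and applies it to a word it has not seen split before; the real system replaces the ad-hoc thresholds with a principled statistical criterion.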
How Do We Hear Words From Foreign Speech?
When one is listening to a language that is really foreign, it is at first impossible to even tell where a word ends and the next word begins. Trying to write down what was said without understanding any of it seems like an immensely hard task. This is what automatic speech recognition programs try to do.
When a computer is attempting to recognize human speech, that is, to convert the sound signal into text, the speech recognizer must have an idea of which words it may encounter in order to be successful. One could say that it probably cannot even "hear" a word unless the word is already in its vocabulary. In order to keep up with the speed of natural speech, the vocabulary size has to be reasonable.
Unfortunately, for example Finnish words have far too many inflected forms to be listed as such in any vocabulary – a single noun can appear in 2000 different inflected forms. Such word lists also rapidly become outdated: new compound words are invented all the time, and new foreign names rise into the spotlight of the news. With Morfessor the vocabulary can consist of shorter word segments. This means fewer and shorter words in the vocabulary, and a better ability to analyze totally new words.
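The vocabulary saving described above can be made concrete with a toy count: six inflected forms collapse into five shared units once words are split into stems and suffixes. The hand-made segmentation below is an assumption for illustration, not Morfessor's actual output; at realistic scale the gap is dramatic, since a Finnish noun with some 2000 surface forms is built from far fewer units.

```python
# Word-level vocabulary: every inflected form is a separate entry.
words = ["autossa", "autosta", "autolla",
         "talossa", "talosta", "talolla"]
word_vocab = set(words)
print(len(word_vocab))  # 6 entries

# Segment-level vocabulary: stems and suffixes are shared across forms,
# and unseen combinations of known units come for free.
segmented = [("auto", "ssa"), ("auto", "sta"), ("auto", "lla"),
             ("talo", "ssa"), ("talo", "sta"), ("talo", "lla")]
units = {u for pair in segmented for u in pair}
print(len(units))  # 5 entries: two stems plus three suffixes
```

The segment vocabulary is not only smaller; it can also represent new words (say, a fresh stem combined with a known case ending) that a word-level list would have to miss.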
Better Language Tools
When Morfessor was applied to recognizing continuous Finnish speech, the rate of errors dropped remarkably: to nearly half that of a standard word-based recognizer. Similar improvements were obtained in the recognition of Turkish speech.
We predict that the Morfessor method could be useful also in automatic or semi-automatic machine translation, but that is a topic for another story yet to be written.
It is also imaginable that students of a foreign language, let us say, Finnish, might benefit from a method that can tell the probable segmentation points of very long foreign words. After all, most Finnish words in newspaper text cannot be found in dictionaries as such, since dictionaries do not list any inflected forms, and not too many compound words, either.
Demonstration and Free Software Package
To see for yourself how well the method actually works, you can try the demonstration on any Finnish or English words at the address www.cis.hut.fi/projects/morpho/. A free software package is provided at the same location. By making the software freely available we try to encourage the spreading of these research results into action. Hopefully with these kinds of tools, the developers of language applications can indeed make our daily lives easier!
Dr. Krista Lagus is a lecturing researcher at the Laboratory of Computer and Information Science at Helsinki University of Technology. Her research concentrates on adaptive language modeling. The Morfessor method has been designed in collaboration with Mathias Creutz.
Previously published Articles of the Month:
2002-09 School in the Grips of Change - Media Education in Finland
2002-10 Finns Work for e-Accessibility
2002-11 The Finnish Model of Information Society
2002-12 “Silicon Valley is more than a place, it's a state of mind”
2003-01 Data Security Challenges
2003-02 Lifelong Education in Upper Secondary Distance Learning Schools and Virtual Networks
2003-03 Finnish Lapland - More than Meets the Eye
2003-04 A Renewed Policy to Promote Innovation
2003-05 ICT Standardization in Europe and Globally – CEN/ISSS’s Role
2003-06 Public-Private-Partnership Works Well in Finland
2003-07 Information Technology in Nicaragua - Finland Offers a Helping Hand
2003-08 Victory Development Partnership Project - Personal and Virtual Rehabilitation for IT Employment
2003-09 Young People and Wireless Future
2003-10 Video Message Transmits Sign Language
2003-11 Combatting Spam Requires Global Co-Operation
2003-12 Saving the Earth from Anarchy by Eliminating the Weakest Link
2004-01-01 Information Society Models and the New Everyday Life
2004-02-01 Quo vadis, Finnish Virtual University?
2004-03-01 The Finnish Virtual University: Connections with the Bologna Process?
2004-04-01 "Look What I Say" - Unique Solution Enables Face-to-Face Communication for Speech Impaired
2004-05-01 Changes to Copyright Law Heavily Debated
2004-06-01 Finnish and Italian Technology in the Global Environment of the European Union: a Comparison of ICT Strategies in Education
2004-07-01 A New Law Designed to Improve Data Protection in Electronic Communications
2004-08-01 The Etno.Net Website for Practicing and Aspiring Folk Musicians Includes Recordings and Learning Material Packages
2004-09-01 Status of Wireless Service Business Today
2004-10-01 People Over Fifty in Finland as Users of Internet
2004-11-01 Preparing for Mobile Phone Viruses
2004-12-01 Distributed and Virtual Learning in Finland
2005-01-01 Online Public Services for the Benefit of Citizens
2005-02-01 Public-Private Partnership in Developing Information Society Skills
2005-03-01 Finland Shows Example in Localization
2005-04-01 The Individuals' Awareness of the Right to Privacy
2005-05-01 Children and the Internet – Towards a Balanced Concern
2005-06-01 The Mobile Revolution: What's the Message?
Ballooning Trade Deficit Under the NAFTA-WTO Model
Prior to the establishment of Fast Track and the trade agreements it enabled, the United States had balanced trade; since then, the U.S. trade deficit has exploded. The pre-Fast Track period (before 1973) was one of balanced U.S. trade and rising living standards for most Americans. In fact, in 1973, the United States had a slight trade surplus, as it did in nearly every year between World War II and 1975. But in every year since Fast Track was first implemented in 1975, the United States has run a trade deficit. And since establishment of NAFTA and the WTO in the mid-1990s, the U.S. trade deficit jumped exponentially from under $100 billion to over $700 billion — over 5 percent of national income. The establishment of the extraordinary Fast Track trade procedure coincided with President Nixon's decision to abandon managed exchange rates – the so-called gold standard – which had helped ensure balanced trade over time. In the new economy that would emerge from these policy shifts, companies that produce abroad (or produce nothing at all, in the case of finance) would replace domestic employment and rising wages as the driving force of economic policy. From Federal Reserve officials to Nobel Laureates, there is nearly unanimous agreement among economists that this huge trade deficit is unsustainable: unless the United States implements policies to shrink it, the U.S. and global economies are exposed to risk of crisis, shock and instability.
For more information on economic outcomes under NAFTA- and WTO-style trade agreements, please refer to the featured resources below.
Posted by mw on Tuesday, September 30, 2008 at 4:16pm.
I have written a small essay on this question:
Compare the ways in which religion shaped the development of colonial society in the New England and Chesapeake regions.
Here is what I wrote. Could someone read it and tell me if it is correct, and maybe add some comments? please
The New England and Chesapeake colonies were founded by immigrants from England. The two regions turned into two very different colonies, and a large factor was religion. The immigrants wanted to “purify” the Catholic church of England, and they left England because of religious persecution.
The New England Colony was established by Puritans who wanted to start over and have religious freedom. They had a society of precise religious participation, and wanted to keep their communities productive. They believed that all men are created equal and slavery should not be practiced. A government based on the consent of the people was established, and the church and state were separated.
Lord Baltimore II founded the Chesapeake Colony with the motive of establishing a place where English people who were discriminated against in England could go. Originally, estate owners were only Catholic men, and Protestants were servants. There was a mixture of Catholics and Protestants, but the Protestants outnumbered the Catholics. Lord Baltimore guaranteed freedom of religion to anyone who believed in Jesus Christ. This ensured Catholic safety, and Maryland had the most Roman Catholics by the end of the Colonial Era. The people of Chesapeake were looking for large profits. They had plantations full of crops, making them very wealthy, while having slaves work on the land.
The two regions came from England, but had some different ideas. The New England people were looking for a place to settle with their families and focus on religion, while the Chesapeake people were looking to gain large profits.
- US History - Writeacher, Tuesday, September 30, 2008 at 4:50pm
There was no "Chesapeake" colony. Do you mean Maryland? or Virginia?
You wrote this -- "immigrants wanted to 'purify' the Catholic church of England" -- but you never explained what this means. Why is "purify" in quotation marks? Are you quoting someone? Who? From where? And what does this mean??
What does this mean -- "society of precise religious participation"? Another phrasing without clarification.
What does this mean -- "and the church and state were separated"? How did they do this? What were the results? What would their colony have been like if they hadn't separated church and state?
This -- "while having slaves work on the land" -- is an ENORMOUS subject, but it's completely unexplained in your paper. Please expand.
- US History - bobpursley, Tuesday, September 30, 2008 at 5:19pm
I don't know if I agree with you entirely. The Puritans came to America so they could dictate how religion was to be practiced. Yes, they were discriminated against in England and wanted to get away from that, but they did not want freedom of religion, except for all to be free to worship their way.
In the Chesapeake colony, large profits are not a part of religion. These folks had a diversity of religion and practiced religious tolerance, unlike the Puritans.
As the Puritans were dispersed, there came many religions (German, Lutheran, Baptist, Quakers, Anglicans, Catholics, et al.), and this diversity forced freedom of religion and no state religion as a matter of pragmatism, as there was little choice.
In both the Puritan and Chesapeake colonies, religion was a focal point of the communities.
|
<urn:uuid:391da1bc-76ec-44a7-af5d-f94045e440a7>
|
CC-MAIN-2016-26
|
http://www.jiskha.com/display.cgi?id=1222805792
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397562.76/warc/CC-MAIN-20160624154957-00080-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.974841 | 949 | 3.578125 | 4 |
Discussion 1 by UMG Students (Group A):
THE ORIGIN AND DEVELOPMENT OF ESP
1.1. The definition of ESP
ESP has had a relatively long time to mature and so we would expect the ESP community to have a clear idea about what ESP means. Strangely, however, this does not seem to be the case. In October this year, for example, a very heated debate took place on the TESP-L e-mail discussion list about whether or not English for Academic Purposes (EAP) could be considered part of ESP in general. At the Japan Conference on ESP also, clear differences in how people interpreted the meaning of ESP could be seen. Some people described ESP as simply being the teaching of English for any purpose that could be specified. Others, however, were more precise, describing it as the teaching of English used in academic studies or the teaching of English for vocational or professional purposes.
At the conference, guests were honored to have as the main speaker Tony Dudley-Evans, co-editor of the ESP Journal mentioned above. Very aware of the current confusion amongst the ESP community in Japan, Dudley-Evans set out in his one-hour speech to clarify the meaning of ESP, giving an extended definition of ESP in terms of 'absolute' and 'variable' characteristics (see below).
Definition of ESP (Dudley-Evans, 1997)
1.2. Absolute Characteristics
1. ESP is defined to meet specific needs of the learners
2. ESP makes use of underlying methodology and activities of the discipline it serves
3. ESP is centered on the language appropriate to these activities in terms of grammar, lexis, register, study skills, discourse and genre.
1.3. Variable Characteristics
1. ESP may be related to or designed for specific disciplines
2. ESP may use, in specific teaching situations, a different methodology from that of General English
3. ESP is likely to be designed for adult learners, either at a tertiary level institution or in a professional work situation. It could, however, be for learners at secondary school level
4. ESP is generally designed for intermediate or advanced students.
5. Most ESP courses assume some basic knowledge of the language systems
The definition Dudley-Evans offers is clearly influenced by that of Strevens (1988), although he has improved it substantially by removing the absolute characteristic that ESP is "in contrast with 'General English'" (Johns et al., 1991: 298), and has included more variable characteristics. The division of ESP into absolute and variable characteristics, in particular, is very helpful in resolving arguments about what is and is not ESP. From the definition, we can see that ESP can but is not necessarily concerned with a specific discipline, nor does it have to be aimed at a certain age group or ability range. ESP should be seen simply as an 'approach' to teaching, or what Dudley-Evans describes as an 'attitude of mind'. This is a similar conclusion to that made by Hutchinson et al. (1987:19) who state, "ESP is an approach to language teaching in which all decisions as to content and method are based on the learner's reason for learning".
2.1. THE ORIGIN OF ESP
2.1.1 The Demands of a brave new world
The end of the Second World War in 1945 heralded an age of enormous and unprecedented expansion in scientific, technical and economic activity on an international scale. This expansion created a world unified and dominated by two forces – technology and commerce – which in their relentless progress soon generated a demand for an international language.
The effect was to create a whole new mass of people wanting to learn English, not for the pleasure or prestige of knowing the language, but because English was the key to the international currencies of technology and commerce. The general effect of all this development was to exert pressure on the language teaching profession to deliver the required goods.
2.1.2 A Revolution in Linguistics
At the same time as the demand was growing for English courses tailored to specific needs, influential new ideas began to emerge in the study of language. Traditionally, the aim of linguistics had been to describe the rules of English usage, that is, the grammar. However, the new studies shifted attention away from defining the formal features of language usage to discovering the ways in which language is actually used in real communication (Widdowson, 1978). One finding of this research was that the language we speak and write varies considerably, and in a number of different ways, from one context to another. The idea was simple: if language varies from one situation of use to another, it should be possible to determine the features of specific situations and then make these features the basis of the learners' course.
In short, the view gained ground that the English needed by a particular group of learners could be identified by analyzing the linguistic characteristics of their specialist area of work or study.
2.1.3 Focus on the Learner
Learners were seen to have different needs and interests, which would have an important influence on their motivation to learn and therefore on the effectiveness of their learning. The clear relevance of the English course to their needs would improve the learners' motivation and thereby make learning better and faster.
2.2 THE DEVELOPMENT OF ESP
ESP has developed at different speeds in different countries, and examples of all the approaches we shall describe can be found operating somewhere in the world at the present time.
2.2.1. The concept of special language: register analysis
This stage took place mainly in the 1960s and early 1970s and was associated in particular with the work of Peter Strevens (Halliday, McIntosh and Strevens, 1964), Jack Ewer (Ewer and Latorre, 1969) and John Swales (1971).
Operating on the basic principle that the English of, say, electrical engineering constituted a specific register different from that of, say, biology or of general English, the aim of the analysis was to identify the grammatical and lexical features of these registers. Teaching materials then took these linguistic features as their syllabus. A good example of such a syllabus is that of A Course in Basic Scientific English by Ewer and Latorre (1969).
The aim was to produce a syllabus which gave high priority to the language forms students would need in their science studies and, in turn, would give low priority to forms they would not meet (Ewer and Hughes-Davies, 1971).
2.2.2. Beyond the sentence: rhetorical or discourse analysis
Whereas register analysis had focused on language at the sentence level, the second phase of development shifted attention to the level above the sentence, as ESP became closely involved with the emerging field of discourse or rhetorical analysis.
2.2.3. Target Situation Analysis.
The stage that we come to consider now did not really add anything new to the range of knowledge about ESP. What it aimed to do was to take the existing knowledge and set it on a more scientific basis, by establishing procedures for relating language analysis more closely to learners' reasons for learning. Given that the purpose of an ESP course is to enable learners to function adequately in a target situation, that is, the situation in which learners will use the language they are learning, the ESP course design process should proceed by first identifying the target situation and then carrying out a rigorous analysis of the linguistic features of that situation. The identified features will form the syllabus of the ESP course. This process is usually known as needs analysis. However, we prefer to take Chambers' (1980) term of target situation analysis, since it is a more accurate description of the process concerned.
The most thorough explanation of target situation analysis is the system set out by John Munby in Communicative Syllabus Design (1978). The Munby model produces a detailed profile of the learners' needs in terms of communication purposes, communicative setting, the means of communication, language skills, functions, structures, etc.
2.2.4. Skills and Strategies
The fourth stage of ESP has seen an attempt to look below the surface and to consider not the language itself but the thinking processes that underlie language use. There is no dominant figure in this movement, although we might mention the work of Françoise Grellet (1981).
The principal idea behind the skills-centered approach is that underlying all language use there are common reasoning and interpreting processes which, regardless of surface forms, enable us to extract meaning from discourse. There is, therefore, no need to focus closely on the surface forms of the language. The focus should rather be on the underlying interpretive strategies, which enable the learner to cope with the surface forms, for example guessing the meaning of words from context, using visual layout to determine the type of text, exploiting cognates (i.e. words which are similar in the mother tongue and the target language), etc. A focus on specific subject registers is unnecessary in this approach, because the underlying processes are not specific to any subject register.
2.2.5. A learning-centered approach
Our concern is with language learning. We cannot simply assume that describing and exemplifying what people do with language will enable someone to learn it. A truly valid approach to ESP must be based on an understanding of the processes of language learning.
What is important here are the implications of the distinction that we have made between language use and language learning.
In this section we have identified the main factors in the origins of ESP and given a brief overview of its development. We have noted that the linguistic factor has tended to dominate this development, with an emphasis on the analysis of the nature of specific varieties of language use. This has probably been a necessary stage, but now there is a need for a wider view that focuses less on differences and more on what the various specialisms have in common: they are all primarily concerned with communication and learning. ESP should properly be seen not as any particular language product but as an approach to language teaching which is directed by specific and apparent reasons for learning.
1. http//www.esp journal.com
2. Wilkins, D.A. (1976). Notional Syllabuses. Oxford University Press.
3. Swales, J. (1971). Writing Scientific English. Nelson.
4. Carver, D. (1983). Some propositions about ESP. The ESP Journal, 2, 131-137.
5. Dudley-Evans, T. & St. John, M. (1998). Developments in ESP: A Multi-Disciplinary Approach. Cambridge: Cambridge University Press.
6. Gatehouse, K. (2001). Key issues in English for specific purposes (ESP) curriculum development. The Internet TESL Journal, 7, 1-11.
1. What is the definition of ESP?
2. Mention the three absolute characteristics of ESP.
3. ESP is centered on language appropriate to what?
4. What is the definition of ESP according to Dudley-Evans?
5. Mention the variable characteristics of ESP?
6. What is the definition of ESP according to Tom Hutchinson?
7. What is the general effect of the demands of a brave new world?
8. What about English?
9. What is the aim of linguistics?
10. What is the idea behind developing English courses for specific groups of learners?
11. What will improve the learners' motivation and make learning better and faster?
12. Beyond the sentence, rhetorical or discourse analysis, ESP had focused on what?
13. What is target situation analysis?
14. Why didn't the target situation analysis approach really change anything?
15. What is the principle idea behind the skills centered approach?
1. The teaching of English used in academic studies or teaching of English for vocational or professional purposes.
2. a. ESP is defined to meet specific needs of the learners.
b. ESP makes use of underlying methodology and activities of the discipline it serves.
c. ESP is centered on the language appropriate to these activities in terms of grammar, lexis, register, study skills, discourse, and genre.
3. Grammar, lexis, register, study skills, discourse, and genre.
4. According to Dudley-Evans, ESP is defined by an extended definition in terms of absolute and variable characteristics.
5. a. ESP may be related to or designed for specific disciplines.
b. ESP may use, in specific teaching situations, a different methodology from that of General English.
c. ESP is likely to be designed for adult learners, either at a tertiary level institution or in a professional work situation.
d. ESP is generally designed for intermediate or advanced students.
e. Most ESP courses assume some basic knowledge of the language systems.
6. ESP is an approach to language teaching in which all decisions as to content and method are based on the learner’s reason for learning.
7. It exerted pressure on the language teaching profession to deliver the required goods.
8. English had become accountable to the scrutiny of the wider world, and the traditional leisurely and purpose-free stroll through the landscape of the English language seemed no longer appropriate in the harsher realities of the market place.
9. The aim of linguistics is to discover the ways in which language is actually used in real communication.
10. Language varies from one situation to another, so it should be possible to determine the features of specific situations and make these the basis of a course for a specific group of learners.
11. The learners’ motivation will improve, and learning will become better and faster, with the clear relevance of the English course to their needs.
12. ESP had focused on language at the sentence level; the second phase of development shifted attention to the level above the sentence.
13. Target situation analysis is a detailed profile of the learner’s needs in terms of communication purposes, communicative setting, the means of communication, language skills, functions, structures, etc.
14. Because, in its analysis of learner needs, it still looked mainly at the surface linguistic features of the target situation.
15. The principal idea behind the skills-centered approach is that underlying all language use there are common reasoning and interpreting processes which, regardless of surface form, enable us to extract meaning from discourse.
Source: http://pakamrinhandouts.blogspot.com/
Tassie's Survival Strategies
It has a bone-chilling screech that you won’t soon forget. The noises Devils make are basically a bluff to intimidate other animals so there is no fight. A sharp sneeze means something like “bring it on”. When a Devil raises its tail to another, it means business. Devils smell horrific when nervous but don’t when calm.
If two Devils’ whiskers are not touching, they are out of biting range. Their hearing is excellent and is probably their dominant sense. They store fat in their tails. The Devil’s powerful jaws and teeth allow it to devour its prey, bone and all. The Devil eats all of its prey so it doesn’t have to carry it around.
The Devil is nocturnal (it comes out at night and sleeps the day away). Its color is dark so that predators cannot see it.
The Devil can run reasonably fast. On rough terrain Devils can run faster than a human; on flat terrain they cannot run faster than a good human runner. Young Devils are agile and can climb trees, but older Devils cannot.
Source: http://people.bu.edu/wwildman/ben/tassie/survival.htm
It is very easy to forget, when you focus so much of your attention on the high-profile formations of Asia, Canada and the United States, that there are significant dinosaur-bearing strata virtually right on your doorstep. I was always aware of the importance of Eastern Iberia but, until this book was published, had not realised how many and how diverse these locations were. Only a couple of hours’ flying time away lies one of the most important and astonishingly rich dinosaur graveyards in the world.
Published by Indiana University Press in 2011, Dinosaurs of Eastern Iberia is a fascinating introduction to the past, present and on-going studies into, not only the dinosaurs that inhabited the area, but also the other flora and fauna that shared their environment. Also considered are the palaeobiogeographical aspects of these studies as well as highlighting evidence of the K-T boundary in the numerous Late Cretaceous sediments. There are a total of twelve chapters that cover the history, geology and palaeontology of Eastern Iberia covering a multitude of different subjects along the way.
The first chapter deals with the history of palaeontological discoveries in the region, which began in the 1860s and developed slowly throughout the twentieth century. However, after the Renaissance, interest in dinosaurs exploded and soon hundreds of sites were located, identified and excavated. The 21st century brought new unparalleled riches to the fore, and this chapter really sets the tone for the book, especially if, like me, you did not realise how rich the various localities are.
There are dinosaur bearing localities from the Late Jurassic, Early Cretaceous and Late Cretaceous and the next two chapters explain some of the geological and climatic aspects of the region. Using diagrams, images and a combination of both enables the reader to appreciate how the so-called Alpine Cycle affected the Iberian Peninsula throughout the Mesozoic (and beyond) and describes the different conditions that formed the fossil-bearing sedimentary rocks of today.
Chapter 4 focusses on the dinosaurs themselves – specifically their history and classification. This was a favourite chapter of mine in the book and describes in some detail what actually constitutes the makeup of a dinosaur. Unusually for a book of this type, the osteology data is supplemented with skeletals and images of individual bones that really help the reader to understand some of the terminology that is frequently used but that may not be necessarily understood.
Chapter five is a straightforward description explaining how diverse the dinosaurs were and introduces the various clades and suborders, whilst chapter six describes the various techniques that are employed in describing the various fossils. From prospecting in the field to collection, preparation and study, this chapter gives a solid introduction to the world of vertebrate palaeontology. No stone is left unturned, as palaeoichnology and the study of eggs are included, and the very latest bone histology studies, CT imaging and other digital technologies are highlighted.
The next two chapters focus on the saurischian and ornithischian dinosaurs of the Iberian Peninsula and compares them with other dinosaurs from around the world. Explaining the origins and relationship between the various groups the chapters represent current, solid and reliable data backed up by some stunning images and reconstructions.
Chapter nine was another favourite, since it describes the animals and plants that shared the dinosaurs’ world in the various Mesozoic ecosystems. This was another chapter brought to life by the images of the fossils and reconstructions of the various life forms. When you imagine these ancient worlds, complete with their flora and their amazing variety of fauna, they would have been a truly wondrous sight to behold.
The next chapter describes the effects of continental drift on the region and how this affected climate, environment and dinosaurian distribution throughout the Mesozoic, whilst chapter eleven addresses that perennial favourite: the extinction of the dinosaurs. Describing how the concept of extinction was initially arrived at, through to highlighting the five major extinctions that have affected the planet since the Pre-Cambrian, the chapter runs through the various concepts that suggest how the demise of the dinosaurs came about. The focus, however, is on how the Maastrichtian beds of Eastern Iberia reveal a thriving dinosaurian community virtually right up to the K-T boundary, and the authors are very positive that, as more beds are studied, more evidence will be revealed that will help piece together the final days of the dinosaurian dynasty.
The final chapter is written by Oscar Sanisidro, whose magnificent artwork dominates this volume, and looks at how the dinosaurs and their world are reconstructed and brought vividly back to life. He explains that a combination of studying the skeletal remains, working out the muscle and tendon configuration and comparative anatomy helps the scientist and artist restore these long vanished animals. Virtually the same techniques are used to reconstruct the flora that shared the dinosaurs environment – variants on a theme if you will. The combination of all these disciplines can be very dramatic as demonstrated so admirably in this book.
To summarise, this is a very beautiful book that is delightful on the eye and relies heavily on its imagery. That is not to say that there is not a copious amount of data presented, because there is, but you cannot get away from the visual beauty of this volume. As James Farlow states on the back cover: “I suspect that many will buy the book for the artwork alone!” – and he would be right.
Criticisms? This book is aimed at the general reader but I suspect that this volume is a little more highbrow than that and I feel that the average general reader may struggle on occasion. Part of the reason for this is that sometimes the text appears clunky or even awkward in places but I suspect that this may be due, in part, to some of the literal translation from Spanish to English. Some of the artwork too has come in for minor criticism but I believe that most of these issues are due to limitations of the digital rendering process.
This is a great book that provides a wealth of information and is a great introduction to the dinosaurs of Eastern Iberia. If, like me, you only had a limited interest in the dinosaurs of this region, then buy this book! I promise that you will regard this important and fascinating region with new-found interest and respect.
Galobart, A., Suñer, M. and Poza, B. 2011. Dinosaurs of Eastern Iberia: Indiana University Press, Bloomington, Indiana, USA. 321pp. ISBN 978-0-0253-35622-2
Source: http://saurian.blogspot.com/2012/05/book-review-dinosaurs-of-eastern-iberia.html
The Ottoman Empire carried out the Genocide and the German side provided the ideological support. Armenpress.am-Oct 25, 2012 The Germans could prevent the Genocide, as the Turkish army was in
Islamic Ottoman and Kemalist/Young Turk genocide of Christians: between 2.7 million and over 3.5 million
Sources for figure 2.7 million:
RELIGIOUS PERSECUTION IN THE MIDDLE EAST; FACES OF ...
[Senate Hearing 105-352]
[From the U.S. Government Printing Office ]
S. Hrg. 105-352
RELIGIOUS PERSECUTION IN THE MIDDLE EAST; FACES OF THE PERSECUTED
HEARINGS BEFORE THE SUBCOMMITTEE ON NEAR EASTERN AND SOUTH ASIAN AFFAIRS OF THE COMMITTEE ON FOREIGN RELATIONS
UNITED STATES SENATE ONE HUNDRED FIFTH CONGRESS
Printed for the use of the Committee on Foreign Relations
U.S. GOVERNMENT PRINTING OFFICE
40-890 CC WASHINGTON : 1998
Christians of the Near East are the indigenous inhabitants of the countries of the region. Their Christianity was not imported by Western colonial movements or missionaries. In most parts of the Near East the Christian culture predates the expansion of the Islamic empire by seven centuries. Today that population, now a minority in all countries of the Near East, is at risk of extinction. The ministry, Open Doors, has reported dramatic changes in the Christian population of the Middle East since 1900. In 1900, the average Christian percentage of the general population in the countries of the Near East was over 20%. Today it is only 7%. The most dramatic changes have occurred in Turkey. Here the Christian population has dropped from 22% to .15% due to this century's first genocide in which 1.5 million Armenians and 750,000 Assyrians lost their lives in 1918. Today Turkey has a secular constitution, but it has recently begun to feel the pressure of Islamists to return to an Islamic law based society. In Lebanon, the only country with a Christian majority population prior to 1980, the Christians comprised 67% of the population at the beginning of the century. Today it is 40%. In the Holy Land, the Christian population is estimated to be 125,000 or 1.8% of the population of Israel as compared to 2.3 million Muslims or 34.3% of the population. In every country of the Near East the Christian population has decreased.
Sweden: Parliament Approves Resolution on Armenian Genocide
16 Mar 2010 According to an Assyrian news agency, 1.5 million Armenians, 750,000 Assyrians, and 500,000 Greeks died as a result of the genocide.
Conference on Assyrian Genocide to Be Held in Armenia
23 Mar 2012 It is estimated that 750,000 Assyrians were killed in World War one (75%), 500,000 Greeks and 1.5 million Armenians. The conference, titled Assyrian Genocide (1914-1923) and its Consequences in the Modern World, ...
Assyrian Universal Alliance (AUA) - Australia Region. August 16, 2012.
Member, Unrepresented Nations and Peoples Organization (UNPO)
Millions of indigenous ethnic souls perished as a result of the savagery of the Ottoman Turks. As a result of this genocide, the Assyrians lost all their territories within the borders of modern Turkey. At least 750,000 Assyrians, 1.5 million Armenians and 500,000 Greeks from the Pontos region and many others were exterminated in unbelievable horror scenes of massacres and deportations, and hundreds of thousands of children and women were abducted and forced into Turkification and Islamisation, said Mr. Shahen.
Turkish High School History Book Portrays Assyrians as Traitors
5 Oct 2011 It frames World War I as a breaking point in which Assyrians betrayed and stabbed the country in the back by cooperating ... It is estimated that 750,000 Assyrians were killed (75%), 500,000 Greeks and 1.5 million Armenians.
UNPO: Assyria: Friendship Group Discusses Syria Situation
16 Aug 2012 At least 750,000 Assyrians, 1.5 million Armenians and 500,000 Greeks from the Pontos region and many others were exterminated in ...
Encyclopedia of Transnational Crime and Justice - Page 163
Margaret E. Beare - SAGE, Apr 26, 2012 - Law - 544 pages
During World War I, the Turks of the Ottoman Empire engaged in the genocide of the Assyrian people. It is estimated that 750,000 Assyrians were killed or deported. During and after World War I, ... It is estimated that several hundred thousand Greeks died. Also during World War I, ... It is estimated that approximately one and a half million Armenians were slaughtered or deported. The Turkish government ...
Review of Armenian studies - Volume 5, Issues 13-16 - Page 65
ASAM Institute for Armenian Research - 2007
It was stipulated that this monument represented not only the Armenian genocide allegations, but also all genocide victims and ... for "750.000 Assyrian, 400.000 Greek and 1.500.000 Armenian victims of genocide perpetrated by the Turks".
Sources for the figure (over) 3.5 million:
Congressional Record - Page 6104
Congressional Record-House April 24, 2001
... Unfortunately there were others included in this massacre, including Assyrians and Pontic Greeks, bringing the number to well over 3.5 million lost lives.
Notes on the Genocides of Christian Populations of the Ottoman Empire
Submitted in support of a resolution recognizing the Armenian, Assyrian, and Pontic and Anatolian Greek genocides of 1914-23, presented to the membership of the International Association of Genocide Scholars (IAGS), 2007.
Ottoman Genocide against Christian Minorities: General Comments and Sources
"It is believed that in Turkey between 1913 and 1922, under the successive regimes of the Young Turks and of Mustafa Kemal (Ataturk), more than 3.5 million Armenian, Assyrian and Greek Christians were massacred in a state-organized and state-sponsored campaign of destruction and genocide, aiming at wiping out from the emerging Turkish Republic its native Christian populations. This Christian Holocaust is viewed as the precursor to the Jewish Holocaust in WWII. To this day, the Turkish government ostensibly denies having committed this genocide."
Prof. Israel Charney, President of the IAGS
"Turks admit that the Armenian persecution is the first step in a plan to get rid of Christians, and that Greeks would come next. ... Turkey henceforth is to be for Turks alone."
Peter Balakian, The Burning Tigris, quoting the New York Times, September 14, 1915.
"While the death toll in the trenches of Western Europe were close to 2 million by the summer of 1915, the extermination of innocent civilians in Turkey (the Armenians, but also Syrian and Assyrian Christians and large portions of the Greek population, especially the Greeks of Pontos, or Black Sea region) was reaching 1 million."
Peter Balakian, The Burning Tigris, p. 285-286.
Genocide (Armenia and Assyria) - United Kingdom Parliament
7 Jun 2006 : Column 130WH
Genocide (Armenia and Assyria)
Stephen Pound (Ealing, North) (Lab): It is a pleasure to appear before you this afternoon, Mr. Cook, and a particular honour that my right hon. Friend the Minister for Europe will be responding for the Government. Few Ministers, if any, know more about the subject and I could not have chosen a better Minister to respond.
I start with a minor point. The subject of this Adjournment debate appears on the Order Paper as Genocide in Armenia and Assyria. I am not seeking to apportion blame, but that is not the title that was submitted. The original title was Recognition of the genocide of Armenians and Assyrians. It would be obvious to you, Mr. Cook, and to many people, that to talk about genocide in Armenia, a country that has existed in its present form for a comparatively short time, and Assyria, a country that might have a millennia-old history but is not recognised in international boundaries, would be superfluous.
I wish to speak about the incidents in the then Ottoman empire, particularly in the spring of and throughout 1915, that led, I hope indisputably, to the planned, calculated genocide of the Christian community, which consisted principally of Armenians, Assyrians and Greeks. I shall seek to persuade my right hon. Friend that the time has finally come for Her Majestys Government to join so many other countries, Parliaments and legislatures in recognising the genocide that occurred in that year.
I hope that it will be comparatively uncontentious to state a few basic facts. One and a half million Armenian residents of the former Ottoman empire died between 1915 and 1923 as a result of calculated genocide. I hope that it is not contentious to say that 3.5 million of the historic Christian population of Assyrians, Armenians and Greeks then living in the Ottoman empire had been murderedstarved to death or slaughteredor exiled by 1923. I hope that those are not contentious points. I hope that no one would seek to deny that the process started on 24 April 1915 in Constantinople, where 1,000 Armenians were identified, taken from their homes and murdered. I hope that it is not contentious to reaffirm that 300,000 Armenian males were then conscripted into the Turkish army, unarmed and then murdered, and that death marches into the Syrian desert took place.
British MP Raises the Issue of the Genocide of Armenian, Assyrian and Greek Christians in the UK House of Commons
London, 20 June 2006: Steve Pound, a distinguished Labour MP for North Ealing, London, has recently raised for discussion in the House of Commons the issue of the Genocide of Turkey's Armenian, Assyrian and Greek Christians and called on the British Government to formally recognize it.
"I wish to speak about the incidents in the then Ottoman empire, particularly in the spring of and throughout 1915, that led, I hope indisputably, to the planned, calculated genocide of the Christian community, which consisted principally of Armenians, Assyrians and Greeks. I shall seek to persuade that the time has finally come for Her Majesty's Government to join so many other countries, Parliaments and legislatures in recognising the genocide that occurred in that year... I hope that it is not contentious to say that 3.5 million of the historic Christian population of Assyrians, Armenians and Greeks then living in the Ottoman Empire had been murdered-starved to death or slaughtered-or exiled by 1923," Mr. Pound pointed out.
The Genocide of Ottoman Greeks, 1914-1923 | Neos Kosmos
24 Sep 2012 A total of more than 3.5 million Greeks, Armenians, and Assyrians were killed under the successive regimes of the Young Turks and of Mustafa Kemal from...
The Genocide of Ottoman Greeks, 1914-1923 | RutgersNewark
[The Genocide - Memorials]
Order of AHEPA-Palmetto 284 and Daughters of Penelope-Alethia 302. Inscriptions: In memory of 3.5 million Greek, Assyrian, Armenian victims of the genocide ...
Thank you for the links. Here's what I have posted in the past.
Recognition of the Armenian Genocide by Turkey is a secondary issue interview with Harut Sassounian. [An Armenian living in southern California.]
[Excerpt] "The real purpose of the resolution is not recognition of the Armenian Genocide, but a political struggle the issue of which side has a larger political capital in Washington. . . . [T]he admission of the Armenian Genocide by Turkey is an issue of secondary importance for us. . . . our lands were seized and our 3,000-year-old culture was destroyed. . . . Therefore, our true demand is compensation for this injustice. . . . Now specialists must study the lawyers' advice and decide which issue should be submitted to which court, as there is the International Court of Justice, European Court of Human Rights, US Federal Courts, etc. This is a most important issue. It must be studied with all seriousness, because, if we lose in court, Turkey will claim that Armenians have no legal demands."
The Armenians, here and in Armenia, wage a "political struggle [to get] political capital in Washington."
We have enough problems! Let Turkey and the Armenians settle this without disrupting our society by demanding resolutions siding with one against the other.
Apparently this particular link to the NY Times article is no longer good: http://www.firstgenocide.org/press3.html. "500,000 Armenians Said To Have Perished. Washington Asked to Stop Slaughter of Christians by Turks and Kurds." The New York Times, September 24, 1915. BTW, some radical Kurds (the Marxist PKK) claim the same parts of eastern Turkey as Armenia claims, I believe.
Ottoman Turkey's ally, Kaiser Wilhelm II, had German military personnel stationed in Ottoman Turkey.
"German Responsibility in the Armenian Genocide: A Review of the Historical Evidence of German Complicity" by Vahakn N. Dadrian.
[Excerpt] "Dadrian does not accuse Germany of instigating the Armenian genocide; he argues instead that Germany contributed to the genocide through policies that condoned it and that the German government sanctioned German and Turkish officials who participated in the genocide's implementation."
I believe that the investigations conducted by the western allies following W.W.I revealed all of the above, but they chose not to pursue it.
Source: http://freerepublic.com/focus/f-news/2950482/posts
Much of western military thinking has traditionally assumed that conflicts will involve conventional warfare against an opponent of comparable might, using similar weapons on a known battlefield.
However, military experts have been pointing out for years that resistance forces in places like Chechnya have been conducting a very different kind of war, in which defenders fight on their own terms, not those of the enemy — petrol bombs against tanks, for example. This has been given the name of asymmetrical warfare by counter-terrorism experts, a term that appears to date from the early 1990s. In it, a relatively small and lightly equipped force attacks points of weakness in an otherwise stronger opponent by unorthodox means. All guerrilla activity, especially urban terrorism, falls within this definition.
The attacks on the US on 11 September are a textbook example and the term has had wide coverage since. Some writers extend the idea to any military situation in which a technically weaker opponent is able to gain an advantage through relatively simple means. An obvious example is the landmine — cheap and easy to distribute, but difficult to counter. Another example sometimes given is anti-satellite attacks, in which it is much easier and cheaper to knock out space-based weapons than to put them in place to start with.
Welcome to the world of asymmetrical warfare, a place high on the anxiety list of military planners. In the asymmetrical realm, military experts say, a small band of commandos might devastate the United States and leave no clue about who ordered the attack.
New York Times, Feb. 2001
In this asymmetrical warfare, the weak terrorist attacker has the advantages of selectivity and surprise; the powerful defender must strive to prevent attacks on many fronts.
Newsday, Sep. 2001
Source: http://www.worldwidewords.org/turnsofphrase/tp-asy2.htm
Malware Protection ransomware is an infection that completely takes over your computer. Just like other viruses, it infiltrates the computer when you visit infected websites, open infected files, or click links or pictures sent by email. Once inside, it can encrypt your files or lock your computer. The person who infected your computer will then ask you to pay a certain amount of money to get your computer back to normal functioning.
Malware Protection basically states that a mailing was detected on your computer sending a dangerous virus which can harm other users. The virus is supposedly modified every 24 hours, which makes it very difficult for antivirus programs to detect. According to the message, hackers require $2000 to decrypt your files. Malware Protection offers to do that cheaper, for just $300, and claims to know the password that is needed to unblock your computer. You will be asked to pay via Ukash code, MoneyPak or Paysafecard.
Unfortunately, that is not true, and the creators of Malware Protection ransomware only want to steal your money. If you have received the Malware Protection message, use an alternative scanner to scan your PC and use a reputable antispyware program to eliminate this ransomware from your system.
Source: http://www.kiguolis.com/malware-protection-ransomware/
Some kids are more easily traumatized by shocking events than others. This raises two questions.
Can you predict which ones will be more traumatized after personal abuse, community violence or a natural disaster? And can you lessen the trauma by working with those kids either before the event or right after?
Wendy Silverman, a professor of psychology at Florida International University, answered "yes" to both questions at a recent conference sponsored by the Kansas University Clinical Child Psychology Program.
Kids who live through trauma may suffer an array of problems, she said, from post-traumatic stress disorder to anxiety, phobias and depression. Or they may just act up.
The key word is "may." It's hard to know who will and who won't.
So Silverman looked at 12 large studies focused on persistent reactions to trauma. She came away thinking that four factors are more important than others in predicting whose trauma will last.
The factor of greatest importance is the threat posed by the trauma. Trauma lasts longer when a child perceives his or her life to be at stake or in danger of tremendous loss or disruption.
The second factor is how much support a child has from family and friends. The more the better, Silverman said.
The third factor is how the child behaves in the wake of the trauma. A child who does something constructive is better off than one who just gets angry or withdraws.
Fourth is how stable the child was before the trauma. An anxious kid who's traumatized is worse off than a contented one.
Silverman is working on a research-based questionnaire that can identify vulnerable kids before trauma occurs, or right after, so they can be treated quickly.
Today, it's typical to have a counselor come to a school to talk with kids after a shooting or natural disaster occurs.
The approach may be well-intentioned, but it's not wise, Silverman said. Research shows that the effect of such debriefing is neutral or negative, not positive.
Silverman endorsed the use of cognitive-behavioral therapy for traumatized children.
Here, a child is opened gradually to a traumatic memory and learns new ways to think about the event rather than stay on the same treadmill of thoughts. This lowers the feeling surges that traumatic memories bring, Silverman said.
The approach is called cognitive-behavioral therapy because besides thinking, the child does something related to the trauma.
A kid in New York might visit the site where the World Trade Center stood, for example, or a Kansas youngster might draw a picture of her house after a tornado.
Of course some traumas don't allow much doing. In those cases, the only refuge may be the mind.
Victor Frankl, a psychotherapist who survived a Nazi concentration camp, wrote of himself and his fellow inmates, "We needed to stop asking ourselves about the meaning of life, and instead to think of ourselves as those who were being questioned by life -- daily and hourly."
In other words, Frankl stopped regarding himself as a victim and became a student of the experience. Later, he imagined being released from the camp and then lecturing to students about the experience.
Most of us aren't Frankl. We need some coaching in how to use our minds to get through hard spots. Research like Silverman's ensures that the coaching is not just well-intentioned but wise.
Source: http://www2.ljworld.com/news/2004/nov/21/therapy_helps_counter/
• abracadabra •
Pronunciation: æ-brê-kê-dæ-brê
Part of Speech: Interjection, Noun
Meaning: 1. (Interjection) An incantation that is supposed to work magic. 2. (Noun) Gibberish, nonsense, mumbo-jumbo, hocus-pocus.
Notes: There isn't much to say about this word; it is a perfect lexical orphan without any family at all. The noun usage does allow a plural, abracadabras, but that is all we can say about it.
In Play: This word is the word used in the performance of some magicians to leave the impression that what they do is real magic: "I can't just say 'abracadabra' and the money for a bicycle, poof, just appears!" However, since magicians are known to perform legerdemain, this word has come to be a noun in the second sense above: "This company runs on abracadabra accounting, and it is just a matter of time before someone catches on."
Word History: The first known mention of the word was in the second century AD in a book called Liber Medicinalis by Quintus Serenus Sammonicus, physician to the Roman emperor Caracalla. Sammonicus prescribed that malaria sufferers wear an amulet containing the word written in the form of a triangle (see the graphic to the left). It was used as a magical formula by the Basilides Gnostics to invoke the aid of beneficent spirits against disease and misfortune. It is found on Abraxas stones worn by the Gnostics as amulets. It probably started out as a rhyming compound of the word Abraxas [abrak-sas]: abrak-adabra. (I wish we could say 'abracadabra' and words like today's Good Word would magically appear, but we needed Agoran Eric Berntsen to suggest this one.)
|
<urn:uuid:3a3b2bcb-266f-490b-a65e-980ad79db058>
|
CC-MAIN-2016-26
|
http://www.alphadictionary.com/bb/viewtopic.php?p=37179
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397636.15/warc/CC-MAIN-20160624154957-00126-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.955056 | 427 | 2.890625 | 3 |
Q: You were discussing the expression “a tissue of lies” on the radio some time ago. I think it may come from un tissu de mensonges, French for “a tissue of lies.” The noun tissu is the past passive participle of the archaic verb tistre, meaning to weave, according to my Harrap’s Shorter French and English Dictionary. Hence the French expression would indicate a number of lies closely woven together.
A: The word “tissue” (originally spelled “tyssu”) entered English in the mid-14th century. It’s derived from an Old French noun, tissu, meaning “a kind of rich stuff,” according to the Oxford English Dictionary. In English, a “tissue” originally meant a rich cloth often interwoven with gold and silver.
The first citation for the word in the OED is from The Romaunt of the Rose, an English translation (often attributed to Chaucer) of a French allegory: “The barres were of gold ful fyne, / Upon a tyssu of satyne.”
By the early 18th century, the word “tissue” was being used in a figurative way to mean a network or web of negative things. In 1711, for example, Joseph Addison dismissed some poems as “nothing else but a Tissue of Epigrams.”
In 1762, Oliver Goldsmith referred to the history of Europe as “a tissue of crimes, follies, and misfortunes.” And in 1820, Washington Irving complained about a “tissue of misrepresentations.”
I haven’t researched this usage in a French etymological dictionary, but nothing in the OED suggests that we got the negative use of “tissue” from France. In fact, I wouldn’t be surprised if the French got the usage from us.
Buy Pat’s books at a local store or Amazon.com.
|
<urn:uuid:33207632-f892-494d-866c-73e8f2147156>
|
CC-MAIN-2016-26
|
http://www.grammarphobia.com/blog/2008/06/a-tissue-of-lies.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783408828.55/warc/CC-MAIN-20160624155008-00071-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.948518 | 440 | 2.953125 | 3 |
Here's the launch and separation of Akatsuki from the launch vehicle (2 min 46 seconds)
The next clip runs about 12 minutes and details (with English subtitles) the Akatsuki spacecraft and mission. It's quite impressive and if all goes well, will provide a wealth of information about the climate of Venus, which in turn may help us to better understand our own planet.
As a sailor, I was curious about the Ikaros mission. Particularly, how one could use sunlight to "tack" "upwind" in space to a planet closer to the Sun. Essentially, the sail is set so that the pressure of sunlight (the push of photons, not of the charged particles of the solar wind) slows the spacecraft's orbital velocity around the Sun. In other words, the sail is used as a brake. As the velocity drops, the craft's orbit gets smaller - closer and closer to Venus. I found a website at Cal Tech with an excellent explanation of this - Tacking Solar Sails.
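The "sail as a brake" idea can be checked with the vis-viva relation: at a fixed distance from the Sun, a lower speed means a smaller semi-major axis, so shedding speed spirals the craft inward. A rough Python sketch (the constants are standard published values, and the 0.5 km/s nudge is purely illustrative, far larger than what a sail delivers at any one moment):

```python
import math

MU_SUN = 1.32712440018e20   # Sun's gravitational parameter, m^3/s^2
AU = 1.495978707e11         # astronomical unit, m

def circular_speed(r):
    # Speed of a circular orbit of radius r around the Sun.
    return math.sqrt(MU_SUN / r)

def semi_major_axis(r, v):
    # vis-viva: v^2 = mu * (2/r - 1/a)  =>  a = 1 / (2/r - v^2/mu)
    return 1.0 / (2.0 / r - v * v / MU_SUN)

v0 = circular_speed(AU)                    # about 29.8 km/s at Earth's distance
a_slow = semi_major_axis(AU, v0 - 500.0)   # sail trimmed to brake
a_fast = semi_major_axis(AU, v0 + 500.0)   # sail trimmed to accelerate
print(a_slow < AU < a_fast)                # braking shrinks the orbit
```

Run it and the comparison confirms the point in the text: the slowed orbit has a smaller semi-major axis than the original, while speeding up raises it.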
The Ikaros spacecraft (Interplanetary Kite-craft Accelerated by Radiation Of the Sun) carries a sail that is just 0.0075 mm thick, wound around its core. Ikaros will spin and unwind the sail using centrifugal force. When deployed, the square sail will measure 46 feet on a side. This clip is in Japanese, but you can get the (ahem) drift by watching the animation. There are thin-film solar cells and dust counters attached to the sail. In addition to testing the sailing concept, the development of this craft, and of a much larger one later on, will lead to lower-cost solar cells - an important element of Japan's efforts to reduce greenhouse gas emissions.
Fair winds Ikaros and Akatsuki.
JAXA website is here: Japan Aerospace Exploration Agency
|
<urn:uuid:e624ddf7-dec7-4237-bce8-8b8c07b10504>
|
CC-MAIN-2016-26
|
http://pacific-islander.blogspot.com/2010_05_01_archive.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783404382.73/warc/CC-MAIN-20160624155004-00168-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.935596 | 365 | 3.640625 | 4 |
Mar 27, 2008
Results of a study using functional magnetic resonance imaging (fMRI) published March 25 in the journal PLoS ONE suggest that positive emotions such as loving-kindness and compassion can be learned in the same way as playing a musical instrument or being proficient in a sport. The scans revealed that brain circuits used to detect emotions and feelings were dramatically changed in subjects who had extensive experience practicing compassion meditation.
Abstract. Recent brain imaging studies using functional magnetic resonance imaging (fMRI) have implicated insula and anterior cingulate cortices in the empathic response to another's pain. However, virtually nothing is known about the impact of the voluntary generation of compassion on this network. To investigate these questions we assessed brain activity using fMRI while novice and expert meditation practitioners generated a loving-kindness-compassion meditation state. To probe affective reactivity, we presented emotional and neutral sounds during the meditation and comparison periods. Our main hypothesis was that the concern for others cultivated during this form of meditation enhances affective processing, in particular in response to sounds of distress, and that this response to emotional sounds is modulated by the degree of meditation training. The presentation of the emotional sounds was associated with increased pupil diameter and activation of limbic regions (insula and cingulate cortices) during meditation (versus rest). During meditation, activation in insula was greater during presentation of negative sounds than positive or neutral sounds in expert than in novice meditators. The strength of activation in insula was also associated with self-reported intensity of the meditation for both groups. These results support the role of the limbic circuitry in emotion sharing. The comparison between meditation vs. rest states between experts and novices also showed increased activation in amygdala, right temporo-parietal junction (TPJ), and right posterior superior temporal sulcus (pSTS) in response to all sounds, suggesting greater detection of the emotional sounds and enhanced mentation in response to emotional human vocalizations for experts than novices during meditation.
Together these data indicate that the mental expertise to cultivate positive emotion alters the activation of circuitries previously linked to empathy and theory of mind in response to emotional stimuli.
Citation: Lutz A, Brefczynski-Lewis J, Johnstone T, Davidson RJ (2008) Regulation of the Neural Circuitry of Emotion by Compassion Meditation: Effects of Meditative Expertise. PLoS ONE 3(3): e1897. doi:10.1371/journal.pone.0001897
Dec 18, 2006
Psychother Psychosom Med Psychol. 2006 Dec;56(12):488-492
Authors: Neumann NU, Frasch K
Meditation in general can be understood as a state of complete and unintentional silent and motionless concentration on an activity, an item or an idea. Subjectively, meditative experience is said to be fundamentally different from "normal" mental states and is characterized by terms like timelessness, boundlessness and lack of self-experience. In recent years, several fMRI and PET studies of meditation, which are presented in this paper, have been published. Due to different methods, especially different meditation types, the results are hardly comparable. Nevertheless, the data suggest the hypothesis of a "special" neural activity during meditative states that differs from that during calm alertness. Main findings were increased activation in frontal, prefrontal and cingulate areas, which may represent the mental state of altered self-experience. In the present studies, a considerable lack of scientific standards has to be noted, leaving them of merely casuistic value. Today's improved neurobiological examination methods - especially neuroimaging techniques - may help to elucidate the phenomenon of qualitatively different states of consciousness.
Dec 13, 2006
Effects of transcendental meditation practice on interhemispheric frontal asymmetry and frontal coherence
Cross-sectional and longitudinal study of effects of transcendental meditation practice on interhemispheric frontal asymmetry and frontal coherence.
Int J Neurosci. 2006 Dec;116(12):1519-38
Authors: Travis F, Arenander A
Two studies investigated frontal alpha lateral asymmetry and frontal interhemispheric coherence during eyes-closed rest, Transcendental Meditation (TM) practice, and computerized reaction-time tasks. In the first study, frontal coherence and lateralized asymmetry were higher in 13 TM subjects than in 12 controls. In the second study (N = 14), a one-year longitudinal study, lateral asymmetry did not change in any condition. In contrast, frontal coherence increased linearly during computer tasks and eyes-closed rest, and as a step-function during TM practice-rising to a high level after 2-months TM practice. Coherence was more sensitive than lateral asymmetry to effects of TM practice on brain functioning.
|
<urn:uuid:ad30fefb-8915-48c5-a820-f59699e7934f>
|
CC-MAIN-2016-26
|
http://gaggio.blogspirit.com/tag/brain+imaging
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399522.99/warc/CC-MAIN-20160624154959-00004-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.923894 | 996 | 3.171875 | 3 |
Loathe milk and all things dairy? You’re not alone — but you do have a dilemma on your hands. Passing on moo-juice means you’re passing up a fabulous source of calcium (and its partner vitamin D), the most essential vitamins for healthy bones.
Getting enough calcium in your diet is crucial for bone health and preventing osteoporosis. Furthermore, your body needs vitamin D to maximize your bones’ ability to use that calcium and fight off osteoporosis. That’s why it’s important to still score ample amounts of calcium and vitamin D, even if you despise dairy.
While you can find calcium in many dietary sources, vitamin D is only readily available from sunlight or from a short list of foods and supplements, and the goal is about 600 IU of vitamin D per day.
Here’s how to keep your bones in tip-top shape.
Should You Supplement?
At first, supplements seem like a no-brainer. About 43 percent of adults take a calcium supplement and about 37 percent take a supplement that includes vitamin D. But keep this in mind: Supplements may increase the non-milk calcium in your diet, but calcium is absorbed much more effectively through dietary sources.
What’s more, a recent analysis of nutritional information from 9,475 adults in the National Health and Nutrition Examination Survey showed that, even with calcium supplements, most Americans (especially those over age 50) are not getting enough calcium to prevent osteoporosis.
The U.S. Department of Agriculture’s dietary recommendations say that dairy products are the most significant source of calcium. So people who can’t drink milk — either because they are lactose intolerant or just can’t stand the taste of it — have to work hard to find other sources of calcium. Fortunately, many other foods have calcium, including a multitude of calcium-fortified foods to protect you from osteoporosis.
Milk-Free Ways to Get Your Calcium
Start early — in the day, that is, by piling on the calcium at breakfast with calcium-fortified orange juice, suggests registered dietitian Roberta Anding, RD, LD, CDE, of the Baylor College of Medicine in Houston. Fortified orange juice has about 500 milligrams of calcium per one cup serving. Couple that with a calcium-fortified cereal, which may have between 250 and 1,000 mg of calcium per serving. In contrast, low-fat milk has about 305 mg per serving; so, with smart planning, you can actually get quite a dose of calcium with your first meal of the day.
These foods also have great calcium content:
- 8 ounces of plain yogurt, 452 mg calcium
- 1.5 ounces of Romano cheese, 452 mg
- ½ cup of tofu, 434 mg
- 1.5 ounces of Swiss cheese, 336 mg
- 3 ounces of sardines, 325 mg
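The planning arithmetic above is simple enough to tally directly. A small Python sketch (the food names and per-serving figures come from this article; the dictionary layout and function are just illustrative):

```python
# Calcium per serving, in mg, using the figures quoted above.
calcium_mg = {
    "fortified orange juice (1 cup)": 500,
    "fortified cereal (low end)": 250,
    "low-fat milk (1 cup)": 305,
    "plain yogurt (8 oz)": 452,
    "tofu (1/2 cup)": 434,
}

def total_calcium(items):
    # Sum the calcium contributed by a list of servings.
    return sum(calcium_mg[item] for item in items)

dairy_free_breakfast = ["fortified orange juice (1 cup)",
                        "fortified cereal (low end)"]
print(total_calcium(dairy_free_breakfast))  # 750 mg, more than two glasses of milk
```

Even with the cereal at the low end of its range, the milk-free breakfast beats two cups of low-fat milk (610 mg).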
There are also beverage alternatives to milk, such as soy milk or rice milk, which are calcium and vitamin D enriched and come in a variety of flavors, such as vanilla or chocolate. Experiment to find out if you like these drinks better than cow’s milk.
How to Sneak in Milk
If you think drinking plain milk is just plain “yuck,” there are other ways to get the calcium and vitamin D of milk into your diet.
Here are Anding’s favorite ways to use milk in recipes:
- Smoothies. Put a 12-ounce bag of semi-frozen fruit and two small cartons of flavored Greek yogurt into a blender and process. “It is so smooth and creamy that it tastes like sorbet,” she says.
- Soup. Many fresh, pureed vegetables soups, like potato and squash soups, can be made with milk or buttermilk, providing you with a sneaky dose of calcium and vitamin D. You can also use milk in canned soups that require liquid, like canned tomato soup. Add some fresh basil, too, suggests Anding.
- Sauces. White sauces for pastas made with milk and cheese will provide you with some calcium. Opt for low-fat choices.
- Coffee drinks. If you like the flavor of coffee, try coffee drinks with a healthy serving of milk, such as iced coffee, café au lait, and cappuccino. Cut down on saturated fat by requesting that no- or low-fat milk be used.
If a tall, cold glass of milk isn’t your thing, don’t sweat it — you can still prevent osteoporosis and build strong bones with a variety of other foods and beverages.
Last Updated: 7/5/2011
|
<urn:uuid:e74a79d7-2044-4b8f-8051-8d29a5dd2bc4>
|
CC-MAIN-2016-26
|
http://www.everydayhealth.com/osteoporosis/bone-builders-for-milk-haters.aspx
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396872.10/warc/CC-MAIN-20160624154956-00192-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.922535 | 992 | 2.515625 | 3 |
The best text for understanding Duhem's view on relativity is German Science. There have been many complaints about this book; for instance, it has been called an unfortunate piece of (World War I) war propaganda. What offends about the work is what is usually called its caricature of the 'German mind'. There's no doubt that the work presents us with something of a caricature; but a caricature is not a wholly inaccurate portrait. A caricature involves some distortion, but only for the purposes of bringing out particularly recognizable or distinctive points. And Duhem himself is quite clear that in talking about the 'German mind' he merely wants to indicate a tendency that arises from the way the Germans teach and learn science, not to make universal statements about Germans. As he notes, there is no trace of the exclusively 'English mind' in Newton, and no trace of the exclusively 'German mind' in Gauss. His interest in the subject, actually, is that it provides a useful context for investigating the mentality that is ideal for scientific work. The Germans just happen to be the concrete case that (he thinks) comes closest to a pure case of one element of this mentality.
We often tend to talk as if scientific progress were unilinear, as if all scientists had one type of mentality in their scientific work, one methodical approach. Duhem does not. Duhem has a Pascalian view of the human mind, which means he thinks there are two (major) kinds of mentality. The first is the esprit de finesse, the intuitive mind; the second is the esprit de géométrie, the geometrical mind. All human beings have both to some extent. A rare few have close to the perfect balance of both. Most of us, however, tip strongly to one side. Some of us are primarily intuitive, some primarily geometrical. There are several different sorts of both. For instance, one sort of intuitive mind (Duhem calls it the 'English mind') is heavily imaginative -- it relies on models, picture-thinking, metaphors. Another (the 'French mind' -- but Duhem is very clear that it is the French mind as it used to be, in the days of Pasteur or Ampère) is very formal; it eschews the messiness of models and pictures in favor of formal structures that lay things out neatly and clearly. No doubt there are other possible variants. The 'German mind', Duhem thinks, is geometrical.
Duhem insists that the healthy progress of science requires the active participation of both intuitive and geometrical minds. In other words, there are two lines of progress in science, each correcting the excesses of the other; both are essential if science is not to lose its way. The intuitive mind (and we are talking here chiefly of the formal-intuitive mind) is the mentality that allows for definite discovery; it is what keeps us grounded in reality. It calls us back to common sense, and provides the general background principles for rational discussion. Its two great characteristics are clarity and good sense. When the intuitive mind adds something to science, the addition illuminates. It articulates an explanation that makes sense because it is clearly linked to the common principles rational human beings have in common. The geometrical mind, on the other hand, is the mentality that is deductive, rigorous, and precise. It follows reasoning wherever it goes. It is disciplined and patient in a way the intuitive mind is not. Whereas the intuitive mind is often the source of a new scientific discipline, it is the geometrical mind that takes the principles provided by the intuitive mind and sets them into a rigorous logical or mathematical order so that their consequences can be followed to the very end.
Duhem's constant worry throughout German Science is the imperialism of the geometrical mind. The geometrical mind is very rigorous and logical; but in another sense it is very unruly. The geometrical mind is impressed by reasoning as such; it is careless about the starting points of the deduction. Indeed, these are treated as almost insignificant; the geometrical mind just posits whatever starting points are convenient for whatever it is doing. There is no absolute problem with this; but there is the danger that the geometrical mind, carried away with following out a line of reasoning to its bitter end, will stifle or completely ignore the intuitive mind. It is the intuitive mind, remember, that keeps reasoning grounded in reality; it is also the intuitive mind that has the real skill to recognize when our reasoning has brought us to a genuine absurdity. Duhem's worry is that science is in danger of being highjacked by the geometrical mind's tendency to be seduced by sophisticated reasoning, thus losing sight of the reality it is really supposed to be explaining.
Nonetheless, even when the geometrical mind gets carried away, it is making genuine contributions to the progress of science. It is only if the intuitive mind is pushed out that we have serious problems. So for Duhem, a step forward in the progress of science can be a step forward either by the intuitive mind or by the geometrical mind. Duhem considers the theory of relativity to be a useful step forward along the geometrical line of progress. It tells us how you can go about preserving Maxwell's equations in the face of a number of perplexities; it allows us to make precise and accurate predictions we could not otherwise make. There is no question that Duhem considers this to be a valuable step forward.
However, what Duhem wants, and what he's not getting, is for the geometrical mind to allow the formal-intuitive mind to look at the theory of relativity and say, "OK, use it insofar as it is useful. But notice that we come up with several conclusions down the road that seem counterintuitive. Let's see if we can take what we've learned from the theory of relativity and go back to re-analyze the foundations from which it set out, in order to see if we can develop a theory that does not have these counterintuitive conclusions but preserves much of what is valuable about the theory of relativity. If we can find such a theory, that would be even better than the theory of relativity." Clearly, we can be wrong about the general principles of good sense or common sense, and sometimes have been; but Duhem finds it worrisome that so many people are willing to say, "By positing this starting point (the principles that will maintain the form of Maxwell's equations) and rigorously following our deductions through to useful effect, we have proven that such-and-such common-sense principle is false."
He recognizes that there is a practical value in the particular posited starting-point of the theory of relativity, and that the theory of relativity has numerous other practical values that show that it is, indeed, a major contribution to scientific progress: beauty, simplicity, predictive power. But it is the geometrical mind that is interested in these pragmatic values in the first place. The geometrical mind is interested in what you can do with scientific theories; it is interested in how they can facilitate the deductive processes so central to its approach. The formal-intuitive mind, however, is much less interested in pragmatic values like the beauty, simplicity, and predictive power of the theory. The formal-intuitive mind is not so much interested in what you can do with the theory, but in what it makes obvious. The epistemological goal of the formal-intuitive mind is not a pragmatically valuable theory; it is the theory that makes things clear and obvious. The geometrical mind likes that you can use the theory of relativity to calculate satellite orbits; thinking in terms of clocks and rubber sheets and elevators might perhaps enchant the imaginative-intuitive mind for a while; but the formal-intuitive mind is left in the dark if it is not allowed to use the theory of relativity to progress along its own line of interest. The formal-intuitive mind can accept the theory of relativity as a valuable contribution of the geometrical mind, but only on its own terms, which require using what we learn from it in order to find a more common-sensical theory. Duhem is worried about the tendency of the geometrical mind to try to shut this down entirely. 
This heedlessness, this refusal even to take into account the fact that not all minds can be satisfied with what satisfies the geometrical mind, is Duhem's real irritation when it comes to the theory of relativity -- it is not the theory itself, but the refusal to recognize even the existence of the formal-intuitive mind and its needs. It is only in the cooperation of the geometrical and the intuitive minds that ideal science exists (German Science, p. 110):
French science, German science, both deviate from ideal and perfect science, but they deviate in two opposite ways. The one possesses excessively that with which the other is meagerly provided. In the one, the mathematical mind reduces the intuitive mind to the point of suffocation. In the other, the intuitive mind dispenses too readily with the mathematical mind.
Science needs the geometrical mind for rigor; but it needs the intuitive mind for truth. Such is Duhem's view, anyway. As he insists, "For science to be true, it is not sufficient that it be rigorous; it must start from good sense, only in order to return to good sense" (p. 111).
|
<urn:uuid:8ad3ce7c-f3db-44d2-8156-10eeeb575ded>
|
CC-MAIN-2016-26
|
http://branemrys.blogspot.com/2005/05/german-science.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402479.21/warc/CC-MAIN-20160624155002-00136-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.946403 | 1,939 | 2.8125 | 3 |
In the movie Alien, the title character is an extraterrestrial creature that can survive brutal heat and resist the effects of toxins.
In real life, organisms with similar traits exist, such as the "extremophile" red alga Galdieria sulphuraria.
In hot springs in Yellowstone National Park, Galdieria uses energy from the sun to produce sugars through photosynthesis.
In the darkness of old mineshafts in drainage as caustic as battery acid, it feeds on bacteria and survives high concentrations of arsenic and heavy metals.
How has a one-celled alga acquired such flexibility and resilience?
To answer this question, an international research team led by Gerald Schoenknecht of Oklahoma State University and Andreas Weber and Martin Lercher of Heinrich-Heine-Universitat (Heinrich-Heine University) in Dusseldorf, Germany, decoded genetic information in Galdieria.
They are three of 18 co-authors of a paper on the findings published in this week's issue of the journal Science.
The scientists made an unexpected discovery: Galdieria's genome shows clear signs of borrowing genes from its neighbors.
Many genes that contribute to Galdieria's adaptations were not inherited from its ancestor red algae, but were acquired from bacteria or archaebacteria.
This "horizontal gene transfer" is typical for the evolution of bacteria, researchers say.
However, Galdieria is the first known organism with a nucleus (called a eukaryote) that has adapted to extreme environments based on horizontal gene transfer.
"The age of comparative genome sequencing began only slightly more than a decade ago, and revealed a new mechanism of evolution—horizontal gene transfer—that would not have been discovered any other way," says Matt Kane, program director in the National Science Foundation's (NSF) Division of Environmental Biology, which funded the research.
"This finding extends our understanding of the role that this mechanism plays in evolution to eukaryotic microorganisms."
Galdieria's heat tolerance seems to come from genes that exist in hundreds of copies in its genome, all descending from a single gene the alga copied millions of years ago from an archaebacterium.
"The results give us new insights into evolution," Schoenknecht says. "Before this, there was not much indication that eukaryotes acquire genes from bacteria."
The alga owes its ability to survive the toxic effects of such elements as mercury and arsenic to transport proteins and enzymes that originated in genes it swiped from bacteria.
It also copied genes offering tolerance to high salt concentrations, and an ability to make use of a wide variety of food sources. The genes were copied from bacteria that live in the same extreme environment as Galdieria.
"Why reinvent the wheel if you can copy it from your neighbor?" asks Lercher.
"It's usually assumed that organisms with a nucleus cannot copy genes from different species—that's why eukaryotes depend on sex to recombine their genomes.
"How has Galdieria managed to overcome this limitation? It's an exciting question."
What Galdieria did is "a dream come true for biotechnology," says Weber.
"Galdieria has acquired genes with interesting properties from different organisms, integrated them into a functional network and developed unique properties and adaptations."
In the future, genetic engineering may allow other algae to make use of the proteins that offer stress tolerance to Galdieria.
Such a development would be relevant to biofuel production, says Schoenknecht, as oil-producing algae don't yet have the ability to withstand the same extreme conditions as Galdieria.
Source: National Science Foundation
|
<urn:uuid:5fc3d6c6-095d-4d70-8a8a-6dddd5de3818>
|
CC-MAIN-2016-26
|
http://www.rdmag.com/news/2013/03/how-thrive-battery-acid-and-among-toxic-metals
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395039.24/warc/CC-MAIN-20160624154955-00113-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.940732 | 781 | 4.25 | 4 |
Diesel multiple unit (DMU) is an increasingly attractive technology for providing passenger service on short and medium distance, non-electrified railroad lines.
A DMU is a railroad passenger vehicle that is powered by a built-in diesel engine. The output of this engine is transferred to the wheels via a mechanical, hydraulic or electrical coupling. DMUs can be operated singly, or with any number of units coupled together and controlled by a single operator.
DMUs feature a much lower cost of operation than trains of comparable capacity hauled by diesel locomotives, as well as the ability to operate much like conventional light rail vehicles but without requiring electrification. This has made them attractive for use on branch lines with relatively light passenger traffic and for temporary use on lines with greater traffic potential but which are not yet electrified.
In addition to their great flexibility to operate in any length, DMUs also feature substantially lower fuel consumption and noise output than locomotive hauled trains as well as faster acceleration. Moreover, it is much easier to reverse operation at the end of a line, because there is typically an operator's compartment in each end of each car, and thus all that is necessary is for the operator to walk to the other end of the car or train and begin driving from that end.
The biggest advantage of DMUs as compared with electrically operated light rail vehicles is that they can provide a roughly comparable service without the need for electrification. Electrification, including the erection of overhead wires and the building of substations for power conversion, is costly and is usually only economical for lines that have a relatively high frequency of service. Thus, DMUs are usually a better choice for lines with relatively light traffic and lines with heavier traffic that have not yet been electrified.
Disadvantages of DMUs as compared with electrically powered light rail vehicles include higher energy costs per passenger for large traffic volumes, dependence on fossil fuels, the inconvenience of having to maintain fueling facilities and refuel the vehicles, increased air pollution, greater audible noise output, slower acceleration and higher maintenance costs.
DMUs and other self-propelled rail passenger vehicles have been in use in various countries for decades, and they are still employed extensively in Japan, the UK and elsewhere. The most popular and long-lived DMUs in North America are the RDCs (rail diesel cars), which were built by the Budd Company from 1949 through 1956. Some of these are still in operation.
The past several decades have seen a resurgence of interest in local rail passenger transport [1] as a result of growing automobile traffic congestion, rising air pollution, higher fuel prices and increased concern about climate change [2].
In addition to the factors that have been resulting in the resurgence of light rail, the revival of interest in DMUs is also a result of technological advances which have given them increased energy efficiency, reduced emissions (including fine particles), smoother rides, enhanced safety, low floors (for easier entry and exit), lower noise output, simplified maintenance, increased passenger capacity and faster acceleration. Some models also feature sufficient reserve strength to be able to haul several non-powered rail vehicles and the ability to use biodiesel fuels.
The second new North American system to use DMUs, and the first in the U.S., is the River Line, which opened in March 2004 in New Jersey. It runs over existing railroad tracks along the Delaware River between Camden and Trenton, the state capital. This line is 34 miles in length and operates 20 DMUs built by the Swiss company Stadler Rail. The tracks are used at night for freight service to local industries. The construction cost was about $1.1 billion. [4]
In addition, preparations are well under way for an additional DMU service, the Sprinter, which is scheduled to begin revenue service in Southern California in late 2007. It will run 22 miles, mostly on an existing railroad right of way, between Oceanside and Escondido in northern San Diego County. As is the case with the other new lines, the track will also be used for local freight trains.5
Among the urban areas and regions in North America that are considering using DMUs on existing, non-electrified rail lines are Boston, Chicago, Los Angeles, North Carolina and South Florida. It has also been proposed to use DMUs on the Eastside railroad, which runs through Seattle's eastern suburbs. DMUs are manufactured by companies in several countries, including Canada, India, Japan, South Korea, Switzerland and the U.S.
[2] For a look at how rail transit can be an effective weapon in combatting global warming, see How Rail Transit on the Eastside Can Help Fight Climate Change, Eastside Rail Now, March 2007.
[3] For more information about the O-Train, see Ottawa's Pioneering O-Train, Eastside Rail Now, March 2007.
[4] The River Line's home page is www.riverline.com.
[5] The Sprinter's home page is www.gonctd.com/oerail/oerail.html.
This page created January 22, 2007. Last updated March 24, 2007.
Copyright © 2007 Eastside Rail Now! All Rights Reserved.
Virtual Mentor. May 2014, Volume 16, Number 5: 380-384.
Medicine and Society
Baby Boomers’ Expectations of Health and Medicine
Baby boomers are different from the generations that preceded them. They are more savvy, assertive, health-conscious, and engaged in their care.
Eva Kahana, PhD, and Boaz Kahana, PhD
In the early 2010s, the first cadre of baby boomers, born after World War II, turned 65, making them officially senior citizens, and many more are joining their ranks every day. It is generally acknowledged that the entry of the baby boomers into the ranks of elderly consumers of health care is likely to create major challenges. As large numbers of baby boomers cross into old age, there will be greater demands for chronic health care and for meeting the special needs posed by the “graying of disability”—people with disabilities living longer than they did in centuries past. The coming changes in health care needs are generally conceptualized in terms of increasing demand and need for responsiveness by overburdened health care professionals.
Involvement and Assertiveness
But acknowledging only the growing demand for care and the inadequacy of our current system to meet it ignores the advantages of having a new breed of elderly patients. Baby boomers are different from the generations that preceded them; they are more savvy, assertive, health-conscious, and engaged in their care [2, 3]. Even recently, the literature of medical sociology has portrayed older adults as reluctant to speak up to their doctors and passive in communicating about their health care [4, 5]. Consequently, the focus of patient-centered medicine has been on training physicians to draw out shy and reticent elderly patients and provide them with more thorough information about their health care options.
Our recent research has involved interventions to encourage older adults to be more proactive in communicating with their clinicians. As part of a randomized controlled trial (RCT), we are evaluating the efficacy of a patient communication intervention—“Speak Up”—compared to a civic engagement-oriented attention control group—“Connect.” During the four-year period of this study, we have noticed a marked increase in patient preparedness and initiative and an increasingly active group process in which study participants offer advice to one another about speaking up to their primary care physicians and requesting test results and other data about their health status and care needs. Indeed, longitudinal studies of successful aging have revealed important changes in health care consumers’ expectations, involvement in their own health care, and competence in navigating the health care system. Baby boomers are among the most avid consumers of health information and approach their health care providers with far greater initiative than did older adults of yesteryear. Baby boomers value and pursue social engagement and healthy lifestyle behaviors and have high expectations for wellness and independence in late life.
Members of the baby boomer generation are also playing a growing role in long-term care of the oldest old. Growing numbers of them are caregivers to their parents. People are living longer and have smaller families, demographic trends that have created new demands on their baby boomer children. As caregivers who are themselves dealing with the chronic illnesses of later life, boomers can serve as more understanding health care advocates.
There are important implications of this sea change in patient involvement in health care among baby boomers reaching old age. Physicians must now be prepared to interact with older patients in the same way they interact with younger patients, engaging in a more egalitarian dialogue and involving patients more earnestly in decision making.
Older patients of the present and future expect to live more active lives and seek to remain socially engaged, even as they manage chronic illnesses or rehabilitation from disabling health conditions. This generation of self-determining patients is likely to question established principles of medical care, demanding greater attention to their own definitions of health-related quality of life. This is a fundamental move away from the traditional positivistic medical outcome criteria in the direction of “the new subjective medicine” that recognizes and seeks to enhance subjective criteria for health outcomes. Recognition by physicians of the importance of patient values, expectations, and subjective appraisals of health and quality of life can facilitate better communication and shared decision making.
It has been recognized that ageism often limits the choices of older adults regarding long-term care. While there have been meaningful efforts to offer young patients long-term care options that allow maximum control of care received and choice of caregivers, options for elderly patients with disabilities have been far more restrictive. If the new elderly place a higher premium on self-reliance, they may be eager to consider long-term residential options that facilitate independence. Our study of successful aging reveals that, among older adults who retire to Sunbelt retirement communities, those who enter continuing care living facilities maintain independence for a long time, even with multiple comorbidities. Options of this kind, which promote choice and do not necessitate moving to be near other family members, may be particularly popular with baby boomers.
The elderly patients of today use the technological resources of the Internet. Our smartphone-toting baby boomers carry great resources along with great expectations, literally at their fingertips. Mobile phones can enable the majority of older adults to access diverse health interventions, ranging from education to health monitoring and health promotion. This can facilitate patients’ sense of agency and self-efficacy about improving their own health. Technology can play an important role in making it possible for older adults to “age in place,” and online interventions have facilitated patient empowerment. To the extent that patients can retain control, they will happily incorporate technology into their self-care and self-monitoring routines. Accordingly, baby boomers and older adults have shown similar levels of acceptance of monitoring technology as long as doing so facilitates independent living. But clinicians must be sensitive to the wishes of baby boomers and older adults who may not desire externally imposed health and safety monitoring that they view as an invasion of their privacy. For example, our research revealed that very few independently living older adults in the “wired” community of Celebration, Florida, opted to have telemonitoring of their blood glucose or blood pressure by the local health center.
As we consider the implications of this changing health care landscape for physicians, we have to acknowledge that doctors will continue to play a central role in that brave new world. Studies continue to confirm that patients place greatest trust in information they obtain from their physicians, but more and more older patients look for information online before they consult their physicians. Consequently, physicians must embrace technology in meaningful, rather than pro forma, interactions with their older patients. For example, the reluctance of doctors to exchange e-mails with patients deserves a second look. There is evidence that physicians who regularly use e-mail in communicating with colleagues overwhelmingly refrain from doing so with patients. Research indicates that interventions encouraging doctor-patient e-mail communication yielded positive results for both groups. As baby boomers seek efficient and timely communication with physicians to help in coping with chronic illnesses, access to e-mail communication can yield clear benefits.
Despite the possible challenges posed by baby boomers’ expectations for better quality of life in their older years, the health-promoting lifestyles they have embraced and popularized will pay dividends in improved health outcomes and reduced burdens for the physicians treating them. Indeed, it has been argued that interest in active lifestyles and healthy diets by baby boomers is fueling the wellness revolution in our society. It is well recognized that health-promoting lifestyles can delay the onset of chronic illnesses and diminish dependence on health care services. And the future may also hold as-yet unheralded medical advances that will benefit these patients as well as their doctors.
Eva Kahana, PhD, is Robson Professor of the Humanities; Distinguished University Professor in sociology, nursing, medicine, and applied social sciences; and director of the Elderly Care Research Center at Case Western Reserve University in Cleveland, Ohio. She has published more than 170 journal articles and book chapters, co-authored four books, and edited three volumes. An underlying theme in Kahana’s scholarship is the understanding of resilience among elderly persons who encountered stress and trauma in their lives, particularly Holocaust survivors, cancer survivors, and institutionalized elderly people. She has received numerous awards, including the Distinguished Career Contribution Award and the Lawton Award from the Gerontological Society of America.
Boaz Kahana, PhD, is a professor of psychology at Cleveland State University in Ohio, where he has served as chair of the Department of Psychology. Professor Kahana’s research has focused on trauma survivorship among veterans, Holocaust survivors, and cancer survivors. He has more than 140 refereed publications including authorship of the 2005 book Holocaust Survivors and Immigrants: Late Life Adaptation. He is a fellow of the American Psychological Society and of the Gerontological Society of America and has served on the editorial boards of several gerontology journals, most recently the Journal of Mental Health and Aging. He has received, among other honors, the Arnold Heller Award for Excellence in Gerontology.
The viewpoints expressed on this site are those of the authors and do not necessarily reflect the views and policies of the AMA.
© 2014 American Medical Association. All Rights Reserved.
Motion sickness is a term that describes an unpleasant combination of symptoms, such as dizziness, nausea and vomiting, that can occur when you're travelling.
It’s also sometimes known as travel sickness, seasickness, car sickness or air sickness.
Initial symptoms of motion sickness may include:
- pale skin
- cold sweat
- an increase in saliva
Some people also experience additional symptoms, such as:
- rapid, shallow breathing
- extreme tiredness
In most cases, the symptoms of motion sickness will start to improve as your body adapts to the conditions causing the problem.
For example, if you have motion sickness on a cruise ship, your symptoms may get better after a couple of days. However, some people don't adapt and have symptoms until they leave the environment that's causing them.
Anyone can get motion sickness, but some are more vulnerable than others. Women often experience motion sickness, particularly during periods or pregnancy. People who often get migraines may also be more likely to experience motion sickness and to have a migraine at the same time.
Motion sickness is also more common in children aged 3 to 12. After this age, most teenagers grow out of the condition.
When to seek medical advice
It's only necessary to seek medical advice about motion sickness if your symptoms continue after you stop travelling. Your GP will be able to rule out other possible causes of your symptoms, such as a viral infection of your inner ear (labyrinthitis).
What causes motion sickness?
Motion sickness is usually associated with travelling in a car, ship, plane or train. However, you can also get it on fairground rides and while watching or playing fast-paced films or computer games.
Motion sickness is thought to occur when there's a conflict between what your eyes see and what your inner ears, which help with balance, sense.
Your brain holds details about where you are and how you're moving. It constantly updates this with information from your eyes and vestibular system. The vestibular system is a network of nerves, channels and fluids in your inner ear, which gives your brain a sense of motion and balance.
If there’s a mismatch of information between these two systems, your brain can't update your current status and the resulting confusion will lead to symptoms of motion sickness, such as nausea and vomiting.
For example, you can get motion sickness when travelling by car because your eyes tell your brain that you're travelling at more than 30 miles an hour, but your vestibular system tells your brain that you're sitting still.
There's also an association between motion sickness and a type of migraine where dizziness, rather than headache, dominates. This is known as a vestibular migraine. If you experience dizzy spells and have a history of motion sickness, you may be diagnosed as having vestibular migraines.
Treating motion sickness
Mild symptoms of motion sickness can usually be improved using techniques such as fixing your eyes on the horizon or distracting yourself by listening to music.
Other self care techniques you could try include:
- Keep still – if possible, choose a cabin or seat in the middle of a boat or plane, because this is where you'll experience the least movement. Use a pillow or headrest to help keep your head as still as possible.
- Look at a stable object – for example, the horizon. Reading or playing games may make your symptoms worse. Closing your eyes may help relieve symptoms.
- Fresh air – open windows or move to the top deck of a ship to avoid getting too hot and to get a good supply of fresh air.
- Relax – by listening to music while focusing on your breathing or carrying out a mental activity, such as counting backwards from 100.
- Stay calm – keep calm about the journey. You’re more likely to get motion sickness if you worry about it.
It’s also a good idea to avoid eating a large meal or drinking alcohol before travelling. You should keep well hydrated throughout your journey by drinking water.
More severe motion sickness can be treated with medication. It's usually better to take medication for motion sickness before your journey to prevent symptoms developing.
Hyoscine, also known as scopolamine, is widely used to treat motion sickness. It's thought to work by blocking some of the nerve signals sent from the vestibular system.
Hyoscine is available over the counter from pharmacists. To be effective, you'll need to take it before travelling. If you're going on a long journey – for example, by sea – hyoscine patches can be applied to your skin every three days.
Common side effects of hyoscine include drowsiness, blurred vision and dizziness. As hyoscine can cause drowsiness, avoid taking it if you're planning to drive.
Hyoscine should also be used with caution in children, the elderly, and if you have certain conditions such as epilepsy or a history of heart, kidney or liver problems.
Antihistamines are used to treat symptoms of allergies, but can also help to control nausea and vomiting. They’re less effective at treating motion sickness than hyoscine, but may cause fewer side effects.
They’re usually taken as tablets one or two hours before your journey. If it's a long journey, you may need to take a dose every eight hours. Like hyoscine, some antihistamines can cause drowsiness. Your pharmacist can advise you.
Several complementary therapies have been suggested for motion sickness, although the evidence for their effectiveness is mixed.
Ginger supplements, or other ginger products including ginger biscuits or ginger tea, may help to prevent symptoms of motion sickness. Ginger is sometimes used to treat other types of nausea, such as morning sickness during pregnancy.
Although there's little scientific evidence to support the use of ginger to treat motion sickness, it has a long history of being used as a remedy for nausea and vomiting.
Before taking ginger supplements, check with your GP that they won't affect any other medication you're taking.
Acupressure bands are stretchy bands worn around the wrists. They apply pressure to a particular point on the inside of your wrist between the two tendons on your inner arm.
Some complementary therapists claim that using an acupressure band can help to treat motion sickness. Although acupressure bands don't cause any adverse side effects, there's little scientific evidence to show they're an effective treatment for motion sickness.
Page last reviewed: 09/12/2014
Next review due: 09/12/2016
Introduction to Multituberculates
The Lost Tribe of Mammals
Multituberculates are the only major branch of mammals to have become completely extinct, and have no living descendants. Although not known to many people, they have a 100 million-year fossil history, the longest of any mammalian lineage. These rodent-like mammals were distributed throughout the world, but seem to have eventually been outcompeted by true rodents.
Multituberculates first appeared in the Late Jurassic, and went extinct in the early Oligocene, with the appearance of true rodents. Over 200 species are known, some as small as the tiniest of mice, the largest the size of beavers. Some, such as Lambdopsalis from China, lived in burrows like prairie dogs, while others, such as the North American Ptilodus, climbed trees as squirrels do today. The narrow shape of their pelvis suggests that, like marsupials, multituberculates gave birth to tiny, undeveloped pups that were dependent on their mother for a long time before they matured.
Pictured is the reconstructed lower jaw of Meniscoessus robustus, a squirrel-sized multituberculate from the Upper Cretaceous. This specimen was collected from the Hell Creek Formation of Montana, USA, and is now part of the UCMP collection.
Multituberculates get their name from their teeth, which have many cusps, or tubercles arranged in rows. Although there are some spectacular multituberculate specimens from Mongolia, many of these unique teeth have been found in North America, and UCMP houses a large collection.
One of Utah's deadliest mine disasters may have brought down the entire Crandall Canyon coal mine, according to a new seismic study presented today (April 19) at the Seismological Society of America's annual meeting in Salt Lake City.
At Crandall Canyon, a room carved from coal collapsed 1,500 feet (457 meters) below the surface on Aug. 6, 2007, trapping six workers. A tunnel collapse on Aug. 16 killed three rescuers digging toward the suspected location of the miners. The bodies of the six miners were never recovered.
With new analysis techniques, researchers at the University of Utah identified up to 2,000 tiny, previously unrecognized earthquakes before, during and after the coal mine collapse.
The tremors would register about magnitude −1, with energy equivalent to a small hand grenade, said Tex Kubacki, a University of Utah master's student and study co-author. "They could be from rocks falling, from roof faulting — anything that produces a vibration," he told OurAmazingPlanet.
The quakes help map out how the mine collapsed. At present, there is no sign that seismicity gave warning of the coming collapse, study co-author Michael McCarter, a University of Utah professor of mining engineering, said in a statement. The researchers plan to investigate whether any of the tiny tremors could have given warning, he said.
Kubacki said the cave-in was cone-shaped, with the narrowest end of the cone pointing down into the Earth. Since the catastrophe, the earthquakes have shifted to the edges of the cone, on the west and east ends of the collapse zone.
The new study also shows that the collapse area goes farther than once thought, all the way to the western end of the mine, beyond where the miners were working, Kubacki said.
A 2008 seismic study by University of Utah seismologist Jim Pechmann, who is not involved in the current research, calculated the collapse area covered 50 acres. Pechmann and his university colleagues also proved that the collapse was not caused by an earthquake, as initially claimed by the mine's owners.
The earlier studies also found a giant vertical crack opened in the room where the miners were working, collapsing the roof. Though it dropped only about a foot, the pressure exploded the supporting pillars, filling the room with coal and rubble within seconds, according to the scientists' reports.
Kubacki is now comparing the Crandall Canyon seismicity to other coal mines in Utah, in an effort to better understand mine earthquakes and improve safety. The current study shows remote monitoring can reveal subtle patterns of tremors, meaning mine owners needn't install expensive monitoring equipment deep in mines, he said.
"This research is a starting point to monitoring mine seismicity and potential collapses," Kubacki said.
Mars Global Surveyor
Mars Orbiter Camera
Unconformity in Gale Crater Mound
MGS MOC Release No. MOC2-265G, 4 December 2000
A hint as to the complexity of the history recorded in the rocks of the Gale Crater central mound is shown by the partial emergence of a buried crater from beneath a light-toned, massive (i.e., not layered) rock unit. The massive light-toned rock covers the upper left quarter of the image on the left, which is a subframe of Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image M03-01521. The picture on the right is a colored map showing the different layered and massive rock units identified in the Gale Crater mound; the white box indicates the location of the picture on the left.

The crater seen here formed in a previously-existing layered rock unit that was later buried by the light-toned massive unit seen at the upper left. This means that there is a gap in the geologic record---some of the history of this location is missing---because the gray-toned rock into which the crater formed was exposed to the atmosphere and eroded and hit by meteorites (to form craters) before the light-toned massive material was deposited and the record resumed. In geologic terms, this kind of relationship is called an unconformity.

Refer to "Oblique view of Gale Crater Mound," MOC2-265E, December 4, 2000, to see the location of the color map relative to the entire mound. For additional information about Gale Crater, see "Sediment History Preserved in Gale Crater Central Mound," MOC2-260, December 4, 2000.
Image Credit: NASA/JPL/Malin Space Science Systems
Malin Space Science Systems and the California Institute of Technology built the MOC using spare hardware from the Mars Observer mission. MSSS operates the camera from its facilities in San Diego, CA. The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO.
In 1971 the NAACP legal team sued the City of Detroit for its ongoing history of deliberate policies to create and maintain apartheid housing, as well as segregation of the public schools. The NAACP's most recent evidence was the City's nullifying of a school integration plan in 1971. Federal District Court Judge Stephen Roth, a conservative Democrat, ruled in favor of the NAACP. Here is part of Judge Roth's opinion, as quoted by Grant (2009):
The city of Detroit is a community generally divided by racial lines. Residential segregation within the city and throughout the large metropolitan area is substantial, pervasive and of long standing. Black citizens are located in separate and distinct areas within the city and are not generally to be found in the suburbs. While the racially unrestricted choice of black persons and economic factors may have played some part in the development of this pattern of residential segregation, it is, in the main, the result of past and present practices and customs of racial discrimination, both public and private, which have and do restrict the housing opportunities of black people. On the record, there can be no other finding (p. 146).

Judge Roth's decision was upheld by a three-judge federal Appeals panel, and after the State of Michigan joined Detroit in a further appeal, a full federal Appeals Court affirmed Judge Roth.
In 1974 the U. S. Supreme Court, stacked with Nixon's three new picks who had passed his anti-integration litmus test (see pp. 150-156 of Hope and despair in the American city: Why there are no bad schools in Raleigh), reversed the three previous legal conclusions by the lower courts to strike down, in a 5-4 decision (Milliken v. Bradley), a new Detroit desegregation plan that sought to remedy generations of racist policies, both public and private.
Some school systems pushed on with efforts to desegregate schools, and Wake County, North Carolina was one of them. In 1976, the city and county consolidated their school systems and created a pie-chart configuration of districts that assured that each district got some urban children, with no school getting more than 40 percent minority. In later years, when it became apparent that the Reagan and Bush Courts would put the final nails into the coffin of Brown v. Board of Education, Wake County began to focus desegregation efforts by economic status, rather than skin color. They opened many magnet schools, which attracted large numbers of suburban children to choose schools in the city. And they focused on a culture of excellence and equity, that extended to every child. They recruited teachers and principals who wanted to teach in diverse schools, and they gave big bonuses for National Board certification. Test scores soared, and achievement gaps narrowed. Today there are no bad schools in Raleigh.
All of this makes the Business Roundtable, conservative racists, and the Oligarchs in charge of the U. S. Department of Education very nervous, for if this model of diverse, excellent public schools can be spread to other municipalities, then not only will public education be saved and transformed, but white privilege will be further challenged, desegregation will become a reality, and the apartheid corporate charter industry will go bust. That is why the Republican Party is spending big bucks in North Carolina to get their functionary toadies elected. The most recent election shows the conservatives now with a 5-4 majority on the Wake County School Board. The dismantling of successful economic integration in the public schools is the top priority, as the following piece makes clear. Will this be another Milliken v. Bradley?
WAKE COUNTY (WTVD) -- A large crowd showed up at Tuesday's Wake County School Board meeting as the debate over proposals supported by the new majority of the board intensified.
It turned out that the school board member who introduced the proposal to change mandatory year-round schools decided to withdraw the motion Tuesday, but not before getting an earful from dozens of parents.
Board members listened to public comments for several hours on changing the diversity policy and ending mandatory year-round schools.
"Diversity is not a policy of convenience," student George Ramsey said. "It is a policy of necessity."
"Academic excellence cannot occur without diversity," parent Vickie Adamson said.
Most of the 70 people who signed up to talk seemed to be at odds with the new majority, and asked them to re-consider their proposed changes.
Some went as far as claiming that those changes would lead to re-segregation of the schools.
"Where's the plan," opposer Gary Disnukes said. "Where's the budget. I urge this board to take a step back and not be in a rush to fulfill campaign promises before all ramifications of these promises are understood."
A minority of those present, however, did offer the board support.
"My hope would be for the opposition to embrace the new school board members ... improve the graduation rates and overall academic achievement for our children," supporter Judy Gladden said.
Board members later decided not to take a controversial vote at the end of the meeting.
The school board plans to come up with questions to send out in a survey to parents and make decisions based on their feedback.
The cost of doing the survey ranges from a few thousand to as much as $144,000 depending on how it's conducted.
The board wants responses back by March, so it can work out the school calendar.
Meanwhile, the NAACP is asking the school board for 45 minutes to present some of their concerns at next month's meeting.
The NAACP says it is worried policy changes could essentially re-segregate the school system.
It's not clear if the board will say yes.
National statistics on the number of high school drop outs for 2008
Recent studies reported by the US Department of Education revealed nearly 1.2 million students between the ages of 15 and 24 dropped out of high school in one year alone. However, according to the US Department of Education, the true dropout rate for US teens is quite difficult to discern without factoring in the number of reasons why teens drop out of high school.
These statistical findings suggest that 1 in every 5 students will drop out of high school between the 10th and 12th grades for one reason or another. Factoring in all the potential reasons for this extremely high ratio of dropouts versus graduates is quite complicated, as researchers explain. For this reason we have narrowed down the top ten reasons that teens leave high school before graduation.
Statistically 55% of the nation's students between the ages of 15 and 19 will successfully complete high school and receive a high school diploma. Another 15% will receive their GED or high school equivalency before the age of 24, which in total accounts for 70% of students that will graduate annually. The remaining 30% of high school students will drop out of school before reaching the 12th grade.
According to the US Department of Education, there are ten significant markers of risk or reasons teens drop out of high school before graduating. Below are what USDOE discovered as the most common reasons teens drop out of high school.
10 Reasons Why Teens Drop Out of High School
1. Lack of Educational Support
Studies conducted on 5,000 high school dropouts revealed that 75% dropped out of high school because they lacked sufficient parental support and educational encouragement.
2. Outside Influences
Friends and/or peer pressure from other high school dropouts, family, or other outside relationships can influence a teen to drop out of school. This also encompasses teens who opt to drop out of high school to join a gang or to be accepted in other teen groups and street communities.
3. Special Needs
A number of teens drop out of high school because they require specific attention to a particular need such as ADHD or dyslexia. This is predominantly a problem in densely populated public high schools, where overcrowded classrooms fail to recognize the special needs of individual students.
4. Financial Problems
Often the family is in a very poor financial situation, and the need to help the family financially is another reason why teens drop out of school. Teens in this case are forced to obtain employment, and in some cases the financial strain can be due to an unplanned pregnancy and/or parental disabilities.
5. Lack of Interest
One of the biggest reasons a teen will drop out of high school is that they simply lack interest in gaining an education. Out of 10,000 public high school dropouts, 7,000 confessed to a lack of interest in completing high school. Most often this is due to the generic course curriculums offered to public high school students, which leave a number of students simply bored.
6. Drug and Alcohol Abuse
Drug and alcohol abuse is among the top three reasons students fail to complete their high school education. It goes without saying that a teen on drugs will rarely complete high school.
7. Depression and Physical Illnesses
Depression and illness can be the result of an eating disorder, heredity, or a family or financial situation. Either can contribute to a teen's lack of interest in school or class subjects, and this is a common reason why teens drop out of school.
8. Physical Abuse
Teens who are victims of domestic violence such as physical, verbal, and sexual abuse tend to drop out of high school before obtaining their diploma. In many cases, teens experiencing abuse will run away from home, causing them to drop out.
9. Teen Pregnancy
In the past, teen pregnancy accounted for 15% of the dropout rate among teens between the ages of 15 and 18. However, these numbers have sharply declined to about 4% on average. A number of public schools have opted to reform to cater to pregnant teens. Some states have high schools specifically for pregnant teens and teen mothers to ensure they complete high school in an environment that does not judge them or discount the significance of their circumstances.
10. Alternative Lifestyles
This common reason teens drop out of high school stems from their perception of an alternative lifestyle in which education does not play an important role. A teen who is introduced to drug dealing or prostitution may view high school as a waste of time because they don't need an education to sell drugs, or their bodies for that matter.
The bottom line for parents hoping to reduce the number of teen high school dropouts across the nation is to equip themselves and their teens with knowledge and alternative options, such as attending a continuation or alternative school to receive a high school diploma and/or GED. It's simply not enough to tell your teen the importance of an education; you must also guide them in the right direction. Most important is maintaining communication so that you can discover your teen's risk of dropping out far enough in advance to really make a difference in the outcome.
The mimulus spreads out from a base, sprawling its growing stems and their bright, typically yellow flowers. Mimulus plants include annuals, herbaceous perennials, and varieties that can be considered sub-shrubs. They will ramble out in search of sun. Here's our guide to the mimulus.
The mimulus will freely flower in the spring. Trailing varieties will turn any hanging basket into a profusion of cascading flowers. It grows in a woodland environment – for example, the deep gold blooms of the Mimulus guttatus, known as ‘the monkey flower’, can be seen beside woodland streams. Grown in a bed it will produce a mound of flowers, usually around 80 to 90 cm high.
When and where to plant
Mimulus will grow best in full sun. The soil should be moist as the plants like to grow in wet ground. Avoid high temperatures and drying out in drought periods. Plant out any time after May; or plant out in late September for over-wintering plants to flower early in the succeeding spring. Grow on plants under cover until they reach 8 to 10 cm in height at which point they’re ready to plant out.
How to plant
Mimulus are ideal for planting in containers and in fact they thrive in a pot. Fill the container up to three-quarters full with multipurpose compost. Carefully remove the plant from its tray or pot and place in position. Fill the container back in with soil and gently firm down. Water in generously. Always keep container plants well watered and fed. If choosing a companion plant, ensure that it will not over-shade the mimulus, as they need plenty of sun.
Choose a pot which will go nicely with bright yellow or orange flowers! The flowers are generally snapdragon-like, with a tubular back to the flower.
Wash off any aphids which appear with a carefully aimed jet of water. If an infestation of aphids does occur, use an insecticidal soap to treat the problem. Cut back the plants once they have grown scraggly at the end of the flowering period.
For this challenge, you'll need to play Got It! Can you explain the strategy for winning this game with any target?
Find some examples of pairs of numbers such that their sum is a factor of their product, e.g. 4 + 12 = 16 and 4 × 12 = 48, and 16 is a factor of 48.
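Such pairs can be hunted mechanically. The brute-force sketch below is not part of the NRICH material; it simply checks every pair up to an arbitrary limit.

```python
# Find pairs (a, b) with a <= b whose sum divides their product.
# The limit of 20 is an arbitrary choice for illustration.
def sum_divides_product_pairs(limit=20):
    pairs = []
    for a in range(1, limit + 1):
        for b in range(a, limit + 1):
            if (a * b) % (a + b) == 0:
                pairs.append((a, b))
    return pairs

print(sum_divides_product_pairs())  # includes (3, 6), (4, 12), (6, 12), ...
```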
List any 3 numbers. It is always possible to find a subset of adjacent numbers that add up to a multiple of 3. Can you explain why and prove it?
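One standard argument (an assumption about the intended solution, not NRICH's published one) uses running totals and the pigeonhole principle:

```latex
% Look at the running totals of a, b, c modulo 3:
%   s1 = a,  s2 = a + b,  s3 = a + b + c.
% If some s_i = 0 (mod 3), that prefix is the required subset.
% Otherwise the three totals take only the residues {1, 2}, so by the
% pigeonhole principle two of them agree, say s_i = s_j (mod 3) with i < j:
\[
  s_j - s_i \equiv 0 \pmod{3},
\]
% and s_j - s_i is precisely the sum of the adjacent numbers in
% positions i+1 through j.
```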
A three digit number abc is always divisible by 7 when 2a+3b+c is divisible by 7. Why?
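The identity behind this test can be written out directly; this worked step is supplied here as a sketch, not quoted from NRICH:

```latex
\[
  \overline{abc} = 100a + 10b + c = 7(14a + b) + (2a + 3b + c)
\]
% The first term is always a multiple of 7, so abc and 2a + 3b + c
% leave the same remainder on division by 7.
```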
Imagine we have four bags containing a large number of 1s, 4s, 7s and 10s. What numbers can we make?
A game for two people, or play online. Given a target number, say 23, and a range of numbers to choose from, say 1-4, players take it in turns to add to the running total to hit their target.
Choose any 3 digits and make a 6 digit number by repeating the 3 digits in the same order (e.g. 594594). Explain why whatever digits you choose the number will always be divisible by 7, 11 and 13.
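One way to see this (a worked hint, added here rather than taken from the problem page) is that repeating a 3-digit block is multiplication by 1001:

```latex
\[
  \overline{abcabc} = 1000\,\overline{abc} + \overline{abc}
                    = 1001\,\overline{abc},
  \qquad 1001 = 7 \times 11 \times 13
\]
% e.g. 594594 = 594 x 1001, so 7, 11 and 13 all divide it.
```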
Given the products of diagonally opposite cells - can you complete this Sudoku?
Factor track is not a race but a game of skill. The idea is to go round the track in as few moves as possible, keeping to the rules.
This package contains a collection of problems from the NRICH website that could be suitable for students who have a good understanding of Factors and Multiples and who feel ready to take on some. . . .
Ben passed a third of his counters to Jack, Jack passed a quarter of his counters to Emma and Emma passed a fifth of her counters to Ben. After this they all had the same number of counters.
Make a set of numbers that use all the digits from 1 to 9, once and once only. Add them up. The result is divisible by 9. Add each of the digits in the new number. What is their sum? Now try some. . . .
A student in a maths class was trying to get some information from her teacher. She was given some clues and then the teacher ended by saying, "Well, how old are they?"
Take any two digit number, for example 58. What do you have to do to reverse the order of the digits? Can you find a rule for reversing the order of digits for any two digit number?
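Writing the number algebraically exposes the rule; this derivation is an added hint, not part of the original listing:

```latex
% Two digit number with tens digit a and units digit b:
\[
  (10b + a) - (10a + b) = 9(b - a)
\]
% Reversing the digits changes the number by nine times the difference
% of its digits, e.g. 85 - 58 = 27 = 9 x (8 - 5).
```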
Rectangles are considered different if they vary in size or have different locations. How many different rectangles can be drawn on a chessboard?
Find some triples of whole numbers a, b and c such that a^2 + b^2 + c^2 is a multiple of 4. Is it necessarily the case that a, b and c must all be even? If so, can you explain why?
Given the products of adjacent cells, can you complete this Sudoku?
Find a cuboid (with edges of integer values) that has a surface area of exactly 100 square units. Is there more than one? Can you find them all?
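An exhaustive search settles the "can you find them all?" question; the sketch below (added for illustration, and a spoiler if you want to solve it yourself) enumerates edge triples a ≤ b ≤ c with 2(ab + bc + ca) = 100.

```python
# Search for integer-edged cuboids with surface area exactly `target`,
# i.e. 2(ab + bc + ca) = target. Edges are listed as a <= b <= c.
def cuboids_with_surface_area(target=100):
    found = []
    for a in range(1, target):
        for b in range(a, target):
            for c in range(b, target):
                if 2 * (a * b + b * c + c * a) == target:
                    found.append((a, b, c))
    return found

print(cuboids_with_surface_area())  # → [(1, 2, 16), (2, 4, 7)]
```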
When the number x 1 x x x is multiplied by 417 this gives the answer 9 x x x 0 5 7. Find the missing digits, each of which is represented by an "x".
Can you find a relationship between the number of dots on the circle and the number of steps that will ensure that all points are hit?
What is the remainder when 2^2002 is divided by 7? What happens with different powers of 2?
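Modular exponentiation makes the pattern easy to inspect; this quick check is an added illustration, not part of the NRICH page.

```python
# Powers of 2 modulo 7 cycle with period 3: 2, 4, 1, 2, 4, 1, ...
cycle = [pow(2, k, 7) for k in range(1, 7)]
print(cycle)            # → [2, 4, 1, 2, 4, 1]

# 2002 = 3 * 667 + 1, so 2^2002 lands on the same residue as 2^1.
print(pow(2, 2002, 7))  # → 2
```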
Can you convince me of each of the following: If a square number is multiplied by a square number the product is ALWAYS a square number.
Find the largest integer which divides every member of the following sequence: 1^5-1, 2^5-2, 3^5-3, ... n^5-n.
A mathematician goes into a supermarket and buys four items. Using a calculator she multiplies the cost instead of adding them. How can her answer be the same as the total at the till?
Find the smallest positive integer N such that N/2 is a perfect cube, N/3 is a perfect fifth power and N/5 is a perfect seventh power.
Play the divisibility game to create numbers in which the first two digits make a number divisible by 2, the first three digits make a number divisible by 3...
Can you find any perfect numbers? Read this article to find out more...
What is the smallest number with exactly 14 divisors?
How many numbers less than 1000 are NOT divisible by either: a) 2 or 5; or b) 2, 5 or 7?
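For part a), inclusion-exclusion gives 999 - (499 + 199 - 99) = 400, and a direct count confirms it. The sketch below is an added check, not part of the original listing.

```python
# Count n < limit that are not divisible by any of the given primes.
def count_not_divisible_by(primes, limit=1000):
    return sum(1 for n in range(1, limit)
               if all(n % p for p in primes))

print(count_not_divisible_by([2, 5]))     # part a) → 400
print(count_not_divisible_by([2, 5, 7]))  # part b) → 343
```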
Prove that if a^2+b^2 is a multiple of 3 then both a and b are multiples of 3.
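A residue argument settles this; the derivation is an added hint rather than NRICH's own solution:

```latex
% Squares take only the residues 0 and 1 modulo 3:
\[
  (3k)^2 \equiv 0, \qquad (3k \pm 1)^2 = 9k^2 \pm 6k + 1 \equiv 1 \pmod{3}.
\]
% So a^2 + b^2 is congruent to 0+0, 0+1 or 1+1 mod 3; only the 0+0 case
% is divisible by 3, which forces both a and b to be multiples of 3.
```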
A collection of resources to support work on Factors and Multiples at Secondary level.
115^2 = (110 x 120) + 25, that is 13225 895^2 = (890 x 900) + 25, that is 801025 Can you explain what is happening and generalise?
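The generalisation follows from expanding a number that ends in 5; this worked step is an added hint:

```latex
% Any number ending in 5 is n = 10a + 5, and its square is
\[
  n^2 = 100a^2 + 100a + 25 = (10a)(10a + 10) + 25 = (n - 5)(n + 5) + 25.
\]
% With n = 115 (a = 11) this is exactly 115^2 = 110 x 120 + 25.
```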
The clues for this Sudoku are the product of the numbers in adjacent squares.
A game that tests your understanding of remainders.
Here is a machine with four coloured lights. Can you develop a strategy to work out the rules controlling each light?
Factorial one hundred (written 100!) has 24 noughts when written in full, and 1000! has 249 noughts. Convince yourself that the above is true. Perhaps your methodology will help you find the. . . .
Helen made the conjecture that "every multiple of six has more factors than the two numbers either side of it". Is this conjecture true?
Follow this recipe for sieving numbers and see what interesting patterns emerge.
Imagine we have four bags containing numbers from a sequence. What numbers can we make now?
Prove that if the integer n is divisible by 4 then it can be written as the difference of two squares.
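One short construction (added here as a hint) picks the two squares either side of n/4:

```latex
% If n = 4k, take the squares either side of k:
\[
  (k + 1)^2 - (k - 1)^2 = 4k = n.
\]
% e.g. 12 = 4 x 3 = 4^2 - 2^2.
```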
The puzzle can be solved by finding the values of the unknown digits (all indicated by asterisks) in the squares of the $9\times9$ grid.
Data is sent in chunks of two different sizes - a yellow chunk has 5 characters and a blue chunk has 9 characters. A data slot of size 31 cannot be exactly filled with a combination of yellow and. . . .
A game in which players take it in turns to choose a number. Can you block your opponent?
You are given the Lowest Common Multiples of sets of digits. Find the digits and then solve the Sudoku.
Here is a Sudoku with a difference! Use information about lowest common multiples to help you solve it.
The nth term of a sequence is given by the formula n^3 + 11n. Find the first four terms of the sequence given by this formula and the first term of the sequence which is bigger than one million. . . .
The number 8888...88M9999...99 is divisible by 7 and it starts with the digit 8 repeated 50 times and ends with the digit 9 repeated 50 times. What is the value of the digit M?
The number 12 = 2^2 × 3 has 6 factors. What is the smallest natural number with exactly 36 factors?
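A brute-force search answers this directly; the sketch below is an added illustration (and a spoiler), not part of the NRICH page.

```python
# Count divisors by trial division up to sqrt(n), pairing d with n // d.
def divisor_count(n):
    count = 0
    d = 1
    while d * d <= n:
        if n % d == 0:
            count += 2 if d * d != n else 1
        d += 1
    return count

# Walk upward until we hit the first number with exactly 36 divisors.
n = 1
while divisor_count(n) != 36:
    n += 1
print(n)  # → 1260, which is 2^2 x 3^2 x 5 x 7
```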
Gabriel multiplied together some numbers and then erased them. Can you figure out where each number was?
Which pairs of cogs let the coloured tooth touch every tooth on the other cog? Which pairs do not let this happen? Why?
A breakfast cereal made of cooked wheat in long brittle shreds that are pressed into compact pieces.
- I normally have cereals like Shredded Wheat for breakfast, and I love milk.
- In 1893, a Denver lawyer, H. D. Perky, who suffered from indigestion and had become converted to health foods, invented a completely different product: Shredded Wheat.
- The lowest salt menu in the survey was a breakfast of fruit juice, Shredded Wheat with milk and one piece of toast with jam.
For editors and proofreaders
Syllabification: shred·ded wheat
There is one expression I came across recently: 'The take home is ...'. The full sentence was "The take home is that regular use of caffeine produces no benefit to alertness, energy, or function". Can anyone explain what the beginning of the sentence means? And does it have something in common with another expression, "to drive your point home"?
The take-home or the take-away of something is its most important point or lesson. It's the one part you should carry (home) with you to remember.
Edit: As Sam correctly notes, the origin of this phrase lies in the amount of your salary you take home after taxes, etc., have been deducted.
" Take homes" can also mean the prescription drug you take home with you from a substance abuse clinic, such as methadone or buprenorphine(subutex).
Opioid overdose now kills more people than both AIDS and homicides in America and has surpassed automobile accidents as the leading cause of accidental death in many states. According to the Centers for Disease Control and Prevention, the burgeoning epidemic accounts for approximately 16,000 deaths per year in the U.S.
“No one was talking about this problem a few years ago,” said Leo Beletsky, a drug policy expert and assistant professor of law and health sciences at Northeastern University, “but it has now become impossible for public health and law enforcement officials to ignore.”
In an article published earlier this month in the Journal of the American Medical Association, Beletsky and physician-researchers Josiah Rich and Alexander Y. Walley called upon federal agencies to take leadership in addressing this epidemic through a comprehensive series of public health-based initiatives.
The trio of researchers posits that the federal government’s current response to the growing public health crisis has focused too narrowly on restricting access to prescription and street drugs, an approach that has failed to produce results. “We know from decades of experiences in substance abuse how difficult it is to win battles by focusing too narrowly on the ‘supply’ side of the equation,” Beletsky explained. “As things stand, not enough is being done to maximize the opportunity of saving lives.”
A litany of barriers has constrained the response, he said, including low public awareness of the signs of an overdose; a shortage of Naloxone, an opioid antagonist; and an unawareness or unwillingness on the part of prescribers to participate in overdose education.
Of the Naloxone shortage, Beletsky said, “Drugs like Naloxone that are cheap, generic and out of patent are more likely to be in shortage because they aren’t perceived as a priority for manufacturers.” He called attention to the issue in a recent Time Magazine article in which he was quoted.
In the paper, Beletsky and his colleagues make more than a dozen recommendations designed to improve the prevention of fatal opioid overdose. They call upon the National Institutes of Health to evaluate community-based naloxone access initiatives, for example, and for the Department of Justice to formulate and disseminate model legislation providing legal immunity to naloxone prescribers and lay responders. They also singled out the U.S. Food and Drug Administration, calling on the organization to require continuing education programs for healthcare providers to cover overdose prevention.
“Federal agencies and policy-makers can and should take concrete steps that could go a long way toward addressing this issue,” Beletsky said. “We have seen a lot of successful experimentation in overdose prevention on the local and state level and now it is time for the federal level to take the lead.”
View selected publications of Leo Beletsky in IRis, Northeastern’s digital archive.
I recently listened to Your Brain at Work, a productivity/neuroscience book by David Rock. Rock's main argument is that by better understanding your brain, you can align the way you work with your brain's tendencies, patterns, and instincts to be more productive and successful.
Rock keeps your attention throughout by implementing a narrative conceit involving two people, Paul and Emily, in before-and-after scenarios. Paul and Emily make poor decisions at first, and then later, when they understand better how the brain works, they make better decisions and find more success in the mock situations.
I found Rock's book particularly interesting, not only for the helpful productivity tips but also because of the insights into the brain.
To explain how the brain works, Rock compares the brain to a stage. The stage can only accommodate so many actors before the play starts to get chaotic. When we multitask, we place more actors on our stage, and if we have too many actors, we become overloaded. The actors bump into each other and can't move about in graceful harmony. It's chaos. This translates into stress and frustration.
Rock says our brain can't multitask when the tasks involve the prefrontal cortex — an area of the brain that requires high attention and focus. Instead, we only task-switch between multiple activities. Only when one activity is so familiar and routine that our basal ganglia can handle it almost unconsciously can we perform multiple tasks at once.
For example, if you're used to driving the same route to work, it's not difficult to drive that familiar route while listening to an audio book that requires a moderate level of concentration. In this case, you can multitask because your prefrontal cortex handles the audio listening while your basal ganglia handles the driving. However, if you were driving in downtown Manhattan for the first time — an act requiring a high degree of concentration and alertness — there's no way you could successfully perform two prefrontal cortex tasks with equal competence.
In fact, Rock cites studies showing that our IQ dramatically falls when we attempt to multi-task, such as switching between an iPhone and a meeting. Studies show that a Harvard-level educated person can be reduced to a third-grade equivalent when multi-tasking.
Constant interruptions that compel us to continue switching tasks removes our chance at productivity. Important tasks that require deep immersion in thought are compromised when we fail to focus with enough uninterrupted study to reach a "continuous flow state," as it's sometimes called.
When actors on our stage keep coming and going, appearing and disappearing, and when the play keeps changing scripts and scenes, the brain can't be productive. We need an uninterrupted focus with just a few actors on stage.
The first tip for productivity, then, is to allow for longer periods of uninterrupted thought and focus as you tackle high priority problems. Turn off the distractions and allow yourself to engage for a while with a problem. Identify your priority for the day early in the morning, and carve out time to tackle it. Avoid social media, meetings, phone calls, and other distractions that take you away from a state of focus.
Beyond encouraging single tasking, Rock also touches on the neuroscience of insight. He says when you get stuck on a problem, it's helpful to step back and look inward for a few moments. He says our brain has a unique ability to enter states of self-awareness, or mindfulness, where our "director," as he calls it, observes itself in action.
This is the metacognitive ability we have to step outside of our thought processes and observe ourselves thinking, to see ourselves acting in the moment almost as if we were another person. Philosophers have reflected on this director in the mind for centuries, he says.
Scientists who study insight find that insights come most frequently when people look inward with a quiet contemplation. To arrive at insights, he encourages a model called ARIA: Attention, Reflection, Insight, and Action. When faced with a problem, narrow your attention by removing extraneous actors from the stage and focusing inward. Then reflect, perhaps looking at the issue from different perspectives. More often than not, insights will come.
If they don't, Rock mentions a few other strategies for insights as well. If you're stuck at an impasse, give yourself a break. It's easy for the brain to get stuck continuing down the same path over and over. You need to rest and shift your attention for a while to something else, and then return to the problem with a fresh perspective later. You'll find you're no longer stuck in the same rut as before, and you may see the solution much more clearly and easily.
He also recommends simplifying complex problems into smaller parts. Instead of trying to wrap your mind around a problem with multiple stages, various components, workflows, and related issues, chunk the issue into simpler parts that you can tackle individually.
Finally, he recommends incorporating more visuals to tackle the problems. Visuals make it easier to process complex information. Drawing pictures of the problem, or incorporating some other visual stimuli to think and interact with the problem may lead you to insights more quickly.
In the second half of the book, Rock dives into five key attributes the brain cares deeply about: status, certainty, autonomy, relatedness, and fairness, or SCARF for short. Our brain treats these attributes almost as intensely as survival instincts. When we interact with others, we will have more success by remembering to account for these attributes.
For example, with fairness, studies have shown that when two people are to split $10, if one person decides to take $7 and give the other $3, the person getting the smaller amount will feel such an incredible unfairness, he or she often will choose for no one to receive money at all rather than be slighted with the lesser amount. The sense of fairness is at times stronger than the desire for reward.
Autonomy is another huge trait the brain gravitates toward. In leadership roles, it's much better to help people find solutions themselves rather than force others to accept solutions and decisions you make for them. We love to have independence in our work, and when it's taken away and we are compelled toward specific ends, we reject it fiercely.
With status, slight another person in front of others, giving new projects to someone with little experience instead of to a senior-level team member, and this shift in status can demotivate. The same strategy works at home in managing children. The older children enjoy a higher level status, and when you take that status away, or put the older child on equal ground with the younger, it sends the older child into rebellion for the loss of status.
What does relatedness mean? People respond better when you try to relate to their frustrations, challenges, and experiences. Relating to another person can help build trusting, solid relationships, which will help you have more successful interactions.
Certainty is also a state the brain craves. Kids love to have routines, because routines encourage a world of certainty. People don't like uncertain futures. Will you be able to meet the project deadline? Will the company go under? Uncertainty breeds fear and a sense of doubt.
Your Brain at Work has a lot of helpful ideas to increase productivity.
In which Scrabble dictionary does TRIGS exist?
Definitions of TRIGS in dictionaries:
- noun -
the mathematics of triangles and trigonometric functions
adj - being in a state of cleanliness and order
noun - an ox-like animal
There are 5 letters in TRIGS:
G I R S T
Scrabble words that can be created with an extra letter added to TRIGS
All anagrams that could be made from letters of word TRIGS plus a letter
Scrabble words that can be created with letters from word TRIGS
5 letter words
4 letter words
3 letter words
2 letter words
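A word builder like this boils down to a multiset-containment check: a word is playable from TRIGS if it needs no letter more often than the rack supplies it. The sketch below uses a tiny made-up word list (a real tool would load a full Scrabble dictionary).

```python
from collections import Counter

RACK = "trigs"

def playable(word, rack=RACK):
    """True if `word` can be spelled from the rack, using each tile at most once."""
    need, have = Counter(word), Counter(rack)
    return all(have[ch] >= n for ch, n in need.items())

# Hypothetical mini word list, for illustration only.
WORDS = ["grits", "trigs", "girt", "grit", "rigs", "gist", "its", "sir", "ti", "zig"]

print([w for w in WORDS if playable(w)])  # everything except "zig" (no z in the rack)
```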
The effects on reading comprehension and writing skills of training in identifying the status of information in texts
This study investigated the effects of a focused reading strategy training programme which concentrated on main points and supporting details. Two areas were highlighted to assess the impact of the programme: direct learning and indirect learning. For direct learning the students' ability to identify and write topic sentences and supporting details was examined. For indirect learning the programme's impact on students' general reading comprehension and general writing ability was analysed.
There were two experimental groups and a control group. The experimental groups had a focused learning programme which emphasised the reading strategy of identifying the main points and supporting details of texts. The control group had an unfocused general English language programme which involved some reading, writing, and pronunciation.
In addition to a comparison of English programme content, the effect on results of two contrasting teaching techniques was examined. Using the same materials, one experimental group was taught using a student centered think aloud \ reciprocal teaching approach while the second group was taught using a more teacher centered approach. To augment the test results comparison, each experimental group's attitude to the course was elicited and this was also compared.
The investigation took the form of a pre-test followed by treatment, followed by the post-test, followed by a questionnaire for the experimental groups. Each test comprised three sections: identifying the main point and supporting details of several passages, a general reading comprehension multiple choice test and a writing sample. The results of the reading and writing pre- and post-tests were analysed under three headings. First the control group's results were compared with the combined experimental groups using t-tests. Next, experimental group one's results were compared with experimental group two's. Finally, both the experimental groups' results were correlated with their attitude to the course.
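The group comparison described here is a standard two-sample t-test on score gains. As a rough illustration (invented numbers, standard library only; the thesis itself does not publish its raw data), Welch's t statistic for a control group versus a pooled experimental group can be computed like this:

```python
from math import sqrt
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples with unequal variances."""
    na, nb = len(sample_a), len(sample_b)
    # statistics.variance is the sample variance (ddof = 1).
    se = sqrt(variance(sample_a) / na + variance(sample_b) / nb)
    return (mean(sample_a) - mean(sample_b)) / se

# Hypothetical pre-to-post gains on the topic-sentence test.
control_gains = [1, 2, 1, 3]
experimental_gains = [4, 5, 6, 5]

t = welch_t(experimental_gains, control_gains)
print(round(t, 2))  # → 5.17
```

A large positive t here would suggest the experimental groups improved more than the control group, which is the pattern the abstract reports for the direct learning measures.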
The results were mixed. The combined experimental groups improved more than the control group on all the tests. This difference was significant for the direct learning areas but not for all measures on the indirect learning areas. Also, when the results for each individual group were examined the control group performed better, albeit not significantly so, than experimental group two on one measure of writing.
The comparison of teaching methods revealed no significant difference between the experimental groups. However, experimental group one, the think-aloud/reciprocal teaching group, performed better on three tests, all in the indirect learning area.
Attitude to the course proved to be of little import, with students' results bearing only limited correlation to their attitude. In general, students who enjoyed the programme more improved more, and the less proficient students found the course more enjoyable, new and useful than the more proficient students did.
School: The University of Hong Kong
School Location: China - Hong Kong SAR
Source Type: Master's Thesis
Keywords: reading comprehension; China; Hong Kong; testing; English language study and teaching
Date of Publication: 01/01/1995
In the year 64 AD, Emperor Nero of Rome begins persecuting Christians for their beliefs and sentences one of Christ's apostles, Simon Peter, to death. For the benefit of two other prisoners in his cell, Simon Peter recounts some of the events of Christ's life, beginning with His birth: After being baptized by John the Baptist, Christ goes into the wilderness for forty days to be tempted by the Devil. Having resisted temptation, Christ then goes into Judea, where He restores a blind man's sight. Those who witness this miracle become the first of Christ's apostles. Christ then goes to the city of Samaria, but does not fear for His safety despite the intense hatred that the Samaritans feel toward the Jews. Thirsty, Christ goes to Jacob's Well for a drink of water, when a deranged man approaches Him. After Christ brings peace to the man, word begins to spread about His amazing powers. Soon, the pharisees begin to worry that Christ may undermine their power as religious authorities. Nicodemus, one of the pharisees, but a seeker of truth, is converted by Christ. Later, King Herod kills John the Baptist, and Christ prophesies that He, too, will be killed, but that He will rise again after three days. After Christ learns from Mary and Martha, the sisters of His friend Lazarus of Bethany, that their brother has sickened and died, Christ travels to Bethany with His apostles and prays for Lazarus to be raised from the dead. Later, the pharisees hear reports that Lazarus has been resurrected and offer Christ's apostle, Judas Iscariot, thirty pieces of silver, the price of a slave, to deliver Christ to them. During Passover in Jerusalem, Christ finds the temple overrun by money changers, and angrily expels them while Lord Zadok and other pharisees attempt to have Him incriminate Himself by defying their laws. At Mount Zion, Christ gathers His apostles for the Passover celebration and tells them that He has already been betrayed. 
Christ then asks Judas to go and complete his act of betrayal. When Judas returns a short time later to the garden of Gethsemane, he is accompanied by the pharisees, who arrest Christ and decide to put Him on trial immediately. The terrified apostles flee, and Christ is taken to the house of the high priest, Caiaphas. Ashamed about deserting his Savior, Simon Peter tries to find Him, but is arrested by centurions after denying knowledge of Him. Although Nicodemus attempts to defend Him, He is tried and convicted. Christ is then taken to the Roman governor of Judea, Pontius Pilate, and although Pilate's wife begs her husband not to torture Christ, Pilate has Him whipped and orders his execution. Meanwhile, a totally repentant Judas returns the silver and later hangs himself. After He is crucified at Golgotha, Christ's body is placed in a tomb. Three days later, His followers go to the tomb and find that the stone blocking the entrance has been moved aside. Christ then appears to His apostles on the Mount of Olives and before He leaves them for the last time, asks His followers to go forth and preach His word to the world.
Photo courtesy of Mary Ellen (Mel) Harte, bugwood.org
Ericameria nauseosa (Pallas ex Pursh) Nesom & Baird
Scientific Name Synonyms:
Chrysothamnus nauseosus (Pall.) Britt.
Life Span: Perennial
Season: Short day
Growth Characteristics: A 12 to 90 inch tall shrub with a rounded crown and several erect stems from the base. It flowers June to September, and reproduces from seeds and root sprouts. The inflorescences and bracts of the seeds often persist well into the next year.
Flowers/Inflorescence: Yellowish green flower, ½ inch long or smaller, arranged in an umbrella shaped head.
Fruits/Seeds: Fruit is an achene.
Leaves: Alternate, linear to spatula shaped blades with entire margins. The leaves are 1 to 3 nerved.
Stems: Twigs are erect, flexible (rubbery), yellowish-green, and covered with a dense felt-like covering. The trunk is gray-brown with small cracks. The bark is fibrous and somewhat shreddy.
Rubber rabbitbrush occurs in the cold deserts of the Colorado Plateau, throughout much of the Great Basin, and in warm deserts of the Southwest from lower-elevation Sonoran to subalpine zones. Rubber rabbitbrush favors sunny, open sites throughout a wide variety of habitats including open plains, valleys, drainage ways, foothills, and mountains. It is particularly common on disturbed sites. Rubber rabbitbrush is cold hardy, and tolerant of both moisture and salt stress.
Rubber rabbitbrush exhibits a number of adaptations for surviving in an arid environment. One of these is that leaves and stems are covered with a felt-like layer of trichomes that insulate the plant and reduce transpiration.
Soils: Rubber rabbitbrush grows on a wide range of soils. Soils tend to be medium to coarse-textured and somewhat basic, but may range from moderately acidic to strongly alkaline. This shrub commonly grows on dry, sandy, gravelly or heavy clay, and is somewhat salt tolerant.
Associated Species: Chokecherry, basin wildrye, big sagebrush, western wheatgrass.
Uses and Management:
Rubber rabbitbrush is, in general, considered of little value to all classes of livestock. It is an important browse species on depleted rangelands. In general, wildlife and livestock forage only lightly on this species during the summer, but winter use can be heavy in some locations. Fall use is variable, but flowers are often used by wildlife and livestock. A few leaves and the more tender stems may also be used. It is occasionally reported to be toxic to livestock.
Dense stands of rubber rabbitbrush may indicate poor range management or abandoned agricultural land.
American Indians made chewing gum from pulverized wood and bark. It was also used as tea, cough syrup, yellow dye, and for chest pains. It is a small commercial source for rubber extraction, and was studied extensively during World War II as a substitute for commercial rubber.
While not closely related in the evolutionary order, both horses and humans have strongly ingrained experience as prey animals. In the earliest times, both species were obliged to drink at the same watering holes while predators lurked nearby. Our survival and theirs was directly linked to constant vigilance against possible threats. Horses have remained purely prey animals, and humans still seem to remember. Despite the intervening eons and the human capacity for reasoning, those deep-seated, ingrained fears seem muted at best and still have a firm grip on the human psyche. That is nowhere more clear than in our frequent regressions into tribal behaviors. Fears of outsiders, sometimes real and often imagined, remain common in both local neighborhood and global politics. Especially in the midst of conflict, be it a divorce, workplace, business, or policy matter, humans exhibit the same nervous demeanor as a horse exposed in an open field. In our own way, we approximate that head-tossing, nose-to-the-wind, quick-glancing behavior that suggests a state of constant alert to all possible plots to take advantage of us. We close ranks with those we think we can trust, and hire professionals to protect us from being played for fools in an increasingly complex and threatening world.
Human hubris and the Enlightenment have allowed some to be presumptuous enough to believe that humans can trump their emotional fears with reason alone. Ironically, neuroscientists suggest, the initial 'feeling' response of most humans to a dispute or conflict, which has been traced in brain scans to the amygdala at the base of the brain, often encourages fight or flight and the corresponding emotions. Despite our determination to be rational and reasonable beings, we often continue to respond as any horse might to a perceived threat. The rational thought processes of the neo-cortex can eventually be brought to bear on our initial feelings and fears, but not initially or immediately. As Antonio Damasio observed in his book Descartes' Error (1994), in the face of conflict there is no such thing as a "cool-headed reasoner". People in conflict situations feel and act like prey animals; they have a natural, psycho-biological discomfort and unease about being in foreign terrain and in a circumstance over which they do not have complete control. At worst, they have abject fear of being compromised or injured. Thus, there are close parallels between approaching and training a horse and mediating with people embroiled in a dispute. Training is little more than a form of negotiation, and much can be learned from that interaction.
Still part of our Western folklore is the conventional wisdom about how to train, or more bluntly, break a horse. Still practiced in some quarters, in negotiation terms it is a form of ultimatum---"you will do as I demand, or else"---the beast needs to be 'broken' and bent to the will of the trainer. As herd animals, so the theory goes, horses must be forced to regard the human with the complete deference given to a dominant stallion. John Huston's 1960 film, The Misfits, aptly depicts a cringe-inducing scene of a wild mustang staked down with feet hobbled in the middle of a ring and then ridden till it drops---until the last ounce of resistance is driven out of it.
Unfortunately, there are similarities between this traditional form of horse training and the practice of some professionals---lawyers, doctors and even some mediators---who feel compelled to be in control of their clients or patients. By giving clients rules of behavior, asserting their superior knowledge of the situation and intimidating them with their years of experience and education, some professionals demand respect. The theory is that people in conflict are prone to be so unpredictable, emotional and likely to bolt in every direction that they must be reined in with heavy doses of reality. Though it is not clear whether the implicit (sometimes explicit) ultimatum serves the comfort level of the professional or the client, it is nonetheless clear enough: "do it my way or get out." The professional asserts his authority as an objective, rational and pragmatic agent of reality with statements such as "this is the way it is," said with a firmness that defies contradiction or argument. The client understands clearly that he or she is to relent or leave. Ironically, while less acceptable in the evolving conflict management field, which professes a dedication to a heightened awareness of client self-determination, some mediators continue to practice varieties of this 'take charge, I know best' approach.
Not surprisingly, humans have tended to approach their negotiations with each other much the same way they train their horses and dogs. They have assumed that control requires dominance and power, and that logic and rational argument are the best techniques to manage fears and conflict. When those techniques fail, the assertion of ultimatums, threats and sanctions seems justified. Ironically, as experienced conflict managers have learned, logic is the least effective way to convince anyone of anything---and it doesn't work particularly well with horses either. That does not, however, seem to have deterred the vast majority of practitioners from continuing to use logic as their primary method of conflict management.
In recent years, more sophisticated horse people and trainers have learned to use the animal's nature to their advantage. Instead of trying to 'break' a horse's spirit, they have found ways to 'gentle' it. Examples include Monty Roberts, famous for his book The Man Who Listens to Horses (1997) and a basis for the Robert Redford movie, and other well-regarded trainers such as John Lyons, with his video series Horse Training the Natural Way (1997). Horse 'whispering,' as it is sometimes known, is not nearly so mysterious or magical as some might think. At core, it is about carefully observing the natural fears, rhythms and behaviors of an animal that is typically frightened and on unfamiliar ground, and using that sense to develop rapport and trust. To be sure, there is a structure to the 'magic' that bears close resemblance in theory to the neuro-linguistic techniques suggested by Richard Bandler and John Grinder in their book The Structure of Magic (1975). This approach also fits with the more sophisticated appreciation of the culture of horses offered by Steven Budiansky, who has noted in his book The Nature of Horses (1997) that while pecking order is a factor, it is not nearly as important as many suppose. Leadership is not merely the biggest and strongest forcing compliance on the lessers, but a form of negotiation. As humans do, horses use many subtle techniques to communicate with and influence each other.
As a closely related aside, dog training has also evolved along similar lines, based on positive reinforcement of the dog's natural tendencies. The traditional approach to "housebreaking" a puppy, based on human logic, involved grabbing him by the scruff of the neck and sticking his nose in the soiled spot, swatting his rear end while saying 'no---bad dog', then putting him outside. The puppy seldom got the point and made no connection between his behavior and his human's response. The more effective strategy is observing the dog's natural rhythm of relieving himself, making sure he is outside ten minutes beforehand, and then praising him for holding to his schedule.
The strategy and techniques of gentling a horse---or housetraining a dog---are useful for managing people in conflict. Using the natural energy of their emotion gives the mediator access to their fears and apprehensions and offers a way to relax them sufficiently to alter their behavior. However, the theory and practice require a fundamental rethinking of many of the underlying assumptions and beliefs of our techno-rational culture, which continues to have a deep and abiding faith in the power of reason and rational argument. We believe we can talk people out of their fears and change their emotional responses with logic. Instead of talking at them---trying to suppress or contain their emotions with rules of behavior, which are as likely as not to intensify their fears---accepting their fears as normal, natural and expected paradoxically gives people the permission they need to relax in a way that telling them to 'relax' or 'calm down' never can.
This alternative gentling approach uses their emotional energy strategically and constructively. A comment such as, "This has got to be difficult for you, and I know you are doing what you can to make sense of things", acknowledges the difficulty and gives support to their own efforts to manage themselves. A party's self-imposed restraint is worth a hundred admonitions by a mediator.
Managing the natural energy of the conflict, not unlike horse whispering, requires the use of strategic empathy---entering the reality of the prey horse or fearful party---and shifting that energy to trust. My chapter, “Managing the Natural Energy of Conflict: Tricksters, Mediators and the Constructive Uses of Deception,” in Bringing Peace Into the Room (Bowling, D. and Hoffman, D. eds. 2003), elaborates on this approach.
While clarity and focus are important ingredients, logic and rules cannot counter fear or give the essential sense of safety required for a prey animal to relax its resistance. The fear of being played for a fool precludes any ability to hear and consider new information and shift perspective. In horse terms, if a saddle is suddenly thrown on without warning, the horse will balk or bolt. In human terms, being encouraged to negotiate or mediate because it is the rational thing to do---"you'll save time and money"---is as foreign to a person afraid of being taken advantage of as a saddle is to a horse that has never been ridden.
Anyone who has been around horses knows how quickly they can sense a rider’s fear. Interestingly, some professional training programs have begun to use that equine sensitivity to their advantage. A trip to the horse paddock has been made part of the medical school curriculum in Arizona. The students are asked to work with the horses and in so doing, glean immediate feedback about their manner of communication---ultimately, their ‘bedside manner.’ Negotiators and mediators might be well advised to take the same class. Contrary to conventional wisdom, managing conflict may be less about words, rational analysis and the use of logical argument, than it is about sensing and using the natural instincts and responses of the parties’ constructively.
When we find ourselves trying hard to analyze and explain ourselves, it might be worthwhile to consider if that approach would work with a horse. If not, it probably won’t work particularly well with anxious people caught in the middle of conflict either. Mediation practice is about human ‘whispering’.
Wednesday, June 4, 2014
It's a Dam Big Reservoir, But There Are Some Dam Scary Things About It.
In any case, this is a dam big reservoir. It's Hoover Dam, the first of the gigantic mega-dams constructed in the U.S. back in the 1930s during the height of the Great Depression. Thousands of hungry unemployed men came from all over the country to work in the construction, and in the dangerous conditions, more than one hundred of them died. It's 726 feet high, which at the time was the highest dam in the world (it's the 18th highest now). It holds back about 30 million acre-feet of Colorado River water, equivalent to more than two years of normal stream-flow. At least, what was normal thirty years ago.
Still, there are some dam frightening things about visiting Lake Mead and Hoover Dam. First and foremost, the dam is missing something. Water. It's missing a lot of water. It's sitting at the lowest level ever seen since the dam's floodgates closed in the 1930s. It hasn't been full as far as I know since the flooding in 1983, and prospects are not good for changing this situation in the face of ongoing drought and climate change.
Before I started researching the field seminar that I'm currently conducting, I assumed that Hoover Dam was anchored in ancient stable metamorphic and plutonic rocks. That is the kind of rock exposed in Black Canyon downstream from the dam. A close look at the rocks reveals a different composition: they are rhyolitic volcanic rocks, and according to the guides and the maps, they are Neogene in age, from around 15 million years ago.
The dam morning was almost over, so we hit the road. We were headed towards the Grand Canyon, a much better place to appreciate the Colorado River.
Jacques Descloitres, MODIS Rapid Response Team, NASA/GSFC
Though vegetation is sparse in the deserts surrounding the Nile River, it is not altogether absent, as these true- and false-color Aqua MODIS images from January 26, 2003, attest. In the true-color image, olive-green colored vegetation is visible along the lower Egyptian Nile (top right), north of Lake Nasser. What could be microscopic marine life, such as algae, is also visible in the southern waters of Lake Nasser, right along the Egypt-northern Sudan border. But the false-color image reveals patches of green vegetation far out into the Western Desert (upper left corner), the Eastern Desert (upper right corner), and the Nubian Desert (lower right). In the false-color image, water appears black, vegetation-free lands appear as a range of pale-salmons to vibrant orange-reds, and vegetated lands appear as a range of greens.
Note: Oftentimes, due to their size, browsers have a difficult time opening and displaying the images. If you experience an error when clicking on an image link, please try downloading the image directly (using a right click, save as method) to view it locally.
Although there are various efforts under way to create a working Star Trek-like medical tricorder, such a device isn't available for general use just yet. In the meantime, however, doctor's offices may soon be equipped with a device that wouldn't look at all out of place in the sick bay of the Enterprise. Developed by engineers from the University of Illinois at Urbana-Champaign, it's a hand-held scanning device that provides real-time three-dimensional images of the insides of patients' bodies.
The scanner utilizes optical coherence tomography (OCT), which has been described as “optical ultrasound,” in that it uses reflected light – as opposed to reflected sound – to image internal structures. Along with an OCT system, the device also incorporates a near-infrared light source, a video camera for obtaining images of surface features at the scan location, and a microelectromechanical systems (MEMS)-based scanner for directing the light.
Near-infrared light is used because it isn’t absorbed by biological tissue to the extent that other frequencies of light are, allowing it to penetrate deeper into the body. As the light encounters structures within that tissue, however, some of it is reflected back to the surface. Algorithms in the OCT system analyze that reflected light to create 3D images of those structures.
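The article does not say which OCT variant the device uses; in the common spectral-domain approach, a reflector at a given depth imprints a fringe on the measured spectrum whose frequency is proportional to that depth, so Fourier-transforming the interferogram recovers the depth profile. A toy, standard-library-only sketch of that idea (the signal and bin numbers are illustrative, not from the device):

```python
import cmath
import math

def dft_magnitudes(signal):
    """Naive discrete Fourier transform, returning |X[k]| for each bin."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

# Toy interferogram: a reflector at "depth" d produces a spectral
# fringe cos(2*pi*d*t/N); deeper structures produce faster fringes.
N, depth_bin = 64, 9
interferogram = [math.cos(2 * math.pi * depth_bin * t / N) for t in range(N)]

mags = dft_magnitudes(interferogram)
# The strongest positive-frequency bin recovers the reflector's depth.
peak = max(range(1, N // 2), key=lambda k: mags[k])
print(peak)  # -> 9
```

Repeating this depth reconstruction at many scan positions is what builds up the full three-dimensional image.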
The engineers hope that the scanner could be used right in doctor’s offices or clinics, to assess hard-to-see-from-the-outside maladies such as ear infections. It may also be particularly valuable when examining diabetic patients, as it could be used to monitor the health of their retinas – doing so could catch retinopathy, which can lead to blindness, before it gets too far.
Additionally, it is hoped that the scanner will allow health care practitioners in developing nations to better assess the well-being of their patients than would otherwise be possible.
The project is being led by Stephen Boppart, a physician and biomedical engineer with the university. He and his team recently received a US$5 million grant from the National Institutes of Health Bioengineering Research Partnership to further develop the technology.
The district dates to the Oberamt Waldshut, which was created when the area became part of the state of Baden at the beginning of the 19th century. After some changes it was converted to a district in 1938. In 1973 the districts of Säckingen and Hochschwarzwald were dissolved and partially added to the district of Waldshut, which then grew to its current size.
The present coat of arms was granted on 11 December 1973, superseding an older one. The bend wavy represents the river Rhine as the main river of the district. The wheel on a blue field symbolizes the district's hydro-electric power industry (there was also a wheel in the old coat of arms). The abbot's staff was taken from the arms of the district of Säckingen, to symbolize its monasteries. Green is used to signify the Black Forest.
World Has Under a Decade to Act on Climate Crisis
LONDON - The world has less than a decade to take decisive action in the battle to beat global warming or risk irreversible change that will tip the planet towards catastrophe, a leading U.S. climate scientist said on Tuesday.
And the United States, the world's biggest polluter but a major climate laggard, has a vital role to play in leading that fight, James Hansen, director of NASA's Goddard Institute for Space Studies, told Reuters on a visit to London.
"The biggest problem is that the United States is not taking an active leadership role -- quite the reverse," he said.
"We have to be on a fundamentally different path within a decade," said the man who earlier this year caused an outcry when he revealed that scientific warnings on the climate crisis were being rewritten by White House officials.
He said reliance on -- and growing use of -- fossil fuels like coal both in the United States and in boom economy China had to be stopped and reversed to avoid the planet's climate tipping into catastrophe with floods, droughts and famines.
Scientists say that unless action is taken to stop emissions of greenhouse gases like carbon dioxide from burning fossil fuels for power and transport, global temperatures will rise by between two and six degrees Celsius by the end of the century.
But the United States under President George W. Bush has argued vehemently that such actions would cripple its economy and in 2001 turned its back on the Kyoto Protocol -- the only global pact on curbing carbon emissions.
However, a report last month by former World Bank chief economist Nicholas Stern said that while actions now to curb carbon emissions would cost one percent of world economic output, delay could push the price up to 20 percent.
"We need to be at 25 percent less CO2 emissions by mid-century," Hansen said. "If we begin now it can be much less painful and have possible economic, health and developmental gains."
"We need gradual, progressive change starting now not abrupt, drastic changes in a decade or so," he added.
Hansen was in London to receive the Duke of Edinburgh Conservation Medal, awarded annually by environmental group WWF for outstanding services to the environment.
He said there were signs of movement in the United States, particularly at state level, and rumours of imminent changes from the Bush administration. But so far these were just rumours.
With Bush having only two more years in office and with his Republican Party having lost control of both U.S. houses of parliament in a voter rejection of the war in Iraq, there has been speculation Bush might make some move on the environment.
"The great danger is that they will take some minimal steps that give the appearance of doing good but in fact do very little or even some damage because they fool people into relaxing," Hansen said. "Cosmetic acts are no solution."
"On the other hand it would be good for Bush's legacy if he did take constructive action on the environment," he added.
Capacitors are some of the simplest and easiest components to use…
… until you plug them in backwards!
Basically, a capacitor is like a gas tank that can be filled with charge, or electrons. In the case of a gas tank, you apply pressure to force air into the tank; if you try to remove that pressure, the tank forces the air back out, creating its own pressure, so there is a resistance to any change of pressure.
Similarly, electrons are forced into a capacitor by applying a voltage, and if you try to change that voltage, the capacitor pushes back with its own voltage, so there is a resistance to any change of voltage. That's why capacitors are mainly used in filtering circuits, as well as to store energy.
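That resistance to a change of voltage shows up clearly in a simulation of a capacitor charging through a resistor: the capacitor's voltage creeps toward the source voltage instead of jumping. The component values below are arbitrary illustrations:

```python
def charge_capacitor(v_source, r, c, dt, steps):
    """Euler-step simulation of an RC charging circuit.
    The capacitor voltage follows dVc/dt = (Vs - Vc) / (R*C)."""
    vc, history = 0.0, []
    for _ in range(steps):
        vc += (v_source - vc) / (r * c) * dt
        history.append(vc)
    return history

# Arbitrary example values: 5 V source, 1 kOhm, 1000 uF -> tau = 1 s.
trace = charge_capacitor(v_source=5.0, r=1000.0, c=0.001, dt=0.01, steps=500)

# After one time constant (~1 s) the capacitor sits near 63% of 5 V,
# not at 5 V: the voltage cannot change instantaneously.
print(round(trace[99], 2))  # -> 3.17
```

A smaller resistance or capacitance shortens the time constant, but the voltage still always approaches the source value gradually.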
There are many different types of capacitors. Many of them, like tantalum and electrolytic capacitors, have polarity, and many, like ceramic capacitors, don't.
Polarity means that one must make sure the capacitor is connected correctly, so that the voltage on the positive pin of the capacitor is always higher than the voltage on the negative pin. Or else, you saw what happened in the video!
The capacitor in the video was an electrolytic capacitor. This type tends to explode, because when used wrongly the capacitor heats up, the material inside the case vaporises, and the pressure builds inside until it is big enough to burst the case. On top of the case of newer, usually larger capacitors, manufacturers often score grooves, so that if pressure builds inside for any reason, the grooves crack and let the pressure out to avoid an explosion.
Tantalum capacitors on the other hand burst into some wild flames. So in either case, be careful!
Southern AERA Quarterly Activity Bulletin of The South Carolina Department of Natural Resources-Southeast Regional Climate Center
Volume 6, No. 2
Information about Hurricanes
Hurricanes are extremely powerful storms that threaten the Southeast coast of the United States. Hurricane season runs June 1 - November 30. During this time each year, these powerful storms, which originate off the coast of Africa, in the Gulf of Mexico, and in the Caribbean, follow warm ocean currents and sometimes make landfall. The Gulf Stream, a warm current that flows northward parallel to the Southeast coast of the US, often leads the storms onto the coast. The states in the Southern US that border the Atlantic Ocean and Gulf of Mexico are most often affected by these hurricanes. These states include Texas, Louisiana, Mississippi, Alabama, Florida, Georgia, South Carolina, North Carolina, and Virginia. Since coastal regions are densely populated, especially during summer vacation when hurricanes are the greatest threat, it is very important that the paths of these powerful storms are tracked and forecast. This ensures that when landfall becomes a possibility, people know to evacuate the coast and travel inland to safety. The tracking and forecasting of these storms is an extremely important job performed by meteorologists.
Hurricane Tracking Tools
There are many tools used by meteorologists to track hurricanes. Radar, Satellite Imagery, reports from weather buoys, and observations taken by airplane are all useful tools used in determining the size, location, and magnitude of hurricanes.
Radar stands for Radio Detection And Ranging. It works by sweeping a radio beam in a circular path. Part of the signal bounces back to the radar when it hits raindrops. By knowing which direction the beam was pointing and how long the echo took to return, the location of the rain can be determined.
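The "ranging" half of that idea can be sketched in a few lines. This is an illustrative calculation, not part of the bulletin; the function name is invented for the example.

```python
# Minimal sketch of radar ranging: the round-trip time of a radio pulse
# gives the distance to the raindrops that reflected it.

SPEED_OF_LIGHT_M_PER_S = 299_792_458  # radio waves travel at the speed of light

def echo_range_km(round_trip_seconds: float) -> float:
    """Distance to the target in km, given the round-trip echo time.

    The pulse travels out and back, so the one-way distance is half
    the total path length: d = c * t / 2.
    """
    return SPEED_OF_LIGHT_M_PER_S * round_trip_seconds / 2 / 1000

# An echo that returns after 1 millisecond came from rain about 150 km away.
print(round(echo_range_km(0.001)))  # -> 150
```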
Satellites are man-made devices that are launched into space and used to monitor the earth. Weather satellites are used to take pictures of the atmosphere from above. These satellite images are useful for determining the location and size of storms.
Weather observations are taken from many different locations in the ocean by buoys. These floats are placed in the ocean to make measurements of atmospheric pressure, air and sea temperature, wind speed, and wind direction as they drift along in ocean currents. The measurements listed on buoy reports are important in assessing the strength and location of hurricanes.
Airplane observations are also used when gathering information about hurricanes. The airplanes actually fly directly into the eye of hurricanes. Once inside the hurricane, weather reconnaissance aircraft report several types of observations such as the exact location of the hurricane, and sea level pressure.
Using Map Skills to Track Hurricanes
In addition to the tools used above, hurricanes can also be tracked by simply using a map and map reading skills. The position of the hurricane is made available to the public and can be found in newspapers, on the Internet, and on television. With this information available, the hurricane track can be plotted on a map by the latitude and longitude coordinates.
Latitude and Longitude
Latitude and Longitude lines are imaginary lines on the earth that are used to measure locations on the globe. These lines are measured in degrees, minutes, and seconds. Lines of Latitude, also called parallels, run horizontally on the globe, north and south of the Equator. The Equator is latitude zero, and the North and South Poles lie at 90 degrees North and 90 degrees South. Lines of Longitude, also called meridians, run vertically on the globe, east and west of the Prime Meridian. The Prime Meridian is longitude zero, and the longitude of any other location is referenced by whether it is east or west of the Prime Meridian. Often, rather than spelling out North, South, East and West, coordinates for locations north of the equator and east of the prime meridian are shown as positive, and coordinates for locations south of the equator and west of the prime meridian are shown as negative. When coordinates are given as a pair, the first number is the latitude, and the second number is the longitude. For example, 14.6, -46.2 is 14.6 degrees north latitude and 46.2 degrees west longitude.
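The signed-coordinate convention described above can be expressed in a few lines of code. This example is illustrative only (the function name is invented here), but it follows the rule from the text: north and east are positive, south and west are negative.

```python
# Convert a signed (latitude, longitude) pair into the spelled-out form.

def describe(lat: float, lon: float) -> str:
    """Turn signed coordinates into a human-readable string."""
    ns = "N" if lat >= 0 else "S"  # positive latitude means north of the Equator
    ew = "E" if lon >= 0 else "W"  # positive longitude means east of the Prime Meridian
    return f"{abs(lat)} {ns}, {abs(lon)} {ew}"

# The example from the text: 14.6, -46.2
print(describe(14.6, -46.2))  # -> 14.6 N, 46.2 W
```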
Below is a table of coordinates from Hurricane Floyd (September 7-17, 1999) that made landfall in North Carolina. Track the coordinates on the hurricane tracking chart provided by NOAA.
Permission is granted for the reproduction of materials contained in this bulletin.
Southeast Regional Climate Center
S.C. Department of Natural Resources
1201 Main Street, Suite 1100
Columbia, South Carolina 29201
The South Carolina Department of Natural Resources prohibits discrimination on the basis of race, color, sex, national origin, disability, religion, or age. Direct all inquiries to the Office of Human Resources, P.O. Box 167, Columbia, SC 29202.
|
<urn:uuid:6cdfe844-be16-4b1e-bb11-a0a83fa02738>
|
CC-MAIN-2016-26
|
http://www.sercc.com/education_files/aer_summer_99.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396872.10/warc/CC-MAIN-20160624154956-00081-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.917249 | 998 | 3.96875 | 4 |
Bank-imposed Greek economic collapse and austerity policies have resulted in social and political as well as economic impacts that may foreshadow what the rest of the so-called developed world can expect.
In August 2012, the unemployment rate in Greece was 23.1 percent, with over 1 million people out of work. Fifty-five percent of Greek youth aged 15-24 are out of work. Nikolaos Tsangos, an unemployed student in North Heraklion, says, "Every day is harder and the things the troika are doing are making it worse. People don't have money to buy food or anything, so businesses are out of jobs every day. In my family we've decreased the amount of our weekly supermarket shopping by about 50 percent."
"The first visual sign of the crisis is traffic. There are no cars in the streets. Big avenues and streets in Athens are empty, when two years ago traffic jams were a huge problem," relates Katia, a self-employed woman from Athens (who prefers her real name not be used).
Niki Kerameus, a lawyer in Athens, explains that as one strolls through Athens, "On the most commercial and expensive streets, one-fourth of the stores are closed down." You can sense the crisis on a social level as well; as Kerameus indicates, "People don't go out anymore." Kerameus adds: "Even in the richest neighborhoods, you see people going through garbage. Begging is very common, and it didn't used to be that way. A large section of the middle class with steady jobs and houses are now lining up at soup kitchens. Often you see people dressed for work in suits in line to receive food."
In late 2009, the people of Greece learned that their government had "falsified budget figures, concealing a swollen debt that was growing rapidly in the wake of the global economic meltdown." Its foreign lenders, the "troika," consisting of the European Commission, European Central Bank (ECB) and the International Monetary Fund (IMF), have supplied Greece with loans, ostensibly to help the country weather the crisis. However, the funds mostly cover only the interest on the country's massive debt. Meanwhile, the troika has provided the loans on condition that the country enforce harsh austerity measures. Greece is now on the verge of a humanitarian crisis and its people have been re-discovering community-oriented values and taking their frustrations to the streets.
The economic crisis in Greece is encouraging new values among the Greek people. Kerameus and her friends have started an NGO called Desmos, which helps match donated goods with the people who need them most. Kerameus and her friends had studied and worked in the United States, where they became exposed to volunteering. They felt they had to help people weather the crisis in Greece. "The culture of volunteering is not so much in the generation of young people in their 20s to 40s in Greece right now," Kerameus explains. She was particularly impressed by an NGO in New York City, called City Harvest, which organizes trucks to pick up leftover food from restaurants and deliver it to shelters.
In addition to the value of volunteerism, Greeks are experimenting with taking control of their local economies through grassroots activism. Consider the "potato movement," for example. "The seminal event of the movement was a free distribution of more than ten tons of spuds in the center of Greece's northern metropolis, Thessaloniki," reported Al Jazeera's John Psaropoulos. The farmers involved in the distribution were "protesting against imports of Egyptian potatoes - while they had barns full of the Greek product - after a meeting between the agriculture minister and potato importers days earlier failed to yield any concessions."
Greek protests have become legendary. There is an entire Wikipedia page dedicated to the protests, which began on May 5 in 2010, against plans to cut public spending and raise taxes. Some of the protests have turned violent, with accusations of police brutality being reported by several international media and other organizations. The Greeks I spoke with mainly supported the protests, though they expressed concerns about violence.
Katia observes, "I never participated in protests because I always know the outcome. After a while, some people with masks or helmets come along with bats and stones, and general beating and burning of the Athens center will begin."
"This is something that happens since the mid 80s," said Katia. "So, most people, due to this, don't participate. In the last two years, though, there have been proof that these groups work with the police. Plenty of pictures with them and police task forces came on the Internet. We always had in the back of our minds that they had something to do with the government, because they always showed up at the most convenient moment for the government."
Tsangos, the out-of-work student, relates: "I went into the movement of aganaktismenoi (frustrated people) in Athens, though the movement did not fully express my ideology. I think politicians don't give a shit about if the people is hungry or protest in millions."
Of the protesters, he said: "I feel they are doing the right thing, but I don't think it's working. Because, as I say, politicians just don't care about the people. Especially the two big parties."
The phrase "two big parties" should resonate with Americans. Indeed, the situation in Greece parallels and intersects the economic and political situation in America.
At an Occupy San Francisco rally for solidarity with Greece on February 17, 2012, "a speaker from Greece, Maria, stood with a megaphone describing the economic tragedy that is unfolding for the Greek people," Beth Seligman reported on the Occupy SF web site.
"Children are fainting in schools due to lack of food," said Maria. "This austerity package sets up the country for privatization where the people will have to sell off their water, their sewage, their telecommunications and their natural resources which includes coal and oil. It will lead to the country's resources being pillaged."
John Perkins, author of "Confessions of an Economic Hit Man," explains on The Huffington Post:
Greece has been struck by economic hit men.... The Greek people were not the ones who agreed to accept these debts and for the most part they did not benefit from them; yet they will be burdened for years to come because they were hoodwinked by the international banking community and their own corrupt leaders.... In my books, I write about how world economics and politics today are controlled by a very few people - the corporatocracy. This is clearly demonstrated by the fact that whenever "debt restructuring" or "debt forgiveness" deals are struck they include privatizing parts of the economy that were previously considered public. Utilities, schools, prisons, even significant parts of the military are sold to multinational corps.
Consider the fact that Goldman Sachs helped the Greek government, according to a New York Times report, "quietly borrow billions of dollars" in a deal which was "hidden from public view because it was treated as a currency trade rather than a loan" and "helped Athens to meet Europe's deficit rules while continuing to spend beyond its means." And "in dozens of deals across the Continent, banks provided cash up-front in return for government payments in the future, with those liabilities then left off the books. Greece, for example, traded away the rights to airport fees and lottery proceeds in years to come."
In the United States, the Federal Reserve has provided secret bailout funds to banks. As a result of a historic first audit of the privately owned Federal Reserve, conducted thanks to Sen. Bernie Sanders' (I-Vermont) amendment to the Wall Street reform law, "We now know that the Federal Reserve provided more than $16 trillion in total financial assistance to some of the largest financial institutions and corporations in the United States and throughout the world," said Sanders. "This is a clear case of socialism for the rich and rugged, you're-on-your-own individualism for everyone else."
In America, austerity is being applied through cuts to public services, such as libraries, schools and post offices; state and city employee layoffs; and increased taxes. According to Ben Polak and Peter K. Schott, economists at Yale University, the United States has seen "unprecedented austerity at the level of state and local governments, and this austerity has slowed the job recovery."
Charalambos Petrakis, a self-employed English tutor, says, "Pensions have been reduced dramatically, most drugstores are closed and small businesses are closing."
"Pharmaceutical companies are not delivering drugs, and many people are dying from cancer and diabetes. Tourism has decreased and young people are leaving. People can stay alive, but we don't have any prospects," said Petrakis. "Austerity aggravates the situation. Without economic development, we can't pay back the banks or establish an economic foundation. The same thing is happening in Portugal, Spain, Cyprus, and Italy."
Pete Petrou, a self-employed English teacher, warns: "Sooner or later, other countries will face the results of a political system who produced consumers for its own good and the final result will be a general collapse, a humanitarian crisis. I hope this will not lead the world to war like it has happened in the past in similar circumstances."
|
<urn:uuid:a6a78acd-8739-4d09-9e47-65cdde95bdd3>
|
CC-MAIN-2016-26
|
http://www.truth-out.org/news/item/11696-is-the-greek-crisis-a-harbinger-of-our-future
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396945.81/warc/CC-MAIN-20160624154956-00102-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.975929 | 1,942 | 2.6875 | 3 |
Duality Apparently Confirmed In Jefferson Laboratory Experiments
Isaac Newton, inventor of calculus and creator of classical physics, is thought by some to be the most intelligent person to have ever lived. When Albert Einstein introduced his theories of General and Special Relativity, Newton's stature was not diminished, but increased. One genius, Newton, made the work of another, Einstein, possible. Einstein's theory is believable because the basic Newtonian mechanics we observe when tossing balls and sending rockets to the moon is preserved within the more far-reaching and abstract framework of relativity.
In the same way, a completed experiment at the Department of Energy's Thomas Jefferson National Accelerator Facility (Jefferson Lab) may eventually point to an intersection between established nuclear physics theory and quantum chromodynamics (QCD), the still-developing theory which describes the ways basic particles called quarks compose, and interact with, ordinary matter.
The Jefferson Lab results represent the first experimental test in many years of a phenomenon known as quark-hadron duality. Initial experiments have apparently validated the duality concept. To extend and confirm these results, another Jefferson Lab experiment is set to begin this July and three others are in the planning stages.
"We have QCD and we have nuclear physics theory with no firmly established relationship to each other, yet they both work and we know that quarks are the fundamental building blocks of nature," said Cynthia Keppel, a Jefferson Lab staff scientist and assistant professor of physics at Hampton University. "What we want to do is understand nuclear phenomena at the quark-gluon level. We don't know how to do that right now, so that's why we're running these experiments."
Keppel will present the quark-hadron duality results during the American Physical Society's (APS) Centennial Meeting in Atlanta, Georgia. Her presentation will be at 3 p.m. on Tuesday, March 23, as part of an APS workshop, "Structure of the Nucleon".
|
<urn:uuid:d9ff7aea-d024-4b56-87a6-4f6e273e90fe>
|
CC-MAIN-2016-26
|
https://www.jlab.org/news/releases/duality-apparently-confirmed-jefferson-laboratory-experiments
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398516.82/warc/CC-MAIN-20160624154958-00139-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.947703 | 408 | 3.234375 | 3 |
UNITED NATIONS, May 05 (IPS) - The world is slowly, but painfully, moving towards the formal recognition of the existence of a third gender besides male and female.
"The rights of transgender people - to their own identity and to access to health, education, work, housing and other rights - are being increasingly widely recognised," Charles Radcliffe, chief of the Global Issues Section in the Office of the U.N. High Commissioner for Human Rights, told IPS.
In South Asia, he noted, there has long been a tradition of a third gender. Pakistan, Bangladesh and Nepal have all moved in the direction of granting recognition to trans or third gender people.
But other regions are now following suit, he added, pointing out that Argentina last year passed a law on gender identity that is widely seen as a model for the rest of the world.
"European countries, many of which still required trans people to be sterilised before they can obtain identity papers that reflect their gender, are one by one reviewing their policies," said Radcliffe.
Last month, India's Supreme Court legally upheld the rights of transgender people across the country.
U.N. spokesperson Stephane Dujarric said India's decision officially recognises a third gender in law and confirms that discrimination on grounds of gender identity is impermissible under the Indian Constitution.
"It should pave the way for reforms that make it easier for transgender persons in India to obtain legal recognition of their gender identity, as well as access to employment and public services," he added.
According to unofficial figures, India is estimated to have about two million transgender people, out of a total population of over 1.3 billion.
Grace Poore, regional programme coordinator for Asia and Pacific Islands at the International Gay and Lesbian Human Rights Commission (IGLHRC), told IPS last month's ruling in India is "phenomenal."
"Not only did the justices challenge the oppressiveness of forcing people to conform to the gender binary and the discrimination that accompanies that coerced conformity, but they state that not recognising gender identity violates the Indian Constitution," she noted.
Poore said the violation denies transgender people basic human rights protected under the constitution: right to life, right to liberty and dignity, right to privacy, right to freedom of expression, right to education, right against violence and exploitation, and right to non-discrimination.
"All these rights, according to the justices, can be achieved if the beginning is made with recognition that TG is a third gender," Poore added.
"What is also incredibly significant about this court's decision is that it legalises third gender recognition for transwomen and transmen, and does not require sex reassignment surgery for legal recognition as third gender," she noted.
The judges in the trans rights ruling go so far as to say that discrimination on the grounds of sexual orientation also amounts to discrimination.
"What's left now is for the Supreme Court to decriminalise homosexuality and rule that Section 377 of India's Penal Code is unconstitutional," Poore said.
In 2012, according to IGLHRC, Argentina adopted one of the most progressive gender identity recognition laws to date by removing any prerequisites to changing one's gender, most notably eliminating the need for any medical diagnosis or surgery.
The Netherlands, Denmark and Sweden have also recently adopted or updated legislation to enable individuals to change their gender identity without the need for undergoing sex reassignment surgery.
In Chile, a progressive gender identity law is currently being considered by lawmakers, according to IGLHRC.
Boris Dittrich, advocacy director of the Lesbian, Gay, Bisexual and Transgender (LGBT) Rights Programme at Human Rights Watch, described the Supreme Court ruling as "historic."
Traditionally, third gender people played a significant social role in Indian society, he said.
"With this judgment, the Supreme Court restored their dignity, while doing away with the rule which was introduced by British colonial law," Dittrich told IPS.
The court is very clear about it: the plight of transgender people is being recognised as a human rights topic.
Transgender people have been unfairly treated under section 377 of the Indian Penal Code, another British colonial legacy that should be revoked, he added.
Dittrich also singled out Argentina as having a positive legal track record on transgender issues.
"Their gender recognition law is an example to the rest of the world," he added.
Jose Luis-Diaz, head of the Amnesty International U.N. Office, told IPS the court ruling could improve the lives of millions of transgender people in India - people who have suffered oppression for years.
The ruling reaffirms constitutional values of inclusion and equality.
"However, as long as Section 377 of the Indian Penal Code stays on the books, discrimination and violence based on sexual orientation and gender identity will remain a threat," he added.
"As you know, Section 377, upheld by the same Supreme Court in a ruling last December, criminalises consensual same-sex conduct between adults. This law ought to be repealed."
Last week, the United Nations launched in Mumbai, India, its first ever Bollywood music video, created especially for the U.N. Free & Equal anti-homophobia campaign.
Meanwhile, by a happy coincidence, a musical comedy about a transgender rocker, "Hedwig and the Angry Inch" was nominated last week for eight Tony Awards, one of the most prestigious awards on the Broadway stage in New York City.
© Inter Press Service (2014) — All Rights Reserved. Original source: Inter Press Service
|
<urn:uuid:2ad1960c-2b3d-42b3-868c-7c35ffcef1f6>
|
CC-MAIN-2016-26
|
http://www.globalissues.org/news/2014/05/05/18628
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391766.5/warc/CC-MAIN-20160624154951-00007-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.95124 | 1,353 | 2.6875 | 3 |
January 5, 2012
New Year Doesn’t Mean End Of Flu Season
Don't let the new year trick you into thinking that flu season is over. According to an expert at Baylor College of Medicine, flu season is just starting and those who have not yet been vaccinated should still do so.
"There is still time to get vaccinated before flu season peaks in February," said Dr. Paul Glezen, professor of molecular virology and microbiology at BCM.
Glezen recommends that healthy individuals between the ages of 2 and 49 get the influenza vaccine in the form of the nasal spray. The nasal spray contains live, attenuated virus and takes effect more quickly; the shot needs about two weeks before the vaccine becomes effective, he said.
For those who start to feel flu symptoms before they have the opportunity to get vaccinated or for those who had the shot and are exposed to the flu within the first two weeks, Glezen recommends taking the antiviral medication Tamiflu immediately. This will reduce complications from the flu and the risk of spreading the virus to others.
This year's flu vaccine covers influenza B, H1N1 and H3N2. Main flu activity so far this year has been the H3N2 virus, Glezen said.
Holiday travel helps spread flu
Areas that have been hit hardest so far by flu include the southeast and mountain states such as Colorado, Glezen said. But because so many people traveled over the holidays, flu activity could start to spread to other regions.
"The seasonal flu usually picks up a few weeks after school resumes from the winter holidays, so, in fact, this is an important time to get vaccinated," he emphasized.
Flu season usually ends by the end of March.
|
<urn:uuid:72ad45ba-2bda-4256-943f-1c0512f447d9>
|
CC-MAIN-2016-26
|
http://www.redorbit.com/news/health/1112450090/new-year-doesnt-mean-end-of-flu-season/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391766.5/warc/CC-MAIN-20160624154951-00095-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.96818 | 367 | 3.125 | 3 |
September 17, 2008 > High-resolution satellite launched in California
High-resolution satellite launched in California
Submitted By AP Wire Service
VANDENBERG AIR FORCE BASE, California (AP), Sep 06 - A super-sharp Earth-imaging satellite has been launched into orbit from Vandenberg Air Force Base on the Central California coast.
A Delta 2 rocket carrying the GeoEye-1 satellite lifted off at 11:50 a.m. Saturday. Video on the GeoEye Web site showed the satellite separating from the rocket moments later on its way to an eventual polar orbit.
Arizona-based General Dynamics Advanced Information Systems, the satellite makers, say GeoEye-1 cost more than $500 million to build and launch.
The satellite will orbit 423 miles (680 kilometers) up and circle the Earth more than a dozen times a day. In a single day, it can collect color images of an area the size of New Mexico, or a black-and-white image the size of Texas.
In black-and-white mode, the satellite can distinguish objects on the Earth's surface as small as 16 inches (40 centimeters), GeoEye Inc. said.
The company says the satellite's imaging services will be sold for uses that could range from environmental mapping to agriculture and defense.
|
<urn:uuid:f7da1615-f761-47f8-9b7f-da8da8e9089e>
|
CC-MAIN-2016-26
|
http://www.tricityvoice.com/articlefiledisplay.php?issue=2008-09-17&file=Satellite.txt
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397865.91/warc/CC-MAIN-20160624154957-00128-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.893773 | 270 | 2.6875 | 3 |
The chart below shows the relations among some of the languages in the Indo-European family. Though you wouldn't think it to look at the tangle of lines and arrows, the chart is very much simplified: many languages and even whole language families are left out. Use it, therefore, with caution. The coverage is most thorough, but still far from complete, in the Germanic branch, which includes English.
The dotted line from French to Middle English suggests not direct descent, but the influx of French vocabulary in the centuries after the Norman Invasion.
Some caveats. In the interest of making this readable, I've left out dozens of languages. I've even omitted the entire Anatolian, Albanian, and Tocharian families; I've included no languages from the Baltic branch or the Continental Celtic branch; I've grossly oversimplified the Indo-Iranian family; and so on. The historical phases of some languages (Old Swedish, Middle Swedish, Modern Swedish; Vedic Sanskrit, Middle Indic) have been left out. I've made no attempt to distinguish living languages from dead ones. I'm not trying to make the definitive statement of the relationships among all the Indo-European languages, only to give my students some idea of the origins of the English language, and its relations to other familiar languages along with a few less familiar ones.
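For readers who like to tinker, the tree idea behind such a chart can be sketched as a nested structure. The grouping below is a hypothetical, heavily simplified textbook-style division of the Germanic branch, not a reproduction of the chart itself.

```python
# A simplified family tree as nested Python containers: dicts for branches,
# lists for the languages at the leaves. Purely illustrative grouping.
germanic = {
    "North Germanic": ["Icelandic", "Norwegian", "Danish", "Swedish"],
    "West Germanic": {
        "Anglo-Frisian": ["Old English > Middle English > Modern English", "Frisian"],
        "Continental": ["Dutch", "German", "Yiddish"],
    },
}

def leaves(node):
    """Walk the tree and yield every leaf language."""
    if isinstance(node, dict):
        for child in node.values():
            yield from leaves(child)
    else:
        yield from node

print(sum(1 for _ in leaves(germanic)))  # -> 9
```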
A PDF version of this image is also available; it looks better when printed.
|
<urn:uuid:d11c8697-38f3-4493-99f3-4633608ec71c>
|
CC-MAIN-2016-26
|
http://andromeda.rutgers.edu/%7Ejlynch/language.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391634.7/warc/CC-MAIN-20160624154951-00058-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.959481 | 288 | 3.296875 | 3 |
What all the snow and ice will mean for Great Lakes water levels
It might seem a little counterintuitive, but right now, a bunch of scientists are thinking about how high the water at Great Lakes beaches will be this summer.
Early last year, the Lake Michigan-Lake Huron system hit record low water levels.
It made life tougher for the shipping industry, and it’s hard on people who run Great Lakes ports.
Russell Dzuba is the harbor master in Leland.
“For us, it’s shallow. When we went to dredge this year we had to go a foot deeper and the world was a foot shorter, if you will,” he says.
When we finally have a spring thaw...
All the snow we’ve gotten this winter has a lot of people hoping lake levels will rise when that snow melts.
To find out if that’ll happen, I met up with Drew Gronewold. He’s a hydrologist at NOAA’s Great Lakes Environmental Research Laboratory.
“So, what we’re observing right now is that while there’s a lot of snow, there’s not really an anomalous amount of snow relative to the historical record. For example, it’s not the most snow we’ve ever had,” he says.
But he says they can estimate how much water is captured in the snow that’s on the ground right now. He says around Lake Michigan and Lake Huron there’s roughly 3 to 5 inches of water in the snow that will eventually melt and run into the lakes.
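The "3 to 5 inches of water" figure is an estimate of snow water equivalent (SWE). A minimal sketch of that calculation, using an assumed 10% snow-to-water density ratio (a common rule of thumb for snowpack, not a NOAA value):

```python
# Sketch of a snow water equivalent (SWE) estimate: how much liquid water
# a snowpack of a given depth holds. The density ratio is an assumption.

def snow_water_equivalent_in(snow_depth_in: float, density_ratio: float = 0.10) -> float:
    """Inches of liquid water stored in a snowpack of the given depth.

    density_ratio is snow density relative to liquid water
    (0.10 means 10 inches of snow melts to about 1 inch of water).
    """
    return snow_depth_in * density_ratio

# 40 inches of snow at 10% density holds about 4 inches of water,
# in the middle of the 3-5 inch range quoted above.
print(snow_water_equivalent_in(40))  # -> 4.0
```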
So, what does that mean for lake levels? Gronewold says they’re expecting an average or maybe slightly above average rise in water levels this spring.
"Some of the models suggest that a rise of 20 inches isn't out of the question. The models also suggest, though, that a rise of 6 inches might not be out of the question, due to all the combinations of environmental factors that we simply can't predict, including temperature and precipitation, three, four months from now."
Gronewold says they work in partnership with the U.S. Army Corps of Engineers. The Army Corps puts out an official water level forecast.
Ice, ice, baby
So what about all the ice on the Great Lakes? As Mark Brush explained in a post yesterday, it's been a frosty year for the Lakes: ice cover reached a maximum of 88% this year.
Drew Gronewold says that ice helps keep the lakes cool, and that'll cut down on evaporation later in the year.
"When we have a lot of ice, like we do right now, it's going to take a lot of solar energy to melt that ice and to begin warming the water up again. And as a result, it's very likely that water temperatures next fall are going to be cooler than they have been in the past several years. And if we have cooler water next fall, that'll mean less evaporation again, and water levels will not decline as much as they typically have been in the fall. That's one of the more profound effects of ice cover right now, is that it'll really help the lakes maintain a temperature throughout the season."
It'll also make for some chilly summer swimming. Wetsuits, anyone?
|
<urn:uuid:ed082f0a-4d34-4dd2-a931-921fe4112a3f>
|
CC-MAIN-2016-26
|
http://michiganradio.org/post/what-all-snow-and-ice-will-mean-great-lakes-water-levels
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783404405.88/warc/CC-MAIN-20160624155004-00188-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.966102 | 706 | 3.03125 | 3 |
Pain in the eye may be described as a burning, throbbing, aching, or stabbing sensation in or around the eye. It may also feel like you have a foreign object in your eye.
This article discusses eye pain not caused by injury or surgery.
Ophthalmalgia; Pain - eye
Pain in the eye can be an important symptom of a health problem. Make sure you tell your doctor if you have eye pain that does not go away.
Tired eyes or some eye discomfort (eyestrain) is usually a minor problem and it will often go away with rest. These problems may be caused by the wrong eyeglass or contact lens prescription. Sometimes they are due to a problem with the eye muscles.
Many things can cause pain in or around the eye. If the pain is severe, does not go away, or causes vision loss, seek medical attention immediately.
Some things that can cause eye pain are:
- Contact lens problems
- Dry eye
- Acute glaucoma
- Sinus problems
Resting your eyes can often relieve discomfort due to eye strain.
If you wear contacts, try using glasses for a few days to see if the pain goes away.
When to Contact a Medical Professional
Contact your health care provider if:
- The pain is severe (call immediately), or it continues for more than 2 days.
- You have decreased vision along with the eye pain.
- You have chronic diseases like arthritis or autoimmune problems.
- You have pain, redness, swelling, discharge, or pressure in the eyes.
What to Expect at Your Office Visit
Your health care provider will check your vision, eye movements, and the back of your eye. If there is a major concern, you should see an ophthalmologist. This is a doctor who specializes in eye problems.
To help find the source of the problem, your health care provider may ask:
- Do you have pain in both eyes?
- Is the pain in the eye or around the eye?
- Does it feel like something is in your eye now?
- Does your eye burn or throb?
- Did the pain begin suddenly?
- Is the pain worse when you move your eyes?
- Are you light sensitive?
- What other symptoms do you have?
The following eye tests may be done:
- Last reviewed on 11/12/2013
- Franklin W. Lusby, MD, Ophthalmologist, Lusby Vision Institute, La Jolla, California. Also reviewed by David Zieve, MD, MHA, Bethanne Black, and the A.D.A.M. Editorial team.
When a woman is diagnosed with breast cancer one of the decisions she might have to make is whether to have the tumor removed through a lumpectomy or a mastectomy.
A lumpectomy is a less invasive procedure, but it may not be an option for everyone. During a lumpectomy doctors are able to remove the tumor while conserving the breast. According to BreastCancer.org, a lumpectomy – followed by radiation – is likely to be just as effective as a mastectomy for people who have a tumor under 4 centimeters and for those who only have cancer in one site in their body.
According to BreastCancer.org, some things people should keep in mind when deciding if a lumpectomy is right for them are: if they want to keep their breasts; if they want their breasts to match as close as possible in size (there are reconstruction techniques available if there is a significant change in breast size after the lumpectomy); and if they will be more anxious about cancer returning if they don’t remove the entire breast.
Some benefits to a lumpectomy are that the surgery is less invasive than a mastectomy, the breast will for the most part preserve much of its appearance, and the recovery time is shorter.
If a person opts for a lumpectomy, they will need to undergo radiation, according to BreastCancer.org. The radiation might impact the timeline of reconstruction, if needed. Another factor to consider is that, according to BreastCancer.org, once a person undergoes radiation, they cannot safely have radiation to that same location in the body again. So if the breast cancer were to return, a person would not be able to have radiation again. If that happens, a doctor would usually then recommend a mastectomy, according to BreastCancer.org.
A mastectomy is a more invasive procedure than a lumpectomy, as it involves removing the entire breast.
Some people may prefer a mastectomy because it makes them less anxious that the cancer will return.
A mastectomy can be more costly than a lumpectomy, and it could require follow-up surgeries to reconstruct the breast if the patient chooses to do so, according to BreastCancer.org.
“You don’t see the rod, that is all,” said Juan Lepe.
But there had eventually to be colonies, and I knew that the Admiral was revolving in his head the leaving in this new world certain of our men, seed corn as it were, organs also to gather knowledge against his speedy return with power of ships and men. For surely Spain would be grateful,—surely, surely! But he was not ready yet to set sail for Spain. He meant to discover more, discover further, come if by any means he could to the actual wealth of great, main India; come perhaps to Zaiton, where are more merchants than in all the rest of the world, and a hundred master ships laden with pepper enter every year; or to Quinsai of the marble bridges. No, he was not ready to turn prow to Spain, and he was not likely to bleed himself of men, now or for many days to come. All these who would lie in hammocks ashore must wait awhile, and even when they made their colony, that is not the way that colonies live and grow.
Beltran said, “Some of you would like to do a little good, and some are for a sow’s life!”
It was Christmas Eve, and we had our vespers, and we thought of the day at home in Castile and in Italy. Dusk drew down. Behind us was the deep, secure water of St. Thomas, his harbor. The Admiral had us sound and the lead showed no great depth, whereupon we stood a little out to avoid shoal or bar.
For some nights the Admiral had been wakeful, suffering, as Juan Lepe knew, with that gout which at times troubled him like a very demon. But this night he slept. Juan de la Cosa set the watch. The helmsman was Sancho Ruiz than whom none was better, save only that he would take a risk when he pleased. All others slept. The day had been long, so warm, still and idle, with the wooded shore stealing so slowly by.
Early in the night Sancho Ruiz was taken with a great cramp and a swimming of the head. He called to one of the watch to come take the helm for a little, but none answered; called again and a ship boy sleeping near, uncurled himself, stretched, and came to hand. “It’s all safe, and the Admiral sleeping and the master sleeping and the watch also!” said the boy. Pedro Acevedo it was, a well-enough meaning young wretch.
Sancho Ruiz put helm in his hand. “Keep her so, while I lie down here for a little. My head is moving faster than the Santa Maria!”
He lay down, and the swimming made him close his eyes, and closed eyes and the disappearance of his pain, and pleasant resting on deck caused him to sleep. Pedro Acevedo held the wheel and looked at the moon. Then the wind chose to change, blowing still very lightly but bearing us now toward shore, and Pedro never noticing this grow larger. He was looking at the moon, he afterwards said with tears, and thinking of Christ born in Bethlehem.
The shore came nearer and nearer. Sancho Ruiz slept. Pedro now heard a sound that he knew well enough. Coming back to here and now, he looked and saw breakers upon a long sand bar. The making tide was at half, and that and the changed wind carried us toward the lines of foam. The boy cried, “Steersman! Steersman!” Ruiz sat up, holding his head in his hands. “Such a roaring in my ears!” But “Breakers! Breakers!” cried the boy. “Take the helm!”
I love to search out new preschool ideas and activities on other blogs. New ideas keep me motivated and inspired. However, once I find a cute idea, I visualize how I will do the activity with my students.
Lose the recipe
Oftentimes teachers look at an idea they find online or in books as they would a recipe for baking a cake, following it one step at a time. Moving through step one, step two, step three, and so forth, teachers feel they should not deviate from the recipe. But you need to understand that the outcome and process of the project should be based on the development of the children in your classroom.
Visualize what the children will actually do
As you plan to use an idea, ask yourself: “What will my children actually do?” If you are doing all of the cutting, tearing, arranging, folding, gluing, and so forth then what part of the process is left over for the children to actually do? Remember – it is in the doing that children begin to develop their skills, abilities, and confidence!
When I plan an activity I actually visualize my students taking part in the process. If there just doesn’t seem like there will be enough for them to do then I change the idea up to make sure that the activity is something they can do all by themselves.
Over the past few weeks I have found a ton of amazing ideas for flowers. I just love them all so at my first opportunity, I brought some of those ideas with me and presented them to a group of young children. However, I modified those ideas to fit what I felt would be best for the ages and stages of this particular group of young children.
Since I would only have one opportunity with this group of children, I decided to let them use a variety of materials to make their flowers. I first had the children brainstorm with me what ways we could use the materials to make flowers. We decided to try the following….
Colorful paint, colorful paper towel squares with seeds, straws, tape, yarn, and one child wanted me to use letters to spell the word "HA".
Then the children were given time to make their own flowers.
The children started by snipping the edges of green paper to make some grass.
The children added some glue – all by themselves!
Then the children flipped the grass over and glued it to their paper.
Most of the children did the grass exactly the same way I did even though they were told they could put the grass anywhere they wanted.
Then stems were cut out by the children and then they glued the stems to their paper.
Some of the children preferred long stems and others wanted short. One little girl only wanted one really tall stem.
This little girl decided she only wanted to use paint to create her flowers. Oh, and her white flower is actually just a glob of glue since we didn’t have any white paint!
Product and Process
In the end, we had a beautiful set of flowers to display in the room but we also enjoyed the process. The children were able to make decisions, use a variety of materials, and do the work without my help. I did provide guidance at first so the children could visualize the process but once the process was started, it was time to encourage their own creativity and skills.
If I were to be teaching these children on a regular basis, I would probably not have put out every type of material and instead had them try a different type of flower each day. I say this to let you know that I took a combination of ideas and adjusted them (or in this case – combined them) to make them work for my situation.
Energy Conservation and Greenhouse Gas Emissions Reduction Projects
In April 1998, the City of Santa Cruz joined the Cities for Climate Protection (CCP). The CCP is comprised of 500 local governments around the world making an effort to reduce the emission of greenhouse gases within their communities and making a concerted effort to stop global warming. The CCP Campaign includes implementation of measures at the local level in the transportation, energy and waste sectors.
Currently, the City is implementing a wide variety of energy conservation and greenhouse gas emissions reduction measures and projects in the following sectors or locations: Transportation, Energy Use, Waste Reduction, Wastewater Treatment Facility, and the Resource Recovery Facility (landfill). Please click on the “Measures in Place” link below to find out more about these measures and projects.
The City of Santa Cruz generated energy from renewable sources equivalent to 33% of the electricity used in City facilities. In addition, the City receives 13% renewable energy purchased from PG&E. Thus, the combined total equals a City-wide renewable portfolio of over 40%! As more GHG reduction measures are implemented, the City of Santa Cruz will become less dependent on energy sources that cause environmental degradation.
City of Santa Cruz Showcases Local Climate Successes (PDF)
City of Santa Cruz rolls out initial Climate Action Program resources (PDF)
City of Santa Cruz kicks off Climate Action Teams Program (PDF)
Public Works Operations Manager
809 Center Street, Room 201
Santa Cruz, California 95060
For about a century, basic education in the developing world has focused on improving reading, writing and counting. But since science and technology are now key for human development, reasoning skills should be added to the mix.
The natural science component of curriculums is a powerful tool to develop these skills — benefiting all children, no matter what their professional future. It can provide new opportunities for their potential intelligence and creativity to develop.
This implies a revolution in how science is taught in the developing world. Pilot projects, active for about a decade, have already shown the way. Today’s challenge is to convince education authorities to adopt this successful educational approach (pedagogy) and ensure large-scale implementation.
In primary and often middle schools in many resource-poor countries, children study the natural sciences by memorising facts. Observation and experimentation are missing from the classroom. This means students develop hardly any understanding and reasoning skills.
In addition to failing to engage students, this leads to uneducated citizens, the loss of potential talent for research and industry, and higher unemployment.
Instead, an active approach to learning is needed. ‘Inquiry-based science education’ (IBSE) is such an approach, and it has been shown to work. The method challenges students to investigate natural phenomena by questioning, observing, experimenting, hypothesising and debating. And it requires them to collect data, build evidence to support their ideas and develop the capacity to apply these ideas to new situations.
But in many developing countries teachers are unprepared to practise IBSE and education authorities are reluctant to accept the rationale for adopting it.
For two decades, I have worked with colleagues on science education reform through the international action of the French foundation, La main à la pâte — in the process, supporting and observing science education pilot projects in developing countries.
For example, Chile’s IBSE programme Educación en Ciencias Basada en la Indagación (Inquiry-Based Science Education), active since 2013, has already transformed science learning in thousands of schools, especially in poor areas. Conceived and supported by scientists, and aimed at social cohesion, the programme focuses on providing teachers with training and resources.
“Systemic and sustainable changes in education require time and tenacity. The impact of science and technology worldwide, and IBSE’s promise, are paving the way to make this change happen.”
Since 1999, Mexico’s INNOVEC (Innovation in Science Education) project has trained more than 30,000 teachers, developing IBSE methods and practices to implement science and technology in classrooms. Here, engineers are contributing to the change as much as scientists.
Since 2010, the Pakistan Science Foundation has formed a nucleus of 100 teachers trained in IBSE in an effort to counteract rote learning.
In Cameroon, one school began experimenting with IBSE in 2003. Supported by the government, this pilot gradually expanded to include more than 150 teachers and today serves as a guide to the government for a change in curriculum.
A similar pattern developed in Cambodia, beginning with three schools in 2002. As in Cameroon, an international partnership between La main à la pâte and IAP, the Global Network of Science Academies, helped to overcome the lack of local scientific support.
A taste for experiment
In all these countries, I observed the universality of children’s curiosity about natural phenomena — their taste for experimenting, their irrepressible questioning, their ability to investigate, to hypothesise imaginatively and to search for the proper words or drawings or technical constructions to express their discoveries or understanding.
In all these lies the essence of reasoning skills. I have seen both girls and boys progressing towards abstract thinking through IBSE programmes. And I have seen the universality of science being happily expressed within different cultures or religions.
Teachers enthusiastically practise this approach when it is properly explained. Active scientists and engineers have a unique role to play. They can collaborate with education experts to implement IBSE. And they can convey to education authorities the subtle process by which mathematics proves theories about the world, science helps to understand nature and technology builds products using nature’s laws.
But, beyond the necessary pilots, how can education systems in the developing world be changed?
First, developing countries, whether setting up a pilot or expanding an existing one, could create partnerships with developed countries that pursue more advanced projects targeting global transformation. Examples include PrimaryConnections, set up by the Australian Academy of Science, and La main à la pâte, established by the French Academy of Sciences.
South-South cooperation is also a promising avenue — for example, the ISTIC (International Science, Technology and Innovation Centre for South-South Cooperation) programme led by the Malaysian Academy of Sciences.
Second, while science academies or other international bodies (for example, ICSU, the International Council for Science) can promote the message, only intergovernmental organisations can influence education policies. UNESCO (the UN Educational, Scientific and Cultural Organization) should be on the forefront of this challenge.
The Programme for International Student Assessment (PISA) run by the OECD (Organisation for Economic Co-operation and Development) can provide data, such as on mathematics and science understanding achieved by teenage students.

Finally, when I see the fantastic impact that Italy's Abdus Salam International Center for Theoretical Physics has had on research development, I dream of an International Science Education Centre — supported by UNESCO and funded by developed countries.
This centre would host teacher trainers for primary and secondary education, interacting daily with high-level scientists during short stays. It would immerse teachers in science and teaching practice, and develop MOOCs (massive open online courses) to help education authorities build long-distance support for teachers — an essential ingredient for lasting change.
Systemic and sustainable changes in education require time and tenacity. The impact of science and technology worldwide, and IBSE’s promise, are paving the way to make this change happen.
Pierre Léna is an astrophysicist, emeritus professor at Paris Diderot University, member of the French Academy of Sciences, cofounder and chair of La main à la pâte, and former chair of the IAP’s Science Education Programme. He can be contacted at [email protected]
St. John's Church is located on the Kirchwarft, Hooge, Germany. In the churchyard there is a simple wooden cross, which represents the 'Home for the Homeless'. This term refers to a site where the unidentified bodies of victims of shipwrecks or drownings that have washed ashore are given a Christian burial.
Cemeteries of the Nameless were in use during the 18th and 19th centuries, mainly along the coastal areas; another can be found on the nearby island of Amrum.
The island of Hooge consists of a group of artificial dwelling mounds known as Terps or Warft, which provide a safe ground during high tide and river floods.
Hooge is one of the North Frisian Islands of Germany, just south of Denmark. These dwelling hills also occur in the coastal areas of the Netherlands, in the provinces of Zeeland, Friesland and Groningen.
Annual flooding causes the Terps to become isolated from one another, and in particularly severe weather, flooding has caused the cemetery plots to literally become burials at sea.
No-till cover crop experiments
Cover crops were another innovation in 2010, doing double-duty as weed suppressors and compost creators. My first goal was to find varieties that like our clay soil and work well with no-till conditions in zone 6 (i.e. they die over the winter or are easy to kill by mowing), while also building up as much organic matter in the soil as possible. Meanwhile, I wanted to learn the best planting dates in order to grow vegetables for as much of the year as possible and still find time to slide in a cover crop planting.
Here's a rundown on each species I tried, with the caveat that the December snow coat prevented a winter kill in several species that I suspect will still die out before spring planting time. I also can't tell how much organic matter has been added to the soil yet --- I'll try to remember to post again when I delve into the dirt in each bed and notice the differences between crops, but for now I'm just making guesses based on how much vegetation is on the surface.
- Oats are currently my very favorite cover crop. They had no problem with our heavy clay soil and thrived even in the most water-logged beds, creating more top growth than any other cover crop we tried. Forage oats that I bought in a 50 pound bag at the feed store grew much better than hull-less oats, and the oats also seemed to need a top-dressing of compost to achieve maximum growth (which is worth it to me, although some people might wonder about using compost to grow compost.) The best planting period for oats in our garden seems to extend from the beginning of August (or possibly earlier?) through mid September --- the earliest ones bloomed two months later and had to be cut down, which is a bit of extra work but not a significant deterrent, while the late September and October planted oats just didn't get big enough to make it worth our while. The jury is still out on whether oats will winter kill, but mowing them is easy and weakens the plants enough that they die in even a moderate cold snap.
- Oilseed radishes are currently my second favorite cover crop, perhaps to be promoted to favorite once I dig into the dirt --- they produce most of their biomass below ground, so I can only guess at how their organic matter production will stack up compared to oats. The only downside of oilseed radishes is seed cost --- you can't buy the seeds at the feed store, so you're stuck paying shipping and a higher price through online suppliers (and they're new and trendy, so they cost a lot.) Otherwise, though, the radishes do just as well as oats at growing fast, putting up with clay soil and waterlogged conditions, and outcompeting weeds. They are currently about two-thirds winter-killed too, so I'm pretty sure I won't need to do any mowing to wipe the radish cover crop out in the spring. As for planting date, I planted radishes each week in September, and the earliest ones definitely did better than the later ones, so I suspect their optimal planting date is around the same as for oats, perhaps leaning a hair toward earlier planting.
- Annual ryegrass got off to a much slower start than oats and oilseed radishes, but it seems to have kept growing later in the year as well. When I went out to check on it after the snow melted, I was surprised to find such a dense growth on the ryegrass beds, and it's possible ryegrass might do as well as my oats, especially when a bed opens up for cover crops later in the year. Beds planted on September 1 did the best, but those planted at the end of September did better than oats planted on neighboring beds on the same day. The real question will be whether annual ryegrass will winter kill since we're on the edge of its hardiness zone and the beds are still bright green. If so, I'd plant annual ryegrass on any bed that opens up between mid September and early
- Buckwheat was a disappointment. The plants hated waterlogged clay soil, and even where buckwheat seemed to grow well, very little biomass was left behind. Buckwheat's main advantages are that it will grow fast, reaching maturity in just a bit over a month, and that the crop is very easy to mow-kill. I could envision planting buckwheat in a bed that was being reserved for a late spring planting, but I don't think it's worth using up prime fall beds with buckwheat.
- is still a big question mark. I planted it on a whim in late October, and the plants didn't do much. Clearly, I'll have to try again at a more realistic planting date.
- Crimson Clover is also a "who knows." I seeded clover in early October, and it came up and produced its first set of true leaves before the cold weather hit, but it's hard to tell anything else.
Looking beyond the minutiae, cover crops are a great addition to the garden, and I can't imagine why it took me so long to come on board. (Well, I know why --- I thought they were incompatible with no-till.) Cover crops keep the soil from eroding and the food web alive in the fall after the main garden is done, and it really perks me up to look out at a sea of colors in November rather than at a lot of dead stalks.

If I had to make only one recommendation to gardeners based on my 2010 vegetable garden experiments, it would be "Plant cover crops!"
Google researching ways to add PGP encryption to Gmail
In the post-Snowden era, consumer-level encryption is being seen not only as a necessity but also as a way to attract customers. For perhaps the first time, non-technical end-users are asking questions about security, and it can be a factor in deciding which services users pick. According to people familiar with Google's plans for its Gmail service, the search giant is looking into ways to add better encryption options to its email service.
The problem with many forms of symmetric encryption is that the service provider has access to the “master key” which allows the messages to be decrypted. Famously Snowden used the Lavabit encrypted email service which was forced to shutdown about a year ago. The service voluntarily ceased operating because the founder was probably being asked by the US government to hand over all of Snowden’s emails along with the necessary keys for decrypting them.
There is another type of encryption, called public key cryptography or asymmetric encryption, which uses two keys: one for encryption and one for decryption. The idea is that the first key (used for encryption) can be published freely and publicly, while the second key (used for decryption) remains secret. This form of encryption is end-to-end in that it is the users who perform the encryption and decryption before the message enters the email system. The most famous implementation of public key cryptography is Pretty Good Privacy, or PGP for short. It was created by Phil Zimmermann back in the early 1990s and although there are free and open source versions available (most notably GnuPG, or GPG for short), the system has never gained widespread acceptance.
The reason is that in its simplest form an email message needs to be typed up and then the text copied into the PGP/GPG program. The text is then encrypted (using the public key) and then the encrypted version is copied back into the email client and sent to the recipient. At the other end, the recipient copies the encrypted text into PGP/GPG and uses the private key to decrypt the message. This process isn’t streamlined and the extra steps needed to perform the encryption/decryption deter users from adopting the system widely. There are a variety of services, browser extensions and plugins which try to make the processes easier, however their adoption has never reached a critical mass.
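The two-key pattern behind this workflow can be illustrated with a deliberately tiny textbook RSA example. This is a sketch of the general idea only, not how PGP actually works internally: real PGP/GPG uses keys of 2048 bits or more plus a symmetric session key, and the numbers below are far too small to be secure.

```python
# Toy RSA illustrating public-key encryption (NOT secure -- textbook-sized numbers).
p, q = 61, 53            # two (tiny) primes, kept secret
n = p * q                # modulus, shared by both keys: 3233
phi = (p - 1) * (q - 1)  # Euler's totient of n: 3120
e = 17                   # public exponent; (e, n) is the public key
d = pow(e, -1, phi)      # private exponent via modular inverse (Python 3.8+): 2753

def encrypt(m: int) -> int:
    """Anyone holding the public key (e, n) can encrypt."""
    return pow(m, e, n)

def decrypt(c: int) -> int:
    """Only the holder of the private key (d, n) can decrypt."""
    return pow(c, d, n)

message = 65
ciphertext = encrypt(message)      # 65**17 mod 3233 = 2790
assert decrypt(ciphertext) == message
print(ciphertext)                  # prints 2790
```

The point of the structure is that the service provider in the middle only ever sees `ciphertext`; without `d`, which never leaves the user's machine, it cannot recover the message.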
There is also the problem of public key distribution. I can easily give someone my email address but for them to send me an encrypted email they need my public key. This can be transmitted in plain text, but the various means of distributing public keys have never gained popularity. One problem is that if I have someone’s email address then I need to get hold of their public key. I can get it by emailing them or by searching on their blog or on social media, but it requires users to make a conscious effort to publish their public keys and for others to find them. A directory of public keys where you can look up keys sounds like a good idea, but there is the problem of misuse and problems with spam etc.
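One common mitigation for the distribution problem is comparing key fingerprints — a short hash of the public key — over a separate channel, such as a phone call or a printed business card. The sketch below shows the idea only; real OpenPGP v4 fingerprints are computed over a specific key packet format rather than raw bytes, and the key material here is a made-up placeholder.

```python
import hashlib

def fingerprint(public_key_bytes: bytes) -> str:
    """Short, human-comparable digest of a public key.

    Here we simply hash the raw bytes with SHA-256 and format the
    first 40 hex digits in 4-character groups for easy reading aloud.
    """
    digest = hashlib.sha256(public_key_bytes).hexdigest().upper()
    return " ".join(digest[i:i + 4] for i in range(0, 40, 4))

# Hypothetical key material, standing in for a real exported key.
alice_key = b"-----BEGIN PGP PUBLIC KEY BLOCK----- ...example..."
print(fingerprint(alice_key))
```

If the fingerprint Bob computes from the key he downloaded matches the one Alice reads to him, he can be reasonably sure no one substituted a different key in transit.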
VentureBeat has published a quote from a Google employee who has let it slip that Google is researching ways to streamline the use of PGP/GPG with Gmail. Google has “research underway to improve the usability of PGP with Gmail,” said the employee who is familiar with the matter.
If Google develops a way to integrate PGP/GPG with Gmail where it never has a copy of the private key, then Google won’t be able to decrypt emails for any government agencies, as it simply doesn’t have the key.
However, the negative side for Google is that it can’t scan encrypted messages in order to display relevant adverts. Since Google probably relies more on user profiles to target adverts these days, this might not be an insurmountable problem; it will be interesting to see what Google comes up with.
Features of Constitution [1]
|Is there a constitution?|Yes|
|Does the constitution provide for freedom of religion?|Yes|
|Translation|Source is an English translation|
|Current as of|May 11, 2011|
Constitution Excerpts (clauses that reference religion) [2]
All citizens of the Republic of Uzbekistan shall have equal rights and freedoms, and shall be equal before the law, without discrimination by sex, race, nationality, language, religion, social origin, convictions, individual and social status.
Any privileges may be granted solely by the law and shall conform to the principles of social justice.
Freedom of conscience is guaranteed for all. Everyone shall have the right to profess or not to profess any religion. Any compulsory imposition of religion shall be impermissible.
It is the duty of every citizen to protect the historical, spiritual and cultural heritage of the people of Uzbekistan.
Cultural monuments shall have protection by the state.
The formation and functioning of political parties and public associations aiming to do the following shall be prohibited: changing the existing constitutional system by force; coming out against the sovereignty, territorial integrity and security of the Republic, as well as the constitutional rights and freedoms of its citizens; advocating war and social, national, racial and religious hostility, and encroaching on the health and morality of the people, as well as any armed associations and political parties based on the national or religious principles.
All secret societies and associations shall be banned.
Religious organizations and associations shall be separated from the state and equal before law. The state shall not interfere with the activity of religious associations.
1. Data under the "Features of Constitution" heading are drawn from coding of the U.S. State Department's 2008 International Religious Freedom Reports conducted by researchers at the Association of Religion Data Archives. The article by Brian Grim and Roger Finke describes the coding of the International Religious Freedom reports. A dataset with these and the other international measures highlighted on the country pages can be downloaded from this website. Used with permission.
2. The constitutional excerpts shown above are reproduced from the websites given in the "Source" field; the links to these websites were active as of May 2011. Where the constitutional text shown on these websites was provided in a language other than English, this text was translated to English by ARDA staff with assistance from web-based translation utilities such as Google Translate and Yahoo! Babel Fish. Constitutional text was converted to American English where applicable. Constitutional clauses were judged to contain religious content based largely on the standards used in the construction of the Religion and State Constitutions Dataset collected by Jonathan Fox. Emphases were added to the text by ARDA staff to highlight religious content in articles that also contain content that does not pertain to matters of religion. The data on this page were correct to the best of the knowledge of the ARDA as of the date listed in the "Current as of" field shown above. Please contact us at [email protected] if you are aware of any incorrect information provided on this page.
The cod fishery of the North Atlantic and the livelihoods it sustained for 300 years are basically finished. The New England Fishery Management Council has reduced the cod catch by 77% in the Gulf of Maine, 61% on Georges Bank. The reality is that the fishers probably won’t even catch that tiny quota. The fish are gone, driven to near extinction not by the family fishermen that work out of the small ports in New England but by giant industrial fishing trawlers that are taking every fish of any edible size out of the oceans at an alarming rate.
Here’s a graph of the annual catch off the Grand Banks:
There actually are two things we can do. Neither will bring the fish back, but that’s a done deal. First, as the first linked article suggested, we can develop alternative economies for these fishing ports around wind energy. That’s very different work than fishing, but it’s something. Some of these cities–New Bedford for instance–have developed reasonable tourist industries and have attracted some young people to live there and build some kind of alternative economies. Many–Fall River for instance, a mere 15 miles from New Bedford–have not. This is the best and most obvious way to create at least some jobs based upon harvesting natural resources, albeit in a very different way.
The second thing we can do is to take some kind of national responsibility for workers who lose their jobs because of resource depletion. There’s actually significant precedent for this in the Pacific Northwest. The Clinton Forest Plan that provided some finality to the old growth/spotted owl logging wars in the 1980s and early 1990s provided retraining programs for loggers and mill workers who lost their jobs due to the industry’s disappearance. My own father took advantage of this program, although he later found work in another mill.
The Center for Natural Dentistry (www.NaturalDentistry.us) is leading the "green revolution" by reducing waste and protecting the environment
ENCINITAS, Calif., April 20 /PRNewswire/ -- In the renewed national focus on green energy, one San Diego Dentist is leading the way.
The Center for Natural Dentistry, located in Encinitas, has gone beyond fluorescent lights and low-flow faucets with an infusion of Green Dentistry in its "Green" quest.
"In a dental office, it's not easy to reduce our water, energy, and paper waste, but we wanted to make a real difference for our environment and our patients. We chose to be bio-compatible in as many ways as possible, including our sterilization and disinfection procedures and the materials we choose for our patients," states founder Dr. Marvin, a holistic dentist.
The Center for Natural Dentistry -- a holistic dental practice integrating natural procedures with traditional science-based dentistry -- is focused on improving whole-body wellness through proper, effective dental care with an eye on the environment.
Historically, the dental industry has had a devastating effect on our environment: poisoning waterways by improperly disposing of mercury fillings; dumping noxious chemicals in the trash; creating mountains of inorganic waste. It has all taken its toll on our planet.
The worst polluters are found in "silver" fillings, which are loaded with mercury -- arguably the most toxic metal on Earth.
"Being a holistic dental office, we never place mercury fillings," says Dr. Marvin. "Not only are they unsafe for the patients who receive them, but every filling that's placed or removed releases toxic mercury vapors into the office, endangering other patients and anyone who enters. Most excess mercury and old fillings are disposed of improperly, further contaminating Earth."
Removing mercury fillings and proper disposal is a priority at The Center. In fact, The Center for Natural Dentistry prides itself on stringent mercury removal procedures and bio-hazard reduction policies. "Less than 1% of dentists employ the extreme measures we take, but we want to protect our patients and our environment," states Dr. Marvin.
How else is The Center for Natural Dentistry helping protect the environment? To date, they have implemented:
Every little bit helps in the quest to "Go Green," and Dr. Marvin is proving that there's a difference between "doing your part" and "taking the future into your own hands."
About The Center for Natural Dentistry:
The Center for Natural Dentistry provides San Diego residents with safe, effective alternative dental care. The Center offers stress-free dentistry without using toxic chemicals, expensive surgeries, and needless drilling. For information -- including a free guide to mercury fillings and green dental tips -- visit http://NaturalDentistry.us/GoGreen or (888) 825-5351.
Contact: Dr. Marvin The Center for Natural Dentistry 888-825-5351 http://naturaldentistry.us/
This release was issued through eReleases(TM). For more information, visit http://www.ereleases.com.
SOURCE: The Center for Natural Dentistry
Copyright©2009 PR Newswire.
All rights reserved
Tuesday, February 28, 2012
Friday, February 03, 2012
posted by Lee Crockett on Twitter Feb 3, 2012
Educational Technology Bill of Rights for Students, by Brad Flickinger
The following are what I believe are the rights of all student to have with regards to using technology as an educational tool, written as a student to their teacher:
1) I have the right to use my own technology at school. I should not be forced to leave my new technology at home to use (in most cases) out-of-date school technology. If I can afford it, let me use it -- you don’t need to buy me one. If I cannot afford it, please help me get one -- I don’t mind working for it.
2) I have the right to access the school’s WiFi. Stop blaming bandwidth, security or whatever else -- if I can get on WiFi at McDonalds, I think that I should be able to get online at school.
3) I have the right to submit digital artifacts that prove my understanding of a subject, regardless of whether or not my teacher knows what they are. Just because you have never heard of Prezi, Voki, or Glogster, doesn’t mean that I should not be able to use these tools to prove to you that I understand what you are teaching me.
4) I have the right to cite Wikipedia as one of the sources that I use to research a subject. Just because you believe the hype that Wikipedia is full of incorrect information, doesn’t mean that it is true -- besides we all use it anyways (including you). I am smart enough to verify what I find online to be the truth.
5) I have the right to access social media at school. It is where we all live, it is how we communicate -- we do not use email, or call each other. We use Facebook, Twitter and texting to talk to each other. Teachers and schools should take advantage of this and post announcements and assignments using social media -- you will get better results.
6) I have the right to be taught by teachers who know how to manage the use of technology in their classrooms. These teachers know when to use technology and when to put it away. They understand that I need to be taught how to balance my life between the online and offline worlds. They do not throw the techno-baby out with the bathwater.
7) I have the right to be taught by teachers who teach me and demand that I use 21st Century Skills. Someday I am going to need a job -- please help me be employable.
8) I have the right to be assessed with technology. I love the instant feedback of testing done with technology. I live in a world of instant feedback, so finding out a couple of weeks later that I didn’t understand your lesson drives me crazy. If you were a video game, no one would play you -- feedback is too slow.
9) I have the right to be protected from technology. I don’t want to be cyberbullied, hurt, scared or find crud online that I would rather not find. Please help me use technology responsibly and safely. Please stay up-to-date with this kind of information, and teach me to make good choices. I am not you and we don’t see eye to eye about what to put online, but help me to meet you in the middle.
10) I have the right to be taught by teachers that know their trade. They are passionate about what they do and embrace the use of technology to help me learn. They attend trainings and practice what they learn. They are not afraid to ask for my help; they might know more than me about the Civil War, but I know Glogster like nobody’s business.
Edwards County, Illinois
|This article is a stub. Help us to expand it by contributing your knowledge. For county page guidelines, visit U.S. County Page Content Suggestions.|
Edwards is a county in Illinois. It was formed in 1814 from Gallatin and Madison counties. Edwards began keeping birth records in 1877, marriage records in 1815, and death records in 1877. It began keeping land records in 1815, probate records in 1815, and court records in 1815. For more information, contact the county at Edwards County Courthouse, Albion 62806. On the attached map, Edwards is located at H7.
For information about the state of Illinois see Illinois Family History Research.
Scientists have been left dumbfounded by the discovery of subatomic particles, neutrinos, which appear to travel faster than the speed of light.
Researchers are once again trying to replicate their findings so that they can submit their data for publication as fact.
Sensing the importance of the experiment's results, Dr. Sergio Bertolucci made it clear that scientists are not going to "fool around."
A group of scientists working on the Opera experiment made the surprising findings on neutrinos last month.
Neutrinos were sent through the ground from Cern in Geneva to the Gran Sasso laboratory in Italy, about 450 miles (730 km) away, and surprised everyone when they arrived a fraction of a second before light would have.
The particles showed up 60 nanoseconds (60 billionths of a second) early, which is slightly faster than the speed of light allows.
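To put the 60 nanoseconds in perspective, here is a rough back-of-envelope calculation, assuming the roughly 730 km Cern–Gran Sasso baseline (the figures are illustrative, not taken from the Opera analysis):

```python
# How big is the claimed effect, relative to the speed of light?
C = 299_792_458.0          # speed of light in vacuum, m/s
baseline = 730_000.0       # assumed Cern-to-Gran Sasso distance, m
early = 60e-9              # s, how early the neutrinos reportedly arrived

light_time = baseline / C          # time light needs for the trip (~2.4 ms)
excess = early / light_time        # fractional speed excess, (v - c) / c

print(f"light travel time: {light_time * 1e3:.3f} ms")
print(f"fractional excess: {excess:.2e}")   # on the order of 2.5e-5
```

In other words, the claim is that the neutrinos beat light by only a few parts in a hundred thousand, which is why a subtle timing error is such a plausible explanation.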
A discovery like this would shake the foundations of physics, because according to the known laws of physics nothing in the universe travels faster than light.
Many feel there may be a "systematic error" involved that scientists haven't picked up on yet, because the result would challenge the theories of Albert Einstein and James Clerk Maxwell about the speed of light.
Over 80 scientific papers have been written proposing possible mistakes and errors in the claim, while others look for new physics that could explain the result.
Dr. Bertolucci told BBC News "In the last few days we have started to send a different time structure of the beam to Gran Sasso. This will allow the Opera to repeat the measurement, removing some of the possible systematics."
Cern fires protons as a long pulse of 10 microseconds; while travelling through the crust, the particles turn into neutrinos after a series of interactions.
While depression is common among heart attack survivors, their spouses can also be hit hard.
According to a new study, spouses of heart attack victims are at greater risk of depression, anxiety or suicide, and men are more likely than women to become depressed or commit suicide, American and Danish researchers found.
Even if a husband or wife survives a heart attack, the spouse suffers more than do spouses of people who survive or die from other conditions, the study showed.
Danish cardiologist and researcher Emil Fosbol, M.D., said the suddenness of a heart attack may be a factor.
“If your partner dies suddenly from a heart attack, you have no time to prepare psychologically for the death, whereas if someone is ill with, for example, cancer, there is more time to grow used to the idea,” Fosbol said in a prepared statement.
He called the psychological impact of such a sudden loss “similar to post-traumatic stress disorder.”
In the study, researchers analyzed national data from Denmark, comparing spouses of people who had died or survived after having a heart attack with spouses of people who had died or been hospitalized because of other causes.
Use of antidepressants and antianxiety medicines was higher among people whose spouses died from or survived a heart attack, researchers noted.
“We found that more than three times the number of people whose spouses died from [a heart attack] were using antidepressants in the year after the event compared with the year before,” researchers said.
Spouses of people who survived heart attacks were 17 percent more likely to use an antidepressant in the year following the event, while spouses of patients surviving other diseases were no more likely to use antidepressants.
Researchers also found that men were more likely than women to suffer depression and commit suicide after their spouse had a heart attack.
The study’s findings are important because of what they reveal about the family members of a heart attack patient, Fosbol said.
“I think … that the system needs to consider the care needs for spouses, too, not only when a patient dies from [a heart attack] but also when the patient is ‘just’ admitted to the hospital and survives.”
In other health news:
Tap water in neti pots for sinus rinses linked to deaths from brain-eating amoebas. NBCNews.com reports on two cases of people in Louisiana who died after contracting “brain-eating amoeba” infections from using tap water for sinus rinses. Federal health officials with the Centers for Disease Control and Prevention (CDC) are warning people about precautions to take when using sinus-rinse bottles or the little teapot-shaped devices called neti pots for treating sinus problems or allergies.
Alzheimer’s drug misses goal in trial but offers hint of potential. The New York Times reports that an Alzheimer’s medication being tested by pharmaceutical maker Eli Lilly failed in its main goal of halting progress of the disease, though there were some signs the drug could slow cognitive decline in patients with mild cases, the company said last week.
Photo: Jay S. Simon/Getty Images
If I were to make up my own definitions (which apparently I like to do), I give the following:
- Model: This is some type of representation (conceptual, mathematical, physical, computational) of a real object. Example: a model of a bouncing ball would produce results that agree with a real bouncing ball.
- Animation: An animation is just like a model that has a visual representation. However, there is one big difference. An animation might not agree with real data. Perhaps the animation just displays some aspect of real life but is clearly not realistic in some way.
I guess you could say that a model isn’t completely realistic either. How about this, an animation can be used to tell a story and a model is used to do some SCIENCE? I like that.
I really don’t want to talk about models vs. animations. Instead, I want to show you some things. I created some animations of a bouncing ball. Which one do you like the best?
Which one do you like the best? Which one do you think is the most realistic? Ok, you could argue that none of them are realistic since it keeps bouncing to the same height.
Instead of telling you how each ball bounces, here is a plot of the vertical motion as a function of time for the three balls.
From this plot it’s pretty easy to see that Ball A just moves at a constant speed and bounces back when it hits the ground. But what’s the difference between the other two balls? If you just look at the plot, it sort of seems like they just have different accelerations – but that’s not true.
Maybe it would be easier to see the difference with a plot of the vertical velocity. Here is that plot for all the balls.
Here you might be able to tell what I did for these three ball bounces. But just in case, I will give a brief description.
- Ball A: This ball moves down at a constant speed. When it gets to the floor, the ball changes direction and then moves up. I picked a ball speed so that it would take about the same bounce time as Ball B.
- Ball B: In this case, the ball falls with a constant acceleration (-9.8 m/s2). When the ball hits the ground, the velocity changes so that it is moving upwards. This is essentially a realistic ball bounce except that there is no loss of mechanical energy.
- Ball C: I like to refer to Ball C as my artistic masterpiece. Yes, it’s art. Look at the velocity graph. The slope of a velocity-time graph is the acceleration. What happens when the ball gets near the top of its path? Yes, the acceleration decreases. Why? Well, this gives the feeling of a ball just “hanging there at the top”. It’s like the Michael Jordan of bouncing balls. You look at that ball and think “Damn! That ball’s just flying there!” Yup, that’s what art does. It makes you think things like that.
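For the curious, Ball B can be sketched in a few lines of Python. This is my own sketch, with the drop height and time step picked arbitrarily; it uses a small-step (symplectic Euler) integrator so the no-energy-loss bounce really does return to the same height:

```python
# "Ball B": constant acceleration (-9.8 m/s^2) plus an elastic bounce
# that simply reverses the velocity at the floor. No mechanical energy
# is lost, so the ball keeps coming back to the same height.
G = -9.8          # m/s^2, gravitational acceleration
DT = 1e-4         # s; a small step keeps the numerical energy drift tiny

def simulate_bounce(y0=1.0, t_end=2.0):
    """Drop a ball from rest at height y0; return (times, heights)."""
    y, v, t = y0, 0.0, 0.0
    ts, ys = [t], [y]
    while t < t_end:
        v += G * DT          # update velocity first (symplectic Euler)
        y += v * DT
        if y <= 0.0:         # floor: reverse the velocity, keep its size
            y = 0.0
            v = -v
        t += DT
        ts.append(t)
        ys.append(y)
    return ts, ys

ts, ys = simulate_bounce()
print(f"peak height after bouncing: {max(ys[len(ys)//2:]):.3f} m")
```

Swapping the velocity-flip line for something like `v = -0.8 * v` would give the more familiar decaying bounce, and replacing the constant `G` with an acceleration that fades near the top of the arc would get you Ball C’s “hang time” art.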
Ok, so what’s the point of all this? Well, in an animation you sometimes don’t want things to be realistic. The end.